Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
451,211
When we begin to learn statistics, we learn about a seemingly important class of statistics that satisfy the properties of sufficiency and completeness. However, when I read recent articles in statistics, I can hardly find any papers that address complete sufficient statistics. Why would we not care about the completeness and sufficiency of an estimator as much anymore?
We still care. However, a large part of statistics is now based on a data-driven approach in which these concepts may not be essential, or in which many other concepts matter more. With computational power and lots of data, a large body of statistics is devoted to providing models that solve specific problems (such as forecasting or classification) and that can be tested using the given data and cross-validation strategies. In these applications, the most important characteristics of a model are that it fits the data well and has a demonstrated ability to forecast out of sample.

Furthermore, some years ago we were very interested in unbiased estimators. We still are. However, back then, only in rare situations would one consider using an estimator that is not unbiased. In situations where we are interested in out-of-sample forecasts, we may accept an estimator that is clearly biased (such as ridge regression, the LASSO, or the elastic net) if it is able to reduce the out-of-sample forecast error. Using these estimators, we effectively “pay” with bias to reduce the variance of the error, or the possibility of overfitting. This new focus of the literature has also brought new concepts such as sparsistency. In statistical learning theory we study lots of bounds to understand the generalization ability of a model (this is crucial). See, for instance, the beautiful book "Learning From Data" by Abu-Mostafa et al.

Related fields such as econometrics have also felt the impact of these changes. Since that field is strongly based on statistical inference, and it is fundamental there to work with unbiased estimators associated with models that come from theory, the changes are slower. However, several attempts have been made, and machine learning (statistical learning) is becoming essential for dealing with, for instance, high-dimensional databases. Why is that? Because economists, in several situations, are interested in the coefficients and not in the predicted variable. For instance, imagine a study that tries to explain the level of corruption using a regression model such as: $$\text{corruptionLevel} = \beta_0 + \beta_1 \text{yearsInPrison} + \beta_2 \text{numberConvicted} + \cdots$$ Note that the coefficients $\beta_1$ and $\beta_2$ provide information to guide public policy. Depending on the values of the coefficients, different public policies will be carried out. So, they cannot be biased. If the idea is that we should trust the coefficients of the econometric regression model and we are working with high-dimensional databases, maybe we may accept paying with some bias to receive in return lower variance:

“Bias-variance tradeoff holds not only for forecasts (which in the case of a linear model are simply linear combinations of the estimated coefficients) but also for individual coefficients. One can estimate individual coefficients more accurately (in terms of expected squared error) by introducing bias so as to cut variance. So in that sense biased estimators can be desirable. Remember: we aim at finding the true value. Unbiasedness does not help if variance is large and our estimates lie far away from the true value on average across repeated samples.” - @Richard_Hardy

This idea has motivated researchers to look for solutions that sound good to economists as well. Recent literature has approached this problem by choosing focus variables that are not penalized. These focus variables are the ones that are important for guiding public policy.
To avoid omitted-variable bias, they also run a regression of these focus variables on all the other independent variables using a shrinkage procedure (such as the Lasso), and the variables with nonzero coefficients are included in the regression model as well. They show that the asymptotics of this procedure are well behaved. See, for instance, a paper by one of the leaders of the field, and this overview by leaders of the field.
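To make the bias-variance point above concrete, here is a minimal R sketch with simulated data (all numbers are made up and not taken from the cited literature): with strongly collinear predictors, the ridge estimate of an individual coefficient is biased, yet it tends to have lower expected squared error than OLS, which is exactly the trade-off described in the quote.

# Repeated-sampling comparison of OLS and ridge for a single coefficient
# under strong collinearity (illustrative numbers only).
set.seed(1)
n <- 50; p <- 10; lambda <- 2
beta <- c(1, rep(0.2, p - 1))                       # "true" coefficients
err_ols <- err_ridge <- numeric(2000)
for (r in 1:2000) {
  z <- rnorm(n)                                     # shared factor -> collinear predictors
  X <- 0.2 * matrix(rnorm(n * p), n, p) + z
  y <- X %*% beta + rnorm(n)
  b_ols   <- solve(crossprod(X), crossprod(X, y))
  b_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
  err_ols[r]   <- (b_ols[1]   - beta[1])^2
  err_ridge[r] <- (b_ridge[1] - beta[1])^2
}
c(mse_ols = mean(err_ols), mse_ridge = mean(err_ridge))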
{ "source": [ "https://stats.stackexchange.com/questions/451211", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/273182/" ] }
451,809
Do not vote; one vote will not reverse the election result. What's more, the probability of injury in a traffic collision on the way to the ballot box is much higher than the probability of your vote reversing the election result. What is even more, the probability that you would win the grand prize in a lottery is higher than the probability that you would reverse the election result. What is wrong with this reasoning, if anything? Is it possible to statistically prove that one vote matters? I know that there are some arguments like "if everybody thought like that, it would change the election result." But everybody will not think like that. Even if 20% of the electorate copied you, a great number of people would still go, and the margin of victory of the winning candidate would be counted in hundreds of thousands. Your vote would count only in the case of a tie. Judging by game-theoretic gains and costs, it seems that a better strategy for Sunday is betting on horse races rather than going to the ballot box.

Update, March 3. I am grateful for being provided with so much material and for the answers staying with the statistical part of the question. Not attempting to solve the stated problem, but rather to share and validate my thinking path, I posted an answer. I have formulated a few assumptions there: two candidates, an unknown number of voters, and each voter able to cast a random vote for either candidate. I have shown there a solution for 6 voters (which could be the case when choosing a captain on a fishing boat). I would be interested in knowing what the odds are for each additional million voters.

Update, March 5. I would like to make it clear that I am interested in more or less realistic assumptions for calculating the probability of a decisive vote. More or less, because I do not want to sacrifice simplicity for precision. I have just understood that my update of March 3 formulated unrealistic assumptions. These assumptions probably give the highest possible probability of a decisive vote, but I would be grateful if you could confirm it. A thing still unknown to me is what is meant by the number of voters in the provided formulas. Is it a maximum pool of voters or the exact number of voters? Say we have 1 million voters; is the probability then calculated for all the cases from 1 to a million voters taking part in the election?

Adding more fuel to the discussion heat: in the USA, because the president is elected indirectly, your vote would be decisive only if one vote, your vote, were to reverse the electors of your state, and then, owing to the votes of your electors, there were a tie at the Electoral College. Of course, this double tie condition hampers the chances that a single vote may reverse the election result even more than discussed here so far. I have opened a separate thread about that here.
It's wrong in part because it's based on a mathematical fallacy. (It's even more wrong because it's such blatant voter-suppression propaganda, but that's not a suitable topic for discussion here.)

The implicit context is one in which an election looks like it's on the fence. One reasonable model is that there will be $n$ voters (not including you) of whom approximately $m_1\lt n/2$ will definitely vote for one candidate and approximately $m_2\approx m_1$ will vote for the other, leaving $n-(m_1+m_2)$ "undecideds" who will make up their minds on the spot randomly, as if they were flipping coins. Most people--including those with strong mathematical backgrounds--will guess that the chance of a perfect tie in this model is astronomically small. (I have tested this assertion by actually asking undergraduate math majors.) The correct answer is surprising.

First, figure there's about a $1/2$ chance $n$ is odd, which means a tie is impossible. To account for this, we'll throw in a factor of $1/2$ in the end. Let's consider the remaining situation where $n=2k$ is even. The chance of a tie in this model is given by the Binomial distribution as $$\Pr(\text{Tie}) = \binom{n - m_1 - m_2}{k - m_1} 2^{m_1+m_2-n}.$$ When $m_1\approx m_2,$ let $m = (m_1+m_2)/2$ (and round it if necessary). The chances don't depend much on small deviations between the $m_i$ and $m,$ so writing $N=k-m,$ an excellent approximation of the Binomial coefficient is $$\binom{n - m_1-m_2}{k - m_1} \approx \binom{2(k-m)}{k-m} = \binom{2N}{N} \approx \frac{2^{2N}}{\sqrt{N\pi}}.$$ The last approximation, due to Stirling's Formula, works well even when $N$ is small (larger than $10$ will do). Putting these results together, and remembering to multiply by $1/2$ at the outset, gives a good estimate of the chance of a tie as $$\Pr(\text{Tie}) \approx \frac{1}{2\sqrt{N\pi}}.$$ In such a case, your vote will tip the election. What are the chances?

In the most extreme case, imagine a direct popular vote involving, say, $10^8$ people (close to the number who vote in a US presidential election). Typically about 90% of people's minds are clearly decided, so we might take $N$ to be on the order of $10^7.$ Now $$\frac{1}{2\sqrt{10^7\pi}} \approx 10^{-4}.$$ That is, your participation in a close election involving one hundred million people still has about a $0.01\%$ chance of changing the outcome! In practice, most elections involve between a few dozen and a few million voters. Over this range, your chance of affecting the results (under the foregoing assumptions, of course) ranges from about $10\%$ (with just ten undecided voters) to $1\%$ (with a thousand undecided voters) to $0.1\%$ (with a hundred thousand undecided voters).

In summary, the chance that your vote swings a closely-contested election tends to be inversely proportional to the square root of the number of undecided voters. Consequently, voting is important even when the electorate is large. The history of US state and national elections supports this analysis. Remember, for just one recent example, how the 2000 US presidential election was decided by a plurality in the state of Florida (with several million voters) that could not have exceeded a few hundred--and probably, if it had been checked more closely, would have been even narrower.
If (based on recent election outcomes) it appears there is, say, a few percent chance that an election involving a few million people will be decided by at most a few hundred votes, then the chance that the next such election is decided by just one vote (intuitively) must be at least a hundredth of one percent. That is about one-tenth of what this inverse square root law predicts. But that means the history of voting and this analysis are in good agreement, because this analysis applies only to close races--and most are not close. For more (anecdotal) examples of this type, across the world, see the Wikipedia article on close election results . It includes a table of about 200 examples. Unfortunately, it reports the margin of victory as a proportion of the total. As we have seen, regardless of whether all (or even most) assumptions of this analysis hold, a more meaningful measure of the closeness of an election would be the margin divided by the square root of the total. By the way, your chance of an injury due to driving to the ballot box (if you need to drive at all) can be estimated as the rate of injuries annually (about one percent) divided by the average number of trips (or distance-weighted trips) annually, which is several hundred. We obtain a number well below $0.01\%.$ Your chance of winning the lottery grand prize? Depending on the lottery, one in a million or less. The quotation in the question is not only scurrilous, it is outright false.
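As a small supplement (only a sketch, using the same idealized model as above, with exactly balanced decided voters and coin-flipping undecideds), one can check the $1/(2\sqrt{N\pi})$ approximation against the exact binomial tie probability in R:

# Exact tie probability with 2N undecided coin-flipping voters, including the
# extra factor 1/2 for "n is even", versus the approximation derived above.
tie_exact  <- function(N) 0.5 * dbinom(N, size = 2 * N, prob = 0.5)
tie_approx <- function(N) 1 / (2 * sqrt(N * pi))
N <- c(10, 1e3, 1e5, 1e7)
cbind(N, exact = tie_exact(N), approx = tie_approx(N))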
{ "source": [ "https://stats.stackexchange.com/questions/451809", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9519/" ] }
451,827
Maybe a noob's query, but recently I have seen a surge of papers on contrastive learning (a subset of semi-supervised learning). Some of the prominent and recent research papers I read which detail this approach are:
Representation Learning with Contrastive Predictive Coding @ https://arxiv.org/abs/1807.03748
SimCLR-v1: A Simple Framework for Contrastive Learning of Visual Representations @ https://arxiv.org/abs/2002.05709
SimCLR-v2: Big Self-Supervised Models are Strong Semi-Supervised Learners @ https://arxiv.org/abs/2006.10029
MoCo-v1: Momentum Contrast for Unsupervised Visual Representation Learning @ https://arxiv.org/abs/1911.05722
MoCo-v2: Improved Baselines with Momentum Contrastive Learning @ https://arxiv.org/abs/2003.04297
PIRL: Self-Supervised Learning of Pretext-Invariant Representations @ https://arxiv.org/abs/1912.01991
Could you give a detailed explanation of this approach versus transfer learning and others? Also, why is it gaining traction amongst the ML research community?
Contrastive learning is very intuitive. If I ask you to find the matching animal in the photo below, you can do so quite easily. You understand that the animal on the left is a "cat" and you want to find another "cat" image on the right side. So, you can contrast between similar and dissimilar things. Contrastive learning is an approach to formulating this task of finding similar and dissimilar things for a machine: you can train a machine learning model to classify between similar and dissimilar images. There are various choices to make, including:
Encoder architecture: to convert the image into representations
Similarity measure between two images: mean squared error, cosine similarity, content loss
Generating the training pairs: manual annotation, self-supervised methods
This blog post explains the intuition behind contrastive learning and how it is applied in recent papers like SimCLR in more detail.
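To make this a bit more concrete, here is a minimal R sketch of an InfoNCE-style contrastive loss on toy embeddings. This is only an illustration of the general idea, not the exact loss used in any of the papers listed in the question; the matrices and numbers below are made up.

# Rows of `a` and `b` are embeddings of two "views" of the same images;
# row i of `a` should match row i of `b` (the positive pair).
set.seed(1)
n <- 8; d <- 16; tau <- 0.1                          # batch size, embedding size, temperature
l2_normalize <- function(m) m / sqrt(rowSums(m^2))
a <- l2_normalize(matrix(rnorm(n * d), n, d))
b <- l2_normalize(a + 0.1 * matrix(rnorm(n * d), n, d))   # stand-in for an augmented view
sim <- (a %*% t(b)) / tau                            # cosine similarities / temperature
loss <- mean(-(diag(sim) - log(rowSums(exp(sim)))))  # cross-entropy with the diagonal as positives
loss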
{ "source": [ "https://stats.stackexchange.com/questions/451827", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/269109/" ] }
453,387
I have a problem understanding the following example. (1) After the glitch is discovered the next day, what can we tell about the observations? Is it $X_i\nsim N(\mu,1)$, or just $X_i\sim N(\mu_2,1)$? Some observations are from $N(\mu,1)$ and others are not. What can we tell about all the observations? Am I getting something wrong? Why not use a truncated normal? (2) What exactly does Bayesian inference do in this situation? I think that, by considering $\mu$ to be a random variable, it controls (models) the glitch, but it still assumes $X\sim N(\mu , 1)$, which I am doubtful about accepting. (3) Overall, is the discussion "3.3) Flaws in frequentist inference" valid? Is it fair and without exaggeration? Source: Computer Age Statistical Inference, by Bradley Efron and Trevor Hastie, page 31.
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be true, so there is really no dispute here about the properties of the various estimators. Even if you are a Bayesian, it is clearly true that the sample mean is no longer an unbiased estimator (the very concept of "bias" being one that conditions on the unknown parameter).

So first of all, the frequentist is correct that the sample mean is not an unbiased estimator (and any sensible Bayesian would have to agree with this given the assumed distributions). Secondly, if a frequentist actually encountered this situation, they would almost certainly update their estimator to reflect the censoring mechanism in the data. It is entirely possible for the frequentist to use an estimator that is unbiased, and which reduces down to the sample mean in the special case where there is no censored data. Indeed, most standard frequentist estimators would have this property. So, although the sample mean is indeed a biased estimator in this case, the frequentist could use an alternative estimator that is unbiased, and which happens to give the same estimate as the sample mean for this particular data. Therefore, as a practical matter, the frequentist can happily accept that the estimate from the sample mean is the correct estimate from this data. In other words, there is absolutely no reason that the Bayesian needs to "come to the rescue" --- the frequentist will be able to accommodate the changed information perfectly adequately.

More detail: Suppose you have $m$ non-censored data points $x_1,...,x_m$ and $n-m$ censored data points, which are known to be somewhere above the cut-off $\mu_* = 100$. Given the underlying normal distribution for the pre-censored data values, the log-likelihood function for the data is: $$\ell_\mathbb{x}(\mu) = \sum_{i=1}^m \ln \phi (x_i-\mu) + (n-m) \ln (1 - \Phi(\mu_*-\mu)).$$ Since $\ln \phi (x_i-\mu) = - \tfrac{1}{2}(x_i-\mu)^2+\text{const}$, differentiating gives the score function: $$\frac{d \ell_\mathbb{x}}{d \mu}(\mu) = m (\bar{x}_m - \mu) + (n-m) \cdot \frac{\phi(\mu_*-\mu)}{1 - \Phi(\mu_*-\mu)},$$ so the MLE is the value $\hat{\mu}$ that solves: $$\bar{x}_m = \hat{\mu} + \frac{n-m}{m} \cdot \frac{\phi(\mu_*-\hat{\mu})}{1 - \Phi(\mu_*-\hat{\mu})}.$$ The MLE will generally be a biased estimator, but it should have other reasonable frequentist properties, and so it would probably be considered a reasonable estimator in this case. (Even if the frequentist is looking for an improvement, like a "bias corrected" scaled version of the MLE, it is likely to be another estimator that is asymptotically equivalent to the MLE.) In the case where there is no censored data we have $m=n$, so the MLE reduces to $\hat{\mu} = \bar{x}_m$. So in this case, if the frequentist used the MLE, they will come to the same estimate for non-censored data as if they were using the sample mean. (Note here that there is a difference between an estimator, which is a function, and an estimate, which is just one or a few output values from that function.)
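For completeness, here is a short R sketch that solves the MLE equation above numerically for made-up data censored at $\mu_* = 100$ (the numbers are illustrative only):

set.seed(1)
mu_true <- 99; mu_star <- 100; n <- 50
x_all <- rnorm(n, mu_true, 1)
x <- x_all[x_all <= mu_star]                 # the non-censored observations
m <- length(x); xbar_m <- mean(x)
score <- function(mu) {                      # the score function derived above
  m * (xbar_m - mu) + (n - m) * dnorm(mu_star - mu) / (1 - pnorm(mu_star - mu))
}
mle <- uniroot(score, interval = c(xbar_m - 2, xbar_m + 5))$root
c(mean_of_observed = xbar_m, mle = mle, true_mu = mu_true)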
{ "source": [ "https://stats.stackexchange.com/questions/453387", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/238429/" ] }
453,396
Assume we are trying to classify between 2 classes, each with a Gaussian class-conditional density, with different means but the same covariance, i.e. $X | y=0 \sim N(\mu_0, \Sigma); X | y=1 \sim N(\mu_1, \Sigma)$. Our decision rule would be $1 \iff P(y = 1 | X) > P(y=0|X)$ (and vice versa for 0). Using Bayes' rule we can invert the conditional probabilities and get: $\iff \frac{P(X|y=1)P(y=1)}{P(X)} > \frac{P(X|y=0)P(y=0)}{P(X)}$. We can next eliminate the denominator. Now, if $P(y=1) = P(y=0)$ we could eliminate that as well, and the decision rule would simplify to $P(X|y=1) > P(X|y=0)$, which basically asks which $\mu$ $X$ is closer to. Now intuitively this translates to $1 \iff X > c = \frac{\mu_0 +\mu_1}{2}$. I have 2 questions: Can we prove this intuition mathematically? What happens if $P(y=1) \neq P(y=0)$?
{ "source": [ "https://stats.stackexchange.com/questions/453396", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/117705/" ] }
454,684
I am asking this question because I believe it would be great if the statistics community could make a contribution to solving this serious puzzle until more evidence is available. The UK Royal College of Obstetricians and Gynaecologists publishes guidelines on neonatal and prenatal treatment of corona-positive women and of babies of corona-positive women. There is insufficient evidence to give advice on whether breastfeeding is safe, in the sense of not passing the virus from mother to child via the breast milk. There are other routes of infection, but breast milk could be expressed and fed while keeping distance, e.g. by the father. A source is quoted in the guidelines (p. 27) reporting that the breast milk of 6 Chinese women was tested corona-negative; that is, six out of six samples were negative. If we consider these tests independent events, where the event is to sample a woman who passes the virus into the breast milk, we have a binomial trial. Which confidence or credibility statements can we make about the probability of passing the virus into the breast milk? This probability is estimated at zero in this trial, of course. Some statements can be made; for example, if the probability were 0.50, the likelihood of the data would be about 0.016. As the probability approaches 0, the likelihood approaches 1, but the maximum is attained on the boundary at zero, so standard likelihood inference fails. Please note: the same guidelines advise in section 4.8.2 to continue breastfeeding by COVID-19-positive women: 'In the light of the current evidence, we advise that the benefits of breastfeeding outweigh any potential risks of transmission of the virus through breastmilk. The risks and benefits of breastfeeding, including the risk of holding the baby in close proximity to the mother, should be discussed with her.' (p. 27)
There is the rule of three, saying that if a certain event did not occur in a sample with $n$ subjects, the interval from $0$ to $3/n$ is a 95% confidence interval for the rate of occurrences in the population. You have $n=6$, so the rule says $[0, 3/6=0.5]$ is a 95% confidence interval for the binomial $p$ of transmission. In non-technical language: 6 non-events is way too few for any strong conclusions. This data is interesting, suggesting something, but no more. This rule of three is discussed at CV multiple times: Is Rule of Three inappropriate in some cases?, When to use (and not use) the rule of three, Using Rule of Three to obtain confidence interval for a binomial population, and certainly more. A more principled approach is here: Confidence interval around binomial estimate of 0 or 1, and using code from an answer to that question:
binom::binom.confint(0, 6, method=c("wilson", "bayes", "agresti-coull"), type="central")
          method x n       mean       lower     upper
1  agresti-coull 0 6 0.00000000 -0.05244649 0.4427808
2          bayes 0 6 0.07142857  0.00000000 0.2641729
3         wilson 0 6 0.00000000  0.00000000 0.3903343
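As a small addition, a sketch of where the $3/n$ comes from: with zero events in $n$ trials, the exact one-sided 95% upper bound solves $(1-p)^n = 0.05$, i.e. $p = 1 - 0.05^{1/n} \approx -\ln(0.05)/n \approx 3/n$.

n <- 6
c(exact_upper = 1 - 0.05^(1 / n), rule_of_three = 3 / n)
# exact upper bound ~ 0.39 versus 0.5 from the rule of three; the approximation
# is rough for n this small, consistent with the Wilson interval above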
{ "source": [ "https://stats.stackexchange.com/questions/454684", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24515/" ] }
455,202
We've all heard a lot about "flattening the curve". I was wondering if these curves – that look like bells – can be qualified as Gaussian despite the fact that there is a temporal dimension.
No. For example:

Not in the sense of a Gaussian probability distribution: the bell-curve of a normal (Gaussian) distribution is a histogram (a map of probability density against values of a single variable), but the curves you quote are (as you note) a map of the values of one variable (new cases) against a second variable (time). (@Accumulation and @TobyBartels point out that Gaussian curves are mathematical constructs that may be unrelated to probability distributions; given that you are asking this question on the statistics SE, I assumed that addressing the Gaussian distribution was an important part of answering the question.)

The possible values under a normal distribution extend from $-\infty$ to $\infty$, but an epidemic curve cannot have negative values on the y axis, and traveling far enough left or right on the x axis, you will run out of cases altogether, either because the disease does not exist, or because Homo sapiens does not exist.

Normal distributions are continuous, but the phenomena epidemic curves measure are actually discrete, not continuous: they represent new cases during each discrete unit of time. While we can subdivide time into smaller meaningful units (to a degree), we eventually run into the fact that individuals with new infections are count data (discrete).

Normal distributions are symmetric about their mean, but despite the cartoon conveying a useful public health message about the need to flatten the curve, actual epidemic curves are frequently skewed to the right, with long thin tails as shown below.

Normal distributions are unimodal, but actual epidemic curves may feature one or more bumps (i.e. they may be multi-modal; they may even, as in @SextusEmpiricus' answer, be endemic, returning cyclically).

Finally, here is an epidemic curve for COVID-19 in China; you can see that the curve generally diverges from the Gaussian curve (of course there are issues with the reliability of the data, given that many cases were not counted):
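To make the right-skewness point concrete in code (a sketch only, with made-up parameters that are not fitted to any real outbreak), daily new cases from a basic deterministic SIR model already come out right-skewed:

sir_new_cases <- function(beta = 0.25, gamma = 0.1, N = 1e6, days = 300) {
  S <- N - 1; I <- 1; R <- 0
  new_cases <- numeric(days)
  for (t in 1:days) {
    inf <- beta * S * I / N          # new infections on day t
    rec <- gamma * I                 # recoveries on day t
    S <- S - inf; I <- I + inf - rec; R <- R + rec
    new_cases[t] <- inf
  }
  new_cases
}
x <- sir_new_cases()
day <- seq_along(x)
m <- sum(day * x) / sum(x)                                        # mean "case day"
skew <- (sum((day - m)^3 * x) / sum(x)) / (sum((day - m)^2 * x) / sum(x))^1.5
skew   # positive: the rise is faster than the decline, so the curve is right-skewed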
{ "source": [ "https://stats.stackexchange.com/questions/455202", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/124694/" ] }
455,210
Let $\sigma\in(0,1)$, $$\phi(x):=\frac1{\sqrt{2\pi\sigma^2}}e^{-\frac{x^2}{2\sigma^2}}\;\;\;\text{for }x\in\mathbb R$$ and $$\psi(x):=\sum_{k\in\mathbb Z}\phi(x+k)\;\;\;\text{for }x\in\mathbb R.$$ Let $\beta\in[0,1]$, $d\in\mathbb N$, and let $\lambda'$ denote the Lebesgue measure on $[0,1)^d$. Define $$u'(x',y'):=\beta+(1-\beta)\prod_{i=1}^d\psi(y'_i-x'_i)\;\;\;\text{for }x',y'\in[0,1)^d$$ and $$Q'(x',B'):=\int_{B'}\lambda'({\rm d}y')u'(x',y')\;\;\;\text{for }(x',B')\in[0,1)^d\times\mathcal B([0,1)^d).$$ I'm using $Q'$ as the proposal kernel for the Metropolis-Hastings algorithm. Since I've encountered a huge error in my estimates, I presume that something is wrong with my computation of the density $u'$. I'm unsure what the best way to verify my implementation is, since I've never thought about this before. In the following code you find a complete (simplified) implementation (you can run the code online here: https://coliru.stacked-crooked.com/a/435e82ba145c450c ). As an example, I've built a uniformly distributed $x'\in[0,1)^d$ and then created independent samples $y'_0,\ldots,y'_{n-1}\sim Q'(x',\;\cdot\;)$. The only sensible test which came to my mind was $$\frac1n\sum_{i=0}^{n-1}\frac1{u'(x',y'_i)}\xrightarrow{n\to\infty}1\tag1.$$ I've printed the result of this estimate to the command line. Note that the function sampler::density returns $u'(x',y'_i)$ for the $i$ of the current iteration:

#include <algorithm>
#include <cassert>
#include <iostream>
#include <random>
#include <vector>

template<typename T = double>
T const pi = std::acos(-T(1));

template<typename T = double>
auto normal_distribution_density(T x, T mu = 0, T sigma = 1) {
    auto const y = (x - mu) / sigma;
    return 1 / (sigma * std::sqrt(2 * pi<T>)) * std::exp(-y * y / 2);
}

template<typename T = double>
auto wrapped_normal_distribution_density(T x, T mu = 0, T sigma = 1, T epsilon = 0) {
    T s = normal_distribution_density(x, mu, sigma);
    for (int k = 1;; ++k) {
        T a = normal_distribution_density(x + k, mu, sigma),
          b = normal_distribution_density(x - k, mu, sigma);
        s += a + b;
        if (a <= epsilon && b <= epsilon)
            break;
    }
    return s;
}

template<typename RealType = double>
class sampler {
public:
    using real_type = RealType;

    sampler(real_type beta, real_type sigma, std::vector<real_type> const& x)
        : m_beta(beta), m_sigma(sigma), m_x(x) {}

    template<class Generator>
    void begin_iteration(Generator& g) {
        m_large_step = m_uniform_distribution(g) < m_beta;
        m_sample_index = 0;
        m_second_density = 1;
    }

    template<class Generator>
    real_type generate(Generator& g) {
        assert(m_sample_index < m_x.size());
        real_type sample;
        if (!m_large_step) {
            std::normal_distribution<real_type> normal_distribution(
                m_x[m_sample_index], m_sigma);
            auto const normal_sample = normal_distribution(g);
            sample = normal_sample - std::floor(normal_sample);
        } else
            sample = m_uniform_distribution(g);
        m_second_density *= wrapped_normal_distribution_density(
            sample, m_x[m_sample_index], m_sigma);
        ++m_sample_index;
        return sample;
    }

    real_type density() const {
        assert(m_sample_index == m_x.size());
        return m_beta + (1 - m_beta) * m_second_density;
    }

private:
    real_type m_beta, m_sigma;
    std::uniform_real_distribution<real_type> m_uniform_distribution;
    std::size_t m_sample_index;
    std::vector<real_type> const& m_x;
    bool m_large_step;
    real_type m_second_density;
};

int main() {
    std::size_t d = 1;
    std::mt19937 g{ std::random_device{}() };
    std::vector<double> x;
    x.reserve(d);
    { // initialize x ~ U_{[0, 1)^d}
        std::uniform_real_distribution<> u;
        std::generate_n(std::back_inserter(x), d, [&]() { return u(g); });
    }
    double const beta = .3, sigma = .01;
    sampler<> s{ beta, sigma, x };
    std::size_t const n = 1e6;
    double acc{};
    for (std::size_t i = 0; i < n; ++i) {
        s.begin_iteration(g);
        std::vector<double> y;
        y.reserve(d);
        std::generate_n(std::back_inserter(y), d, [&]() { return s.generate(g); });
        acc += 1 / s.density();
    }
    acc /= n;
    std::cout << acc << std::endl;
    return 0;
}

While the error is not as huge as in my Metropolis-Hastings estimates, the computed result is significantly off from $1$. Maybe the error is due to floating-point imprecision. Should I compute something in a different way? And please feel free to tell me if there are other simple tests which I might want to consider.
{ "source": [ "https://stats.stackexchange.com/questions/455210", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/222528/" ] }
455,231
I am trying to fit data to a fourth-degree polynomial. I tried this in multiple programs (R, Origin Pro, SigmaPlot), all of which give me a polynomial of the form $ 40000 -2000x + 40x^2 -0.3x^3 + 0.001x^4 $. This doesn't fit the data at all (the y-intercept should be close to zero). All data points are relatively near each other. However, when the programs graphically show the fitted polynomial, it looks like this: The plotted polynomial clearly fits the data quite well and is (!) distinct from the one given above. For example, the y-intercepts don't match at all. Is there something about regression I don't understand, or why do all of the programs plot a different polynomial than the one they return? Could this be an overflow problem?
{ "source": [ "https://stats.stackexchange.com/questions/455231", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
455,237
I want to discretize a continuous pandas column. For discretization, I'm using the Freedman-Diaconis rule, which computes an optimal number of bins to be given as input to KBinsDiscretizer. The Freedman-Diaconis rule states that $$ \text{bin width}, h=2\frac{\operatorname{IQR}(x)}{n^{1/3}} $$ $$ \text{number of bins}, k = \frac{\operatorname{range}(x)}{h} $$ The column has $32561$ values. After sorting, the first $29849$ elements are $0$, so in turn $\operatorname{IQR}(x) = 0$. Consequently, a division by zero occurs when calculating the number of bins. What can I do here?
{ "source": [ "https://stats.stackexchange.com/questions/455237", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220993/" ] }
459,501
Dr. Raoult, who promotes hydroxychloroquine, has made a really intriguing statement about statistics in the biomedical field: It's counterintuitive, but the smaller the sample size of a clinical test, the more significant its results are. The differences in a sample of 20 people may be more significant than in a sample of 10,000 people. If we need such a sample, there is a risk of being wrong. With 10,000 people, when the differences are small, sometimes they don't exist. Is this a false statement about statistics? If so, is it therefore also false in the biomedical field? On which basis can we refute it properly, by a confidence interval? Dr. Raoult promotes hydroxychloroquine as a cure for Covid-19 on the strength of an article about data from 24 patients. His claims have been repeated a lot, but mainly in the mainstream media, not in the scientific press. In machine learning, the scikit-learn workflow suggests that before choosing any model you need a dataset with at least 50 samples, whether for a simple regression or the most advanced clustering technique, which is why I find this statement really intriguing. EDIT: some of the answers below make the assumption of no result bias. They deal with the concepts of power and effect size. However, it seems there is a bias in Dr. Raoult's data. The most striking is the removal of data for patients who died, on the grounds that they could not provide data for the entire duration of the study. My question remains, however, focused on the impact of using a small sample size. Source of the statement about statistics in a French magazine. Reference to the scientific paper in question.
I agree with many of the other answers here but think the statement is even worse than they make it out to be. The statement is an explicit version of an implicit claim in many shoddy analyses of small datasets. These hint that because they have found a significant result in a small sample, their claimed result must be real and important because it is 'harder' to find a significant effect in a small sample. This belief is simply wrong, because random error in small samples means that any result is less trustworthy, whether the effect size is large or small. Large and significant effects are therefore more likely to be of the incorrect magnitude and more importantly, they can be in the wrong direction . Andrew Gelman refers to these usefully as 'Type S' errors (estimates whose sign is wrong) as opposed to 'Type M' errors (estimates whose magnitude is wrong). Combine this with the file-drawer effect (small, non-significant results go unpublished, while large, significant ones are published) and you are most of the way to the replication crisis and a lot of wasted time, effort and money. Thanks to @Adrian below for digging up a figure from Gelman that illustrates this point well: This may seem to be an extreme example but the point is entirely relevant to the argument made by Raoult.
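A quick R sketch of the Type M / Type S point (illustrative numbers only): take a small, noisy study with a modest true effect and look only at the replications that come out "significant".

set.seed(1)
true_effect <- 0.5; se <- 2                  # small effect, large standard error (low power)
est <- rnorm(1e5, true_effect, se)           # estimates across hypothetical replications
sig <- abs(est / se) > 1.96                  # the "significant" ones
c(power           = mean(sig),
  mean_abs_if_sig = mean(abs(est[sig])),     # Type M: magnitude is exaggerated several-fold
  wrong_sign_rate = mean(est[sig] < 0))      # Type S: a sizeable share have the wrong sign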
{ "source": [ "https://stats.stackexchange.com/questions/459501", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2910/" ] }
459,503
As the title suggests, why is the p-value defined as P(Data | Hypothesis/Model) and not P(Hypothesis | Data)? Shouldn't both be the same? Why is P(Data | Hypothesis) != P(Hypothesis | Data)? Is there any logical reasoning here that I am missing?
{ "source": [ "https://stats.stackexchange.com/questions/459503", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/280537/" ] }
460,698
Suppose I have a chest. When you open the chest, there is a 60% chance of getting a prize and a 40% chance of getting 2 more chests. Let $X$ be the number of prizes you get. What is its variance? Computing $E[X]$ is fairly straightforward: $E[X] = .4 \cdot 2 \cdot E[X] + .6$, which leads to $E[X] = 3$, but I'd also like to know the variance of the number of prizes, not just the average. $Var[X] = E[X^2] - E[X]^2 = E[X^2] - 9$, but I'm having trouble with $E[X^2]$. Anyone have any idea if this is simple? From simulation, I know that the variance is ~30. Thanks
Call the numbers of prizes from the two new chests $X_1$ and $X_2$. With probability $0.4$, our variable is $X_1+X_2$, and with probability $0.6$ it is $1$. So, $$\begin{align}E[X^2]&=0.4\times E[(X_1+X_2)^2]+0.6\times1^2\\&=0.4\times E[X_1^2+X_2^2+2X_1X_2]+0.6\\&=0.4\times(2E[X^2]+2E[X]^2)+0.6\\&=0.8\times E[X^2]+7.8\rightarrow E[X^2]=39\rightarrow\operatorname{var}(X)=30\end{align}$$
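A quick simulation check of this result (the recursion depth cap below is only a safeguard; with a 60% stopping probability it is essentially never reached):

set.seed(1)
open_chest <- function(depth = 0) {
  if (depth > 500) return(0)                          # safeguard against runaway recursion
  if (runif(1) < 0.6) 1 else open_chest(depth + 1) + open_chest(depth + 1)
}
x <- replicate(1e5, open_chest())
c(mean = mean(x), var = var(x))                       # should be close to 3 and 30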
{ "source": [ "https://stats.stackexchange.com/questions/460698", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/281313/" ] }
463,006
I'm a rookie with statistics, and I'm struggling to understand this: it is well known that a confounding factor can cause a spurious association, leading to rejecting a true null hypothesis (i.e. due to the confounding factor Z, I could conclude that there is a causal relationship between X and Y, while one is not there) the question is: can the opposite also be true? I.e. can a confounding factor lead to failing to reject a false null hypothesis? (i.e. somehow 'masking' a possibly existent causal association.) If yes, what would be a convincing example?
Yes.

Rephrasing the opposite of a confounder: It is definitely possible that an unobserved variable yields the impression that there is no relationship, when there is one. Confounding usually refers to a situation where an unobserved variable yields the illusion that there exists a relationship between two variables where there is none: This is a special case of omitted-variable bias, which more generally refers to any situation wherein an unobserved variable biases the observed relationship: It's easy to imagine a scenario where this would have a canceling effect on the estimate instead: (I wrote $\rho=0$ for the illustration, but the unobserved relationship does not have to be linear.) You could call this phenomenon omitted-variable bias, cancellation, or masking. Confounding usually refers to the kind of causal relationship shown in the first figure.
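Here is a small R sketch of the masking case with simulated data (the coefficients are made up): X has a positive effect on Y, but an omitted Z is positively related to X and negatively related to Y, so the marginal X-Y association vanishes.

set.seed(1)
n <- 1e5
z <- rnorm(n)
x <- z + rnorm(n)
y <- 1 * x - 2 * z + rnorm(n)          # true effect of x on y is +1
coef(lm(y ~ x))["x"]                   # omitting z: estimate near 0 (the effect is masked)
coef(lm(y ~ x + z))["x"]               # adjusting for z: estimate near the true +1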
{ "source": [ "https://stats.stackexchange.com/questions/463006", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/164181/" ] }
463,172
I'm a high school math teacher who is a bit stumped. A Biology student came to me with his experiment wanting to know what kind of statistical analysis he can do with his data (yes, he should have decided that BEFORE the experiment, but I wasn't consulted until after). He is trying to determine what effect insulin has on the concentration of glucose in a cell culture. There are six culture grouped into three pairs (one with insulin and one without) each under slightly different conditions. The problem is that he only took one sample from each so the there is no standard deviation (or the standard deviation is 0 since the value varies from itself by 0). Is there any statistical analysis he can perform with this data? What advice should I give him other than to redo the experiment?
Unfortunately, your student has a problem. The idea of any (inferential) statistical analysis is to understand whether a pattern of observations can be simply due to natural variation or chance, or whether there is something systematic there. If the natural variation is large, then the observed difference may be simply due to chance. If the natural variation is small, then it may be indicative of a true underlying effect. With only a single pair of observations, we have no idea of the natural variation in the data we observe. So we are missing half of the information we need. You note that your student has three pairs of observations. Unfortunately, they were collected under different conditions. So the variability we observe between these three pairs may simply be due to the varying conditions, and won't help us for the underlying question about a possible effect of insulin. One straw to grasp at would be to get an idea of the natural variation through other channels. Maybe similar observations under similar conditions have been made before and reported in the literature. If so, we could compare our observations to these published data. (This would still be problematic, because the protocols will almost certainly have been slightly different, but it might be better than nothing.) EDIT: note that my explanation here applies to the case where the condition has a potential impact on the effect of insulin, an interaction . If we can disregard this possibility and expect only main effects (i.e., the condition will have an additive effect on glucose that is independent of the additional effect of insulin), then we can at least formally run an ANOVA as per BruceET's answer . This may be the best the student can do. (And they at least get to practice writing up the limitations of their study, which is also an important skill!) Failing that, I am afraid the only possibility would be to go back to the lab bench and collect more data. In any case, this is a (probably painful, but still) great learning opportunity! I am sure this student will in the future always think about the statistical analysis before planning their study, which is how it should be. Better to learn this in high school rather than only in college. Let me close with a relevant quote attributed to Ronald Fisher : To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.
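For concreteness, here is what the main-effects-only analysis mentioned above could look like in R; the glucose values are entirely made up, and with one observation per cell the insulin effect is tested against a residual with only 2 degrees of freedom, so power is very low:

glucose   <- c(5.1, 6.3, 4.8, 6.0, 5.5, 6.8)          # hypothetical measurements
insulin   <- factor(rep(c("yes", "no"), times = 3))
condition <- factor(rep(1:3, each = 2))
summary(aov(glucose ~ insulin + condition))           # additive model, no interaction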
{ "source": [ "https://stats.stackexchange.com/questions/463172", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/282883/" ] }
463,681
Suppose I sample $n$ times from a distribution $$ x_1, \ldots, x_n \sim p_\theta(x). $$ Is the mean of the samples, $$ \overline{x} = \frac{1}{n}\sum_{i=1}^n x_i, $$ always a valid sample from the target distribution? I.e., is $\overline{x}$ a valid sample from $p_\theta(x)$?
No, $\bar x$ has its own sampling distribution. Take, for example, the variances of $\bar x$ and $x_i$: the former is never larger than the latter (and is strictly smaller whenever $n>1$ and the variance is positive), which means $\bar x$ is not, in general, distributed according to $p_\theta(x)$.
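A tiny simulation makes the point (sketch only, using an exponential as an arbitrary example of $p_\theta$):

set.seed(1)
x    <- rexp(1e5)                          # draws from p_theta
xbar <- replicate(1e5, mean(rexp(10)))     # means of n = 10 draws
c(var_x = var(x), var_xbar = var(xbar))    # roughly 1 versus 1/10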
{ "source": [ "https://stats.stackexchange.com/questions/463681", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146552/" ] }
465,536
After executing the code
RNGkind(kind="Mersenne-Twister")  # the default anyway
set.seed(123)
n = 10^5
x = runif(n)
print(x[22662] == x[97974])
TRUE is output! If I use, e.g., RNGkind(kind="Knuth-TAOCP-2002"), something similar happens: I get "only" 99 995 different values in x. Given the periods of both random generators, the results seem highly unlikely. Am I doing something wrong? I need to generate at least one million random numbers. I am using Windows 8.1 with R version 3.6.2; Platform: x86_64-w64-mingw32/x64 (64-bit) and RStudio 1.2.5033. Additional findings: Having a bag with $n$ different balls, we choose a ball $m$ times and put it back every time. The probability $p_{n, m}$ that all chosen balls are different is equal to ${n\choose m}\, m! / n^m$. The R documentation points to a link where the implementation of Mersenne-Twister for 64-bit machines is available: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt64.html The uniform sampling from the $[0, 1]$ interval is obtained by choosing a random 64-bit integer first, so I computed the above probabilities for the 64-bit case and then (when $p_{2^{64}, 10^5}$ turned out to be so close to 1 that duplicates should essentially never occur) for the 32-bit case: $$ p_{2^{64}, 10^5}\doteq 0.9999999999972... \qquad p_{2^{32}, 10^5} \doteq 0.3121... $$ Then, I tried 1000 random seeds and computed the proportion of cases in which all generated numbers are different: 0.303. So, currently, I assume that for some reason 32-bit integers are actually used.
The documentation of R on random number generation has a few sentences at its end that confirm your expectation of 32-bit integers being used and might explain what you are observing: "Do not rely on randomness of low-order bits from RNGs. Most of the supplied uniform generators return 32-bit integer values that are converted to doubles, so they take at most 2^32 distinct values and long runs will return duplicated values (Wichmann-Hill is the exception, and all give at least 30 varying bits.)" So the implementation in R seems to be different from what is explained on the website of the authors of the Mersenne Twister. Possibly combining this with the birthday paradox, you would expect duplicates with only about 2^16 numbers at a probability of 0.5, and 10^5 > 2^16. Trying the Wichmann-Hill algorithm as suggested in the documentation:
RNGkind(kind="Wichmann-Hill")
set.seed(123)
n = 10^8
x = runif(n)
length(unique(x))  # 1e8
Note that the original Wichmann-Hill random number generator has the property that its next number can be predicted from its previous one, and therefore does not meet the non-predictability requirements of a valid PRNG. See this document by Dutang and Wuertz, 2009 (section 3).
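A rough birthday-problem check, assuming the generator effectively draws uniform 32-bit integers as described above:

n <- 1e5; k <- 2^32
p_all_distinct <- exp(sum(log(1 - (0:(n - 1)) / k)))   # prod(1 - i/k), computed on the log scale
c(p_any_duplicate = 1 - p_all_distinct)                # about 0.69, matching the ~0.30
                                                       # "all distinct" rate in the question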
{ "source": [ "https://stats.stackexchange.com/questions/465536", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81971/" ] }
466,434
STATEMENT Three events E, F , and G cannot occur simultaneously. Further it is known that P(E ∩ F ) = P(F ∩ G) = P(E ∩ G) = 1/3. Can you determine P(E)? I made this diagram: $P(E \cup F \cup G) = P(E) + P(F) + P(G) - P(E \cap F) - P(E \cap G) - P(F \cap G)$ $\implies$ $P(E) = P(E \cup F \cup G) - P(F) - P(G) + P(E \cap F) + P(E \cap G) + P(F \cap G)$ $\implies$ $P(E) = P(E \cup F \cup G) - P(F) - P(G) + \frac 13 + \frac 13 + \frac 13$ $\implies$ $P(E) = P(E \cup F \cup G) - P(F) - P(G) + 1$ Now what to do next? Looks like this diagram matches better with the problem description:
This Venn diagram displays a situation where the chance of mutual intersection is zero: From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not in the mutual overlap of all three disks. That permits us to update the diagram: Applying the same reasoning to $\Pr(F\cap G) = \Pr(E\cap G) = 1/3,$ we obtain a Venn diagram displaying all the information in the question: The Axiom of Total Probability asserts the sum of all the probabilities (including the probability for the complement of $E\cup F\cup G,$ shown at the bottom left) is $1.$ An even more basic probability axiom asserts all probabilities must be non-negative. But since $1/3+1/3+1/3+0=1,$ all the possible probability already appears. The remaining probabilities must be zero, meaning the picture can be completed only like this: Finally, a third axiom (the same one used in the second step of filling in the Venn diagram) asserts the probability of $E$ equals the sum of the probabilities of its four parts, because they are disjoint. Thus, beginning with the central probability and moving counterclockwise around the disk that portrays $E,$ $$\Pr(E) = 0 + 1/3 + 0 + 1/3 = 2/3.$$ One moral worth remembering: Draw Venn diagrams in full generality so they show all possible intersections of the sets, even when you know some of the probabilities are zero. This helps you keep track of all the information systematically. (It's also conceptually more accurate, because sets of probability zero do not have to be empty!)
{ "source": [ "https://stats.stackexchange.com/questions/466434", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/214618/" ] }
467,494
For features that are heavily skewed, transformation is useful to stabilize variance, make the data more normal-distribution-like, and improve the validity of measures of association. I am really having trouble understanding the intuition behind the Box-Cox transform. I mean, how does one choose the transformation (e.g. square root versus log), and how is lambda estimated? Could anyone explain in simple words (and maybe with an example) what the intuition behind the Box-Cox transform is?
The design goals of the family of Box-Cox transformations of non-negative data were these:

1. The formulas should be simple, straightforward, well understood, and easy to calculate.
2. They should not change the middle of the data much, but affect the tails more.
3. The family should be rich enough to induce large changes in the skewness of the data if necessary: this means it should be able to contract or extend one tail of the data while extending or contracting the other, by arbitrary amounts.

Let's consider the implications of each in turn.

1. Simplicity

Linear transformations--those of the form $x\to \alpha x + \beta$ for constants $\alpha$ and $\beta$--merely change the scale and location of data; they cannot change the shape of their distribution. The next simplest formula is to consider power transformations, of the form $x\to x^\lambda$ for (nonzero) constant $\lambda.$

2. Stability

A power transformation enjoys the nice property that rescaling the data results in rescaling their powers. That is, multiplying the data $x$ by some positive scale factor $\alpha$ results in multiplying $x^\lambda$ by $\alpha^\lambda.$ OK, it's not the same scale factor, but it is still just a rescaling. In light of this, let's always standardize any batch of data $(x_1, x_2, \ldots, x_n)$ by rescaling it to place its center (perhaps its median) at $1.$ Specifically, this replaces each $x_i$ by $x_i$ divided by the middle value of all the $x$'s. This won't change the shape of the data distribution--it really amounts to choosing a suitable unit of measurement for expressing the values.

For those who like formulas, let $\mu$ be the median of the batch. We will be studying the transformations $$x \to \frac{(x/\mu)^\lambda - 1}{\lambda} = \frac{\mu^{-\lambda}}{\lambda}\,x^\lambda + \frac{-1}{\lambda} = \alpha\, x^\lambda + \beta$$ for various $\lambda.$ The effects of $\alpha$ and $\beta$ (which depend on $\lambda$ and $\mu$) on $x^\lambda$ do not change the shape of the distribution of the $x_i^\lambda.$ In this sense, the Box-Cox transformations of the standardized data really are just the power transformations.

Because we have made $1$ the central value of the batch, design criterion 2--"stability"--requires that different values of the power $\lambda$ have relatively little effect on values near $1.$ Let's look at this in a little more detail by examining what a power does to numbers near $1.$ According to the Binomial Theorem, if we write $x$ as $x=1+\epsilon$ (for fairly small $\epsilon$), then approximately $$(1 + \epsilon)^\lambda = 1 + \lambda \epsilon + \text{Something}\times \epsilon^2.$$ Ignoring $\epsilon^2$ as being truly tiny, this tells us that taking a power $\lambda$ of a number $x$ near $1$ is a nearly linear function that changes the distance between $x$ and $1$ by a factor $\lambda.$ In light of this, we can match the effects of different possible $\lambda$ by means of a compensating division of the distance by $\lambda.$ That is, we will use $$\operatorname{BC}_\lambda(x) = \frac{x^\lambda - 1^\lambda}{\lambda} = \frac{x^\lambda - 1}{\lambda}.$$ The numerator is the (signed) distance between the power transform of $x$ and the power transform of the middle of the data ($1$); the denominator adjusts for the expansion of $x-1$ by the factor $\lambda$ when taking the power.
$\operatorname{BC}_\lambda$ is the Box-Cox transformation with parameter $\lambda.$ By means of this construction, we guarantee that when $x$ is close to a typical value of its batch of data, $\operatorname{BC}_\lambda(x)$ will approximately be the same value (and close to zero) no matter what $\lambda$ might be (within reason, of course: extreme values of $\lambda$ can do extreme things).

3. Flexibility

We have many possible values of $\lambda$ to choose from. How do they differ? This can be explored by graphing the Box-Cox transformations for various $\lambda.$ Here is a set of graphs for $\lambda \in \{-1,-1/2, 0, 1/2, 1, 2\}.$ (For the meaning of $\lambda=0,$ see Natural Log Approximation elsewhere on this site.)

The solid black line graphs the Box-Cox transformation for $\lambda=1,$ which is just $x\to x-1.$ It merely shifts the center of the batch to $0$ (as do all the Box-Cox transformations). The upward curving pink graph is for $\lambda=2.$ The downward curving graphs show, in order of increasing curvature, the smaller values of $\lambda$ down to $-1.$

The differing amounts and directions of curvature provide the desired flexibility to change the shape of a batch of data. For instance, the upward curving graph for $\lambda=2$ exemplifies the effect of all Box-Cox transformations with $\lambda$ exceeding $1:$ values of $x$ above $1$ (that is, greater than the middle of the batch, and therefore out in its upper tail) are pulled further and further away from the new middle (at $0$). Values of $x$ below $1$ (less than the middle of the batch, and therefore out in its lower tail) are pushed closer to the new middle. This "skews" the data to the right, or high values (rather strongly, even for $\lambda=2$). The downward curving graphs, for $\lambda \lt 1,$ have the opposite effect: they push the higher values in the batch towards the new middle and pull the lower values away from the new middle. This skews the data to the left (or lower values).

The coincidence of all the graphs near the point $(1,0)$ is a result of the previous standardizations: it constitutes visual verification that choice of $\lambda$ makes little difference for values near the middle of the batch.

Finally, let's look at what different Box-Cox transformations do to a small batch of data. Transformed values are indicated by the horizontal positions. (The original data look just like the black dots, shown at $\lambda=1,$ but are located $+1$ units to the right.) The colors correspond to the ones used in the first figure. The underlying gray lines show what happens to the transformed values when $\lambda$ is smoothly varied from $-1$ to $+2.$ It's another way of appreciating the effects of these transformations in the tails of the data. (It also shows why the value of $\lambda=0$ makes sense: it corresponds to taking values of $\lambda$ arbitrarily close to $0.$)
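To make the "stable middle, flexible tails" point concrete, here is a small numeric sketch (not from the original answer; the sample values are arbitrary and are already expressed relative to the batch median, so that 1 is the middle):

    bc <- function(x, lambda) if (lambda == 0) log(x) else (x^lambda - 1) / lambda
    x <- c(0.2, 0.9, 1, 1.1, 5)   # standardized values: 1 is the middle of the batch
    sapply(c(-1, 0, 0.5, 2), function(l) round(bc(x, l), 3))
    # Columns correspond to lambda = -1, 0, 0.5, 2. The rows for 0.9, 1, 1.1 are
    # nearly identical across columns, while the rows for 0.2 and 5 (the tails)
    # are stretched or compressed by very different amounts.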
{ "source": [ "https://stats.stackexchange.com/questions/467494", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/260797/" ] }
467,633
In one-hot encoding there is one bit reserved for each word we desire to encode. How is multi-hot encoding different from one-hot? In what scenarios would it make sense to use it over one-hot?
Imagine you have five different classes, e.g. ['cat', 'dog', 'fish', 'bird', 'ant']. With one-hot encoding you would represent the presence of 'dog' by a five-dimensional binary vector like [0,1,0,0,0]. With multi-hot encoding you would first label-encode your classes, so that a single number represents the presence of a class (e.g. 1 for 'dog'), and then convert the numerical labels to binary vectors of size $\lceil\log_2 5\rceil = 3$. Examples:

'cat' = [0,0,0]
'dog' = [0,0,1]
'fish' = [0,1,0]
'bird' = [0,1,1]
'ant' = [1,0,0]

This representation is basically a middle way between label encoding, where you introduce false class relationships (0 < 1 < 2 < ... < 4, thus 'cat' < 'dog' < ... < 'ant') but only need a single value to represent class presence, and one-hot encoding, where you need a vector of size $n$ (which can be huge!) to represent all classes but have no false relationships. Note: multi-hot encoding introduces false additive relationships, e.g. [0,0,1] + [0,1,0] = [0,1,1], that is, 'dog' + 'fish' = 'bird'. That is the price you pay for the reduced representation.
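A minimal R sketch of the two encodings described above (my own illustration; the class names are just the example ones, and intToBits is used purely for convenience):

    classes <- c("cat", "dog", "fish", "bird", "ant")
    one_hot <- function(cl) as.integer(classes == cl)
    one_hot("dog")                                   # 0 1 0 0 0
    n_bits <- ceiling(log2(length(classes)))         # 3 bits suffice for 5 classes
    to_bits <- function(k) as.integer(intToBits(k))[n_bits:1]
    to_bits(match("dog", classes) - 1)               # 0 0 1
    to_bits(match("ant", classes) - 1)               # 1 0 0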
{ "source": [ "https://stats.stackexchange.com/questions/467633", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27838/" ] }
467,638
I am trying to calculate the approximate mean age of people that participated in a survey. I have 4 age groups: $[0,15);\,[15,35);\,[35,55)$ and $[55,75).$ For each age group I have the number of people that participated. How should I calculate the approximate mean age of the people? Thank you in advance, Mat
{ "source": [ "https://stats.stackexchange.com/questions/467638", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/285843/" ] }
467,704
Below is a daily chart of newly-detected COVID infections in Krasnodar Krai , a region of Russia, from April 29 to May 19. The population of the region is 5.5 million people. I read about it and wondered - does this (relatively smooth dynamics of new cases) look okay from the statistical standpoint? Or does this look suspicious? Can a curve be so level during an epidemic without any tinkering with the data by authorities of the region? In my home region, Sverdlovsk Oblast, for example, the chart is much more chaotic . I'm an amateur in statistics, so maybe I'm wrong and this chart is nothing out of the ordinary. According to a news report from 18 May 2020, a total of 136695 tests for COVID-19 had been made in the region since the start of the epidemic period and up to that day. As of 21 May 2020, a total of 2974 infections have been recorded in the region. P.S. Here's a link I found to a page with better-looking statistics , and covering a longer period, specifically for Krasnodar Krai. On that page, you can hover your cursor over the chart to get specific numbers for the day. (The title uses term "daily elicited" number of cases, and the bar caption "daily confirmed" number of cases):
It is decidedly out of the ordinary. The reason is that counts like these tend to have Poisson distributions. This implies their inherent variance equals the count. For counts near $100,$ that variance of $100$ means the standard deviations are nearly $10.$ Unless there is extreme serial correlation of the results (which is not biologically or medically plausible), this means the majority of individual values ought to deviate randomly from the underlying hypothesized "true" rate by up to $10$ (above and below) and, in an appreciable number of cases (around a third of them all) should deviate by more than that.

This is difficult to test in a truly robust manner, but one way would be to overfit the data, attempting to describe them very accurately, and see how large the residuals tend to be. Here, for instance, are two such fits, a lowess smooth and an overfit Poisson GLM:

The variance of the residuals for this Generalized Linear Model (GLM) fit (on a logit scale) is only $0.07.$ For other models with (visually) close fits the variance tends to be from $0.05$ to $0.10.$ This is too small. How can you know? Bootstrap it. I chose a parametric bootstrap in which the data are replaced by independent Poisson values drawn from distributions whose parameters equal the predicted values. Here is one such bootstrapped dataset:

You can see how much more the individual values fluctuate than before, and by how much. Doing this $2000$ times produced $2001$ variances (in two or three seconds of computation). Here is their histogram:

The vertical red line marks the value of the variance for the data. (In a well-fit model, the mean of this histogram should be close to $1.$ The mean is $0.75,$ a little less than $1,$ giving an indication of the degree of overfitting.) The p-value for this test is the fraction of those $2001$ variances that are equal to or less than the observed variance. Since every bootstrapped variance was larger, the p-value is only $1/2001,$ essentially zero. I repeated this calculation for other models. In the R code below, the models vary according to the number of knots k and degree d of the spline. In every case the p-value remained at $1/2001.$

This confirms the suspicious look of the data. Indeed, if you hadn't stated that these are counts of cases, I would have guessed they were percentages of something. For percentages near $100$ the variation will be very much less than in this Poisson model and the data would not look so suspicious.

This is the code that produced the first and third figures. (A slight variant produced the second, replacing X by X0 at the beginning.)

    y <- c(63, 66, 66, 79, 82, 96, 97, 97, 99, 99, 98, 99, 98, 99, 95, 97, 99, 92, 95, 94, 93)
    X <- data.frame(x=seq_along(y), y=y)

    library(splines)
    k <- 6
    d <- 4
    form <- y ~ bs(x, knots=k, degree=d)
    fit <- glm(form, data=X, family="poisson")
    X$y.hat <- predict(fit, type="response")

    library(ggplot2)
    ggplot(X, aes(x,y)) +
      geom_point() +
      geom_smooth(span=0.4) +
      geom_line(aes(x, y.hat), size=1.25) +
      xlab("Day") + ylab("Count") +
      ggtitle("Data with Smooth (Blue) and GLM Fit (Black)",
              paste(k, "knots of degree", d))

    stat <- function(fit) var(residuals(fit))
    X0 <- X
    set.seed(17)
    sim <- replicate(2e3, {
      X0$y <- rpois(nrow(X0), X0$y.hat)
      stat(glm(form, data=X0, family="poisson"))
    })
    z <- stat(fit)
    p <- mean(c(1, sim <= z))
    hist(c(z, sim), breaks=25, col="#f0f0f0", xlab = "Residual Variance",
         main=paste("Bootstrapped variances; p =", round(p, log10(length(sim)))))
    abline(v = z, col='Red', lwd=2)
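As a quick numeric check of the opening claim (my own addition, not part of the original analysis): Poisson counts with mean near 100 do fluctuate with a standard deviation close to 10. The seed is arbitrary.

    set.seed(1)
    sd(rpois(1e5, lambda = 97))   # close to sqrt(97), about 9.8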
{ "source": [ "https://stats.stackexchange.com/questions/467704", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41220/" ] }
469,799
Why does "the asymptotic nature of logistic regression" make it particularly prone to overfitting in high dimensions ? ( source ): I understand the LogLoss ( cross entropy ) grows quickly as $y$ (true probability) approaches $1-y'$ (predicted probability): but why does that imply that "the asymptotic nature of logistic regression would keep driving the loss towards 0 in high dimensions without regularization" ? In my mind, just because the loss can grow quickly (if we get very close to the wrong and full opposite answer), it doesn't mean that it would thus try to fully interpolate the data. If anything the optimizer would avoid entering the asymptotic part (fast growing part) of the loss as aggressively as it can.
The existing answers aren't wrong, but I think the explanation could be a little more intuitive. There are three key ideas here.

1. Asymptotic Predictions

In logistic regression we use a linear model to predict $\mu$, the log-odds that $y=1$ $$ \mu = \beta X $$ We then use the logistic/inverse logit function to convert this into a probability $$ P(y=1) = \frac{1}{1 + e^{-\mu}} $$ Importantly, this function never actually reaches values of $0$ or $1$. Instead, $y$ gets closer and closer to $0$ as $\mu$ becomes more negative, and closer to $1$ as it becomes more positive.

2. Perfect Separation

Sometimes, you end up with situations where the model wants to predict $y=1$ or $y=0$. This happens when it's possible to draw a straight line through your data so that every $y=1$ is on one side of the line and every $y=0$ is on the other. This is called perfect separation.

Perfect separation in 1D

In 2D

When this happens, the model tries to predict as close to $0$ and $1$ as possible, by predicting values of $\mu$ that are as low and high as possible. To do this, it must set the regression weights, $\beta$, as large as possible. Regularisation is a way of counteracting this: the model isn't allowed to set $\beta$ infinitely large, so $\mu$ can't be infinitely high or low, and the predicted $y$ can't get so close to $0$ or $1$.

3. Perfect Separation is more likely with more dimensions

As a result, regularisation becomes more important when you have many predictors. To illustrate, here's the previously plotted data again, but without the second predictor. We see that it's no longer possible to draw a straight line that perfectly separates $y=0$ from $y=1$.

Code

    # https://stats.stackexchange.com/questions/469799/why-is-logistic-regression-particularly-prone-to-overfitting
    library(tidyverse)
    theme_set(theme_classic(base_size = 20))

    # Asymptotes
    mu = seq(-10, 10, .1)
    p = 1 / (1 + exp(-mu))
    g = ggplot(data.frame(mu, p), aes(mu, p)) +
      geom_path() +
      geom_hline(yintercept=c(0, 1), linetype='dotted') +
      labs(x=expression(mu), y='P(y=1)')
    g
    g + coord_cartesian(xlim=c(-10, -9), ylim=c(0, .001))

    # Perfect separation
    x = c(1, 2, 3, 4, 5, 6)
    y = c(0, 0, 0, 1, 1, 1)
    df = data.frame(x, y)
    ggplot(df, aes(x, y)) +
      geom_hline(yintercept=c(0, 1), linetype='dotted') +
      geom_smooth(method='glm', method.args=list(family=binomial), se=F) +
      geom_point(size=5) +
      geom_vline(xintercept=3.5, color='red', size=2, linetype='dashed')

    ## In 2D
    x1 = c(rnorm(100, -2, 1), rnorm(100, 2, 1))
    x2 = c(rnorm(100, -2, 1), rnorm(100, 2, 1))
    y = ifelse(x1 + x2 > 0, 1, 0)
    df = data.frame(x1, x2, y)
    ggplot(df, aes(x1, x2, color=factor(y))) +
      geom_point() +
      geom_abline(intercept=1, slope=-1, color='red', linetype='dashed') +
      scale_color_manual(values=c('blue', 'black')) +
      coord_equal(xlim=c(-5, 5), ylim=c(-5, 5)) +
      labs(color='y')

    ## Same data, but ignoring x2
    ggplot(df, aes(x1, y)) +
      geom_hline(yintercept=c(0, 1), linetype='dotted') +
      geom_smooth(method='glm', method.args=list(family=binomial), se=T) +
      geom_point()
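To see the "weights blow up" behaviour directly, one can fit an unpenalised logistic regression to the perfectly separated 1D toy data from the code above (my own addition): R warns that fitted probabilities numerically 0 or 1 occurred, and the slope estimate is huge and unstable.

    x <- c(1, 2, 3, 4, 5, 6)
    y <- c(0, 0, 0, 1, 1, 1)
    coef(glm(y ~ x, family = binomial))   # enormous slope; the standard errors explode too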
{ "source": [ "https://stats.stackexchange.com/questions/469799", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27838/" ] }
469,819
I have a problem where I have to compare the effect of X (a univariate random variable) on the distribution of Y (a univariate random variable) between 2 different cases. Y does not follow a normal distribution. I am not sure whether to use mean/median-based statistics (such as ANOVA) or a distance-between-distributions metric (such as KL divergence).
{ "source": [ "https://stats.stackexchange.com/questions/469819", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/287124/" ] }
471,732
What is the cleanest, easiest way to explain to someone the concept of the Kolmogorov-Smirnov test? What does it intuitively mean? It's a concept that I have difficulty articulating - especially when explaining it to someone. Can someone please explain it in terms of a graph and/or using simple examples?
The Kolmogorov-Smirnov test assesses the hypothesis that a random sample (of numerical data) came from a continuous distribution that was completely specified without referring to the data. Here is the graph of the cumulative distribution function (CDF) of such a distribution. A sample can be fully described by its empirical (cumulative) distribution function, or ECDF. It plots the fraction of data less than or equal to the horizontal values. Thus, with a random sample of $n$ values, when we scan from left to right it jumps upwards by $1/n$ each time we cross a data value. The next figure displays the ECDF for a sample of $n=10$ values taken from this distribution. The dot symbols locate the data. The lines are drawn to provide a visual connection among the points similar to the graph of the continuous CDF. The K-S test compares the CDF to the ECDF using the greatest vertical difference between their graphs. The amount (a positive number) is the Kolmogorov-Smirnov test statistic. We may visualize the KS test statistic by locating the data point situated furthest above or below the CDF. Here it is highlighted in red. The test statistic is the vertical distance between the extreme point and the value of the reference CDF. Two limiting curves, located this distance above and below the CDF, are drawn for reference. Thus, the ECDF lies between these curves and just touches at least one of them. To assess the significance of the KS test statistic, we compare it-- as usual --to the KS test statistics that would tend to occur in perfectly random samples from the hypothesized distribution. One way to visualize them is to graph the ECDFs for many such (independent) samples in a way that indicates what their KS statistics are. This forms the "null distribution" of the KS statistic. The ECDF of each of $200$ samples is shown along with a single red marker located where it departs the most from the hypothesized CDF. In this case it is evident that the original sample (in blue) departs less from the CDF than would most random samples. (73% of the random samples depart further from the CDF than does the blue sample. Visually, this means 73% of the red dots fall outside the region delimited by the two red curves.) Thus, we have (on this basis) no evidence to conclude our (blue) sample was not generated by this CDF. That is, the difference is "not statistically significant." More abstractly, we may plot the distribution of the KS statistics in this large set of random samples. This is called the null distribution of the test statistic. Here it is: The vertical blue line locates the KS test statistic for the original sample. 27% of the random KS test statistics were smaller and 73% of the random statistics were greater. Scanning across, it looks like the KS statistic for a dataset (of this size, for this hypothesized CDF) would have to exceed 0.4 or so before we would conclude it is extremely large (and therefore constitutes significant evidence that the hypothesized CDF is incorrect). Although much more can be said--in particular, about why KS test works the same way, and produces the same null distribution, for any continuous CDF--this is enough to understand the test and to use it together with probability plots to assess data distributions. In response to requests, here is the essential R code I used for the calculations and plots. It uses the standard Normal distribution ( pnorm ) for the reference. 
The commented-out line established that my calculations agree with those of the built-in ks.test function. I had to modify its code in order to extract the specific data point contributing to the KS statistic.

    ecdf.ks <- function(x, f=pnorm, col2="#00000010", accent="#d02020", cex=0.6,
                        limits=FALSE, ...) {
      obj <- ecdf(x)
      x <- sort(x)
      n <- length(x)
      y <- f(x) - (0:(n - 1))/n
      p <- pmax(y, 1/n - y)
      dp <- max(p)
      i <- which(p >= dp)[1]
      q <- ifelse(f(x[i]) > (i-1)/n, (i-1)/n, i/n)

      # if (dp != ks.test(x, f)$statistic) stop("Incorrect.")

      plot(obj, col=col2, cex=cex, ...)
      points(x[i], q, col=accent, pch=19, cex=cex)
      if (limits) {
        curve(pmin(1, f(x)+dp), add=TRUE, col=accent)
        curve(pmax(0, f(x)-dp), add=TRUE, col=accent)
      }
      c(i, dp)
    }
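For readers who only need the test itself (this is not part of the original answer): the built-in ks.test reports the statistic D together with an analytically computed p-value. The sample below is arbitrary, not the one used for the figures.

    set.seed(17)
    x <- rnorm(10)
    ks.test(x, "pnorm")   # one-sample KS test against the standard normal CDF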
{ "source": [ "https://stats.stackexchange.com/questions/471732", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/260797/" ] }
472,537
I'm confused about how to reconcile the fact that the probability of independent events has nothing to do with prior history, while sequences of events do (seemingly) take prior history into account. This question asks something similar: Probability of independent events given the history. However, having read that, I found I had a very specific confusion about the seeming contradiction between two formulas for probabilities that seem equal to me, but which produce different results depending on whether we think in terms of sequences or of independent events:

(A) P(HHHHH) = 0.03125
(B) P(H | HHHH) = 0.5

Can anyone explain how the left-hand sides of the two equations, P(HHHHH) and P(H | HHHH), are different? And does anything change if we shift from a frequentist to a Bayesian perspective?
P(HHHHH) is the probability of having five heads in a row. But, P(H|HHHH) means having heads if the last four tosses were heads. In the former, you're at the beginning of the experiment and in the latter one you have already completed four tosses and know the results. Think about the following rewordings: P(HHHHH): If you were to start the experiment all over again, what would be the probability of having five heads? P(H|HHHH): If you were to start the experiment but keep restarting it until you got four heads in a row, and then, given that you have four heads, what would be the probability of having the final one as heads?
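A quick simulation makes the distinction tangible (my own sketch; the seed and sample size are arbitrary):

    set.seed(42)
    tosses <- matrix(sample(c("H", "T"), 5 * 1e6, replace = TRUE), ncol = 5)
    mean(rowSums(tosses == "H") == 5)        # P(HHHHH): about 1/32 = 0.03125
    first4 <- rowSums(tosses[, 1:4] == "H") == 4
    mean(tosses[first4, 5] == "H")           # P(H | HHHH): about 0.5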
{ "source": [ "https://stats.stackexchange.com/questions/472537", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220736/" ] }
472,822
From my own experience, LSTM has a long training time and does not improve performance significantly in many real-world tasks. To make the question more specific, I want to ask when LSTM will work better than other deep NNs (maybe with real-world examples)? I know LSTM captures the sequential relationship in data, but is it really necessary? Most demos on related topics are meaningless. They just focus on toy data, e.g., IMDB reviews, where simple logistic regression will get very good results. I do not see any value in using LSTM, which has a huge computational cost but marginal improvements (if there are any). Even with these toy examples, I did not find any good use cases that LSTM can solve very well but other models cannot.
Maybe. But RNNs aren't. Transformers learn "pseudo-temporal" relationships; they lack the true recurrent gradient that RNNs have, and thus extract fundamentally different features. This paper, for example, shows that standard transformers are difficult to optimize in reinforcement learning settings, especially in memory-intensive environments. They do, however, eventually design a variant surpassing LSTMs.

Where are RNNs still needed? Long memory tasks. Very long memory. IndRNNs have shown the ability to remember for 5000 timesteps, where LSTM barely manages 1000. A transformer is quadratic in time complexity whereas RNNs are linear, meaning good luck processing even a single iteration of 5000 timesteps. If that isn't enough, the recent Legendre Memory Units have demonstrated memory of up to 512,000,000 timesteps; I'm unsure the world's top supercomputer could fit the resultant 1E18 tensor in memory. Aside from reinforcement learning, signal applications are memory-demanding - e.g. speech synthesis, video synthesis, seizure classification. While CNNs have shown much success on these tasks, many utilize RNNs inserted in later layers; CNNs learn spatial features, RNNs temporal/recurrent ones. An impressive 2019 paper's network manages to clone a speaker's voice from only a 5-second sample, and it uses CNNs + LSTMs.

Memory vs. Feature Quality: One doesn't warrant the other; "quality" refers to information utility for a given task. For sentences with 50 words, for example, model A may classify better than model B, but fail dramatically with 100, where B would have no trouble. This exact phenomenon is illustrated in the recent Bistable Recurrent Cell paper, where the cell shows better memory for longer sequences, but is outdone by LSTMs on shorter sequences. An intuition is that LSTMs' four-gated design permits greater control over information routing, and thus richer feature extraction.

Future of LSTMs? My likeliest bet is some form of enhancement - like a Bistable Recurrent Cell, maybe with attention, and recurrent normalization (e.g. LayerNorm or Recurrent BatchNorm). BRC's design is based on control theory, and so are LMUs; such architectures enjoy self-regularization, and there's much room for further innovation. Ultimately, RNNs cannot be "replaced" by non-recurrent architectures, and will thus perform better on some tasks that demand explicitly recurrent features.

Recurrent Transformers: If we can't do away with recurrence, can't we just incorporate it with transformers somehow? Yes: Universal Transformers. Not only is there recurrence, but variable input sequences are supported, just like in RNNs. The authors go so far as to argue that UTs are Turing complete; whether that's true I haven't verified, but even if it is, it doesn't warrant practical ability to fully harness this capability.

Bonus: It helps to visualize RNNs to better understand and debug them; you can see their weights, gradients, and activations in action with See RNN, a package of mine (pretty pics included).

Update 6/29/2020: a new paper redesigns transformers to operate in the time dimension with linear, O(N), complexity: Transformers are RNNs. Mind the title though; from section 3.4: "we consider recurrence with respect to time and not depth". So they are a kind of RNN, but still differ from 'traditional' ones. I've yet to read it, but it seems promising; a nice video explanation here.
{ "source": [ "https://stats.stackexchange.com/questions/472822", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113777/" ] }
473,455
The central limit theorem (CLT) gives some nice properties about converging to a normal distribution. Prior to studying statistics formally, I was under the extremely wrong impression that the CLT said that data approached normality. I now find myself arguing with collaborators about this. I say that $68\%$ of the data need not be within one standard deviation of the mean when we have non-normal distributions. They agree but then say that, by the CLT, since we have many observations (probably 50,000), our data are very close to normal, so we can use the empirical rule and say that $68\%$ of the data are within one standard deviation of the mean. This is, of course, false. The population does not care how many observations are drawn from it; the population is the population, whether we sample from it or not! What would be a good way to explain why the central limit theorem is not about the empirical distribution converging?
As whuber notes , you can always point your collaborators to a binary discrete distribution. But they might consider that "cheating" and retreat to the weaker claim that the proposed statement only applied to continuous distributions. So use the uniform distribution on the unit interval $[0,1]$ . It has a mean of $\mu=0.5$ , a variance of $\frac{1}{12}$ , thus a standard deviation of $\sigma=\frac{1}{\sqrt{12}}\approx 0.289$ . But of course the interval $[\mu-\sigma,\mu+\sigma]\approx[0.211,0.789]$ of length $2\sigma\approx 0.577$ only contains $57.7\%$ of your data (more specifically: as the sample size increases, the proportion approaches $0.577$ ), not $68\%$ , no matter how many data points you sample.
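A two-line numerical check of this (my own addition; the exact proportion is $2\sigma = 2/\sqrt{12} \approx 0.577$):

    set.seed(1)
    u <- runif(1e6)
    mean(abs(u - 0.5) <= 1/sqrt(12))   # about 0.577, well short of 0.68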
{ "source": [ "https://stats.stackexchange.com/questions/473455", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/247274/" ] }
474,192
Han et al . (2015) used a method of iterative pruning to reduce their network to only 10% of its original size with no loss of accuracy by removing weights with very low values, since these changed very little. As someone new to the machine learning area, why wouldn't you do this (unless your network is already very small)? It seems to me that for deep learning your network would be smaller, faster, more energy efficient, etc. at no real cost. Should we all use this method for larger neural networks?
Pruning is indeed remarkably effective and I think it is pretty commonly used on networks which are "deployed" for use after training. The catch about pruning is that you can only increase efficiency, speed, etc. after training is done. You still have to train with the full size network. Most computation time throughout the lifetime of a model's development and deployment is spent during development: training networks, playing with model architectures, tweaking parameters, etc. You might train a network several hundred times before you settle on the final model. Reducing computation of the deployed network is a drop in the bucket compared to this. Among ML researchers, we're mainly trying to improve training techniques for DNN's. We usually aren't concerned with deployment, so pruning isn't used there. There is some research on utilizing pruning techniques to speed up network training, but not much progress has been made. See, for example, my own paper from 2018 which experimented with training on pruned and other structurally sparse NN architectures: https://arxiv.org/abs/1810.00299
{ "source": [ "https://stats.stackexchange.com/questions/474192", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/289680/" ] }
474,220
Let $X_1$ and $X_2$ be iid random variables from a $Bernoulli(p)$ distribution. Verify whether the statistic $X_1+2X_2$ is sufficient for $p$. I calculated and found that $X_1+X_2$ is a sufficient statistic for $p$. Is this enough to rule out the possibility of $X_1+2X_2$ being a sufficient statistic? Is there a better way to show that explicitly? Addendum: I want to check for $X_1+2X_2$; sorry for the mistake. It is now edited.
{ "source": [ "https://stats.stackexchange.com/questions/474220", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/284757/" ] }
474,250
We got the following comment from an associate editor who asked for ex-post power calculations. We used the treatment difference and sample size from our study and reported the power to detect an effect equal to the observed point estimate at a 5% significance level. The AE was not happy and wrote: "It appears that you used your existing data. However, power calculations are something one does (or should do) ex ante, i.e., when the treatment averages are unknown." It continued: "More to the point, the idea of the power calculations is to test for the probability of type I and type II errors. This, by definition, means that the sample mean and standard deviation are likely to be different than those in the population." If I am not mistaken, I cannot jointly test (determine, I think, is what s/he meant) both error probabilities, as one has to fix at least one of the errors. I am really not sure what to make of this comment.
{ "source": [ "https://stats.stackexchange.com/questions/474250", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/289700/" ] }
474,440
Both batch norm and layer norm are common normalization techniques for neural network training. I am wondering why transformers primarily use layer norm.
It seems that it has been the standard to use batchnorm in CV tasks, and layernorm in NLP tasks. The original Attention is All you Need paper tested only NLP tasks, and thus used layernorm. It does seem that even with the rise of transformers in CV applications, layernorm is still the most standardly used, so I'm not completely certain as to the pros and cons of each. But I do have some personal intuitions -- which I'll admit aren't grounded in theory, but which I'll nevertheless try to elaborate on in the following. Recall that in batchnorm, the mean and variance statistics used for normalization are calculated across all elements of all instances in a batch, for each feature independently. By "element" and "instance," I mean "word" and "sentence" respectively for an NLP task, and "pixel" and "image" for a CV task. On the other hand, for layernorm, the statistics are calculated across the feature dimension, for each element and instance independently ( source ). In transformers, it is calculated across all features and all elements, for each instance independently. This illustration from this recent article conveys the difference between batchnorm and layernorm: (in the case of transformers, where the normalization stats are calculated across all features and all elements for each instance independently, in the image that would correspond to the left face of the cube being colored blue.) Now onto the reasons why batchnorm is less suitable for NLP tasks. In NLP tasks, the sentence length often varies -- thus, if using batchnorm, it would be uncertain what would be the appropriate normalization constant (the total number of elements to divide by during normalization) to use. Different batches would have different normalization constants which leads to instability during the course of training. According to the paper that provided the image linked above, "statistics of NLP data across the batch dimension exhibit large fluctuations throughout training. This results in instability, if BN is naively implemented." (The paper is concerned with an improvement upon batchnorm for use in transformers that they call PowerNorm, which improves performance on NLP tasks as compared to either batchnorm or layernorm.) Another intuition is that in the past (before Transformers), RNN architectures were the norm. Within recurrent layers, it is again unclear how to compute the normalization statistics. (Should you consider previous words which passed through a recurrent layer?) Thus it's much more straightforward to normalize each word independently of others in the same sentence. Of course this reason does not apply to transformers, since computing on words in transformers has no time-dependency on previous words, and thus you can normalize across the sentence dimension too (in the picture above that would correspond to the entire left face of the cube being colored blue). It may also be worth checking out instance normalization and group normalization , I'm no expert on either but apparently each has its merits.
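A small sketch of which axes the statistics are taken over, using a random (batch, position, feature) array as a stand-in for real activations (my own illustration; the learned scale/shift parameters and the variance/epsilon step are omitted):

    set.seed(0)
    x <- array(rnorm(4 * 7 * 16), dim = c(4, 7, 16))  # batch = 4, sequence length = 7, features = 16
    bn_means <- apply(x, 3, mean)        # batchnorm: one mean per feature (length 16)
    ln_means <- apply(x, c(1, 2), mean)  # layernorm: one mean per (instance, position) (a 4 x 7 matrix)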
{ "source": [ "https://stats.stackexchange.com/questions/474440", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/188353/" ] }
474,454
I need a function that takes in a vector of probabilities, a value, and an index. The value can be negative, 0, or positive, and can be of any absolute value. The function then adds the value to the probability at the given index inside the vector and rescales the vector to sum to one. The output vector has to be a probability distribution: it has to still sum to one and include only values between 0 and 1. One more important feature that I need: if I add a positive value to a probability, the resulting probability must always be higher; if I add a negative value, it must always be smaller. Examples:

func([0.1, 0.2, 0.3, 0.4], 0.1, 3) --> [0.1, 0.2, 0.3, 0.5] --> something like [0.08, 0.18, 0.28, 0.46], so 0.4 went up and all others went down.
func([0., 0., 0., 1.], 0.1, 3) --> [0., 0., 0., 1.1] --> [0., 0., 0., 1.]; since no value can be higher than 1, it stayed the same.
func([0., 0., 0., 1.], -0.1, 3) --> [0., 0., 0., 0.9] --> something like [0.02, 0.02, 0.02, 0.94]; here I used a negative value, so index 3 went down.

I tried softmax, dividing by the sum, dividing by (1/sum), and combinations of those, but nothing works as I need. Any ideas?
{ "source": [ "https://stats.stackexchange.com/questions/474454", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/185329/" ] }
474,616
In the research area of potential outcomes and individual treatment effect (ITE) estimation, a common assumption called ''strong ignorability'' is often made. Consider a graphical model with the following variables: treatment $T=\{0,1\}$ (e.g. giving medication or not), covariates $X$ (e.g. patient history), and outcome $Y$ (e.g. health of a patient). The corresponding graphical model would look as follows: $Y \leftarrow X \rightarrow T \rightarrow Y$ (where $Y$ is the same here, see image below). Then, strong ignorability is defined as: $(Y_0, Y_1) \perp\!\!\!\perp T \mid X$ where $Y_0 = Y(T=0)$ and $Y_1 = Y(T=1)$.

My question is: if this assumption is made, then this means the outcome is independent of the treatment given $X$. But how can the outcome ever be independent of the treatment? Why do we even bother to solve the ITE problem if we start out with the assumption that the treatment does not really make a difference for the outcome? Isn't the whole idea of ITE estimation to determine the effect of a treatment on the outcome $Y$ by estimating the difference between the two potential outcomes $Y(T=0)$ and $Y(T=1)$, one of which we observe as the factual observation in our observational dataset? What am I missing here, and why is my understanding incorrect? I guess it has something to do with the fact that if we know $X$ (i.e. when $X$ is given), then there is no uncertainty anymore about the treatment $T$, because knowing $X$ makes $T$ deterministic (as we can see from the graphical model above?).

Moreover, I think I do not understand the difference between the following four things:

$Y \perp\!\!\!\perp T \mid X$
$(Y_0, Y_1) \perp\!\!\!\perp T \mid X$
$Y_0 \perp\!\!\!\perp T \mid X$
$Y_1 \perp\!\!\!\perp T \mid X$
I'll try to break it down a bit.. I think most of the confusion when studying potential outcomes (ie $Y_0,Y_1$ ) is to realize that $Y_0,Y_1$ are different than $Y$ without bringing in the covariate $X$ . The key is to realize that every individual $i$ has potential outcomes $(Y_{i1},Y_{i0})$ , but you only observe $Y_{iT}$ in the data. Ignorability says $$(Y_0,Y_1) \perp \!\!\! \perp T|X$$ which says that conditional on $X$ , then the potential outcomes are independent of treatment $T$ . It is not saying that $Y$ is independent of $T$ . As you point out, that makes no sense. In fact, a classic way to re-write $Y$ is as $$Y = Y_1T + Y_0(1-T)$$ which tells us that for every individual, we observe $Y_i$ which is either $Y_{i1}$ or $Y_{i0}$ depending on the value of treatment $T_i$ . The reason for potential outcomes is that we want to know the effect $Y_{i1} - Y_{i0}$ but only observe one of the two objects for everyone. The question is: what would have $Y_{i0}$ been for the individuals $i$ who have $T_i=1$ (and vice versa)? Ignoring the conditional on $X$ part, the ignorability assumption essentially says that treatment $T$ can certainly affect $Y$ by virtue of $Y$ being equal to $Y_1$ or $Y_0$ , but that $T$ is unrelated to the values of $Y_0,Y_1$ themselves. To motivate this, consider a simple example where we have only two types of people: weak people and strong people. Let treatment $T$ be receiving medication, and $Y$ is health of patient (higher $Y$ means healthier). Strong people are far healthier than weak people. Now suppose that receiving medication makes everyone healthier by a fixed amount. First case: suppose that only unhealthy people seek out medication. Then those with $T=1$ will be mostly the weak people, since they are the unhealthy people, and those with $T=0$ will be mostly strong people. But then ignorability fails, since the values of $(Y_1,Y_0)$ are related to treatment status $T$ : in this case, both $Y_1$ and $Y_0$ will be lower for $T=1$ than for $T=0$ since $T=1$ is filled with mostly weak people and we stated that weak people just are less healthy overall. Second case: suppose that we randomly assign medication to our pool of strong and weak people. Here, ignorability holds, since $(Y_1,Y_0)$ are independent of treatment status $T$ : weak and strong people are equally likely to receive treatment, so the values of $Y_1$ and $Y_0$ are on average the same for $T=0$ and $T=1$ . However, since $T$ makes everyone healthier, clearly $Y$ is not independent of $T$ .. it has a fixed effect on health in my example! In other words, ignorability allows that $T$ directly affects whether you receive $Y_1$ or $Y_0$ , but treatment status is not related to the these values. In this case, we can figure out what $Y_0$ would have been for those who get treatment by looking at the effect of those who didn't get treatment! We get a treatment effect by comparing those who get treatment to those who don't, but we need a way to make sure that those who get treatment are not fundamentally different from those who don't get treatment, and that's precisely what the ignorability condition assumes. We can illustrate with two other examples: A classic case where this holds is in randomized control trials (RCTs) where you randomly assign treatment to individuals. 
Then clearly those who get treatment may have a different outcome because treatment affects your outcome (unless treatment really has no effect on outcome), but those who get treatment are randomly selected and so treatment receival is independent of potential outcomes, and so you indeed do have that $(Y_0,Y_1) \perp \!\!\! \perp T$ . Ignorability assumption holds. For an example where this fails, consider treatment $T$ be an indicator for finishing high school or not, and let the outcome $Y$ be income in 10 years, and define $(Y_0,Y_1)$ as before. Then $(Y_0,Y_1)$ is not independent of $T$ since presumably the potential outcomes for those with $T=0$ are fundamentally different from those with $T=1$ . Maybe people who finish high school have more perseverance than those who don't, or are from wealthier families, and these in turn imply that if we could have observed a world where individuals who finished high school had not finished it, their outcomes would still have been different than the observed pool of individuals who did not finish high school. As such, ignorability assumption likely does not hold: treatment is related to potential outcomes, and in this case, we may expect that $Y_0 | T_i = 1 > Y_0 | T_i = 0$ . The conditioning on $X$ part is simply for cases where ignorability holds conditional on some controls. In your example, it may be that treatment is independent of these potential outcomes only after conditioning on patient history. For an example where this may happen, suppose that individuals with higher patient history $X$ are both sicker and more likely to receive treatment $T$ . Then without $X$ , we run into the same problem as described as above: the unrealized $Y_0$ for those who receive treatment may be lower than the realized $Y_0$ for those who did not receive treatment because the former are more likely to be unhealthy individuals, and so comparing those with and without treatment will cause issues since we are not comparing the same people. However, if we control for patient history, we can instead assume that conditional on $X$ , treatment assignment to individuals is again unrelated to their potential outcomes and so we are good to go again. Edit As a final note, based on chat with OP, it may be helpful to relate the potential outcomes framework to the DAG in OP's post (Noah's response covers a similar setting with more formality, so definitely also worth checking that out). In these type of DAGs, we fully model relationships between variables. Forgetting about $X$ for a it, suppose we just have that $T \rightarrow Y$ . What does this mean? Well it means that the only effect of T is through $T = 1$ or $T= 0$ , and through no other channels, so we immediately have that T affects $Y_1T+ Y_0(1-T)$ only through the value of $T$ . You may think "well what if T affects Y through some other channel" but by saying $T \rightarrow Y$ , we are saying there are no other channels. Next, consider your case of $X \rightarrow T \rightarrow Y \leftarrow X$ . Here, we have that T directly affects Y, but X also directly affects T and Y. Why does ignorability fail? Because T can be 1 through the effect of X, which will also affect Y, and so $T = 1$ could affect $Y_0$ and $Y_1$ for the group where $T=1$ , and so T affects $Y_1T + Y_0(1-T)$ both through 1. the direct effect of the value of T, but 2. T now also affects $Y_1$ and $Y_0$ through the fact that $X$ affects $Y$ and $T$ at the same time.
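To make the weak/strong example concrete, here is a small simulation sketch (my own illustration, not part of the answer; all numbers are made up). When the weak (unhealthy) people disproportionately seek out treatment, the naive difference in observed means is badly biased for the true effect of +5; under randomization it recovers it.

    set.seed(1)
    n <- 1e5
    strong <- rbinom(n, 1, 0.5)              # strong vs weak people
    y0 <- 60 + 20 * strong + rnorm(n)        # potential outcome without medication
    y1 <- y0 + 5                             # medication adds a fixed +5 for everyone
    # Case 1: mostly weak (unhealthy) people seek treatment -> ignorability fails
    t1 <- rbinom(n, 1, ifelse(strong == 1, 0.2, 0.8))
    obs1 <- ifelse(t1 == 1, y1, y0)
    mean(obs1[t1 == 1]) - mean(obs1[t1 == 0])    # roughly -7: biased, even the wrong sign
    # Case 2: randomized treatment -> ignorability holds
    t2 <- rbinom(n, 1, 0.5)
    obs2 <- ifelse(t2 == 1, y1, y0)
    mean(obs2[t2 == 1]) - mean(obs2[t2 == 0])    # close to the true effect of +5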
{ "source": [ "https://stats.stackexchange.com/questions/474616", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/210764/" ] }
475,138
What is the cleanest, easiest way to explain someone the concept of Inference? What does it intuitively mean? How would you go to explain it to the layperson, or to a person who has studied a very basic probability and statistics course? something that would contribute to making it also 'intuitively' clear would be greatly appreciated!
Sometimes it's best to explain a concept through a concrete example: Imagine you grab an apple, take a bite from it and it tastes sweet. Will you conclude based on that bite that the entire apple is sweet? If yes, you will have inferred that the entire apple is sweet based on a single bite from it. Inference is the process of using the part to learn about the whole. How the part is selected is important in this process: the part needs to be representative of the whole. In other words, the part should be like a mini-me version of the whole. If it is not, our learning will be flawed and possibly incorrect. Why do we need inference? Because we need to make conclusions and then decisions involving the whole based on partial information about it supplied by the part .
{ "source": [ "https://stats.stackexchange.com/questions/475138", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/260797/" ] }
476,424
In my statistical teaching, I encounter some stubborn ideas/principles relating to statistics that have become popularised, yet seem to me to be misleading, or in some cases utterly without merit. I would like to solicit the views of others on this forum to see what are the worst (commonly adopted) ideas/principles in statistical analysis/inference. I am mostly interested in ideas that are not just novice errors; i.e., ideas that are accepted and practiced by some actual statisticians/data analysts. To allow efficient voting on these, please give only one bad principle per answer, but feel free to give multiple answers.
I'll present one novice error (in this answer) and perhaps one error committed by more seasoned people. Very often, even on this website, I see people lamenting that their data are not normally distributed and so t-tests or linear regression are out of the question. Even stranger, I will see people try to rationalize their choice for linear regression because their covariates are normally distributed . I don't have to tell you that regression assumptions are about the conditional distribution, not the marginal. My absolute favorite way to demonstrate this flaw in thinking is to essentially compute a t-test with linear regression as I do here .
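For instance, here is a minimal R sketch of that demonstration (simulated data rather than the linked example): the marginal distribution of y is a two-component mixture and far from normal, yet the equal-variance t-test and the regression slope give identical p-values, because the conditional distributions are what matter.

set.seed(1)
g <- rep(c(0, 1), each = 50)                # two groups
y <- 5 + 2 * g + rnorm(100)                 # normal only conditionally on g
t.test(y ~ g, var.equal = TRUE)$p.value     # classical two-sample t-test
summary(lm(y ~ g))$coefficients["g", 4]     # same p-value from the regression slope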
{ "source": [ "https://stats.stackexchange.com/questions/476424", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/173082/" ] }
476,762
Suppose the 95% confidence interval for $\ln(x)$ is $[l,u]$ . Is it true that the 95% CI for $x$ is simply $[e^l, e^u]$ ? I have the intuition the answer is yes, because $\ln$ is a continuous function. Is there some theorem that supports/refutes my intuition?
That is a 95% confidence interval for $x$ , but not the 95% confidence interval. For any continuous strictly-monotonic transformation, your method is a legitimate way to get a confidence interval for the transformed value. (For monotonically decreasing functions, you reverse the bounds.) The other excellent answer by tchakravarty shows that the quantiles match up for these transformations, which shows how you can prove this result. Generally speaking, there are an infinite number of possible 95% confidence intervals you could formulate for $x$ , and while this is one of them, it is not generally the shortest possible interval with this level of confidence. When formulating a confidence interval, it is usually best to try to optimise to produce the shortest possible interval with the required level of coverage --- that ensures that you can make the most accurate inference possible at the required confidence level. You can find an explanation of how to do this in a related question here . Taking a nonlinear transformation of an existing interval does not give you the optimum (shortest) confidence interval (unless by an incredible coincidence!). The general method used to obtain the shortest confidence interval is to go back and look at the initial probability statement operating on the pivotal quantity used to formulate the interval. Instead of using "equal tails" in the probability statement, you set the relative tail sizes as a control variable, and then you find the formula for the length of the confidence interval conditional on that variable. Finally, you use calculus methods to determine the value of the control variable that minimises the interval length. Often this method can be programmed for broad classes of problems, allowing you to rapidly compute optimal confidence intervals for an object of interest.
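As a rough sketch of that optimisation (an added illustration for a normal variance, using the chi-squared pivot $(n-1)S^2/\sigma^2$ ; the sample size and 95% level are arbitrary): the interval is $[(n-1)s^2/q_{a+0.95},\ (n-1)s^2/q_a]$ with $q_a$ the $a$ quantile of $\chi^2_{n-1}$ , and we choose the lower-tail mass $a$ to minimise its length.

n   <- 20
df  <- n - 1
len <- function(a) 1/qchisq(a, df) - 1/qchisq(a + 0.95, df)  # length up to the factor (n-1)s^2
opt <- optimize(len, interval = c(1e-6, 0.05 - 1e-6))
opt$minimum                  # optimal lower-tail probability; not 0.025
len(0.025) / opt$objective   # how much longer the usual equal-tailed interval is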
{ "source": [ "https://stats.stackexchange.com/questions/476762", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/291097/" ] }
478,142
From what I understand, hypothesis testing is done to identify whether a finding in the sample is statistically significant. But if I have census data, do we really need hypothesis testing? I was thinking maybe I should draw multiple random samples from the census data and see if there is any random behavior.
It all depends on your goal. If you want to know how many people smoke and how many people die of lung cancer you can just count them, but if you want to know whether smoking increases the risk for lung cancer then you need statistical inference. If you want to know high school students' educational attainments, you can just look at complete data, but if you want to know the effects of high school students' family backgrounds and mental abilities on their eventual educational attainments you need statistical inference. If you want to know workers' earnings, you can just look at census data, but if you want to study the effects of educational attainment on earnings, you need statistical inference (you can find more examples in Morgan & Winship, Counterfactuals and Causal Inference: Methods and Principles for Social Research .) Generally speaking, if you are only looking for summary statistics in order to communicate the largest amount of information as simply as possible, you can just count, sum, divide, plot etc. But if you wish to predict what will happen, or to understand what causes what, then you need statistical inference: assumptions, paradigms, estimation, hypothesis testing, model validation, etc.
{ "source": [ "https://stats.stackexchange.com/questions/478142", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/217524/" ] }
478,657
I was studying linear regression and got stuck on r-squared. I know how to calculate r-squared like a machine, but I want to understand r-squared in human language. For example, what is the meaning of r-squared = 81%? I googled and watched several tutorials and gathered some human intuition about r-squared = 81%. r-squared = 81% means: (1) 81% less variance around the regression line than around the mean line; (2) 81% less error between predicted values and actual values; (3) the actual data are 81% closer to the regression line than to the mean line; (4) 81% better prediction of actual values using the regression line than the mean line. These are all the human-language readings of r-squared = 81% that I got. Please correct me if I am wrong. I also watched a video and found another explanation of r-squared, which is: "r-squared is the percentage of variation in 'Y' that is accounted for by its regression on 'X'" Well, this last explanation is a bit confusing for me. Could anyone help me understand, with a simple example, what this line actually means?
As a matter of fact, this last explanation is the best one: r-squared is the percentage of variation in 'Y' that is accounted for by its regression on 'X' Yes, it is quite abstract. Let's try to understand it. Here is some simulated data. R code: set.seed(1) xx <- runif(100) yy <- 1-xx^2+rnorm(length(xx),0,0.1) plot(xx,yy,pch=19) What we are mainly interested in is the variation in the dependent variable $y$ . In a first step, let's disregard the predictor $x$ . In this very simple "model", the variation in $y$ is the sum of the squared differences between the entries of $y$ and the mean of $y$ , $\overline{y}$ : abline(h=mean(yy),col="red",lwd=2) lines(rbind(xx,xx,NA),rbind(yy,mean(yy),NA),col="gray") This sum of squares turns out to be: sum((yy-mean(yy))^2) [1] 8.14846 Now, we try a slightly more sophisticated model: we regress $y$ on $x$ and check how much variation remains after that. That is, we now calculate the sums of squared differences between the $y$ and the regression line : plot(xx,yy,pch=19) model <- lm(yy~xx) abline(model,col="red",lwd=2) lines(rbind(xx,xx,NA),rbind(yy,predict(model),NA),col="gray") Note how the differences - the gray lines - are much smaller now than before! And here is the sum of squared differences between the $y$ and the regression line: sum(residuals(model)^2) [1] 1.312477 It turns out that this is only about 16% of the sums of squared residuals we had above: sum(residuals(model)^2)/sum((yy-mean(yy))^2) [1] 0.1610705 Thus, our regression line model reduced the unexplained variation in the observed data $y$ by 100%-16% = 84%. And this number is precisely the $R^2$ that R will report to us: summary(model) Call: lm(formula = yy ~ xx) ... snip ... Multiple R-squared: 0.8389, Adjusted R-squared: 0.8373 Now, one question you might have is why we calculate variation as a sum of squares . Wouldn't it be easier to just sum up the absolute lengths of the deviations we plot above? The reason for that lies in the fact that squares are just much easier to handle mathematically, and it turns out that if we work with squares, we can prove all kinds of helpful theorems about $R^2$ and related quantities, namely $F$ tests and ANOVA tables.
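A related sanity check that some find helpful: for a linear model with an intercept, $R^2$ is also the squared correlation between the observed and fitted values.

cor(yy, predict(model))^2   # about 0.8389 (= 1 - 0.1610705), matching Multiple R-squared above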
{ "source": [ "https://stats.stackexchange.com/questions/478657", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/292184/" ] }
479,552
Under a standard Gaussian distribution (mean 0 and variance 1), the kurtosis is $3$ . Compared to a heavy-tailed distribution, is the kurtosis typically larger or smaller?
I. A direct answer to the OP Answer: It depends on what you mean by “heavy tails.” By some definitions of “heavy tails,” the answer is “no,” as pointed out here and elsewhere. Why do we care about heavy tails? Because we care about outliers (substitute the phrase “rare, extreme observation” if you have a problem with the word “outlier.” However, I will use the term “outlier” throughout for brevity.) Outliers are interesting from several points of view: In finance, outlier returns cause much more money to change hands than typical returns (see Taleb‘s discussion of black swans). In hydrology, the outlier flood will cause enormous damage and needs to be planned for. In statistical process control, outliers indicate “out of control” conditions that warrant immediate investigation and rectification. In regression analysis, outliers have enormous effects on the least squares fit. In statistical inference, the degree to which distributions produce outliers has an enormous effect on standard t tests for mean values. Similarly, the degree to which a distribution produces outliers has an enormous effect on the accuracy of the usual estimate of the variance of that distribution. So for various reasons, there is a great interest in outliers in data, and in the degree to which a distribution produces outliers. Notions of heavy-tailedness were therefore developed to characterize outlier-prone processes and data. Unfortunately, the commonly-used definition of “heavy tails” involving exponential bounds and asymptotes is too limited in its characterization of outliers and outlier-prone data generating processes: It requires tails extending to infinity, so it rules out bounded distributions that produce outliers. Further, the standard definition does not even apply to a data set , since all empirical distributions are necessarily bounded. Here is an alternative class of definitions of ”heavy-tailedness,” which I will call “tail-leverage( $m$ )” to avoid confusion with existing definitions of heavy-tailedness, that addresses this concern. Definition: Assume absolute moments up to order $m>2$ exist for random variables $X$ and $Y$ . Let $U = |(X - \mu_X)/\sigma_X|^m$ and let $V =|(Y - \mu_Y)/\sigma_Y|^m$ . If $E(V) > E(U)$ , then $Y$ is said to have greater tail-leverage( $m$ ) than $X$ . The mathematical rationale for the definition is as follows: Suppose $E(V) > E(U)$ , and let $\mu_U = E(U)$ . Draw the pdf (or pmf, in the discrete case, or in the case of an actual data set) of $V$ , which is $p_V(v)$ . Place a fulcrum at $\mu_U$ on the horizontal axis. Because of the well-known fact that the distribution balances at its mean, the distribution $p_V(v)$ “falls to the right” of the fulcrum at $\mu_U$ . Now, what causes it to “fall to the right”? Is it the concentration of mass less than 1, corresponding to the observations of $Y$ that are within a standard deviation of the mean? Is it the shape of the distribution of $Y$ corresponding to observations that are within a standard deviation of the mean? No, these aspects are to the left of the fulcrum, not to the right. It is the extremes of the distribution (or data) of $Y$ , in one or both tails, that produce high positive values of $V$ , which cause the “falling to the right.” To illustrate, consider the following two graphs of discrete distributions. The top distribution has kurtosis = 2.46, "platykurtic," and the bottom has kurtosis = 3.45, "leptokurtic." Notice that kurtosis is my tail leverage measure with $m=4$ . 
Both distributions are scaled to a mean of 0.0 and variance of 1.0. Now, consider the distributions of the data values raised to the fourth power, with the red vertical bar indicating the mean of the top distribution: The top distribution balances at the red bar, which locates the kurtosis of the original, untransformed data (2.46). But the bottom distribution, having larger mean (3.45, the kurtosis of the original, untransformed data), "falls to the right" of the red bar located at 2.46. What causes it to "fall to the right"? Is it greater peakedness? No, because the first distribution is more peaked. Is it greater concentration of mass near the mean? No, because this would make it "fall to the left." As is apparent from the graph, it is the extreme values that makes it "fall to the right." BTW, the term “leverage” should now be clear, given the physical representation involving the point of balance. But it is worth noting that, in the characterization of the distribution “falling to the right,” that the “tail leverage” measures can legitimately be called measures of “tail weight.” I chose not to do that because the "leverage" term is more precise. Much has been made of the fact that kurtosis does not correspond directly to the standard definition of “heavy tails.” Of course it doesn’t. Neither does it correspond to any but one of the infinitely many definitions of “tail leverage” I just gave. If you restrict your attention to the case where $m=4$ , then an answer to the OP’s question is as follows: Greater tail leverage (using $m=4$ in the definition) does indeed imply greater kurtosis (and conversely). They are identical. Incidentally, the “leverage” definition applies equally to data as it does to distributions: When you apply the kurtosis formula to the empirical distribution, it gives you the estimate of kurtosis without all the so-called “bias corrections.” (This estimate has been compared to others and is reasonable, often better in terms of accuracy; see "Comparing Measures of Sample Skewness and Kurtosis," D. N. Joanes and C. A. Gill, Journal of the Royal Statistical Society. Series D (The Statistician) Vol. 47, No. 1 (1998), pp. 183-189.) My stated leverage definition also resolves many of the various comments and answers given in response to the OP: Some beta distributions can be more greatly tail-leveraged (even if “thin-tailed” by other measures) than the normal distribution. This implies a greater outlier potential of such distributions than the normal, as described above regarding leverage and the fulcrum, despite the normal distribution having infinite tails and the beta being bounded. Further, uniforms mixed with classical “heavy-tailed” distributions are still "heavy-tailed," but can have less tail leverage than the normal distribution, provided the mixing probability on the “heavy tailed” distribution is sufficiently low so that the extremes are very uncommon, and assuming finite moments. Tail leverage is simply a measure of the extremes (or outliers). It differs from the classic definition of heavy-tailedness, even though it is arguably a viable competitor. It is not perfect; a notable flaw is that it requires finite moments, so quantile-based versions would be useful as well. Such alternative definitions are needed because the classic definition of “heavy tails” is far too limited to characterize the universe of outlier-prone data-generating processes and their resulting data. II. 
My paper in The American Statistician My purpose in writing the paper “Kurtosis as Peakedness, 1905-2014: R.I.P.” was to help people answer the question, “What does higher (or lower) kurtosis tell me about my distribution (or data)?” I suspected the common interpretations (still seen, by the way), “higher kurtosis implies more peaked, lower kurtosis implies more flat” were wrong, but could not quite put my finger on the reason. And, I even wondered that maybe they had an element of truth, given that Pearson said it, and even more compelling, that R.A. Fisher repeated it in all revisions of his famous book. However, I was not able to connect any math to the statement that higher (lower) kurtosis implied greater peakedness (flatness). All the inequalities went in the wrong direction. Then I hit on the main theorem of my paper. Contrary to what has been stated or implied here and elsewhere, my article was not an “opinion” piece; rather, it was a discussion of three mathematical theorems. Yes, The American Statistician (TAS) does often require mathematical proofs. I would not have been able to publish the paper without them. The following three theorems were proven in my paper, although only the second was listed formally as a “Theorem.” Main Theorem: Let $Z_X = (X - \mu_X)/\sigma_X$ and let $\kappa(X) = E(Z_X^4)$ denote the kurtosis of $X$ . Then for any distribution (discrete, continuous or mixed, which includes actual data via their discrete empirical distribution), $E\{Z_X^4 I(|Z_X| > 1)\}\le\kappa(X)\le E\{Z_X^4 I(|Z_X| > 1)\} +1$ . This is a rather trivial theorem to prove but has major consequences: It states that the shape of the distribution within a standard deviation of the mean (which ordinarily would be where the “peak” is thought to be located) contributes very little to the kurtosis. Instead, the theorem implies that for all data and distributions, kurtosis must lie within $\pm 0.5$ of $E\{Z_X^4 I(|Z_X| > 1)\} + 0.5$ . A very nice visual image of this theorem by user "kjetil b Halvorsen" is given at https://stats.stackexchange.com/a/362745/102879; see my comment that follows as well. The bound is sharpened in the Appendix of my TAS paper: Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on [0,1]. Then the “+1” of the main theorem can be sharpened to “+0.5”. This simply amplifies the point of the main theorem that kurtosis is mostly determined by the tails. More recently, @sextus-empiricus was able to reduce the " $+0.5$ " bound to " $+1/3$ ", see https://math.stackexchange.com/a/3781761 . A third theorem proven in my TAS paper states that large kurtosis is mostly determined by (potential) data that are $b$ standard deviations away from the mean, for arbitrary $b$ . Theorem 3: Consider a sequence of random variables $X_i$ , $ i = 1,2,\dots$ , for which $\kappa(X_i) \rightarrow \infty$ . Then $E\{Z_i^4I(|Z_i| > b)\}/ \kappa(X_i) \rightarrow 1$ , for each $b>0$ . The third theorem states that high kurtosis is mostly determined by the most extreme outliers; i.e., those observations that are $b$ or more standard deviations from the mean. These are mathematical theorems, so there can be no argument with them. Supposed “counterexamples” given in this thread and in other online sources are not counterexamples; after all, a theorem is a theorem, not an opinion. So what of one suggested “counterexample,” where spiking the data with many values at the mean (which thereby increases “peakedness”) causes greater kurtosis? 
Actually, that example just makes the point of my theorems: When spiking the data in this way, the variance is reduced, thus the observations in the tails are more extreme, in terms of number of standard deviations from the mean. And it is observations that are many standard deviations from the mean, according to the theorems in my TAS paper, that cause high kurtosis. It’s not the peakedness. Or to put it another way, the reason that the spike increases kurtosis is not because of the spike itself, it is because the spike causes a reduction in the standard deviation, which makes the tails more standard deviations from the mean (i.e., more extreme), which in turn increases the kurtosis. It simply cannot be stated that higher kurtosis implies greater peakedness, because you can have a distribution that is perfectly flat over an arbitrarily high percentage of the data (pick 99.99% for concreteness) with infinite kurtosis. (Just mix a uniform with a Cauchy suitably; there are some minor but trivial and unimportant technical details regarding how to make the peak absolutely flat.) By the same construction, high kurtosis can be associated with any shape whatsoever for 99.99% of the central distribution - U-shaped, flat, triangular, multi-modal, etc. There is also a suggestion in this thread that the center of the distribution is important, because throwing out the central data of the Cauchy example in my TAS paper makes the data have low kurtosis. But this is also due to outliers and extremes: In throwing out the central portion, one increases the variance so that the extremes are no longer extreme (in terms of $Z$ values), hence the kurtosis is low. Any supposed "counterexample" actually obeys my theorems. Theorems have no counterexamples; otherwise, they would not be theorems. A more interesting exercise than “spiking” or “deleting the middle” is this: Take the distribution of a random variable $X$ (discrete or continuous, so it includes the case of actual data), and replace the mass/density within one standard deviation of the mean arbitrarily, but keep the mean and standard deviation of the resulting distribution the same as that of $X$ . Q: How much change can you make to the kurtosis statistic over all such possible replacements? A: The difference between the maximum and minimum kurtosis values over all such replacements is $\le 0.25$ . The above question and its answer comprise yet another theorem. Anyone want to publish it? I have its proof written down (it’s quite elegant, as well as constructive, identifying the max and min distributions explicitly), but I lack the incentive to submit it as I am now retired. I have also calculated the actual max differences for various distributions of $X$ ; for example, if $X$ is normal, then the difference between the largest and smallest kurtosis over all replacements of the central portion is 0.141. Hardly a large effect of the center on the kurtosis statistic! On the other hand, if you keep the center fixed, but replace the tails, keeping the mean and standard deviation constant, you can make the kurtosis infinitely large. Thus, the effect on kurtosis of manipulating the center while keeping the tails constant, is $\le 0.25$ . On the other hand, the effect on kurtosis of manipulating the tails, while keeping the center constant, is infinite. 
So, while yes, I agree that spiking a distribution at the mean does increase the kurtosis, I do not find this helpful to answer the question, “What does higher kurtosis tell me about my distribution?” There is a difference between “A implies B” and “B implies A.” Just because all bears are mammals does not imply that all mammals are bears. Just because spiking a distribution increases kurtosis does not imply that increasing kurtosis implies a spike; see the uniform/Cauchy example alluded to above in my answer. It is precisely this faulty logic that caused Pearson to make the peakedness/flatness interpretations in the first place. He saw a family of distributions for which the peakedness/flatness interpretations held, and wrongly generalized. In other words, he observed that a bear is a mammal, and then wrongly inferred that a mammal is a bear. Fisher followed suit forever, and here we are. A case in point: People see this picture of "standard symmetric PDFs" (on Wikipedia at https://en.wikipedia.org/wiki/File:Standard_symmetric_pdfs.svg ) and think it generalizes to the “flatness/peakedness” conclusions. Yes, in that family of distributions, the flat distribution has the lower kurtosis and the peaked one has the higher kurtosis. But it is an error to conclude from that picture that high kurtosis implies peaked and low kurtosis implies flat. There are other examples of low kurtosis (less than the normal distribution) distributions that are infinitely peaked, and there are examples of infinite kurtosis distributions that are perfectly flat over an arbitrarily large proportion of the observable data. The bear/mammal conundrum also arises in the Finucan conditions, which state (oversimplified) that if tail probability and peak probability increase (losing some mass in between to maintain the standard deviation), then kurtosis increases. This is all fine and good, but you cannot turn the logic around and say that increasing kurtosis implies increasing tail and peak mass (and reducing what is in between). That is precisely the fatal flaw with the sometimes-given interpretation that kurtosis measures the “movement of mass simultaneously to the tails and peak but away from the shoulders." Again, all mammals are not bears. A good counterexample to that interpretation is given here https://math.stackexchange.com/a/2523606/472987 in “counterexample #1, which shows a family of distributions in which the kurtosis increases to infinity, while the mass inside the center stays constant. (There is also a counterexample #2 that has the mass in the center increasing to 1.0 yet the kurtosis decreases to its minimum, so the often-made assertion that kurtosis measures “concentration of mass in the center” is wrong as well.) Many people think that higher kurtosis implies “more probability in the tails.” This is not true; counterexample #1 shows that you can have higher kurtosis with less tail probability when the tails extend. So what does kurtosis measure? It precisely measures tail leverage (which can be called tail weight as well) as amplified through fourth powers, as I stated above with my definition of tail-leverage( $m$ ). I would just like to reiterate that my TAS article was not an opinion piece. It was instead a discussion of mathematical theorems and their consequences. There is much additional supportive material in the current post that has come to my attention since writing the TAS article, and I hope readers find it to be helpful for understanding kurtosis.
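As a purely numerical illustration of the Main Theorem quoted above (an added sketch, not part of the original discussion), one can verify the bound $E\{Z^4 I(|Z| > 1)\} \le \kappa \le E\{Z^4 I(|Z| > 1)\} + 1$ on simulated data:

set.seed(1)
check_bound <- function(x) {
  z <- (x - mean(x)) / sqrt(mean((x - mean(x))^2))   # standardise with the empirical sd
  kurt <- mean(z^4)                                  # kurtosis of the empirical distribution
  tail <- mean(z^4 * (abs(z) > 1))                   # contribution from beyond one sd
  c(lower = tail, kurtosis = kurt, upper = tail + 1)
}
check_bound(rnorm(1e5))        # kurtosis near 3, sandwiched by the bound
check_bound(rt(1e5, df = 10))  # heavier tails: kurtosis near 4, still sandwiched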
{ "source": [ "https://stats.stackexchange.com/questions/479552", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108150/" ] }
479,569
As far as I understand, in machine learning there are two stages of optimization. Before training the model there is the optimization of the hyperparameters, to find the best configuration of the model before actually training it (please correct me if I am wrong). The second stage is the optimization of the parameters. Is the optimization of the parameters only possible when we have an active learning model, or an online machine learning model? And does the optimization of the parameters adjust the same coefficients as the ones adjusted during the optimization of the hyperparameters?
{ "source": [ "https://stats.stackexchange.com/questions/479569", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/94924/" ] }
481,110
In the estimation of treatment effects, a commonly used method is matching. There are of course several techniques used for matching, but one of the more popular ones is propensity-score matching. However, I sometimes stumble upon contexts where it is said that the use of propensity scores for matching is controversial and that critics have indicated that other procedures might be preferable. So I was just wondering whether anyone is familiar with this criticism and could perhaps explain it or provide references. In short, the question I am asking is: why is it problematic to use propensity scores for matching?
It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general, can be a problematic method because it discards units, can change the target estimand, and is nonsmooth, making inference challenging. Using propensity scores to match adds additional problems. The most famous critique of propensity score matching comes from King and Nielsen (2019). They have three primary arguments: 1) propensity score matching seeks to imitate a randomized experiment instead of a block randomized experiment, the latter of which yields far better precision and control against confounding, 2) propensity score matching induces the "propensity score paradox", where further trimming of the units increases imbalance after a point (not shared by some other matching methods), and 3) effect estimation is more sensitive to model specification after using propensity score matching than other matching methods. I'll discuss these arguments briefly. Argument (1) is undeniable, but it's possible to improve PS matching by first exact matching on some variables or coarsened versions of them and doing PS matching within strata of the variables or by using the PS just to create a caliper and using a different form of matching (e.g., Mahalanobis distance matching [MDM]) to actually pair units. Though these should be standard methods, researchers typically just apply PS matching without these other beneficial steps. This increases reliance on correct specification of the propensity score model to control confounding since balance is achieved only on average but not exactly or necessarily in various combinations of variables. Argument (2) is only somewhat tenable. It's true that the PS paradox can occur when the caliper is successively narrowed, excluding more units, but researchers can easily assess whether this is happening with their data and adjust accordingly. If imbalance increases after tightening a caliper, then the caliper can just be relaxed again. In addition, Ripollone et al. (2018) found that while the PS paradox does occur, it doesn't always occur in the typically recommended caliper widths that are most often used by researchers, indicating that the PS paradox is not as problematic for the actual use of PS matching as the paradox would otherwise suggest. Argument (3) is also only somewhat tenable. King and Nielsen demonstrated that if, after PS matching, you were to use many different models to estimate the treatment effect, the range of possible effect estimates would be much larger than if you were to use a different form of matching (in particular, MDM). The implication is that PS matching doesn't protect against model dependence, which is often touted as its primary benefit. The effect estimate still depends on the outcome model used. The problem with this argument is that researchers typically don't try hundreds of different outcome models after matching; the two most common are no model (i.e., a t-test) or a model involving only main effects for the covariates used in matching. Any other model would be viewed as suspicious, so norms against unusual models already protect against model dependence. 
I attempted to replicate King and Nielsen's findings by recreating their data scenario to settle an argument with a colleague (unrelated to the points above; it was about whether it matters whether the covariates included were confounders or mediators). You can see that replication attempt here . Using the same data-generating process, I was able to replicate some of their findings but not all of them. (In the demonstration you can ignore the graphs on the right.) Other critiques of PS matching are more about their statistical performance. Abadie and Imbens (2016) demonstrate that PS matching is not very precise. De los Angeles Resa and Zubizarreta (2016) find in simulations that PS matching can vastly underperform compared to cardinality matching, which doesn't involve a propensity score. This is because PS matching relies on the theoretical properties of the PS to balance the covariates while cardinality matching uses constraints to require balance, thereby ensuring balance is met in the sample. In almost all scenarios considered, PS matching did worse than cardinality matching. That said, as with many simulation studies, the paper likely wouldn't have been published if PS matching did better, so there may be a selection effect here. Still, it's hard to deny that PS matching is suboptimal. What should you do? It depends. Matching typically involves a tradeoff among balance, generalizability, and sample size, which correspond to internal validity, external validity, and precision. PS matching optimizes none of them, but it can be modified to sacrifice some to boost another (e.g., using a caliper decreases sample size and hampers generalizability [see my post here for details on that], but often improves balance). If generalizability is less important to you, which is implicitly the case if you were to be using a caliper, then cardinality matching is a good way of maintaining balance and precision. Even better would be overlap weighting (Li et al., 2018), which guarantees exact mean balance and the most precise PS-weighted estimate possible, but uses weighting rather than matching and so is more dependent on correct model specification. In many cases, though, PS matching does just fine, and you can assess whether it is working well in your dataset before you commit to it anyway. If it's not leaving you with good balance (measured broadly) or requires too tight of a caliper to do so, you might consider a different method. Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293 de los Angeles Resa, M., & Zubizarreta, J. R. (2016). Evaluation of subset matching methods and forms of covariate balance. Statistics in Medicine, 35(27), 4961–4979. https://doi.org/10.1002/sim.7036 King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 1–20. https://doi.org/10.1017/pan.2019.11 Li, F., Morgan, K. L., & Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. https://doi.org/10.1080/01621459.2016.1260466 Ripollone, J. E., Huybrechts, K. F., Rothman, K. J., Ferguson, R. E., & Franklin, J. M. (2018). Implications of the Propensity Score Matching Paradox in Pharmacoepidemiology. American Journal of Epidemiology, 187(9), 1951–1961. https://doi.org/10.1093/aje/kwy078
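As a practical footnote (an added sketch assuming the MatchIt package and its bundled lalonde data are available; argument names may differ slightly across MatchIt versions), checking whether PS matching "is working well in your dataset" might look roughly like this:

library(MatchIt)
data("lalonde", package = "MatchIt")
# Nearest-neighbour matching on a logistic-regression propensity score, with a caliper
m <- matchit(treat ~ age + educ + re74 + re75, data = lalonde,
             method = "nearest", distance = "glm", caliper = 0.2)
summary(m)           # balance statistics before/after matching; inspect before committing
md <- match.data(m)  # matched sample to carry into the outcome analysis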
{ "source": [ "https://stats.stackexchange.com/questions/481110", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/16367/" ] }
481,391
The canonical probabilistic interpretation of linear regression is that $y$ is equal to $\theta^Tx$ , plus a Gaussian noise random variable $\epsilon$ . However, in standard logistic regression, we don't consider noise (e.g. random bit flips with probability $p$ ) of the label $y$ . Why is that?
Short answer: we do, just implicitly. A possibly more enlightening way of looking at things is the following. In Ordinary Least Squares, we can consider that we do not model the errors or noise as $N(0,\sigma^2)$ distributed, but we model the observations as $N(x\beta,\sigma^2)$ distributed. (Of course, this is precisely the same thing, just looking at it in two different ways.) Now the analogous statement for logistic regression becomes clear: here, we model the observations as Bernoulli distributed with parameter $p(x)=\frac{1}{1+e^{-x\beta}}$ . We can flip this last way of thinking around if we want: we can indeed say that we are modeling the errors in logistic regression. Namely, we are modeling them as "the difference between a Bernoulli distributed variable with parameter $p(x)$ and $p(x)$ itself". This is just very unwieldy, and this distribution does not have a name, plus the error here depends on our independent variables $x$ (in contrast to the homoskedasticity assumption in OLS, where the error is independent of $x$ ), so this way of looking at things is just not used as often.
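A tiny simulated sketch of this way of looking at it (the coefficient values are arbitrary): the observations are drawn directly as Bernoulli with parameter $p(x)$ , and the implicit "error" $y - p(x)$ has variance $p(x)(1-p(x))$ that depends on $x$ .

set.seed(1)
n <- 1e4
x <- rnorm(n)
p <- plogis(-1 + 2 * x)             # p(x) = 1 / (1 + exp(-(beta0 + beta1 * x)))
y <- rbinom(n, 1, p)                # observations modelled as Bernoulli(p(x))
fit <- glm(y ~ x, family = binomial)
coef(fit)                           # roughly (-1, 2)
e <- y - fitted(fit)                # the implicit "errors"
var(e[x < 0]); var(e[x > 0])        # their variance depends on x (heteroskedastic)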
{ "source": [ "https://stats.stackexchange.com/questions/481391", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113240/" ] }
481,543
I have to randomly generate 1000 points over a unit disk such that they are uniformly distributed on this disk. Now, for that, I select a radius $r$ and angular orientation $\alpha$ such that the radius $r$ is a uniformly distributed variate with $r \in [0,1]$ while $\alpha$ is a uniformly distributed variate with $\alpha \in [0, 2\pi]$ , using the following code r <- runif(1000, min=0, max=1) alpha <- runif(1000, min=0, max=2*pi) x <- r*cos(alpha) y <- r*sin(alpha) plot(x,y, pch=19, col=rgb(0,0,0,0.05), asp=1) Then I look at my sample space and it looks like this: This obviously doesn't look like a sample with a uniform distribution over the disk. Hence, I guessed that the problem might be occurring as a result of a lack of independence between the variables $r$ and $\alpha$ , owing to how they've been linked computationally. To take care of that I wrote new code. rm(list=ls()) r <- runif(32, min=0, max=1) df_res <- data.frame(matrix(c(-Inf, Inf), byrow = T, nrow = 1)) for (i in 1:32) { for (j in 1:32) { alpha <- runif(32, min=0, max=2*pi) r <- runif(32, min=0, max=1) df <- data.frame(matrix(c(r[i],alpha[j]), byrow = T, nrow = 1)) df_res <- rbind(df_res,df) } } df_res <- subset(df_res, df_res$X1 != -Inf) x <- df_res$X1*cos(df_res$X2) y <- df_res$X1*sin(df_res$X2) plot(x,y, pch=19, col=rgb(0,0,0,0.05), asp=1) And yet again the sample looks non-uniformly distributed over the disk. I'm starting to suspect that there is a deeper mathematical problem going on in the vicinity. Could someone help me write code that would create a sample uniformly distributed over the disk, or explain the mathematical fallacy, if any, in my reasoning?
The problem is due to the fact that the radius is not uniformly distributed. Namely, if $(X,Y)$ is uniformly distributed over $$\left\{ (x,y);\ x^2+y^2\le 1\right\}$$ then the (polar coordinates) change of variables $$R=(X^2+Y^2)^{1/2}\qquad A=\text{sign}(Y)\arccos(X/R)$$ has the density $$\frac{1}{\pi} \mathbb{I}_{(0,1)}(r)\left|\frac{\text{d}(X,Y)}{\text{d}(R,A)}(r,\alpha)\right|\mathbb{I}_{(0,2\pi)}(\alpha)$$ Using $x = r \cos \alpha$ and $y = r \sin \alpha$ leads to $$\left|\frac{\text{d}(X,Y)}{\text{d}(R,A)}(r,\alpha)\right|=r(\sin^2\alpha+\cos^2\alpha)=r$$ Therefore, the angle $A$ is distributed uniformly over $(0,2\pi)$ but the radius $R$ has density $f(r)=2r\mathbb{I}_{(0,1)}(r)$ and cdf $F(r)=r^2$ over $(0,1)$ . As one can check by running r <- sqrt(runif(1000, min=0, max=1) ) alpha <- runif(1000, min=0, max=2*pi) x <- r*cos(alpha) y <- r*sin(alpha) plot(x,y, pch=19, col=rgb(0,0,0,0.05), asp=1) where the radius is simulated by the inverse cdf representation, which makes it the square root of a Uniform variate, the random repartition of the 10³ simulated points is compatible with a uniform:
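An alternative that avoids the change of variables altogether (an added rejection-sampling sketch, not part of the derivation above): draw points uniformly on the enclosing square and keep those falling inside the disk; about $\pi/4 \approx 79\%$ of the draws are accepted.

set.seed(1)
n <- 1000
pts <- matrix(nrow = 0, ncol = 2)
while (nrow(pts) < n) {
  cand <- matrix(runif(2 * n, min = -1, max = 1), ncol = 2)  # uniform on the square [-1,1]^2
  keep <- rowSums(cand^2) <= 1                               # keep points inside the unit disk
  pts  <- rbind(pts, cand[keep, , drop = FALSE])
}
pts <- pts[1:n, ]
plot(pts, pch = 19, col = rgb(0, 0, 0, 0.05), asp = 1)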
{ "source": [ "https://stats.stackexchange.com/questions/481543", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/187800/" ] }
481,553
I'm looking at feature importance. I have a set of point estimates for features I would like to rank from two different models. I also have their confidence intervals. What is the best way to rank these point estimates while also accounting for the confidence intervals? I don't think it makes sense to rank point estimates by averaging their magnitude without accounting for the confidence intervals. EDIT: Following something similar to this (in the feature importance section where they compare different model feature importance outputs): https://www.r-bloggers.com/iml-and-h2o-machine-learning-model-interpretability-and-feature-explanation/
{ "source": [ "https://stats.stackexchange.com/questions/481553", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/293217/" ] }
481,579
I want to do a simulation study in R and I have already some empirical data, that gives me a hint about the variance parameters to set. But what should I use for the error variance? Here's an example of what I mean: > a <- aov(terms(yield ~ block + N * P + K, keep.order=TRUE), npk) > anova(a) Analysis of Variance Table Response: yield Df Sum Sq Mean Sq F value Pr(>F) block 5 343.29 68.659 4.3911 0.012954 * N 1 189.28 189.282 12.1055 0.003684 ** P 1 8.40 8.402 0.5373 0.475637 N:P 1 21.28 21.282 1.3611 0.262841 K 1 95.20 95.202 6.0886 0.027114 * Residuals 14 218.90 15.636 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > var(residuals(a)) [1] 9.517536 So would I use 15.6 or 9.5 as my empirical error variance?
{ "source": [ "https://stats.stackexchange.com/questions/481579", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/293153/" ] }
481,750
I am learning about the normal distribution and was watching this video . At 6:28, the question posed is: what is the probability of an ice cream weighing exactly 120 grams (using the normal distribution)? She states that the answer is zero, as the probability of any exact value is zero in a normal distribution. She then states that there are infinitely many weights from 119.9 to 120.1, and that the probability of any specific weight is 1 over infinity, which is zero. I am a bit confused about this. Why is the probability one over infinity for a specific value, like 120? She then states that an ice cream could weigh 120 grams or 120.000001 grams; how is that related to the probability of a specific point being zero?
The video suggests that $\mu=112$ g and $\sigma=9$ g in this particular normal distribution. If that is the case, we can find the probability that the weight is in a given interval, described in the video as the area under the graph for that interval. For example, the probability it is between $119.5$ g and $120.5$ g is $$\Phi\left(\tfrac{120.5-112}{9}\right) - \Phi\left(\tfrac{119.5-112}{9}\right) = \Phi\left(\tfrac{17}{18}\right) - \Phi\left(\tfrac{15}{18}\right)\approx 0.82753- 0.79767=0.02986$$ which the video describes as about $0.03$. Similarly we can look at other intervals around $120$ g:
Lower     Upper     Probability
119       121       0.05969
119.5     120.5     0.02986
119.9     120.1     0.00592
119.99    120.01    0.00059
119.999   120.001   0.00006
As we cut the width of the interval by a factor of $10$ each time, the probability of the weight being in that narrower interval also roughly falls by a factor of $10$. So as the interval width falls towards zero, the probability of being in that interval also falls towards zero. In that sense the probability of being exactly $120$ must be smaller than any positive number and so must be $0$.
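These interval probabilities are easy to reproduce numerically (a quick sketch using the same $\mu=112$ and $\sigma=9$ assumed above):
mu <- 112; sigma <- 9
half_widths <- c(1, 0.5, 0.1, 0.01, 0.001)
probs <- pnorm(120 + half_widths, mu, sigma) - pnorm(120 - half_widths, mu, sigma)
round(probs, 5)   # shrinks roughly by a factor of 10 as the interval shrinks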
{ "source": [ "https://stats.stackexchange.com/questions/481750", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/277377/" ] }
483,927
Probabilities of a random variable's observations are in the range $[0,1]$, whereas log probabilities transform them to the log scale. What then is the corresponding range of log probabilities, i.e. what does a probability of 0 become, and is it the minimum of the range, and what does a probability of 1 become, and is this the maximum of the log probability range? What is the intuition for this being of any practical use compared to $[0,1]$? I know that log probabilities allow for stable numerical computations such as summation, but besides arithmetic, how does this transformation make applications any better compared to the case where raw probabilities are used instead? A comparative example for a continuous random variable before and after logging would be good.
The log of $1$ is just $0$ and the limit as $x$ approaches $0$ (from the positive side) of $\log x$ is $-\infty$ . So the range of values for log probabilities is $(-\infty, 0]$ . The real advantage is in the arithmetic. Log probabilities are not as easy to understand as probabilities (for most people), but every time you multiply together two probabilities (other than $1 \times 1 = 1$ ), you will end up with a value closer to $0$ . Dealing with numbers very close to $0$ can become unstable with finite precision approximations, so working with logs makes things much more stable and in some cases quicker and easier. Why do you need any more justification than that?
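The numerical-stability point is easy to see in practice (a small sketch with made-up probabilities):
p <- rep(1e-3, 400)   # 400 independent events, each with probability 0.001
prod(p)               # underflows to 0 in double precision
sum(log(p))           # about -2763.1: finite, and easy to compare, sum, or maximize further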
{ "source": [ "https://stats.stackexchange.com/questions/483927", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/250716/" ] }
483,988
I am trying to choose a classifier that will be trained on a huge dataset. However, because of the nature of the problem, I do not expect the model to perform very accurately even with a large dataset to train on. In fact, around 75% accuracy will be more than enough to call it a success. Of course, I am thinking of using high-bias, low-variance models like the naive Bayes classifier or logistic regression. What I want to know is: in general, which ML models perform comparatively better when it is difficult to achieve high accuracy because of the nature of the problem itself, even with sufficient data to train on?
{ "source": [ "https://stats.stackexchange.com/questions/483988", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/223476/" ] }
484,384
Consider the following joint distribution for the random variables $A$ and $B$ : $$ \begin{array} {|r|r|}\hline & B=1 & B=2 \\ \hline A=1 & 49\% & 1\% \\ \hline A=2 & 49\% & 1\% \\ \hline \end{array}$$ Intuitively, if I know A, I can predict B very well (98% accuracy!), but if I know B, I can't say anything about A. Questions: can we say that A causes B? If yes, what is the mathematical way to conclude that A causes B? Thank you! (And apologies for the maybe "naive" question.)
can we say that A causes B? No, this is (presumably) a simple observational study. To infer causation it is necessary (but not necessarily sufficient) to conduct an experiment or a controlled trial. Just because you are able to make good predictions does not say anything about causality. If I observe the number of people who carry cigarette lighters, this will predict the number of people who have a cancer diagnosis, but it doesn't mean that carrying a lighter causes cancer. Edit: To address one of the points in the comments: But now I wonder: can there ever be causation without correlation? Yes. This can happen in a number of ways. One of the easiest to demonstrate is where the causal relation is not linear. For example: > X <- 1:20 > Y <- 21*X - X^2 > cor(X,Y) [1] 0 Clearly Y is caused by X , yet the correlation is zero.
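To underline the last point (a small sketch extending the code above): dependence can be perfect even when the linear correlation is exactly zero, so a correlation check alone settles neither prediction nor causation.
X <- 1:20
Y <- 21*X - X^2
cor(X, Y)                               # 0: no linear association
summary(lm(Y ~ X + I(X^2)))$r.squared   # 1: Y is an exact deterministic function of X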
{ "source": [ "https://stats.stackexchange.com/questions/484384", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/173505/" ] }
485,784
Let's consider $Y_n$ the max of $n$ iid samples $X_i$ of the same distribution: $Y_n = max(X_1, X_2, ..., X_n)$ Do we know some common distributions for $X$ such that $Y$ is uniformly distributed $U(a,b)$ ? I guess we can always "construct a distribution" $X$ to enforce this condition for $Y$ but I was just wondering if a famous distribution satisfies this condition.
Let $F$ be the CDF of $X_i$ . We know that the CDF of $Y$ is $$G(y) = P(Y\leq y)= P(\textrm{all } X_i\leq y)= \prod_i P(X_i\leq y) = F(y)^n$$ Now, it's no loss of generality to take $a=0$ , $b=1$ , since we can just shift and scale the distribution of $X$ to $[0,\,1]$ and then unshift and unscale the distribution of $Y$ . So what does $F$ have to be to get $G(y) =y$ ? We need $F(x)= x^{1/n}I_{[0,1]}$ , so $f(x)=\frac{1}{n}x^{1/n-1}I_{[0,1]}$ , which is a Beta(1/n,1) density. Let's check > r<-replicate(100000, max(rbeta(4,1/4,1))) > hist(r)
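A slightly more formal check than the histogram (a sketch assuming the same Beta(1/4, 1) construction): a Kolmogorov–Smirnov test against the uniform should not reject.
set.seed(1)
r <- replicate(100000, max(rbeta(4, 1/4, 1)))
ks.test(r, "punif")   # the max of four Beta(1/4, 1) draws should be Uniform(0,1)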
{ "source": [ "https://stats.stackexchange.com/questions/485784", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/61145/" ] }
486,527
What is the best programmatic way to determine whether two predictor variables are linearly related, non-linearly related, or not related at all, perhaps using packages such as scipy/statsmodels or anything else in Python? I know about approaches like plotting and manually checking. But I am looking for some other programmatic technique that is almost certain to tell whether a bivariate plot would show a linear relationship, a non-linear relationship, or no relationship at all. I heard about the concept of KL divergence somewhere, but I am not really sure of the concept in depth, or whether it can really be applied to this sort of problem.
It is very difficult to achieve what you want programmatically because there are so many different forms of nonlinear associations. Even looking at correlation or regression coefficients will not really help. It is always good to refer back to Anscombe's quartet when thinking about problems like this: Obviously the association between the two variables is completely different in each plot, but each has exactly the same correlation coefficient. If you know a priori what the possible non-linear relations could be, then you could fit a series of nonlinear models and compare the goodness of fit. But if you don't know what the possible non-linear relations could be, then I can't see how it can be done robustly without visually inspecting the data. Cubic splines could be one possibility, but they may not cope well with logarithmic, exponential and sinusoidal associations, and could be prone to overfitting. EDIT: After some further thought, another approach would be to fit a generalised additive model (GAM), which would provide good insight for many nonlinear associations, but probably not sinusoidal ones. Truly, the best way to do what you want is visually. We can see instantly what the relations are in the plots above, but any programmatic approach such as regression is bound to have situations where it fails miserably. So my suggestion, if you really need to do this, is to use a classifier based on the image of the bivariate plot:
1. Create a dataset using randomly generated data for one variable, from a randomly chosen distribution. Generate the other variable with a linear association (with random slope) and add some random noise.
2. Then choose at random a nonlinear association and create a new set of values for the other variable. You may want to include purely random associations in this group.
3. Create two bivariate plots, one linear and the other nonlinear, from the data simulated in 1) and 2). Normalise the data first.
4. Repeat the above steps millions of times, or as many times as your time scale will allow.
5. Create a classifier, then train, test and validate it, to classify linear vs nonlinear images.
6. For your actual use case, if you have a different sample size to your simulated data then sample or re-sample to obtain the same size. Normalise the data, create the image and apply the classifier to it.
I realise that this is probably not the kind of answer you want, but I cannot think of a robust way to do this with regression or another model-based approach. EDIT: I hope no one is taking this too seriously. My point here is that, in a situation with bivariate data, we should always plot the data. Trying to do anything programmatically, whether it is a GAM, cubic splines or a vast machine learning approach, is basically allowing the analyst to not think, which is a very dangerous thing. Please always plot your data.
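If a rough automated screen is still wanted despite the caveats above, here is a minimal sketch of the GAM idea (assuming the mgcv package; this only flags smooth nonlinearity and is no substitute for plotting):
library(mgcv)
set.seed(1)
x <- runif(200)
y <- sin(2*pi*x) + rnorm(200, sd = 0.2)   # a deliberately nonlinear example
fit_lin <- gam(y ~ x)                     # straight-line fit
fit_smo <- gam(y ~ s(x))                  # smooth (spline) fit
AIC(fit_lin, fit_smo)                     # a much lower AIC for s(x) hints at nonlinearity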
{ "source": [ "https://stats.stackexchange.com/questions/486527", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/295759/" ] }
486,672
When I learned linear regression in my statistics class, we were asked to check a few assumptions which need to be true for linear regression to make sense. I won't delve deep into those assumptions; however, these assumptions don't appear when learning linear regression from a machine learning perspective. Is it because the data is so large that those assumptions are automatically taken care of? Or is it because of the loss function (i.e. gradient descent)?
It’s because statistics puts an emphasis on model inference, while machine learning puts an emphasis on accurate predictions. We like normal residuals in linear regression because then the usual $\hat{\beta}=(X^TX)^{-1}X^Ty$ is a maximum likelihood estimator. We like uncorrelated predictors because then we get tighter confidence intervals on the parameters than we would if the predictors were correlated. In machine learning, we often don’t care about how we get the answer, just that the result has a tight fit both in and out of sample. Leo Breiman has a famous article on the “two cultures” of modeling: https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726 Breiman, Leo. "Statistical modeling: The two cultures (with comments and a rejoinder by the author)." Statistical science 16.3 (2001): 199-231.
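As a tiny numerical aside (a sketch with simulated data, not taken from the article above), the closed-form estimator mentioned here can be computed directly and matches lm:
set.seed(1)
X <- cbind(1, rnorm(100))                  # design matrix with an intercept column
y <- drop(X %*% c(2, 3) + rnorm(100))
beta_hat <- solve(t(X) %*% X, t(X) %*% y)  # (X'X)^{-1} X'y
cbind(beta_hat, coef(lm(y ~ X[, 2])))      # identical up to floating-point error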
{ "source": [ "https://stats.stackexchange.com/questions/486672", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/171849/" ] }
486,676
What are constant error (CE) and variable error (VE)? I’ve seen the terms in several papers.
{ "source": [ "https://stats.stackexchange.com/questions/486676", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/295967/" ] }
487,449
*Sorry if this isn't the right SE community; maybe it's more philosophical.* You often hear this refrain in games like Poker or Hearthstone. The idea is that making play A in this game resulted in a loss, but always making play A in the long run/limit gives the best odds/EV. My question is: why does this idea seem to require a frequentist approach, yet at the same time, even if this is the ONLY game played, the same play is still "correct"? Are there any situations in the physical world where frequentism and Bayesianism make separate predictions? (I know QM interpretations get into the objective vs subjective nature of probability, but that won't be settled anytime soon.) How can I reassure myself that taking a frequentist approach is always the best for right here and now?
I do not believe that this is a question of Bayesian vs. frequentist frameworks. It is a question of having the correct (predictive) distribution and minimizing the expected loss with respect to this distribution and a specified loss function. Whether the predictive distribution is delivered by a Bayesian or by a frequentist is irrelevant - all that matters is how far it diverges from reality. (Of course, getting only a single realization makes it hard to assess this, but again, that is orthogonal.)
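A bare-bones illustration of that decision-theoretic framing (entirely made-up numbers): given a predictive distribution and a payoff for each action, the "correct" play is the one with the best expected value, whether or not this particular realization goes badly.
p_win <- 0.55                      # hypothetical predictive probability of winning if we play
payoff <- c(play_win = 1, play_lose = -1, fold = -0.1)
EV_play <- p_win * payoff["play_win"] + (1 - p_win) * payoff["play_lose"]
EV_fold <- payoff["fold"]
c(EV_play = unname(EV_play), EV_fold = unname(EV_fold))  # playing has higher EV, even if this hand loses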
{ "source": [ "https://stats.stackexchange.com/questions/487449", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/296388/" ] }
489,824
When I learned about conditional probability, I found this statement: if A is not independent of B, then B is also not independent of A. Formally, if P(A) ≠ P(A|B) then P(B) ≠ P(B|A). I think "not independent" is the same as "dependent", right? So does that mean this statement is also correct: "if A is dependent on B then B is also dependent on A"? I'm a little bit confused because in my mother tongue, the translation of "dependent" is a directed word that's not symmetrical.
In statistics, “dependent” and “not independent” have the same meaning. There is no inherent notion of causation. In regular English, I would say that “dependent” implies causation. Dinner temperature depends on oven temperature, not the other way around.
{ "source": [ "https://stats.stackexchange.com/questions/489824", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/105981/" ] }
490,643
From " Why should I be Bayesian when my model is wrong? ", one of the key benefits of Bayesian inference to be able to inject exogenous domain knowledge into the model, in the form of a prior. This is especially useful when you don't have enough observed data to make good predictions. However, the prior's influence diminishes (to zero?) as the dataset grows larger. So if you have enough data, the prior provides very little value. What's the benefit of using Bayesian analysis in this case? Maybe that we still get a posterior distribution over parameter values? (But for large enough data, wouldn't the posterior just collapse to the MLE?)
Being Bayesian is not only about information fed through the prior. But even then:
- Where the prior is zero, no amount of data will turn that over.
- Having a full Bayesian posterior distribution to draw from opens loads and loads of ways to make inference from. It is easy to explain a credible interval to any audience, whilst you know that most audiences have a very vague understanding of what a confidence interval is.
- Andrew Gelman said in one of his YouTube videos that $p$ is always slightly lower than $0.05$, because if it wasn't smaller then we would not read about it, and if it was much smaller they'd examine subgroups. While that is not an absolute truth, indeed when you have large data you will be tempted to investigate defined subgroups ("is it still true when we only investigate caucasian single women under 30?") and that tends to shrink even large data quite a lot.
- $p$-values tend to become worthless with large data, as in real life no null hypothesis holds true in large data sets. It is part of the tradition around $p$-values that we keep the acceptable alpha error at $.05$ even in huge datasets where there is absolutely no need for such a large margin of error. Bayesian analysis is not limited to point hypotheses and can find that the data is in a region of practical equivalence to a null hypothesis; a Bayes factor can grow your belief in some sort of null hypothesis equivalent, where a $p$-value can only accumulate evidence against it. Could you find ways to emulate that via confidence intervals and other frequentist methods? Probably yes, but Bayes comes with that approach as the standard.
- "But for large enough data, wouldn't the posterior just collapse to the MLE?" - what if a posterior was bimodal, or if two predictors are correlated so you could have different combinations of e.g. $\beta_8$ and $\beta_9$ - a posterior can represent these different combinations; an MLE point estimator does not.
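As a toy illustration of the credible-interval and region-of-practical-equivalence points (purely invented numbers; a conjugate Beta posterior with a flat prior assumed):
k <- 50200; n <- 100000                         # hypothetical large-sample data for a proportion
ci <- qbeta(c(0.025, 0.975), 1 + k, 1 + n - k)  # 95% credible interval from the Beta posterior
ci                                              # roughly (0.499, 0.505)
all(ci > 0.49 & ci < 0.51)                      # TRUE: the interval sits inside a ROPE of 0.50 +/- 0.01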
{ "source": [ "https://stats.stackexchange.com/questions/490643", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113240/" ] }
490,647
In (agglomerative) hierarchical clustering (and clustering in general), linkages are measures of "closeness" between pairs of clusters. The single linkage $\mathcal{L}_{1,2}^{\min}$ is the smallest value over all $\Delta(X_1, X_2)$ . The complete linkage $\mathcal{L}_{1,2}^{\max}$ is the largest value over all $\Delta(X_1, X_2)$ . The average linkage $\mathcal{L}_{1,2}^{\text{mean}}$ is the average over all distances $\Delta(X_1, X_2)$ . The centroid linkage $\mathcal{L}_{1,2}^{\text{cent}}$ is the Euclidean distance between the cluster means of the two clusters. We can clearly see the outliers as "singletons" in a dendrogram: (From https://www.statisticshowto.com/hierarchical-clustering/ ) Which of these linkages is best for the detection of outliers?
{ "source": [ "https://stats.stackexchange.com/questions/490647", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/163242/" ] }
492,000
A fair die is rolled 1,000 times. What is the probability of rolling the same number 5 times in a row? How do you solve this type of question for an arbitrary number of throws and an arbitrary run length?
Below we compute the probability in four ways: Computation with Markov Chain 0.473981098314993 Computation with generating function 0.473981098314988 Estimation false method 0.536438013618686 Estimation correct method 0.473304632462677 The first two are exact methods and differ only a little (probably some round of errors), the third method is a naive estimation that does not give the correct number, the fourth method is better and gives a result that is very close to the exact method. Computationally: Markov Chain You can model this computationally with a transition matrix Say the column vector $X_{k,j} = \lbrace x_1,x_2,x_3,x_4,x_5 \rbrace_{j}$ is the probability to have $k$ of the same numbers in a row in the $j$ -th dice roll. Then (when assuming a 6-sided dice) $$X_{k,j} = M \cdot X_{k,j-1}$$ with $$M = \begin{bmatrix} \frac{5}{6} & \frac{5}{6} & \frac{5}{6} & \frac{5}{6} & 0 \\ \frac{1}{6} & 0& 0 & 0 & 0 \\ 0& \frac{1}{6} & 0& 0 & 0 \\ 0 & 0& \frac{1}{6} & 0& 0 \\ 0&0 & 0& \frac{1}{6} & 1 \\ \end{bmatrix}$$ where this last entry $M_{5,5} = 1$ relates to 5 of the same in a row being an absorbing state where we 'stop' the experiment. After the first roll you will be certainly in state 1 (there's certainly only 1 of the same number in a row). $$X_{k,1} = \lbrace 1,0,0,0,0 \rbrace$$ After the $j$ -th roll this will be multiplied with $M$ a $j-1$ times $$X_{k,j} = M^{j-1} \lbrace 1,0,0,0,0 \rbrace$$ R-Code: library(matrixcalc) ### allows us to use matrix.power M <- matrix(c(5/6, 5/6, 5/6, 5/6, 0, 1/6, 0 , 0 , 0 , 0, 0, 1/6, 0 , 0 , 0, 0, 0 , 1/6, 0 , 0, 0, 0 , 0 , 1/6, 1), 5, byrow = TRUE) start <- c(1,0,0,0,0) matrix.power(M,999) %*% start The result is $$X_{k,1000} = \begin{bmatrix} 0.438631855\\ 0.073152468\\ 0.012199943\\ 0.002034635\\ \color{red}{0.473981098}\end{bmatrix}$$ and this last entry 0.473981098 is the probability to roll the same number 5 times in a row in 1000 rolls. generating function Our question is: How to calculate the probability of rolling any number at least $k$ times in a row, out of $n$ tries? This is equivalent to the question How to calculate the probability of rolling the number 6 at least $k-1$ times in a row, out of $n-1$ tries? You can see it as tracking whether the dice roll $m$ is the same number as the number of the dice roll $m-1$ (which has 1/6-th probabilty). And this needs to happen $k-1$ times in a row (in our case 4 times). In this Q&A the alternative question is solved as a combinatorial problem: How many ways can we roll the dice $n$ times without the number '6' occuring $k$ or more times in a row. This is found by finding all possible combinations of ways that we can combine the strings 'x', 'x6', 'x66', 'x666' (where 'x' is any number 1,2,3,4,5) into a string of length $n+1$ ( $n+1$ instead of $n$ because in this way of constructing strings the first letter is always $x$ here). In this way we counted all possibilities to make a string of length $n$ but with only 1, 2, or 3 times a 6 in a row (and not 4 or more times). Those combinations can be found by using an equivalent polynomial. This is very similar to the binomial coefficients which relate to the coefficients when we expand the power $(x+y)^n$ , but it also relates to a combination . 
The polynomial is $$\begin{array}{rcl} P(x) &=& \sum_{k=0}^\infty (5x+5x^2+5x^3+5x^4)^k\\ &=& \frac{1}{1-(5x+5x^2+5x^3+5x^4)} \\ &=& \frac{1}{1-5\frac{x-x^5}{1-x}}\\ &=& \frac{1-x}{1-6x+5x^5} \end{array}$$ The coefficient of the $x^n$ relates to the number of ways to arrange the numbers 1,2,3,4,5,6 in a string of length $n-1$ without 4 or more 6's in a row. This coefficient can be found by a recursive relation. $$P(x) (1-6x+5x^5) = 1-x$$ which implies that the coefficients follow the relation $$a_n - 6a_{n-1} + 5 a_{n-5} = 0$$ and the first coefficients can be computed manually $$a_1,a_2,a_3,a_4,a_5,a_6,a_7 = 5,30,180,1080,6475,38825,232800$$ With this, you can compute $a_{1000}$ and $1-a_{1000}/6^{999}$ will be the probability to roll the same number 5 times in a row 5. In the R-code below we compute this (and we include a division by 6 inside the recursion because the numbers $a_{1000}$ and $6^{999}$ are too large to compute directly). The result is $0.473981098314988$ , the same as the computation with the Markov Chain. x <- 6/5*c(5/6,30/6^2,180/6^3,1080/6^4,6475/6^5,38825/6^6,232800/6^7) for (i in 1:1000) { t <- tail(x,5) x <- c(x,(6/6*t[5]-5/6^5*t[1])) ### this adds a new number to the back of the vector x } 1-x[1000] Analytic/Estimate Method 1: wrong You might think, the probability to have in any set of 5 neighboring dices, 5 of the same numbers, is $\frac{1}{6^4} = \frac{1}{1296}$ , and since there are 996 sets of 5 neighboring dices the probability to have in at least one of these sets 5 of the same dices is: $$ 1-(1-\frac{1}{6^4})^{996} \approx 0.536$$ But this is wrong. The reason is that the 996 sets are overlapping and not independent. Method 2: correct A better way is to approximate the Markov chain that we computed above. After some time you will get that the occupation of the states, with 1,2,3,4 of the same number in a row, are more or less stable and the ratio's will be roughly $1/6,1/6^2,1/6^3,1/6^4$ (*). Thus the fraction of the time that we have 4 in a row is: $$\text{frequency 4 in a row} = \frac{1/6^4}{1/6+1/6^2+1/6^3+1/6^4}$$ If we have these 4 in a row then we have a 1/6-th probability to finish the game. So the frequency of finishing the game is $$\text{finish-rate} = \frac{1}{6} \text{frequency 4 in a row} = \frac{1}{1554}$$ and the probability to be finished after $k$ steps is approximately $$P_k \approx 1-(1-\frac{1}{1554})^{k-4} \underbrace{\approx 0.47330}_{\text{if $k=1000$}}$$ much closer to the exact computation. (*) The occupation in state $k$ during roll $j$ will relate to the occupation in state $k-1$ during roll $j-1$ . We will have $x_{k,j} = \frac{1}{6} x_{k-1,j-1} \approx \frac{1}{6} x_{k-1,j}$ . Note that this requires that you have $x_{k-1,j} \approx x_{k-1,j-1}$ , which occurs when the finish-rate is small. If this is not the case, then you could apply a factor to compensate, but the assumption of relatively steady ratio's will be wrong as well. Related problems Limit distribution associated with counts (non-trivial combinatoric problem) Checking if a coin is fair based on how often a subsequence occurs What is the probability of rolling all faces of a die after n number of rolls Probability of a similar sub-sequence of length X in two sequences of length Y and Z This latter related problem gives a different approximation based on expectation values and estimates the distribution as an overdispersed Poisson distribution. 
This gives the approximation $1- \exp \left(-(1000-5+1)\left(\frac{1}{6^4}\right) /1.2 \right)\approx 0.4729354$, which isn't bad either.
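A brute-force Monte Carlo cross-check of the numbers above (a quick sketch; rle() extracts the run lengths in each simulated sequence):
set.seed(1)
five_in_a_row <- function(n = 1000, k = 5) {
  rolls <- sample(1:6, n, replace = TRUE)
  any(rle(rolls)$lengths >= k)          # TRUE if some face appears k or more times consecutively
}
mean(replicate(20000, five_in_a_row())) # close to the exact 0.474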
{ "source": [ "https://stats.stackexchange.com/questions/492000", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/177494/" ] }
492,966
I keep seeing the term "inverse probability" mentioned in passing albeit without any explanation. I know it has to do with Bayesian inference, but what exactly do we mean by inverting a probability? My thinking at the moment is that in "normal" probability we take outcomes of random experiments and try to estimate probabilities based on the outcomes of these experiments whereas in "inverse probability" we're interested in going from a probability (a prior for an unknown quantity) to knowing the "outcome of an experiment", the experiment being finding out the value of an unknown quantity (i.e. via the posterior, and maybe finding the MAP hypothesis). That is, in "conventional probability" we go from outcomes of an experiment to probability vs. in inverse probability we go the other way: we go from a prior to uncovering the outcome of an experiment.
"Inverse probability" is a rather old-fashioned way of referring to Bayesian inference; when it's used nowadays it's usually as a nod to history. De Morgan (1838), An Essay on Probabilities , Ch. 3 "On Inverse Probabilities", explains it nicely: In the preceding chapter, we have calculated the chances of an event, knowing the circumstances under which it is to happen or fail. We are now to place ourselves in an inverted position: we know the event, and ask what is the probability which results from the event in favour of of any set of circumstances under which the same might have happened. An example follows using Bayes' Theorem. I'm not sure that the term mightn't have at some point encompassed putative or proposed non-Bayesian, priorless, methods of getting from $f(y|\theta)$ to $p(\theta|y)$ (in @Christopher Hanck's notation); but at any rate Fisher was clearly distinguishing between "inverse probability" & his methods—maximum likelihood, fiducial inference—by the 1930's. It also strikes me that several early-20th-Century writers seem to view the use of what we now call uninformative/ignorance/reference priors as part & parcel of the "inverse probability" method † , or even of "Bayes' Theorem" ‡ . † Fisher (1930), Math. Proc. Camb. Philos. Soc. , 26 , p 528, "Inverse probability", clearly distinguishes, perhaps for the first time, between Bayesian inference from flat "ignorance" priors ("the inverse argument proper"), the unexceptionable application of Bayes' Theorem when the prior describes aleatory probabilities ("not inverse probability strictly speaking"), & his fiducial argument. ‡ For example, Pearson (1907), Phil. Mag. , p365, "On the influence of past experience on future expectation", conflates Bayes' Theorem with the "equal distribution of ignorance".
{ "source": [ "https://stats.stackexchange.com/questions/492966", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243542/" ] }
493,548
When we calculate the mean and variance using the two equations taught in school: $\mu = \frac{1}{N}\sum_{i=1}^N{x_i}$ $\sigma^2 = \frac{1}{N}\sum_{i=1}^N{(x_i-\mu)^2}$ do we then assume that the data are normally distributed? Since the equations come from maximum likelihood estimation for the normal distribution, to my knowledge they should.
No, those equations come directly from the mean and variance formulae in terms of expected value, considering the collected data as a population. $$\mu = \mathbb{E}\big[X\big]$$ $$\sigma^2 = \mathbb{E}\big[\big(X-\mu\big)^2\big]$$ Since you have a finite number of observations, the distribution is discrete, $^{\dagger}$ and the expected value is a sum. $$\mu = \mathbb{E}\big[X\big] = \sum_{i=1}^N p(x_i)x_i = \sum_{i=1}^N \dfrac{1}{N}x_i = \dfrac{1}{N}\sum_{i=1}^Nx_i$$ $$\sigma^2 = \mathbb{E}\big[\big(X-\mu\big)^2\big] = \sum_{i=1}^N p(x_i)(x_i - \mu)^2 = \sum_{i=1}^N \dfrac{1}{N}(x_i - \mu)^2 = \dfrac{1}{N}\sum_{i=1}^N (x_i - \mu)^2$$ (To get from $p(x_i)$ to $\dfrac{1}{N}$ , note that each individual $x_i$ has probability $1/N$ .) This is why the $\dfrac{1}{N}\sum_{i=1}^N (x_i - \mu)^2$ gets called the "population" variance. It literally is the population variance if you consider the observed data to be the population. $^{\dagger}$ This is a sufficient, but not necessary, condition for a discrete distribution. A Poisson distribution is an example of a discrete distribution with infinitely many values.
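A tiny sketch of the point with arbitrary, clearly non-normal data: the two formulas are just the discrete expectations computed over the observed values, with no distributional assumption anywhere.
x <- c(0.1, 0.2, 0.5, 1.3, 4.8, 9.9)   # made-up, heavily skewed data
N <- length(x)
mu <- sum(x) / N                       # E[X] with each observed point given probability 1/N
sigma2 <- sum((x - mu)^2) / N          # E[(X - mu)^2], the "population" variance of these values
c(mu = mu, sigma2 = sigma2)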
{ "source": [ "https://stats.stackexchange.com/questions/493548", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/283959/" ] }
493,738
Sorry if the title isn't clear, I'm not a statistician, and am not sure how to phrase this. I was looking at the global coronavirus statistics on worldometers , and sorted the table by cases per million population to get an idea of how different countries had fared. Note My use of Vatican City below is purely because that was the first tiny country I saw in the list. As @smci pointed out, Vatican City has a few issues that may make it different from others. Therefore, please keep "tiny country" in mind when reading on, as my question applies to any tiny country. The table shows the Vatican City as being the 7th worst country, with 33,666 cases per million. Now given that the total population of Vatican City is only 802, I'm not sure how much we can make of this figure. When the country's population is small, even a minor fluctuation in the number of cases would make a significant difference to the cases per million. As an artificial example, consider a fictional country with only 1 inhabitant. If that person got the virus, then the cases per million would be 1,000,000, which is way higher than anything in that table. Obviously the Vatican City is an extreme example, but there are other countries with smallish populations that appear quite high on the list, and I guess the same question would apply to them. So is there a way of deciding what is "too small" a population to be significant? If this question isn't clear enough, please explain why rather than downvoting, as I would like to understand it, and am happy to clarify if I didn't explain it well enough.
I will describe how a statistician interprets count data. With a tiny bit of practice you can do it, too. The basic analysis When cases arise randomly and independently, the times of their occurrences are reasonably accurately modeled with a Poisson process. This implies that the number of cases appearing in any predetermined interval has a Poisson distribution. The only thing we need to remember about that is that its variance equals its expectation. In less technical jargon, this means that the amount by which the value is likely to differ from the average (its standard error ) is proportional to the square root of the average. (See Why is the square root transformation recommended for count data? for an explanation and discussion of the square root and some related transformations of count data.) In practice, we estimate the average by using the observed value. Thus, The standard error of a count of independent events with equal expected rates of occurrence is the square root of the count. (Various modifications of this rule exist for really small counts, especially counts of zero, but that shouldn't be an issue in the present application.) In the case of Vatican City, a rate of 33,666 cases per million corresponds to $$\frac{33666}{10^6} \times 802 = 27$$ cases. The square root of $27$ is $5$ (we usually don't need to worry about additional significant figures for this kind of analysis, which is usually done mentally and approximately). Equivalently, this standard error is $\sqrt{27}$ cases out of $802$ people, equivalent to $6500$ per million. We are therefore justified in stating The Vatican City case rate is $33666\pm 6500$ per million. This shows how silly it is to quote five significant figures for the rate. It is better to acknowledge the large standard error by limiting the sig figs, as in The observed Vatican City case rate is $34000 \pm 6500$ per million. (Do not make the mistake of just taking the square root of the rate! In this example, the square root of 33,666 is only 183, which is far too small. For estimating standard errors square roots apply to counts, not rates. ) A good rule of thumb is to use one additional significant digit when reporting the standard error, as I did here (the case rate was rounded to the nearest thousand and its SE was rounded to the nearest 100). A slightly more nuanced analysis Cases are not independent: people catch them from other people and because human beings do not dart about the world like atoms in a vial of hot gas, cases occur in clusters. This violates the independence assumption. What really happens, then, is that the effective count should be somewhere between the number of cases and the number of distinct clusters. We cannot know the latter: but surely it is smaller (perhaps far smaller) than the number of cases. Thus, The square root rule gives a lower bound on the standard error when the events are (positively) correlated. You can sometimes estimate how to adjust the standard error. For instance, if you guess that cases occur in clusters of ten or so, then you should multiply the standard error by the square root of ten. Generally, The standard error of a count of positively correlated events is, very roughly, the square root of the count times the square root of a typical cluster size. This approximation arises by assuming all cases in a cluster are perfectly correlated and otherwise the cases in any two different clusters are independent. 
If we suspect the Vatican City cases are clustered, then in the most extreme case it is a single cluster: the count is $1,$ its square root is $1,$ and the standard error therefore is one whole cluster: namely, about $27$ people. If you want to be cautious about not exaggerating the reliability of the numbers, then you might think of this Vatican City rate as being somewhere between just above zero and likely less than 70,000 per million ( $1\pm 1$ clusters of $27$ out of a population of $802$ ).
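The arithmetic behind these figures is short enough to script (a sketch that simply replays the numbers used above):
pop <- 802
rate <- 33666                                   # reported cases per million
cases <- rate / 1e6 * pop                       # about 27 cases
se_per_million <- sqrt(cases) / pop * 1e6       # about 6500 per million, assuming independent cases
cluster_size <- 27                              # extreme case: all cases form a single cluster
se_clustered <- sqrt(cases / cluster_size) * cluster_size / pop * 1e6  # about 34000 per million
c(cases = cases, se = se_per_million, se_clustered = se_clustered)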
{ "source": [ "https://stats.stackexchange.com/questions/493738", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86658/" ] }
493,952
I keep seeing density functions that don't explicitly arise from conditioning written with the conditional sign. For example, for the density of the Gaussian $N(\mu,\sigma)$, why write $$ f(x| \mu, \sigma)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$ instead of $$ f(x)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$ Is this done purely to be explicit as to what the parameter values are, or (what I'm hoping for) is there some meaning related to conditional probability?
In a Bayesian context, the parameters are random variables, so in that context the density is actually the conditional density of $X \mid (\mu, \sigma)$ . In that setting, the notation is very natural. Outside of a Bayesian context, it is just a way to make it clear that the density depends (here I am using this word colloquially, not probabilistically) on the parameters. Some people use $f_{\mu, \sigma}(x)$ or $f(x; \mu, \sigma)$ to the same effect. This latter point can be important in the context of likelihood functions. A likelihood function is a function of the parameters $\theta$ , given some data $x$ . The likelihood is sometimes written as $L(\theta \mid x)$ or $L(\theta ; x)$ , or sometimes as $L(\theta)$ when the data $x$ is understood to be given. What is confusing is that in the case of a continuous distribution, the likelihood function is defined as the value of the density corresponding to the parameter $\theta$ , evaluated at the data $x$ , i.e. $L(\theta; x) := f_\theta(x)$ . Writing $L(\theta; x) = f(x)$ would be confusing, since the left-hand side is a function of $\theta$ , while the right-hand side ostensibly does not appear to depend on $\theta$ . While I prefer writing $L(\theta; x) := f_\theta(x)$ , some might write $L(\theta; x) := f(x \mid \theta)$ . I have not really seen much consistency in notation across different authors, although someone more well-read than I can correct me if I am wrong.
{ "source": [ "https://stats.stackexchange.com/questions/493952", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243542/" ] }
494,157
From the CDC ( https://www.cdc.gov/nchs/fastats/deaths.htm ): Death rate: 863.8 deaths per 100,000 population. Life expectancy: 78.6 years. Now, in a static situation I would expect the death rate to be the reciprocal of the life expectancy, or about 1,270 deaths per 100K, which is about a 40% difference from the actual figure. Quite a lot. Is this because the population age profile is not static? The US median age is about 38 years and has increased by about 1 year over the last decade. Is this really enough of a variation to account for the 40% difference? I tried looking for the mean age to see if that statistic could shed more light on the subject but could not find any data. I would like to understand this in more detail, so any information is appreciated.
In short: The discrepancy between the death rate and the reciprocal of the life expectancy generally occurs when the age distribution of the population is not the same as the survival curve, which relates to a hypothetical population on which the life expectancy is based (and more specifically, when the population is younger than what the survival curve suggests). There can be several reasons that create differences between the actual population and this hypothetical population:
- The death rate per age group has dropped suddenly/fast and the population is not yet stabilized (not equal to the survival curve based on the new death rates per age group).
- The population is growing. If every year more babies are born than the previous year, then the population will be relatively younger than the hypothetical population based on what a survival curve suggests.
- Migration. Migration often involves relatively younger people. So countries with positive net immigration will be relatively younger and countries with negative net immigration will be relatively older.
Life expectancy: The life expectancy is a virtual number based on a hypothetical person/population for which the mortality rates in the future are the same as the current mortality rates. Here is an example using data (2014) from the Dutch bureau of statistics https://opendata.cbs.nl/statline/#/CBS/nl/dataset/7052_95/table?dl=98D9 : graph 1 shows the (current) death rate for age $i$, $$f_i$$ graph 2 shows the survival rate for age $i$ (for a hypothetical population that will experience the death rate for age $i$ as it is for the people that are currently of age $i$), $$s_i = \prod_{j=0}^{j=i-1} (1-f_j)$$ graph 3 shows the probability of dying at age $i$, $$p_i = s_i f_i$$ Note that $p_i$ describes a hypothetical situation.
Death rates: In the above example, the hypothetical population will follow the middle graph. However, the actual population is not this hypothetical population. In particular, we have many fewer elderly people than would be expected based on the survival rates. These survival rates are based on the death rates at the present time. But when the elderly grew up, these death rates were much larger. Therefore, the population contains fewer elderly than the current survival rate curve suggests. The population looks more like this (sorry for it being in Dutch and not well documented, I am getting these images from some old doodles, I will see if I can make the graphs again): So around 2040 the distribution of the population will be more similar to the curve of the survival rate. Currently, the population distribution is more pointy, and that is because the people that are currently old did not experience the probabilities of dying at age $i$ on which the hypothetical life expectancy is based.
How death rates are changing: In addition, there is a slightly lower birth rate (less than 2 per woman), and so the younger population is shrinking. This means that the death rate will not just rise to 1/life_expectancy, but even surpass it. This is an interesting paradox (as Neil G commented, it's Simpson's paradox): on the one hand the death rate is decreasing in each separate age group; on the other hand the death rate is increasing for the total population. Note this graph (interactive version on gapminder): we see that in the past decades the death rates have dropped quickly (due to the decrease in the age-specific death rates) and now are rising again (due to stabilization of the population, and due to the decrease in the birth rate).
Most countries follow this pattern (some started earlier, some started later).
Simulation: In this question the answer contains a piece of R code that simulates the survival rate curve for a change of the risk ratio of death for all ages. Below we use the same function life_expect and simulate the death rate in a population when we let this risk ratio change from 1.5 to 1.0 over the course of 50 years (thus the life expectancy will increase and its inverse, the death rate based on life expectancy, will decrease). What we see is that the drop in the death rate in the population is larger than what we would expect based on the life expectancy, and it only stabilizes at this expected number some time after we stop the change in risk ratios. Note that in this population we kept the births constant. Another way the discrepancy between the reciprocal of the life expectancy and the death rate arises is when the number of births is increasing (population growth), which causes the population to be relatively young in comparison to the hypothetical population based on the survival curve.
{ "source": [ "https://stats.stackexchange.com/questions/494157", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/301279/" ] }
496,774
I know there's a similar question here: How to calculate 95% CI of vaccine with 90% efficacy? but it doesn't have an answer, at the moment. Also, my question is different: the other question asks how to compute VE, using functions from a R package. I want to know why vaccine efficacy is defined as illustrated at the bottom of this page : $$ \text{VE} = 1 - \text{IRR}$$ where $$ \text{IRR} = \frac{\text{illness rate in vaccine group}}{\text{illness rate in placebo group}}$$ and which is the statistical model behind it. My attempts: I thought the researches would fit a logistic regression model a single binary predictor $X$ , identifying subjects who got the vaccine ( $X=1$ ) or not ( $X=0$ ): $p(Y|X) = \frac{1}{1+\exp{-(\beta_0 +\beta_1 X)}}$ However, this is clearly not the case, because for the Moderna vaccine we know that there were 5 cases in the vaccine arm, and 90 in the placebo arm, which corresponds to a $\text{VE}$ of $94.\bar{4}\%$ . These data alone are enough to determine $\text{VE}$ , but surely they're not enough to fit a LR model, and thus to determine $\beta_1$ . Also, by looking at page 111-113 of the Pfizer document, it looks like a different (Bayesian?) analysis is performed. Again, the point estimate seems to be $ \text{VE} = 1 - \text{IRR}$ , but the power of a test is mentioned, and two tables 7 and 8 are presented which show probability of success and failure. Can you show me how to obtain the results in such tables?
The relation between efficiency and illness risk ratio I want to know why vaccine efficacy is defined as illustrated at the bottom of this page : $$ \text{VE} = 1 - \text{IRR}$$ where $$ \text{IRR} = \frac{\text{illness rate in vaccine group}}{\text{illness rate in placebo group}}$$ This is just a definition. Possibly the following expression may help you to get a different intuition about it $$\begin{array}{} VE &=& \text{relative illness rate reduction}\\ &=& \frac{\text{change (reduction) in illness rate}}{\text{illness rate}}\\ &=& \frac{\text{illness rate in placebo group} -\text{illness rate in vaccine group}}{\text{illness rate in placebo group}}\\ &=& 1-IRR \end{array}$$ Modelling with logistic regression These data alone are enough to determine $\text{VE}$ , but surely they're not enough to fit a LR model, and thus to determine $\beta_1$ . Note that $$\text{logit}(p(Y|X)) = \log \left( \frac{p(Y|X)}{1-p(Y|X)} \right) = \beta_0 + \beta_1 X$$ and given the two observations $\text{logit}(p(Y|X=0))$ and $\text{logit}(p(Y|X=1))$ the two parameters $\beta_0$ and $\beta_1$ can be computed R-code example: Note the below code uses cbind in the glm function. For more about entering this see this answer here . vaccindata <- data.frame(sick = c(5,90), healthy = c(15000-5,15000-90), X = c(1,0) ) mod <- glm(cbind(sick,healthy) ~ X, family = binomial, data = vaccindata) summary(mod) This gives the result: Call: glm(formula = cbind(sick, healthy) ~ X, family = binomial, data = vaccindata) Deviance Residuals: [1] 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -5.1100 0.1057 -48.332 < 2e-16 *** X -2.8961 0.4596 -6.301 2.96e-10 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 9.2763e+01 on 1 degrees of freedom Residual deviance: 2.3825e-12 on 0 degrees of freedom AIC: 13.814 Number of Fisher Scoring iterations: 3 So the parameter $\beta_1$ is estimated as $-2.8961$ with standard deviation $0.4596$ From this, you can compute (estimate) the odds, the efficiency, and their confidence intervals. See also: How exactly is the "effectiveness" in the Moderna and Pfizer vaccine trials estimated? The Bayesian model (Table 6) Also, by looking at page 111-113 of the Pfizer document, it looks like a different (Bayesian?) analysis is performed. Again, the point estimate seems to be $ \text{VE} = 1 - \text{IRR}$ , but the power of a test is mentioned, and two tables 7 and 8 are presented which show probability of success and failure. Can you show me how to obtain the results in such tables? These analyses are performed in an early stage to verify whether, given the outcomes, the vaccine is effective. The tables give hypothetical observations for which they would reach the tipping point to declare either failure (posterior probability of success <5%) or great success (the probability that VE>30% is larger than 0.995). These percentages for the tipping points are actually based on controlling Type I error (more about that below). They control the overall type I error, but it is not clear how this is distributed among the multiple go/no-go points. The outcome considered is the ratio/count of vaccinated people among all infected people. Conditional on the total infected people this ratio follows a binomial distribution*. 
For more details about the computation of the posterior in this case see: How does the beta prior affect the posterior under a binomial likelihood *There is probably a question here about that; I still have to find a link for this; but you can derive this based on the idea that both groups are approximately Poisson distributed (more precisely they are binomial distributed) and the probability to observe a specific combination of cases $k$ and $n-k$ conditional on reaching $n$ total cases is $$\frac{\lambda_1^k e^{-\lambda_1}/k! \cdot \lambda_2^{n-k}e^{-\lambda_2}/(n-k)! }{\lambda_2^ne^{-(\lambda_1\lambda_2)}/n! } = {n \choose k} \left(\frac{\lambda_1}{\lambda_1+\lambda_2}\right)^k \left(1- \frac{\lambda_1}{\lambda_1+\lambda_2}\right)^{n-l}$$ The graphic below shows a plot for the output for these type of computations Success boundary This is computed by the posterior distribution for the value $$\begin{array}{}\theta &=& (1-VE)/(2-VE)\\ &=& RR/(1-RR) \\&=& \text{vaccinated among infected}\end{array}$$ For instance the in case of 6 vaccinated and 26 placebo among the first 32 infected people the posterior is Beta distributed with parameters 0.7+6 and 1+26 and the cumulative distribution for $\theta < (1-0.3)/(2-0.3)$ will be $\approx 0.996476$ for 7 vaccinated and 25 placebo it will be 0.989 which is below the level. In R you would compute these figures as pbeta(7/17,0.700102+6,1+26) Futility boundary For this they compute the probability of success which is the power of the test. Say for a given hypothesis the test criterium can be to observe 53 or less cases in the vaccine group among the first 164 cases. Then as function of the true VE you can estimate how probable it is to pass the test. In the table 6 they compute this not as a function of a single VE, but as an integral over the posterior distribution of the VE or $\theta$ (and this $\theta$ is beta distributed and the test result will be beta-binomial distributed). It seems like they used something like the following: ### predict the probability of success (observing 53 or less in 164 cases at the end) ### k is the number of infections from vaccine ### n is the total number of infections ### based on k and n the posterior distribution can be computed ### based on the posterior distribution (which is a beta distribution) ### we can compute the success probability predictedPOS <- function(k,n) { #### posterior alpha and beta alpha = 0.7+k beta = 1+n-k ### dispersion and mean s = alpha + beta m = alpha/(alpha+beta) ### probability to observe 53 or less out of 164 in final test ### given we allread have observed k out of n (so 53-k to go for the next 164-n infections) POS <- rmutil::pbetabinom(53-k,164-n,m,s) return(POS) } # 0.03114652 predictedPOS(15,32) # 0.02486854 predictedPOS(26,62) # 0.04704588 predictedPOS(35,92) # 0.07194807 predictedPOS(14,32) # 0.07194807 predictedPOS(25,62) # 0.05228662 predictedPOS(34,92) The values 14, 25, 34 are the highest values for which the posterior POS is still above 0.05. For the values 15, 26, 35 it is below. Controlling type I error (Table 7 and 8) Table 7 and 8 give an analysis for the probability to succeed given a certain VE (they display for 30, 50, 60, 70, 80%). It gives the probability that the analysis passes the criterium for success during one of the interim analyses or with the final analysis. The first column is easy to compute. It is binomially distributed. E.g. 
The probabilities 0.006, 0.054, 0.150, 0.368, 0.722 in the first columns are the probabilities of having 6 cases or fewer when $p=(100-VE)/(200-VE)$ and $n = 32$ . The other columns are not similar binomial distributions. They represent the probability of reaching the success criterion if there wasn't success during the earlier analyses. I am not sure how they computed this (they refer to a statistical analysis plan, SAP, but it is unclear where this can be found and if it is open access). However, we can simulate it with some R-code ### function to simulate success for the vaccine efficacy analysis sim <- function(true_p = 0.3) { p <- (1-true_p)/(2-true_p) numbers <- c(32,62,92,120,164) success <- c(6,15,25,35,53) failure <- c(15,26,35) n <- c() ### simulate whether the infection cases are from vaccine or placebo group n[1] <- rbinom(1,numbers[1],p) n[2] <- rbinom(1,numbers[2]-numbers[1],p) n[3] <- rbinom(1,numbers[3]-numbers[2],p) n[4] <- rbinom(1,numbers[4]-numbers[3],p) n[5] <- rbinom(1,numbers[5]-numbers[4],p) ### analyses with success or failure s <- cumsum(n) <= success f <- cumsum(n)[1:3] >= failure ### earliest analysis with success or failure min_s <- min(which(s==TRUE),7) min_f <- min(which(f==TRUE),6) ### check whether success occurred before failure ### if no success occurred then it has value 7 and will be highest ### if no failure occurred then it will be 6 and be highest unless no success occurred either result <- (min_s<min_f) return(result) } ### compute power (probability of success) ### for different efficacy of the vaccine set.seed(1) nt <- 10^5 x <- c(sum(replicate(nt,sim(0.3)))/nt, sum(replicate(nt,sim(0.5)))/nt, sum(replicate(nt,sim(0.6)))/nt, sum(replicate(nt,sim(0.7)))/nt, sum(replicate(nt,sim(0.8)))/nt) x This gives 0.02073 0.43670 0.86610 0.99465 0.99992 which is close to the overall probability of success in the final column. Although they use a Bayesian analysis to compute the values in table 6, they have chosen the boundaries on which the Bayesian analysis is based according to controlling the type I error (I think that they use the probability to have success given VE = 0.3, p=0.021, as the basis for the type I error. This means that if the true VE = 0.3 then they might, erroneously, still declare success with probability 0.021, and if the true VE<0.3 this type I error will be even less).
{ "source": [ "https://stats.stackexchange.com/questions/496774", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/58675/" ] }
497,935
Suppose you estimate a model using firm-level or state-level data and then apply the estimates at a lower level of aggregation, say at factory-level $^*$ or county-level. If it makes things easier, imagine this is a model describing output of widgets Y given some number of inputs (X and Z). I would like to know: Is there a name for this? Is this always a bad idea? What if it is not a lower level of aggregation, but merely a different level of aggregation (say model US state data, but use the model on CBSA data, ignoring the fact that not all of the US is in some CBSA)? I think this is related to external validity and the ecological fallacy, but perhaps there is something more specific. $^*$ Assuming each firm has some number of factories.
The assumption that the relationships are the same at a finer level of aggregation is exactly the ecological fallacy. The problem, more generally, of the relationship depending on how you aggregate is the Modifiable Areal Unit Problem
{ "source": [ "https://stats.stackexchange.com/questions/497935", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7071/" ] }
498,551
$X,Y,Z$ are random variables. How to construct an example when $X$ and $Z$ are correlated, $Y$ and $Z$ are correlated, but $X$ and $Y$ are independent?
Intuitive example: $Z = X + Y$ , where $X$ and $Y$ are any two independent random variables with finite nonzero variance.
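To see this numerically, here is a small R simulation (my own addition, not from the original answer), taking $X$ and $Y$ to be standard normal; the names and sample size are arbitrary. Since $\operatorname{cov}(X,Z)=\operatorname{var}(X)$ and $\operatorname{cov}(Y,Z)=\operatorname{var}(Y)$ , both correlations with $Z$ are nonzero, while independence forces $\operatorname{cor}(X,Y)=0$ .

set.seed(1)
n <- 1e5
x <- rnorm(n)   # X ~ N(0,1)
y <- rnorm(n)   # Y ~ N(0,1), drawn independently of X
z <- x + y      # Z = X + Y

cor(x, z)       # about 0.71 (theoretically 1/sqrt(2)): X and Z are correlated
cor(y, z)       # about 0.71: Y and Z are correlated
cor(x, y)       # about 0: X and Y are independent, hence uncorrelated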
{ "source": [ "https://stats.stackexchange.com/questions/498551", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/302889/" ] }
498,552
I am creating two distributions as shown below. # Binomial distribution a = np.random.binomial(3, 0.5, 5000) a.sort() # Normal distribution b = np.random.normal(mean, std_dev, size_dist) b.sort() I want to know whether sorting the array ruins the distributions, i.e. does the order of the random numbers in the array matter for the distribution, or can they be sorted without changing it?
{ "source": [ "https://stats.stackexchange.com/questions/498552", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/256926/" ] }
498,580
I am trying to understand the mathematical properties of supervised learning and semi-supervised learning. Let us consider the case for the mean $\mu$ . Then the supervised learning estimator can just be given as the sample mean $$ \hat{\mu}_{s}=1/n\sum_{i=1}^n Y_i$$ (Here we assume $Y$ is just a standard regression model, say $E[Y|X]=\beta_0+\beta X$ .) Now the semi-supervised estimator becomes $$ \hat{\mu}_{ss}=1/N \sum_{j=1}^N (\hat{\beta}_0+\hat{\beta}X_j).$$ Here $N$ is the amount of unlabelled data we have, with $N>n$ . After a bit of work, I see that the semi-supervised estimator is asymptotically linear (and so of course asymptotically normal). However, now I would like to compare the two estimators to see which is more efficient. How do I do this? What are the asymptotic standard errors of both the supervised and semi-supervised estimators?
{ "source": [ "https://stats.stackexchange.com/questions/498580", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/304054/" ] }
500,249
Let's assume that we have a distribution with known statistical properties (mean, variance, skewness, kurtosis). Let's also assume that the mean is equal to zero. Is there an analytical expression for the average value of the absolute values of the considered random variable? In other words, can we say that: avg(abs(x)) = F(var(x), skew(x), kurt(x))
In general knowing these 4 properties is not enough to tell you the expectation of the absolute value of a random variable. As proof, here are two discrete distributions $X$ and $Y$ which have mean 0 and the same variance, skew, and kurtosis, but for which $\mathbb{E}(|X|) \ne \mathbb{E}(|Y|)$ . t P(X=t) P(Y=t) -3 0.100 0.099 -2 0.100 0.106 -1 0.100 0.085 0 0.400 0.420 1 0.100 0.085 2 0.100 0.106 3 0.100 0.099 You can verify that the 1st, 2nd, 3rd, and 4th central moments of these distributions are the same, and that the expectation of the absolute value is different. Edit: explanation of how I found this example. For ease of calculation I decided that: $X$ and $Y$ would both be symmetric about $0$ , so that the mean and skew would automatically be $0$ . $X$ and $Y$ would both be discrete taking values on $\{-n, .., +n\}$ for some $n$ . For a given distribution $X$ , we want to find another distribution $Y$ satisfying the simultaneous equations $\mathbb{E}(Y^2) = \mathbb{E}(X^2)$ and $\mathbb{E}(Y^4) = \mathbb{E}(X^4)$ . We find $n = 2$ isn't enough to provide multiple solutions, because subject to the above constraints we only have 2 degrees of freedom: once we pick $f(2)$ and $f(1)$ , the rest of the distribution is fixed, and our two simultaneous equations in two variables have a unique solution, so $Y$ must have the same distribution as $X$ . But $n = 3$ gives us 3 degrees of freedom, so should lead to infinite solutions. Given $X$ , our 3 degrees of freedom in picking $Y$ are: $$f_Y(1) = f_X(1)+p \\ f_Y(2) = f_X(2)+q \\ f_Y(3) = f_X(3)+r$$ Then our simultaneous equations become: $$ \begin{align} p + 4q + 9r& = 0 \\ p + 16q + 81r& = 0 \end{align} $$ The general solution is: $$ p = 15r \\ q = -6r \\ $$ Finally I arbitrarily picked $$ \begin{align} f_X(1) & = 0.1 \\ f_X(2) & = 0.1 \\ f_X(3) & = 0.1 \\ r & = -0.001 \end{align} $$ giving me the above counterexample.
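Here is a quick numerical check of these claims in R (my own addition; the vectors simply encode the two probability tables above).

vals <- -3:3
pX <- c(0.100, 0.100, 0.100, 0.400, 0.100, 0.100, 0.100)
pY <- c(0.099, 0.106, 0.085, 0.420, 0.085, 0.106, 0.099)

moment <- function(p, k) sum(p * vals^k)   # k-th moment about 0 (= central moment, since the mean is 0)

c(moment(pX, 1), moment(pY, 1))   # means: both 0
c(moment(pX, 2), moment(pY, 2))   # variances: both 2.8
c(moment(pX, 3), moment(pY, 3))   # third moments: both 0, so equal skewness
c(moment(pX, 4), moment(pY, 4))   # fourth moments: both 19.6, so equal kurtosis

c(sum(pX * abs(vals)), sum(pY * abs(vals)))   # E|X| = 1.200 versus E|Y| = 1.188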
{ "source": [ "https://stats.stackexchange.com/questions/500249", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2407/" ] }
500,948
I ended up in a debate regarding logistic regression and neural networks (NNs). Is it wrong to say that logistic regression is a specific case of a neural network? I have seen a lot of explanations in which logistic regression is drawn as an NN, like the diagram from Tess Fernandez and similar figures. To me there are no differences, at least on the surface. There is a linear combination of the input, a fixed nonlinear function (sigmoid) and a classification based on the output probabilities, which is exactly a simple neural network with a single layer with a single node (at least in a binary problem) that uses the sigmoid function as its nonlinear activation. But someone told me that it is not exactly so, because the assumptions behind this model are completely different from those of a neural network. What are these assumptions? And why should logistic regression be considered different from a neural network? I know that NNs can handle more complex problems (like nonlinearly separable problems), but this puzzles me a bit.
You have to be very specific about what you mean. We can show mathematically that a certain neural network architecture trained with a certain loss coincides exactly with logistic regression at the optimal parameters. Other neural networks will not. A binary logistic regression makes predictions $\hat{y}$ using this equation: $$ \hat{y}=\sigma(X \beta + \beta_0) $$ where $X$ is a $n \times p$ matrix of features (predictors, independent variables) and vector $\beta$ is the vector of $p$ coefficients and $\beta_0$ is the intercept and $\sigma(z)=\frac{1}{\exp(-z)+1}$ . Conventionally in a logistic regression, we would roll the $\beta_0$ scalar into the vector $\beta$ and append a column of 1s to $X$ , but I've moved it out of $\beta$ for clarity of exposition. A neural network with no hidden layers and one output neuron with a sigmoid activation makes predictions using the equation $$ \hat{y}=\sigma(X \beta + \beta_0) $$ with $\hat{y},\sigma,X, \beta, \beta_0$ as before. Clearly, the equation is exactly the same. In the neural-networks literature, $\beta_0$ is usually called a "bias," even though it has nothing to do with the statistical concept of bias . Otherwise, the terminology is identical. A logistic regression has the Bernoulli likelihood as its objective function, or, equivalently, the Bernoulli log-likelihood function. This objective function is maximized : $$ \arg\max_{\beta,\beta_0} \sum_i \left[ y_i \log(\hat{y_i}) + (1-y_i)\log(1-\hat{y_i})\right] $$ where $y \in \{0,1\}$ . We can motivate this objective function from a Bernoulli probability model where the probability of success depends on $X$ . A neural network can, in principle, use any loss function we like. It might use the so-called "cross-entropy" function (even though the "cross-entropy" can motivate any number of loss functions; see How to construct a cross-entropy loss for general regression targets? ), in which case the model minimizes this loss function: $$ \arg\min_{\beta,\beta_0} -\sum_i \left[ y_i \log(\hat{y_i}) + (1-y_i)\log(1-\hat{y_i})\right] $$ In both cases, these objective functions are strictly convex (concave) when certain conditions are met. Strict convexity implies that there is a single minimum and that this minimum is a global. Moreover, the objective functions are identical, since minimizing a strictly convex function $f$ is equivalent to maximizing $-f$ . Therefore, these two models recover the same parameter estimates $\beta, \beta_0$ . As long as the model attains the single optimum, it doesn't matter what optimizer is used, because there is only one optimum for these specific models. However, a neural network is not required to optimize this specific loss function; for instance, a triplet-loss for this same model would likely recover different estimates $\beta,\beta_0$ . And the MSE/least squares loss is not convex in this problem, so that neural network would differ from logistic regression as well (see: What is happening here, when I use squared loss in logistic regression setting? ).
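To make the equivalence concrete, here is a small R sketch (my own, not from the original answer): it fits a logistic regression with glm and then minimizes the same cross-entropy loss directly with a generic optimizer, which is what training a single sigmoid output unit with that loss amounts to. The simulated data and coefficient values are arbitrary.

set.seed(42)
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x1 - 0.8 * x2))   # Bernoulli responses

# Logistic regression, fitted by iteratively reweighted least squares
fit_glm <- glm(y ~ x1 + x2, family = binomial)

# The same model viewed as a one-"neuron" network: sigmoid output,
# cross-entropy loss (negative Bernoulli log-likelihood), generic optimizer
X <- cbind(1, x1, x2)
cross_entropy <- function(w) {
  p <- plogis(as.vector(X %*% w))
  -sum(y * log(p) + (1 - y) * log(1 - p))
}
fit_nn <- optim(c(0, 0, 0), cross_entropy, method = "BFGS")

round(coef(fit_glm), 4)
round(fit_nn$par, 4)   # essentially the same estimates, as the answer explains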
{ "source": [ "https://stats.stackexchange.com/questions/500948", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/139341/" ] }
503,835
Independence between random variables $X$ and $Y$ implies that $\text{Corr}\left(f(X),g(Y)\right)=0$ for arbitrary functions $f(\cdot)$ and $g(\cdot)$ ( here is a related thread). But is the following statement, or a similar one (perhaps more rigorously defined), correct? If $\text{Corr}\left(f(X),g(Y)\right)=0$ for all possible functions $f(\cdot)$ and $g(\cdot)$ , then $X$ and $Y$ are independent.
Using indicator functions of measurable sets like $$f(x)=\mathbb I_A(x)\quad g(x)=\mathbb I_B(x)$$ leads to $$\text{cov}(f(X),g(Y))=\mathbb P(X\in A,Y\in B)-\mathbb P(X\in A)\mathbb P(Y\in B)$$ so if this covariance is zero for every pair of measurable sets $A$ and $B$ , then $\mathbb P(X\in A,Y\in B)=\mathbb P(X\in A)\mathbb P(Y\in B)$ for all $A,B$ , which is exactly independence. Proving the result for indicator functions is enough: as shown in A. Dembo's probability course notes, the extension to general functions follows from a monotone class theorem.
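As a concrete illustration of why the quantifier over all functions matters (my own addition, not from the original answer): take $X$ standard normal and $Y=X^2$ . These are uncorrelated but dependent, and the dependence shows up as a nonzero correlation between suitably chosen indicator functions.

set.seed(7)
n <- 1e5
x <- rnorm(n)
y <- x^2                  # uncorrelated with x, but clearly not independent

cor(x, y)                 # about 0

fx <- as.numeric(x > 1)   # indicator of the event {X > 1}
gy <- as.numeric(y > 1)   # indicator of the event {Y > 1}, i.e. {|X| > 1}
cor(fx, gy)               # clearly nonzero, so the condition fails for these f, g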
{ "source": [ "https://stats.stackexchange.com/questions/503835", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53690/" ] }
503,844
Input: I have a dataset of sales from the last three years with no missing information, along with some other features like Date, Promotion, Store_number, Public_holiday_indicator, Promotion_running, weather, etc. The dataset ranges from 1st Jan 2017 to 1st Jan 2020. Output: I want to predict sales for the next two months, i.e. 2nd Jan to 2nd March 2020. My question is: is this a regression problem, since I have other features that impact the sales, or is this a time series problem, since the data are time-bound and there are no missing values in the Date column? Or can I solve this using both approaches and choose the one with better results?
{ "source": [ "https://stats.stackexchange.com/questions/503844", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/270983/" ] }
512,678
A famous aphorism by cosmologist Martin Rees(*) goes "absence of evidence is not evidence of absence" . On the other hand, quoting Wikipedia: In carefully designed scientific experiments, even null results can be evidence of absence. For instance, a hypothesis may be falsified if a vital predicted observation is not found empirically. (At this point, the underlying hypothesis may be rejected or revised and sometimes, additional ad hoc explanations may even be warranted.) Whether the scientific community will accept a null result as evidence of absence depends on many factors, including the detection power of the applied methods, the confidence of the inference, as well as confirmation bias within the community. Therefore, for the sake of scientific progress, we end up accepting the absence of evidence as evidence of absence. This is also at the heart of two very famous analogies, namely Russell's teapot and Carl Sagan's Dragon in the garage . My question is: how can we formally justify, based on Bayesian probability theory, that absence of evidence can legitimately be used as evidence of absence? Under which conditions is that true? (the answer is expected to depend on the specific details of the problem such as the model we assume, the information gain provided by our observations given the model, or the prior probabilities of the competing hypotheses involved). (*) the origin of the aphorism seems to be much older, see e.g. this .
There's a difference between not looking and therefore not seeing any X, and looking and not seeing any X. The latter is 'evidence', the former is not. So the hypothesis under test is "There is a unicorn in that field behind the hill." Alice stays where she is and doesn't look. If there is a unicorn in the field, Alice sees no unicorns. If there is no unicorn in the field, Alice sees no unicorns. P(sees no unicorn | is unicorn) = P(sees no unicorn | no unicorn) = 1. When the hypothesis makes no difference to the observation, the 'evidence' contributed by the observation to belief in the hypothesis is zero. Bob climbs to the top of the hill and looks down on the field, and sees no unicorn. If there is a unicorn in the field, Bob would see it. If there is no unicorn in the field, Bob would see no unicorn. P(sees no unicorn | is unicorn) $\neq$ P(sees no unicorn | no unicorn). When the hypothesis being true or false changes the probability of the observation, evidence is contributed. Looking and seeing no unicorns in the field is positive evidence that there are no unicorns in the field. We can quantify evidence using Bayesian probability. $$P(H_1|O)={P(O|H_1)P(H_1)\over P(O)}$$ $$P(H_2|O)={P(O|H_2)P(H_2)\over P(O)}$$ where $H_1$ is "there is no unicorn in that field". $H_2$ is "there is a unicorn in that field", and $O$ is "I see no unicorn". Divide one by the other: $${P(H_1|O)\over P(H_2|O)}={P(O|H_1)\over P(O|H_2)}{P(H_1)\over P(H_2)}$$ Take logarithms to make the multiplication additive: $$\mathrm{log}{P(H_1|O)\over P(H_2|O)}=\mathrm{log}{P(O|H_1)\over P(O|H_2)}+\mathrm{log}{P(H_1)\over P(H_2)}$$ We interpret this as saying that the Bayesian belief in favour of $H_1$ over $H_2$ after the observation is equal to the evidence in favour of $H_1$ over $H_2$ arising from the observation plus the Bayesian belief in favour of $H_1$ over $H_2$ before the observation. The additive evidence arising from the experiment is quantified as: $$\mathrm{log}{P(O|H_1)\over P(O|H_2)}$$ Alice, by not looking, has no evidence. $\mathrm{log}(1/1)=0$ . Bob, by looking and not seeing, does. $\mathrm{log}(1/0)=\infty$ , meaning absolute certainty. (Of course, if there is a 10% possibility that there is an invisible unicorn in the field, Bob's evidence is $\mathrm{log}(1/0.1)=1$ , if we use base 10 logs. This expresses information using a unit called the hartley .) Rees' dictum is based on people claiming things like that there are no unicorns in the universe based on having looked at only a tiny portion of it and having seen none. Strictly speaking, there is non-zero evidence arising from this, but it's near zero, being related to the log of the volume of space and time searched divided by the volume of the universe. Regarding the issue of null hypothesis experiments, the issue here is that often we are not able to quantify the probability of the observation given an open alternative hypothesis. What is the probability of seeing the reaction if our current understanding is wrong and some unknown physical theory is true? So we set $H_2$ to be a null hypothesis we intend to falsify, such that the probability of the observation given the null is very low. And we presume $H_1$ is restricted to unknown alternative theories in which the observation is reasonably probable. 
$$\mathrm{log}{P(O|H_{alt})\over P(O|H_{null})}=\mathrm{log}{P(O|H_{alt})\over 0.05}=\mathrm{log}(20\times P(O|H_{alt}))\approx \mathrm{log}20$$ since $P(O|H_{null})=0.05$ and the observation is assumed to be reasonably probable under the alternatives, so $P(O|H_{alt})\approx 1$ . It requires some judicious assumptions about the existence of plausible alternatives, but from a Bayesian point of view it doesn't look any different to any other sort of evidence.
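A small numerical sketch of this evidence bookkeeping (my own illustration; the 10% chance of an invisible unicorn is the hypothetical figure from the answer, and base-10 logs give the answer's hartleys).

# Additive evidence from an observation, in hartleys (base-10 logs)
evidence <- function(p_obs_given_H1, p_obs_given_H2) log10(p_obs_given_H1 / p_obs_given_H2)

# Alice doesn't look: the observation is equally probable either way
evidence(1, 1)       # 0 hartleys: no evidence

# Bob looks and sees no unicorn, allowing a 10% chance it is invisible:
# P(no unicorn seen | no unicorn) = 1, P(no unicorn seen | unicorn) = 0.1
evidence(1, 0.1)     # 1 hartley in favour of "no unicorn"

# Posterior log-odds = evidence + prior log-odds (e.g. a 50/50 prior)
evidence(1, 0.1) + log10(1)   # posterior odds of 10:1 for "no unicorn"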
{ "source": [ "https://stats.stackexchange.com/questions/512678", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/257740/" ] }
514,095
Can ordinary least squares regression be solved with Newton's method? If so, how many steps would be required to achieve convergence? I know that Newton's method works on twice differentiable functions, I'm just not sure how this works with OLS.
If used for OLS regression, Newton's method converges in a single step, and is equivalent to using the standard, closed form solution for the coefficients. On each iteration, Newton's method constructs a quadratic approximation of the loss function around the current parameters, based on the gradient and Hessian. The parameters are then updated by minimizing this approximation. For quadratic loss functions (as we have with OLS regression) the approximation is equivalent to the loss function itself, so convergence occurs in a single step. This assumes we're using the 'vanilla' version of Newton's method. Some variants use a restricted step size, in which case multiple steps would be needed. It also assumes the design matrix has full rank. If this doesn't hold, the Hessian is non-invertible so Newton's method can't be used without modifying the problem and/or update rule (also, there's no unique OLS solution in this case). Proof Assume the design matrix $X \in \mathbb{R}^{n \times d}$ has full rank. Let $y \in \mathbb{R}^n$ be the responses, and $w \in \mathbb{R}^d$ be the coefficients. The loss function is: $$L(w) = \frac{1}{2} \|y - X w\|_2^2$$ The gradient and Hessian are: $$\nabla L(w) = X^T X w - X^T y \quad \quad H_L(w) = X^T X$$ Newton's method sets the parameters to an initial guess $w_0$ , then iteratively updates them. Let $w_t$ be the current parameters on iteration $t$ . The updated parameters $w_{t+1}$ are obtained by subtracting the product of the inverse Hessian and the gradient: $$w_{t+1} = w_t - H_L(w_t)^{-1} \nabla L(w_t)$$ Plug in the expressions for the gradient and Hessian: $$w_{t+1} = w_t - (X^T X)^{-1} (X^T X w_t - X^T y)$$ $$= (X^T X)^{-1} X^T y$$ This is the standard, closed form expression for the OLS coefficients. Therefore, no matter what we choose for the initial guess $w_0$ , we'll have the correct solution at $w_1$ after a single iteration. Furthermore, this is a stationary point. Notice that the expression for $w_{t+1}$ doesn't depend on $w_t$ , so the solution won't change if we continue beyond one iteration. This indicates that Newton's method converges in a single step.
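A short R check of the claim (my own addition, not part of the original answer): starting Newton's method from an arbitrary point on simulated data, a single update already matches the closed-form / lm() solution.

set.seed(3)
n <- 200; d <- 3
X <- cbind(1, matrix(rnorm(n * (d - 1)), n))     # full-rank design with intercept column
y <- as.vector(X %*% c(2, -1, 0.5) + rnorm(n))

grad <- function(w) as.vector(t(X) %*% X %*% w - t(X) %*% y)  # gradient of 0.5*||y - Xw||^2
hess <- t(X) %*% X                                            # Hessian, constant in w

w0 <- rnorm(d)                                   # arbitrary starting point
w1 <- w0 - solve(hess, grad(w0))                 # one Newton step

cbind(newton = w1, lm = coef(lm(y ~ X - 1)))     # identical up to numerical rounding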
{ "source": [ "https://stats.stackexchange.com/questions/514095", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/314509/" ] }
514,106
Assume I have a high school statistics class under my belt: One common misunderstanding is that 95% efficacy means that in the Pfizer clinical trial, 5% of vaccinated people got COVID. But that's not true; the actual percentage of vaccinated people in the Pfizer (and Moderna) trials who got COVID-19 was about a hundred times less than that: 0.04% . What the 95% actually means is that vaccinated people had a 95% lower risk of getting COVID-19 compared with the control group participants, who weren't vaccinated. In other words, vaccinated people in the Pfizer clinical trial were 20 times less likely than the control group to get COVID-19. -Reference: https://www.livescience.com/covid-19-vaccine-efficacy-explained.html I would like to understand with a "concrete" example how these numbers are calculated, so I can understand the contours, limitations and assumptions of what it means to be 95% effective (mRNA C19 vaccines) vs ~70% (JNJ). A concrete example or a pointer to such an example written at a college freshman level is appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/514106", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/126770/" ] }
514,557
I have just heard about the Wine/Water Paradox in Bayesian statistics, but didn't understand it very well (see Mikkelson 2004 for an introduction). Can you explain in simple terms what the paradox is (and why is it a paradox), why it matters for Bayesian statistics, and its resolution?
What the paradox is There is a mixture of wine and water. Let $x$ be the amount of wine divided by the amount of water. Suppose we know that $x$ is between $1/3$ and $3$ but nothing else about $x$ . We want the probability that $x \le 2$ . Without a sample space or probability model, we have no way to calculate probabilities. So we have to decide how to model the problem. The Principle of Indifference states that if we have no reason to favour one outcome over another, then we should assign them the same probability. This means that we should say that every possible value of $x$ is equally likely. Therefore, the probability that $x \le 2$ is $(2 - 1/3)/(3 -1/3) = 5/8$ . (If you are not comfortable with continuous probability, we could do another version in which $x$ can only take on the values $1/3, 2/3, 1, 4/3, 5/3, 2, 7/3, 8/3, 3$ . Then the probability would be $6/9$ . This version will lead to the same paradox.) That's fine, the answer is $5/8$ . But now, what would happen if we decided to use the same model, but for the ratio of water divided by wine? Call this $y$ . Then $y = 1/x$ . Now, if we assume that all values of $y$ are equally likely, we want the probability that $y \ge 1/2$ . But this is $(3 - 1/2)/(3-1/3) = 15/16$ (or $8/9$ in the discrete version). The paradox is that these two values are not equal. So, how should we assign the probability that $x \le 2$ ? Should it be $5/8$ or $15/16$ ? It depends on our model. But why would we favour one model over the other? The Principle of Indifference tells us to choose either model, but they give different answers depending on which liquid is called "water" and which liquid is called "wine". Why it matters for Bayesian statistics In Bayesian statistics, every calculation is based on choosing a prior distribution for the parameters of interest. For example, if we wanted to make some inference about the wine/water problem, we would have to decide on a prior distribution for the ratio of wine and water. Often we want to choose the prior distribution which implies "no prior knowledge", which is usually a uniform or flat prior, which assumes all values are equally likely. But we have just seen that when we look at things in a different way, "all values of $x$ are equally likely" becomes "all values of $1/x$ are very much not equally likely", so it seems that there is no way to assign a prior distribution of "no information about the value of $x$ ". This is rather alarming, since all our calculations will depend on assumptions which we didn't intend to make. Resolution of the paradox The paradox has been touted (for over a century) as a refutation of the Principle of Indifference. Statisticians are happy to say that the Principle isn't valid, and this may be true, but if we can't use the Principle of Indifference, then we can't actually take random samples from anything at all, because even in a computer, sampling is ultimately based on counting the number of possible outcomes among equally likely outcomes. So what is wrong with the paradox? The key here is that we do have some prior knowledge about the ratio $x$ of wine to water. Namely, that it is the ratio of wine to water . In other words, if $z$ is the proportion of wine in the mixture, then $x = z/(1-z)$ . So saying that all values of $x$ are equally likely is the same as saying that all values of $z/(1-z)$ are equally likely, which seems like an odd thing to assume.
If instead, we assume that all values of $z$ are equally likely, then we get the answer $5/6$ , and the paradox vanishes. This is what Mikkelson is getting at in his paper. Assuming that all values of $z$ are equally likely is a bit like saying "every molecule in the mixture is equally likely to be wine or water, and we are indifferent as to which it is" which seems like a reasonable assumption for this particular situation. Alternatively, we could view the situation as putting a prior on $x$ proportional to $1/(1+x)^2$ . This is called the Jeffreys Prior. Jeffreys was a physicist who had the idea that priors ought to be chosen in such a way as to be invariant to reparametrisations like this. So he would have said that, if we know the quantity $x$ is a ratio, it's natural to choose this prior instead of any other one. I am not claiming that I have a resolution of the paradox, or that it's not important. We should definitely be careful about what priors we use and which assumptions we are implicity making. I'm just saying that choosing a prior is more or less the same as choosing a statistical model for something, and we should be careful about choosing these too. It's a bit unfair to Bayesians to say: "Your choice of prior inevitably leads to a contradiction, but I can choose to model some quantity with a normal distribution or whatever, and it's fine because I can't be bothered to think about these issues." Notes Information Geometry It would be nice if statistics could be made "coordinate-free" so that it doesn't depend on parametrisations. I believe the subject that attempts to do this is called Information Geometry, and it hasn't been found to be of much practical value so far, but you never know. The Gibbs Paradox The Principle of Indifference is fundamental to statistical mechanics, which is the branch of physics which describes the behaviour of gases and things. In statistical mechanics, we assume that each possible configuration of particles is equally likely; this is a fundamental assumption which underpins all calculations. This is relevant to the above for two reasons. In the wine/water problem, statistical mechanics would say that the answer is $5/6$ . A physicist would find it very weird to say something like "Let's assume that every possible ratio of hydrogen to oxygen in this container is equally likely." The second reason is that a paradox involving the Principle of Indifference actually happened in statistical mechanics. It had to be resolved by assuming that particles are indistinguishable, otherwise the theory fails to agree with practical experiments. I am not sure of the details, but you can read up on it under the search term "Gibbs Paradox". The indistinguishability assumption was not theoretically justified until quantum mechanics was developed.
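For concreteness, the three candidate answers discussed above can be computed side by side in R (my own illustration, not part of the original answer), with $x$ = wine/water, $y = 1/x$ , and $z = x/(1+x)$ the proportion of wine.

# P(x <= 2) under three different uniform ("indifferent") priors

# Uniform on x = wine/water over [1/3, 3]
punif(2, 1/3, 3)          # 0.625    = 5/8

# Uniform on y = water/wine over [1/3, 3]; the event x <= 2 is the event y >= 1/2
1 - punif(1/2, 1/3, 3)    # 0.9375   = 15/16

# Uniform on z = proportion of wine; x in [1/3, 3] means z in [1/4, 3/4],
# and the event x <= 2 is the event z <= 2/3
punif(2/3, 1/4, 3/4)      # 0.8333...= 5/6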
{ "source": [ "https://stats.stackexchange.com/questions/514557", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
514,627
There seems to be some disagreement among answers on the internet for the question: A bag contains 1 fair and 1 double-sided (heads) coin. We choose a coin at random and flip it once. What is the probability of the coin being the double-sided one, given the result is heads? I decided to explore the problem using Python, and came up with the code below. Does the code correctly represent the situation, and is the experimental answer of "roughly two thirds" which it produces correct? import random num_trials = 10000 results = { "fair, heads": 0, "two-headed, heads": 0 } for i in range(num_trials): this_coin = random.choice(["fair", "two-headed"]) if this_coin == "fair": if random.choice(["heads", "tails"]) == "heads": results["fair, heads"] += 1 else: results["two-headed, heads"] += 1 fair_heads = results["fair, heads"] double_sided_heads = results["two-headed, heads"] print("fair, heads: " , fair_heads ) print("two-headed, heads: ", double_sided_heads) print("experimental probability of two-headed coin given heads:", double_sided_heads, "/", double_sided_heads + fair_heads ) print("experimental probability of two-headed coin given heads:", double_sided_heads/ (double_sided_heads + fair_heads) )
The outcomes are: Fair coin, $H$ Fair coin, $T$ Unfair coin, $H$ Unfair coin, $H$ (the other one) Each of these is equally likely, so each has a probability of $1/4$ , meaning that $P(\text{H}) = \frac{3}{4}$ . We want to know $ P(\text{Unfair} \vert H) $ . This is a job for Bayes' Theorem: $P(B\vert A) = \dfrac{P(A\vert B)P(B)}{P(A)}$ . Our $B$ is the unfair coin, and our $A$ is heads. $$ P(\text{Unfair} \vert H) = \dfrac{P(H\vert\text{Unfair})P(\text{Unfair})}{P(H)} = \dfrac{1\times \frac{1}{2}}{\frac{3}{4}} = \dfrac{2}{3} $$ $\square$
{ "source": [ "https://stats.stackexchange.com/questions/514627", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/309260/" ] }
514,637
I am looking for methods/metrics to compare data matrices, which originate from the same dataset projected on two different feature spaces. For background: these are DNA sequencing data projected on two different gene catalogues. The catalogues likely have a significant degree of overlap, but the exact correspondence between them is unknown, due to the lack of a single established standard in the field. There are about a hundred samples and about a thousand features in each matrix (the number of features is not the same). One approach that I have tried is clustering the samples and visually examining the dendrograms. This could be taken further by using one of the available metrics for comparing dendrograms. I am looking for alternative methods of quantitative comparison.
{ "source": [ "https://stats.stackexchange.com/questions/514637", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/110833/" ] }
515,699
I have a 2x2 table with two independent groups of people that replied Yes or No in the survey: Yes No Group A 350 1250 Group B 1700 3800 Could you help to find a test that can be run on these figures to see if there is a statistical significance between the two groups if it exists?
BruceET provides one way of analyzing this table. There are several tests for 2 by 2 tables which are all asymptotically equivalent, meaning that with enough data all tests are going to give you the same anwer. I present them here with R code for posterity. In my answer, I'm going to transpose the table since I find it easier to have groups as columns and outcomes as rows. The table is then Group A Group B Yes 350 1700 No 1250 3800 I'll reference the elements of this table as Group A Group B Yes $a$ $b$ No $c$ $d$ $N$ will be the sum of all the elements $N = a+b+c+d$ . The Chi Square Test Perhaps the most common test for 2 by 2 tables is the chi square test. Roughly, the null hypothesis of the chi square test is that the proportion of people who answer yes is the same in each group, and in particular it is the same as the proportion of people who answer yes were I to ignore groups completely. The test statistic is $$ X^2_P = \dfrac{(ad-bc)^2N}{n_1n_2m_1m_2} \sim \chi^2_1$$ Here $n_i$ are the column totals and $m_i$ are the row totals. This test statistic is asymptotically distributed as Chi square (hence the name) with one degree of freedom. The math is not important, to be frank. Most software packages, like R, implement this test readily. m = matrix(c(350,1250, 1700, 3800), nrow=2) chisq.test(m, correct = F) Pearson's Chi-squared test data: m X-squared = 49.257, df = 1, p-value = 2.246e-12 The correct=F is so that R implements the test as I have written it and does not apply a continuity correction which is useful for small samples. The p value is very small here so we can conclude that the proportion of people who answer yes in each group is different. Test of Proportions The test of proportions is similar to the chi square test. Let $\pi_i$ be the probability of answering Yes in group $i$ . The test of proportions tests the null that $\pi_1 = \pi_2$ . In short, the test statistic for this test is $$ z = \dfrac{p_1-p_2}{\sqrt{\dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}}} \sim \mathcal{N}(0,1) $$ Again, $n_i$ are the column totals and $p_1 = a/n_1$ and $p_2=b/n_2$ . This test statistic has standard normal asymptotic distribution. If your alternative is that $p_1 \neq p_2$ then you want this test statistic to be larger than 1.96 in absolute value in most cases to reject the null. In R # Note that the n argument is the column sums prop.test(x=c(350, 1700), n=c(1600, 5500), correct = F) data: c(350, 1700) out of c(1600, 5500) X-squared = 49.257, df = 1, p-value = 2.246e-12 alternative hypothesis: two.sided 95 percent confidence interval: -0.11399399 -0.06668783 sample estimates: prop 1 prop 2 0.2187500 0.3090909 Note that the X-squared statistic in the output of this test is identical to the chi-square test. There is a good reason for that which I will not talk about here. Note also that this test provides a confidence interval for the difference in proportions, which is an added benefit over the chi square test. Fisher's Exact Test Fisher's exact test conditions on the quantites $n_1 = a+c$ and $m_1 = a + b$ . The null of this test is that the probability of success in each group is the same, $\pi_1 = \pi_2$ , like the test of proportions. The actual null hypothesis in the derivation of the test is about the odds ratio, but that is not important now. The exact probability of observing the table provided is $$ p = \dfrac{n_1! n_2! m_1! m_2!}{N! a! b! c! 
d!} $$ John Lachin writes Thus, the probability of the observed table can be considered to arise from a collection of $N$ subjects of whom $m_1$ have positive response, with $a$ of these being drawn from the $n_1$ subjects in group 1 and $b$ from among the $n_2$ subjects in group 2 ( $a+b=m_1$ , $n_1 + n_2 = N$ ). Importantly, this is not the p value. It is the probability of observing this table. In order to compute the p value, we need to sum up probabilities of observing tables which are more extreme than this one. Luckily, R does this for us m = matrix(c(350,1250, 1700, 3800), nrow=2) fisher.test(m) Fisher's Exact Test for Count Data data: m p-value = 1.004e-12 alternative hypothesis: true odds ratio is not equal to 1 95 percent confidence interval: 0.5470683 0.7149770 sample estimates: odds ratio 0.6259224 Note the result is about odds ratios and not about probabilities in each group. It is also worth noting, again from Lachin, The Fisher-Irwin exact test has been criticized as being too conservative because other unconditional tests have been shown to yield a smaller p value and thus are more powerful. When the data are large, this point becomes moot because you've likely got enough power to detect small effects, but it all depends on what you're trying to test (as it always does). Thus far, we have examined what are likely to be the most prevalent tests for this sort of data. The following tests are equivalent to the first two, but are perhaps less known. I present them here for completeness. Odds Ratio The odds ratio $\widehat{OR}$ for this table is $ad/bc$ , but because the odds ratio is bound to be strictly positive, it can be more convenient to work with the log odds ratio $\log(\widehat{OR})$ . Asymptotically, the sampling distribution for the log odds ratio is normal. This means we can apply a simple $z$ test. Our test statistic is $$ Z = \dfrac{\log(\widehat{OR}) - \log(OR)}{\sqrt{\hat{V}(\log(\widehat{OR})}} $$ . Here, $\hat{V}(\log(\widehat{OR}))$ is the estimated variance of the log odds ratio and is equal to $1/a + 1/b + 1/c + 1/d$ . In R odds_ratio = m[1, 1]*m[2, 2]/(m[2, 1]*m[1, 2]) vr = sum(1/m) Z = log(odds_ratio)/sqrt(vr) p.val = 2*pnorm(abs(Z), lower.tail = F) which returns a Z value of -6.978754 and a p value less than 0.01. Cochran's test The test statistic is $$ X^2_u = \dfrac{\dfrac{n_2a-n_1b}{N}}{\dfrac{n_1n_2m_1m_2}{N^3}} \sim \chi^2_1 $$ In R m = matrix(c(350,1250, 1700, 3800), nrow=2) a = 350 b = 1700 c = 1250 d = 3800 N = a+b+c+d n1 = a+c n2 = b+d m1 =a+b m2 =c+d X = ((n2*a-n1*b)/N)^2 /((n1*n2*m1*m2)/N^3) # Look familiar? X >>>49.25663 p.val = pchisq(X,1, lower.tail=F) p.val >>>[1] 2.245731e-12 Conditional Mantel-Haenszel (CMH) Test The CMH Test (I think I've seen this called the Cochran Mantel-Haenszel Test elsewhere) is a test which conditions on the first column total and first row total. The test statistic is $$ X^2_c = \dfrac{\left( a - \dfrac{n_1m_1}{N} \right)^2}{\dfrac{n_1n_2m_1m_2}{N^2(N-1)}} \sim \chi^2_1$$ In R a = 350 b = 1700 c = 1250 d = 3800 N = a+b+c+d n1 = a+c n2 = b+d m1 =a+b m2 =c+d top =( a - n1*m1/N)^2 bottom = (n1*n2*m1*m2)/(N^2*(N-1)) X = top/bottom X >>>49.24969 p.val = pchisq(X, 1, lower.tail = F) p.val >>> [1] 2.253687e-12 Likelihood Ratio Test (LRT) (My Personal Favourite) The LRT compares the difference in log likelihood between a model which freely estimates the group proportions and a model which only estimates a single proportion (not unlike the chi-square test). 
This test is a bit overkill in my opinion as other tests are simpler, but hey why not include it? I like it personally because the test statistic is oddly satisfying and easy to remember The math, as before, is irrelevant for our purposes. The test statistic is $$ X^2_G = 2 \log \left( \dfrac{a^a b^b c^c d^d N^N}{n_1^{n_1} n_2^{n_2} m_1^{m_1} m_2^{m_2}} \right) \sim \chi^2_1 $$ In R with some applied algebra to prevent overflow a = 350 b = 1700 c = 1250 d = 3800 N = a+b+c+d n1 = a+c n2 = b+d m1 =a+b m2 =c+d top = c(a,b,c,d,N) bottom = c(n1, n2, m1, m2) X = 2*log(exp(sum(top*log(top)) - sum(bottom*log(bottom)))) # Very close to other tests X >>>[1] 51.26845 p.val = pchisq(X, 1, lower.tail=F) p.val >>>1] 8.05601e-13 Note that there is a discrepancy in the test statistic for the LRT and the other tests. It has been noted that this test statistic converges to teh asymptotic chi square distribution at a slower rate than the chi square test statistic or the Cochran's test statistic. What Test Do I Use My suggestion: Test of proportions. It is equivalent to the chi-square test and has the added benefit of being a) directly interpretable in terms of risk difference, and b) provides a confidence interval for this difference (something you should always be reporting). I've not included theoretical motivations for these tests, though understanding those are not essential but captivating in my own opinion. If you're wondering where I got all this information, the book "Biostatsitical Methods - The Assessment of Relative Risks" by John Lachin takes a painstakingly long time to explain all this to you in chapter 2.
{ "source": [ "https://stats.stackexchange.com/questions/515699", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/315640/" ] }
517,781
I am aware of the basic differences between nonparametric and parametric statistics. In parametric models, we assume the data follows a distribution and fit it onto it using a fixed number of parameters. With KDE for instance, this is not the case because we don't assume that the modeled distribution has a particular shape. I am wondering how this relates to interpolation in general, and to spline interpolation in specific. Are all interpolation approaches considered to be nonparametric, are there "mixed" approaches, what is the case with spline interpolation?
This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models. These models are nonparametric in the sense that using them does not involve reported quantities like $\widehat{\beta}$ , $\widehat{\theta}$ , etc. (in contrast to linear regression, GLM, etc.). Smoothing models are extremely flexible ways to represent properties of $y$ conditional on one or more $x$ variables, and do not make a priori commitments to, for example, linearity, simple integer polynomial, or similar functional forms relating $y$ to $x$ . On the other hand, these models are parametric, in the mathematical sense that they indeed involve parameters: number of splines, functional form of splines, arrangement of splines, weighting function for data fed to splines, etc. In application, however, these parameters are generally not of substantive interest: they are not the exciting bit of evidence reported by researchers… the smoothed curves (along with CIs and measures of model fit based on deviation of observed values from the curves) are the evidentiary bits. One motivation for this agnosticism about the actual parameters underlying a smoothing model is that different smoothing algorithms tend to give pretty similar results (see Buja, A., Hastie, T., & Tibshirani, R. (1989). Linear Smoothers and Additive Models . The Annals of Statistics , 17(2), 453–510 for a good comparison of several). If I understand you, your "mixed" approaches are what are called "semi-parametric models". Cox regression is one highly-specialized example of such: the baseline hazard function relies on a nonparametric estimator, while the explanatory variables are estimated in a parametric fashion. GAMs—generalized additive models—permit us to decide which $x$ variables' effects on $y$ we will model using smoothers, which we will model using parametric specifications, and which we will model using both all in a single regression.
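As a small illustration of this "parameters exist, but the curve is what gets reported" point (my own sketch, not from the answer, using simulated data and the mgcv package): a penalized regression spline involves a basis of coefficients and a smoothing parameter, yet what one typically reports is the fitted smooth and its uncertainty, alongside any parametric terms in a semi-parametric specification.

library(mgcv)                      # recommended package shipped with R

set.seed(10)
n <- 300
x <- runif(n)
z <- rbinom(n, 1, 0.5)
y <- sin(2 * pi * x) + 0.5 * z + rnorm(n, sd = 0.3)

# Semi-parametric GAM: a smooth (nonparametric-style) term for x,
# a conventional parametric coefficient for z
fit <- gam(y ~ s(x) + z)

length(coef(fit))                  # the smooth alone contributes about 9 basis coefficients
summary(fit)                       # reports edf and significance of s(x), plus the z coefficient
plot(fit, shade = TRUE)            # the fitted curve is the quantity of substantive interest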
{ "source": [ "https://stats.stackexchange.com/questions/517781", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/310553/" ] }
518,283
I've just finished a module where we covered the different approaches to statistical problems – mainly Bayesian vs frequentist. The lecturer also announced that she is a frequentist. We covered some paradoxes and generally the quirks of each approach (long run frequencies, prior specification, etc). This has got me thinking – how seriously do I need to consider this? If I want to be a statistician, do I need to align myself with one philosophy? Before I approach a problem, do I need to specifically mention which school of thought I will be applying? And crucially, do I need to be careful that I don't mix frequentist and Bayesian approaches and cause contradictions/paradoxes?
I think that the main takeaway here is this: the mere fact that there are these different philosophies of statistics and disagreement over them implies that translating the "hard numbers" that one gets from applying statistical formulae into "real world" decisions is a non-trivial problem and is fraught with interpretive peril. Frequently, people use statistics to influence their decision-making in the real world. For example, scientists aren't running randomized trials on COVID vaccines right now for funsies: it is because they want to make real world decisions about whether or not to administer a particular vaccine candidate to the populace. Although it may be a logistical challenge to gather up 1000 test subjects and observe them over the course of the vaccine, the math behind all of this is well-defined whether you are a Frequentist or a Bayesian: You take the data you gathered, cram it through the formulae and numbers pop out the other end. However, those numbers can sometimes be difficult to interpret: Their relationship to the real world depends on many non-mathematical things – and this is where the philosophy bit comes in. The real world interpretation depends on how we went about gathering those test subjects. It depends on how likely we anticipated this vaccine to be effective a priori (did we pull a molecule out of a hat, or did we start with a known-effective vaccine-production method?). It depends on (perhaps unintuitively) how many other vaccine candidates we happen to be testing. It depends on etc., etc., etc. Bayesians have attempted to introduce additional mathematical frameworks to help alleviate some of these interpretation problems. I think the fact that the Frequentist methods continue to proliferate shows that these additional frameworks have not been super successful in helping people translate their statistical computations into real world actions (although, to be sure, Bayesian techniques have led to many other advances in the field, not directly related to this specific problem). To answer your specific questions: you don't need to align yourself with one philosophy. It may help to be specific about your approach, but it will generally be totally obvious that you are doing a Bayesian analysis the moment you start talking about priors. Lastly, though, you should consider all of this very seriously, because as a statistician it will be your ethical duty to ensure that the numbers that you provide people are used responsibly – because correctly interpreting those numbers is a hard problem. Whether you interpret your numbers through the lens of Frequentist or Bayesian philosophy isn't a huge deal, but interpretation of your numbers requires familiarity with the relevant philosophy.
{ "source": [ "https://stats.stackexchange.com/questions/518283", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/317657/" ] }
518,294
Is a support vector machine with a linear kernel the same as a soft margin classifier?
{ "source": [ "https://stats.stackexchange.com/questions/518294", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/314930/" ] }
518,344
I am trying to find out whether there is a significant difference in the mean value of a biomarker between two groups. I am using t.test in R. The mean(SD) values are 1142(1079) and 864(922) in the groups. But the p-values of the test shows the difference is not statistically significant. Can someone please help me? I am sharing the dput of the dataframe below. structure(list(ANGPTL7 = c(2.5, 205, 885, 1915, 835, 1685, 625, 1615, 84.9999999999999, 1175, 2695, 235, 1025, 2.5, 2915, 825, 255, 1085, 1815, 2.5, 205, 985, 2.5, 705, 435, 555, 2045, 135, 15, 975, 2285, 1905, 515, 74.9999999999999, 25, 815, 1075, 2.5, 1115, 3115, 64.9999999999999, 64.9999999999999, 325, 595, 285, 2.5, 2.5, 345, 5.00000000000001, 215, 3465, 555, 855, 3745, 25, 305, 2.5, 2.5, 15, 115, 565, 94.9999999999999, 1005, 575, 405, 2.5, 1855, 1795, 145, 2555, 1705, 74.9999999999999, 735, 375, 2.5, 475, 1675, 1105, 345, 385, 3195, 115, 1475, 205, 545, 1265, 485, 1135, 2595, 3305, 305, 575, 1415, 2925, 3125, 2795, 3125, 1775, 1125, 15, 1695, 1225, 1625, 3175, 3185, 1445, 3065, 785, 855, 1115, 145, 595, 435, 185, 345, 2455, 1885), OSA_status = c("Non-OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "Non-OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "Non-OSA", "Non-OSA", "OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "Non-OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "OSA", "Non-OSA", "OSA", "OSA")), row.names = c(NA, -117L), class = c("tbl_df", "tbl", "data.frame")) EDIT This data example is great for R users but a small or large pain for everyone else. This format may or may not be easier, depending. 
id ANGPTL7 OSA_status 1 2.5 Non-OSA 2 205.0 OSA 3 885.0 Non-OSA 4 1915.0 OSA 5 835.0 Non-OSA 6 1685.0 OSA 7 625.0 Non-OSA 8 1615.0 OSA 9 85.0 Non-OSA 10 1175.0 Non-OSA 11 2695.0 Non-OSA 12 235.0 OSA 13 1025.0 OSA 14 2.5 OSA 15 2915.0 OSA 16 825.0 OSA 17 255.0 Non-OSA 18 1085.0 Non-OSA 19 1815.0 OSA 20 2.5 OSA 21 205.0 Non-OSA 22 985.0 Non-OSA 23 2.5 OSA 24 705.0 Non-OSA 25 435.0 OSA 26 555.0 OSA 27 2045.0 OSA 28 135.0 OSA 29 15.0 Non-OSA 30 975.0 Non-OSA 31 2285.0 Non-OSA 32 1905.0 OSA 33 515.0 OSA 34 75.0 OSA 35 25.0 OSA 36 815.0 Non-OSA 37 1075.0 OSA 38 2.5 OSA 39 1115.0 OSA 40 3115.0 Non-OSA 41 65.0 Non-OSA 42 65.0 Non-OSA 43 325.0 Non-OSA 44 595.0 Non-OSA 45 285.0 OSA 46 2.5 Non-OSA 47 2.5 Non-OSA 48 345.0 Non-OSA 49 5.0 OSA 50 215.0 Non-OSA 51 3465.0 OSA 52 555.0 OSA 53 855.0 OSA 54 3745.0 OSA 55 25.0 Non-OSA 56 305.0 Non-OSA 57 2.5 OSA 58 2.5 OSA 59 15.0 Non-OSA 60 115.0 OSA 61 565.0 OSA 62 95.0 Non-OSA 63 1005.0 Non-OSA 64 575.0 Non-OSA 65 405.0 Non-OSA 66 2.5 Non-OSA 67 1855.0 OSA 68 1795.0 OSA 69 145.0 OSA 70 2555.0 OSA 71 1705.0 OSA 72 75.0 OSA 73 735.0 OSA 74 375.0 OSA 75 2.5 OSA 76 475.0 OSA 77 1675.0 OSA 78 1105.0 Non-OSA 79 345.0 OSA 80 385.0 Non-OSA 81 3195.0 OSA 82 115.0 Non-OSA 83 1475.0 Non-OSA 84 205.0 OSA 85 545.0 Non-OSA 86 1265.0 OSA 87 485.0 Non-OSA 88 1135.0 OSA 89 2595.0 OSA 90 3305.0 OSA 91 305.0 OSA 92 575.0 Non-OSA 93 1415.0 Non-OSA 94 2925.0 Non-OSA 95 3125.0 Non-OSA 96 2795.0 Non-OSA 97 3125.0 OSA 98 1775.0 Non-OSA 99 1125.0 Non-OSA 100 15.0 Non-OSA 101 1695.0 OSA 102 1225.0 Non-OSA 103 1625.0 OSA 104 3175.0 OSA 105 3185.0 OSA 106 1445.0 OSA 107 3065.0 Non-OSA 108 785.0 Non-OSA 109 855.0 OSA 110 1115.0 OSA 111 145.0 OSA 112 595.0 OSA 113 435.0 OSA 114 185.0 OSA 115 345.0 Non-OSA 116 2455.0 OSA 117 1885.0 OSA
I agree with @pikachu that the standard deviations are too large compared with the difference between means for a t test to find a significant difference. Thank you for posting your data. It is always a good idea to take a look at some graphic displays of the data before doing formal tests. Stripcharts of observations in the two groups do not show a meaningful difference in locations relative to the variability of the samples.
stripchart(ANGPTL7 ~ OSA_status, pch="|", ylim=c(.5,2.5))
Here are boxplots of the two groups. The 'notches' in the sides of the boxes are nonparametric confidence intervals, calibrated so that overlapping notches tend to indicate no significant difference in location.
boxplot(ANGPTL7 ~ OSA_status, notch=T, col="skyblue2", horizontal=T)
Even with sample sizes as large as these, I would be reluctant to do a two-sample t test on account of the marked skewness of the data. I would do a nonparametric two-sample Wilcoxon rank sum test (which also shows no significant difference).
wilcox.test(ANGPTL7 ~ OSA_status)
        Wilcoxon rank sum test with continuity correction
data:  ANGPTL7 by OSA_status
W = 1456.5, p-value = 0.2139
alternative hypothesis: true location shift is not equal to 0
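For completeness, a minimal R sketch of the same comparisons in a form that runs without attaching variables, assuming the dput output in the question has been assigned to a data frame named dat (a name not used above):
t.test(ANGPTL7 ~ OSA_status, data = dat)        # Welch two-sample t test
wilcox.test(ANGPTL7 ~ OSA_status, data = dat)   # Wilcoxon rank sum test
stripchart(ANGPTL7 ~ OSA_status, data = dat, pch = "|", ylim = c(0.5, 2.5))
boxplot(ANGPTL7 ~ OSA_status, data = dat, notch = TRUE, col = "skyblue2", horizontal = TRUE)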
{ "source": [ "https://stats.stackexchange.com/questions/518344", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20213/" ] }
518,378
I'm studying the length of stay for patients in a hospital. I have a sample of n = 4533 observations. Each of these observations is assigned to an admin group numbered between 1 and 8, based on the reason the patient was admitted to hospital. Admin group 2 has the characteristics: n = 193, x̄ = 37.2020725 (days), s.d. = 35.6247163 (days). This is the highest mean of the 8 admin groups. I want to test whether the difference between the other groups' means and this mean is significant. If I combine the other 7 admin groups, I get the characteristics: n = 4340, x̄ = 25.5078341, s.d. = 31.1011062. I tried to run a t-test to compare these 2 sets of data, but I ended up getting really small values for the standard error and degrees of freedom (less than 1). I'm assuming the t-test is inappropriate for this data, perhaps because one sample is much larger than the other. Can anyone think of a suitable test to help with what I'm trying to investigate here? Alternatively, should I change my angle and try an ANOVA (if that is appropriate) to study whether admin group 2's mean is significantly different from all the other groups' respective means? Hope I made my question clear.
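As a side note, here is a small R sketch of the Welch two-sample t statistic computed directly from the summary statistics quoted above; done this way, the degrees of freedom should come out in the hundreds rather than below 1, so the earlier calculation may simply have gone wrong somewhere. The variable names are mine.
n1 <- 193;  m1 <- 37.2020725; s1 <- 35.6247163   # admin group 2
n2 <- 4340; m2 <- 25.5078341; s2 <- 31.1011062   # the other 7 groups combined
se <- sqrt(s1^2/n1 + s2^2/n2)                    # standard error of the difference in means
t_stat <- (m1 - m2) / se
dof <- (s1^2/n1 + s2^2/n2)^2 /
       ((s1^2/n1)^2/(n1 - 1) + (s2^2/n2)^2/(n2 - 1))   # Welch-Satterthwaite approximation
2 * pt(-abs(t_stat), dof)                        # two-sided p-value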
{ "source": [ "https://stats.stackexchange.com/questions/518378", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/317745/" ] }
520,286
According to the Page 87 in Kruschke: Doing Bayesian Data Analysis, the author says that the mean of a distribution is a value that minimizes the variance of a probability distribution, for example, a normal distribution. The following is what is mentioned in the page: It turns out that the value of $ M $ that minimizes $ ∫dx p(x)(x−M)^2$ is $E[X] $ . In other words, the mean of the distribution is the value that minimizes the expected squared deviation. In this way, the mean is a central tendency of the distribution. I have read the paragraph and kind of understood what the author is trying to say but I wonder how this can be written mathematically using the above equation or any other ways.
Budding statisticians in Statistics 101 with no mathematical skills beyond high-school algebra should consider
\begin{align}
E\left[(X-a)^2\right] &= E\bigl[\big(X-\mu + \mu -a\big)^2\bigr] & {\scriptstyle{\text{Here,}~\mu ~ \text{denotes the mean of} ~ X}}\\
&= E\bigl[\big((X-\mu) + (\mu -a)\big)^2\bigr]\\
&= E\bigl[(X-\mu)^2 + (\mu -a)^2 &{\scriptstyle{(\alpha+\beta)^2 = \alpha^2+\beta^2 + 2\alpha\beta}}\\
& ~~~~~~~~~~~~~~+ 2(\mu-a)(X-\mu)\bigr]\\
&= E\big[(X-\mu)^2\big] + E\big[(\mu -a)^2\big] &{\scriptstyle{\text{Expectation of sum is the sum of}}}\\
& ~~~~~~~~~+ 2(\mu-a)E\big[X-\mu\big] &{\scriptstyle{\text{the expectations, and constants}}}\\
& &{\scriptstyle{\text{can be pulled out of expectations}}}\\
&= \operatorname{var}(X) + (\mu -a)^2 + 2(\mu-a)\times 0 &{\scriptstyle{\text{definition of variance; expectation}}}\\
& &{\scriptstyle{\text{of a constant equals the constant;}}}\\
& &{\scriptstyle{E[X-\mu] = E[X] -E[\mu] = \mu -\mu = 0}}\\
&= \operatorname{var}(X) + (\mu -a)^2\\
&\geq \operatorname{var}(X) &{\scriptstyle{\text{with equality holding when}~ a=\mu.}}
\end{align}
Thus, we have shown that $E\left[(X-a)^2\right] \geq \operatorname{var}(X)$, with equality holding when $a = \mu$. Look, Ma! No calculus! No derivatives or second derivative tests! Not even geometry and invocations of Pythagoras; just high-school algebra (even pre-algebra might have sufficed), together with just a smidgen of Stats 101.
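Not part of the algebraic argument, but a quick numerical sanity check in R that the expected squared deviation is smallest at the mean; the simulated distribution below is an arbitrary choice of mine.
set.seed(1)
x <- rexp(1e5, rate = 0.5)                           # any distribution will do; this one has mean 2
a_grid <- seq(0, 4, by = 0.01)
msd <- sapply(a_grid, function(a) mean((x - a)^2))   # empirical E[(X - a)^2] for each candidate a
a_grid[which.min(msd)]                               # should land very close to ...
mean(x)                                              # ... the sample mean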
{ "source": [ "https://stats.stackexchange.com/questions/520286", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/239221/" ] }
521,835
I've understood the main concepts behind overfitting and underfitting, even though some reasons as to why they occur might not be as clear to me. But what I am wondering is: isn't overfitting "better" than underfitting? If we compare how well the model does on each dataset, we would get something like: Overfitting: Training: good vs. Test: bad Underfitting: Training: bad vs. Test: bad If we have a look at how well each scenario does on the training and test data, it seems that for the overfitting scenario, the model does at least well for the training data. The text in bold is my intuition that, when the model does badly on the training data, it will also do badly on the test data, which seems overall worse to me.
Overfitting is likely to be worse than underfitting. The reason is that there is no real upper limit to the degradation of generalisation performance that can result from over-fitting, whereas there is for underfitting. Consider a non-linear regression model, such as a neural network or polynomial model. Assume we have standardised the response variable. A maximally underfitted solution might completely ignore the training set and have a constant output regardless of the input variables. In this case the expected mean squared error on test data will be approximately the variance of the response variable in the training set. Now consider an over-fitted model that exactly interpolates the training data. To do so, this may require large excursions from the true conditional mean of the data generating process between points in the training set, for example the spurious peak at about x = -5. If the first three training points were closer together on the x-axis, the peak would be likely to be even higher. As a result, the test error for such points can be arbitrarily large, and hence the expected MSE on test data can similarly be arbitrarily large. Source: https://en.wikipedia.org/wiki/Overfitting (it is actually a polynomial model in this case, but see below for an MLP example) Edit: As @Accumulation suggests, here is an example where the extent of overfitting is much greater (10 randomly selected data points from a linear model with Gaussian noise, fitted by a 10th order polynomial fitted to the utmost degree). Happily the random number generator gave some points that were not very well spaced out first time! It is worth making a distinction between "overfitting" and "overparameterisation". Overparameterisation means you have used a model class that is more flexible than necessary to represent the underlying structure of the data, which normally implies a larger number of parameters. "Overfitting" means that you have optimised the parameters of a model in a way that gives a better "fit" to the training sample (i.e. a better value of the training criterion), but to the detriment of generalisation performance. You can have an over-parameterised model that does not overfit the data. Unfortunately the two terms are often used interchangeably, perhaps because in earlier times the only real control of overfitting was achieved by limiting the number of parameters in the model (e.g. feature selection for linear regression models). However regularisation (c.f. ridge regression) decouples overparameterisation from overfitting, but our use of the terminology has not reliably adapted to that change (even though ridge regression is almost as old as I am!). Here is an example that was actually generated using an (overparameterised) MLP
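A rough R sketch of the kind of figure described above: ten noisy points from a linear model, fitted by a straight line and by a polynomial of degree 9 (degree 9 rather than 10, so that the fit interpolates the ten points exactly). The details are my own choices, not the ones used for the original figures.
set.seed(2)
x <- sort(runif(10, -1, 1))
y <- x + rnorm(10, sd = 0.2)                 # data actually come from a straight line plus noise
grid <- data.frame(x = seq(-1, 1, length.out = 400))
fit_line <- lm(y ~ x)
fit_poly <- lm(y ~ poly(x, 9))               # as many coefficients as data points: exact interpolation
plot(grid$x, predict(fit_poly, grid), type = "l", col = "red")   # wild excursions between the points
lines(grid$x, predict(fit_line, grid), col = "blue")             # the simple, well-generalizing fit
points(x, y, pch = 16)                                           # the ten training points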
{ "source": [ "https://stats.stackexchange.com/questions/521835", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/318062/" ] }
525,697
In my master's program I learned that when building a ML model you: train the model on the training set compare the performance of this against the validation set tweak the settings and repeat steps 1-2 when you are satisfied, compare the final model against the test (hold out) set When I started working as a DS I raised a question as to the size of the test and validation sets, because it looked as though someone had labeled them wrong. This caused confusion because apparently everyone else used the "test" set in step 2 and held out the "validation" set for step 4. I assumed I had learned it wrong and no harm was done because I just switched the terms to be consistent. However I was restudying some deep learning books and noticed that according to the creator of Keras, I was right all along! Just before I wrote this question I found this one that suggests the OTHER definition of test/validation sets are correct... Is this something that is agreed upon? Is there a divide among the classical ml method and deep learning practitioners as to what the correct terms are? As far as I can tell nobody has really discussed how some statisticians/data scientists use completely opposite definitions for the two terms.
For machine learning, I've predominantly seen the usage OP describes, but I've also encountered lots of confusion coming from this usage. Historically, I guess what happened (at least in my field, analytical chemistry) is that as models became more complex, at some point people noticed that independent data is needed for verification and validation purposes (in our terminology, almost all testing that is routinely done with models would be considered part of verification which in turn is part of the much wider task of method validation). Enter the validation set and methods such as cross validation (with its original purpose of estimating generalization error). Later, people started to use generalization error estimates from what we call internal verification/validation such as cross validation or a random split to refine/optimize their models. Enter hyperparameter tuning. Again, it was realized that estimating generalization error of the refined model needs independent data. And a new name was needed as well, as the usage of "validation set" for the data used for refining/optimizing had already been established. Enter the test set. Thus we have the situation where a so-called validation set is used for model development/optimization/refining and is therefore not suitable any more for the purpose of model verification and validation. Someone with e.g. an analytical chemistry (or engineering) background will certainly refer to the data they use/acquire for method validation purposes as their validation data* - and that is correct usage of the terms in these fields. *(unless they know the different use of terminology in machine learning, in which case they'd usually explain what exactly they are talking about). Personally, in order to avoid the ongoing confusion that comes from this clash of terminology between fields, I've moved to using "optimization data/set" for the data used for hyperparameter tuning (Andrew Ng's development set is fine with me as well) and "verification data/set" for the final independent test data (the testing we typically do is actually verification rather than validation, so that avoids another common mistake: the testing we typically do is not even close to a full method validation in analytical chemistry, and it's good to be aware of that) Another strategy I find helpful to avoid confusion is moving from splitting into 3 data sets back to splitting into training and verification data, and then describing the hyperparameter tuning as part of the training procedure which happens to include another split into data used to fit the model parameters and data used to optimize the hyperparameters.
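To make the last strategy concrete, here is a rough R sketch of an outer training/verification split, with the hyperparameter search handled by an inner split inside the training data; dat is just a placeholder name for whatever data frame you have.
set.seed(3)
out_split <- sample(c("training", "verification"), nrow(dat), replace = TRUE, prob = c(0.8, 0.2))
train_dat <- dat[out_split == "training", ]
verif_dat <- dat[out_split == "verification", ]   # touched only once, for the final error estimate
in_split <- sample(c("fit", "optimize"), nrow(train_dat), replace = TRUE, prob = c(0.75, 0.25))
fit_dat   <- train_dat[in_split == "fit", ]       # used to fit model parameters
optim_dat <- train_dat[in_split == "optimize", ]  # used to compare hyperparameter settings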
{ "source": [ "https://stats.stackexchange.com/questions/525697", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/250593/" ] }
525,735
I have a longitudinal outcome measured at two time points (2018 and 2020). The outcome is a quality of life score generated from a validated instrument; the score ranges from -0.158 to 1, where a value of 1 indicates a perfect health state, a value of 0 indicates a health state equal to death, and negative values indicate a health state worse than death. I have a sample of 457 longitudinal profiles. The distribution of this outcome is heavily skewed to the left and multimodal, with 55% of the scores lying in [0.8, 1]. I tried a linear mixed model using some covariates such as age, gender, and region; the plot of the residuals looked like this: It was clear that the linear mixed model would not fit well. The log or square root transformations are not applicable since I have negative values, so I tried this transformation: 1. transform to the 0-1 scale by a linear transformation; 2. apply the logit to map to the whole real line. Then the distribution looked like this, and when I fitted a linear mixed model the residuals were like this: Both the raw variable and the transformed one are not suitable for modelling. How should I handle this type of data? Reference for the scoring: A social preference valuations set for EQ-5D health states in Flanders, Belgium, Irina Cleemput.
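For reference, a minimal R sketch of the two-step transformation described above; score stands for the vector of quality-of-life values (a name I made up), and the small offset eps is an assumption needed to keep exact 0s and 1s out of the logit.
lo <- -0.158; hi <- 1                       # score range quoted in the question
eps <- 0.001                                # arbitrary small offset
u <- (score - lo) / (hi - lo)               # step 1: linear rescaling to [0, 1]
u <- pmin(pmax(u, eps), 1 - eps)            # keep strictly inside (0, 1)
z <- qlogis(u)                              # step 2: logit, mapping to the whole real line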
{ "source": [ "https://stats.stackexchange.com/questions/525735", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/320963/" ] }
525,901
It is common knowledge that $$\operatorname{Var}(X) \geq 0$$ for every random variable $X$. Despite this, I do not remember seeing a formal proof of it. Is there a proof of the above inequality? And what if we include the realm of complex numbers: does this open up the possibility of the above inequality being wrong?
Go to your definition of variance: $$ \operatorname{Var}(X) = \int(x-\mu)^2f(x)\,dx $$ The $(x-\mu)^2$ component is non-negative, and the $f(x)$ component is non-negative, so the integrand $(x-\mu)^2f(x)$ is non-negative. When you integrate an integrand that is always on or above the x-axis, the area under that curve will be non-negative. This might be a bit easier to see if the variance is written as a sum (for a discrete variable): $$ \operatorname{Var}(X) = \sum_i p(x_i)(x_i -\mu)^2 $$ As before, $p(x_i)\ge 0$ for all $x_i$, and $(x_i - \mu)^2\ge 0$ for all $x_i$, so this is a sum of non-negative values.
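A tiny numerical illustration of the discrete sum in R, with a made-up distribution: every term is a product of non-negative factors, so the total cannot be negative.
x  <- c(-2, 0, 1, 5)                # support of an arbitrary discrete variable
p  <- c(0.1, 0.4, 0.3, 0.2)         # probabilities, summing to 1
mu <- sum(p * x)
terms <- p * (x - mu)^2             # each term is non-negative
sum(terms)                          # the variance: a sum of non-negative terms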
{ "source": [ "https://stats.stackexchange.com/questions/525901", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/320707/" ] }
527,080
I'm currently working on CNNs and I want to know what the function of the temperature in the softmax formula is, and why we should use high temperatures to obtain a softer probability distribution. The formula can be seen below: $$\large P_i=\frac{e^{y_i/T}}{\sum_{k=1}^n e^{y_k/T}}$$
The temperature is a way to control the entropy of a distribution, while preserving the relative ranks of each event. If two events $i$ and $j$ have probabilities $p_i$ and $p_j$ in your softmax, then adjusting the temperature preserves this relationship, as long as the temperature is finite: $$p_i > p_j \Longleftrightarrow p'_i > p'_j$$ Heating a distribution increases the entropy, bringing it closer to a uniform distribution. (Try it for yourself: construct a simple distribution like $\mathbf{y}=(3, 4, 5)$ , then divide all $y_i$ values by $T=1000000$ and see how the distribution changes.) Cooling it decreases the entropy, accentuating the common events. I’ll put that another way. It’s common to talk about the inverse temperature $\beta=1/T$ . If $\beta = 0$ , then you've attained a uniform distribution. As $\beta \to \infty$ , you reach a trivial distribution with all mass concentrated on the highest-probability class. This is why softmax is considered a soft relaxation of argmax.
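Here is the "try it for yourself" example written out as a short R sketch; the helper name softmax and the particular temperatures are my own choices.
softmax <- function(y, temp = 1) { z <- exp(y / temp); z / sum(z) }
y <- c(3, 4, 5)
round(softmax(y, temp = 1), 3)      # the ordinary softmax: moderately peaked
round(softmax(y, temp = 1e6), 3)    # heated: close to uniform (1/3, 1/3, 1/3)
round(softmax(y, temp = 0.1), 3)    # cooled: nearly all mass on the largest y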
{ "source": [ "https://stats.stackexchange.com/questions/527080", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/273057/" ] }
529,961
In Bayesian statistics, parameters are said to be random variables while data are said to be nonrandom. Yet if we look at the Bayesian updating formula $$ p(\theta|y)=\frac{p(\theta)p(y|\theta)}{p(y)}, $$ we find probability (density or mass) conditioned on the data as well as the conditional and unconditional probability (density or mass) of the data itself. How does it make sense to consider probability (density or mass) conditioned on a constant or probability (density or mass) of a constant?
The Bayesian approach to (parametric) statistical inference starts from a statistical model, ie a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a supplementary probability distribution on the parameter $$\theta\sim\pi(\theta)$$ The posterior distribution on $\theta$ is thus defined as the conditional distribution of $\theta$ conditional on $X=x$ , the observed data. This construction clearly relies on the assumption that the data is a realisation of a random variable with a well-defined distribution . It would otherwise be impossible to define a conditional distribution like the posterior, since there would be no random variable to condition upon. The possible confusion may stem from the fact that a difference between Bayesian and frequentist approaches is that frequentist procedures are evaluated and compared based on their frequency properties, ie by averaging over all possible realisations, instead of conditional on the actual realisation, as the Bayesian approach does. For instance, the frequentist risk of a procedure $\delta$ for a loss function $L(\theta,d)$ is $$R(\theta,\delta) = \mathbb E_\theta[L(\theta,\delta(X))]$$ while the Bayesian posterior loss of a procedure $\delta$ for the prior $\pi$ is $$\rho(\delta(x),\pi) = \mathbb E^\pi[L(\theta,\delta(x))|X=x]$$
{ "source": [ "https://stats.stackexchange.com/questions/529961", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53690/" ] }
530,559
If I use Bayes' theorem here, with event A denoting that all 12 employees are female and event B denoting that 8 employees are female (assuming that each employee has an equal chance of being male or female), I get $$P(A \mid B) = \frac{P(B \mid A) \times P(A)}{P(B)} = \frac{1 \times (0.5)^{12}}{\binom{12}{8}(0.5)^{12}}=\frac{1}{\binom{12}{8}}.$$ Is this the correct way of doing it? I am especially confused because the genders of the employees are independent of one another, and yet I am using the information that 8 of them are female to determine the probability that all of them are female. I am sorry if I sound confused and do not make sense.
The confusion comes from the fact that there are multiple ways to interpret "Given that 8 employees are female": If it's 8 specific employees - say, the employees in positions 1 thru 8 - then the remaining four have $2^4$ possible gender configurations, only $1$ of which is all-female, giving $\frac{1}{2^4}$ If it's any 8 of the 12 employees , then what's being asked is to look at all configurations of 12 employees, throw out the ones with 5 or more men, and count the proportion that are all female. Notice that under this interpretation, each employee in the valid configurations does not have a 50% chance of being male/female, since we are assuming that there are at least 8 females in each valid configuration. What does have an equal chance is each valid configuration. The reason this is confusing is that our intuition assumes the first interpretation, but the way the question is worded implies the second. There is a famous statistical "paradox" that stems from this same line of reasoning: In a family with two children, one of whom is a girl, what's the probability both are girls? Most people assume the answer is $\frac{1}{2}$ , but it's actually $\frac{1}{3}$ , for the same reason as the original question. If you're still confused, see this answer which gives a more thorough explanation of the paradox and its resolution.
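A short Monte Carlo sketch in R contrasting the two readings; the estimates will only be close to the exact values noted in the comments.
set.seed(4)
sims <- matrix(rbinom(12 * 1e6, 1, 0.5), ncol = 12)   # 1 = female; each employee is a fair coin flip
n_female <- rowSums(sims)
# Reading 1: eight specific employees (say, the first eight) are female
first8 <- rowSums(sims[, 1:8]) == 8
mean(n_female[first8] == 12)                          # close to 1/2^4 = 0.0625
# Reading 2: at least eight of the twelve are female
mean(n_female[n_female >= 8] == 12)                   # close to 1/sum(choose(12, 8:12))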
{ "source": [ "https://stats.stackexchange.com/questions/530559", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/259845/" ] }
531,981
I just read this article: Understanding Deep Learning (Still) Requires Rethinking Generalization In section 6.1 I stumbled upon the following sentence Specifically, in the overparameterized regime where the model capacity greatly exceeds the training set size, fitting all the training examples (i.e., interpolating the training set), including noisy ones, is not necessarily at odds with generalization. I do not fully understand the term "interpolating" in the context of fitting training data. Why do we speak of "interpolation" in this context? What does the term exactly mean here? Is there any other term that can be used instead? In my understanding interpolation means the prediction within the training domain for some novel input that was not part of the training set.
Your question already got two nice answers, but I feel that some more context is needed. First, we are talking here about overparametrized models and the double descent phenomenon. By overparametrized models we mean models that have far more parameters than datapoints. For example, Neal (2019) and Neal et al (2018) trained a network with hundreds of thousands of parameters for a sample of 100 MNIST images. The models discussed are so large that they would be unreasonable for any practical application. Because they are so large, they are able to fully memorize the training data. Before the double descent phenomenon attracted more attention in the machine learning community, memorizing the training data was assumed to lead to overfitting and poor generalization in general.

As already mentioned by @jcken, if a model has a huge number of parameters, it can easily fit a function to the data such that it "connects all the dots" and at prediction time just interpolates between the points. I'll repeat myself, but until recently we would assume that this would lead to overfitting and poor performance. With the insanely huge models, this doesn't have to be the case. The models would still interpolate, but the function would be so flexible that it won't hurt the test set performance.

To understand it better, consider the lottery ticket hypothesis. Loosely speaking, it says that if you randomly initialize and train a big machine learning model (deep network), this network would contain a smaller sub-network, the "lottery ticket", such that you could prune the big network while keeping the performance guarantees. The image below (taken from the linked post) illustrates such pruning. Having a huge number of parameters is like buying piles of lottery tickets: the more you have, the higher your chance of winning. In such a case, you can find a lottery ticket model that interpolates between the datapoints but also generalizes.

Another way to think about it is to consider a neural network as a kind of ensemble model. Every neural network has a penultimate layer (image below, adapted from this), which you can think of as a collection of intermediate representations of your problem. The outputs of this layer are then aggregated (usually using a dense layer) for making the final prediction. This is like ensembling many smaller models. Again, if the smaller models memorized the data, even if each would overfit, by aggregating them the effects would hopefully cancel out.

All the machine learning algorithms kind of interpolate between the datapoints, but if you have more parameters than data, you would literally memorize the data and interpolate between them.
{ "source": [ "https://stats.stackexchange.com/questions/531981", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/156349/" ] }
532,414
If we optimize a function $f$ with respect to loss $L$ , which is defined as RMSE; Are we going to get the same solution as optimizing MSE ? Even, if the function $f$ is non-linear (e.g. a neural network) ?
RMSE is the square root of MSE. If you take the square root of a bunch of numbers, their relative ordering does not change, because the square root is a monotonically increasing function. For optimization, what matters is the relative ordering of different solutions, so the minimizer is the same. Notice, however, that if you use penalties for regularization, e.g. $L_1$ or $L_2$, the solution may be different, since the size of the penalty relative to the raw loss would change.
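A quick R sketch of this point on a toy one-parameter regression: the grid value that minimizes the MSE is exactly the one that minimizes the RMSE. (The penalized case, where the two can differ, is not shown.)
set.seed(5)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
b_grid <- seq(0, 4, by = 0.001)
mse  <- sapply(b_grid, function(b) mean((y - b * x)^2))
rmse <- sqrt(mse)                  # a monotone transform of the MSE
b_grid[which.min(mse)]             # same value ...
b_grid[which.min(rmse)]            # ... as this one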
{ "source": [ "https://stats.stackexchange.com/questions/532414", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/283959/" ] }
535,041
Let's say we have a game with two players. Both of them know that five samples are drawn from some distribution (not normal). None of them know the parameters of the distribution used to generate the data. The goal of the game is to estimate the mean of the distribution. The player that comes closer to the true mean wins 1\$ (absolute difference between estimated value and actual value is the objective function). If the distribution has a mean that blows up to $\infty$ , the player guessing the larger number wins and for $-\infty$ , the one guessing the smaller number. While the first player is given all five samples, the second one is given just the sum of the samples (and they know there were five of them). What are some examples of distributions where this isn't a fair game and the first player has an advantage? I guess the normal distribution isn't one of them since the sample mean is a sufficient statistic for the true mean. Note: I asked a similar question here: Mean is not a sufficient statistic for the normal distribution when variance is not known? about the normal distribution and it was suggested I ask a new one for non-normal ones. EDIT: Two answers with a uniform distribution. I would love to hear about more examples if people know of any.
For a uniform distribution between $0$ and $2 \mu$ , the player who guesses the sample mean would do worse than one which guesses $\frac{3}{5} \max(x_i)$ (the sample maximum is a sufficient statistic for the mean of a uniform distribution lower bounded by 0). In this particular case, it can be verified numerically. Without loss of generality, we set $\mu = 0.5$ in the simulation. It turns out that about 2/3rds of the time, the 3/5 max estimator does better. Here is a Python simulation demonstrating this. import numpy as np Ntrials = 1000000 xs = np.random.random((5,Ntrials)) sample_mean_error = np.abs(xs.mean(axis=0)-0.5) better_estimator_error = np.abs(0.6*xs.max(axis=0)-0.5) print((sample_mean_error > better_estimator_error).sum())
{ "source": [ "https://stats.stackexchange.com/questions/535041", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25186/" ] }
536,279
In Section 7.2 of Hastie, Tibshirani, and Friedman (2013) The Elements of Statistic Learning , we have the target variable $Y$ , and a prediction model $\hat{f}(X)$ that has been estimated from a training set $\mathcal{T} = \{Y_1, ..., Y_N, X_1, ..., X_N\}$ . The loss is denoted $L(Y, \hat{f}(X))$ , and then the authors define the test error: \begin{equation} \mathrm{Err}_{\mathcal{T}} = \mathbb{E} \left[ L(Y, \hat{f}(X)) | \mathcal{T} \right] , \end{equation} and the expected test error: \begin{equation} \mathrm{Err} = \mathbb{E} (\mathrm{Err}_{\mathcal{T}}) . \end{equation} The authors then state: Estimation of $\mathrm{Err}_{\mathcal{T}}$ will be our goal... My question : Why do we care more about $\mathrm{Err}_{\mathcal{T}}$ than $\mathrm{Err}$ ? I would have thought that the quantity that measures expected loss, regardless of the training sample used , would be more interesting than the expected loss that conditions on one specific training sample. What am I missing here? Also, I've read this answer here which (based on my possibly incorrect reading) seems to agree with me that $\mathrm{Err}$ is the quantity of interest, but suggests that we often talk about $\mathrm{Err}_{\mathcal{T}}$ because it can be estimated by cross-validation. But this seems to contradict Section 7.12 of the textbook, which (again by my possibly incorrect reading) seems to suggest that cross-validation provides a better estimate of $\mathrm{Err}$ than $\mathrm{Err}_{\mathcal{T}}$ . I'm going around in circles on this one so thought I would ask here.
Why do we care more about $\operatorname{Err}_{\mathcal{T}}$ than Err? I can only guess, but I think it is a reasonable guess. The former concerns the error for the training set we have right now. It answers "If I were to use this dataset to train this model, what kind of error would I expect?". It is easy to think of the type of people who would want to know this quantity (e.g. data scientists, applied statisticians, basically anyone using a model as a means to an end). These people don't care about the properties of the model across new training sets per se; they only care about how the model they made will perform. Contrast this with the latter error, which is the expectation of the former error across all training sets. It answers "Were I to collect an infinite sequence of new training sets, and were I to compute $\operatorname{Err}_{\mathcal{T}}$ for each of those training sets, what would be the average value of that sequence of errors?". It is easy to think of the type of people who care about this quantity (e.g. researchers, theorists, etc.). These people are not concerned with any one instance of a model (in contrast to the people in the previous paragraph); they are interested in the general behavior of a model. So why the former and not the latter? The book is largely concerned with how to fit and validate models when readers have a single dataset in hand and want to know how that model may perform on new data.
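A rough simulation sketch in R of the distinction; the data-generating process, sample sizes, and model below are arbitrary choices of mine. Each call approximates $\operatorname{Err}_{\mathcal{T}}$ for one training set, and averaging many such values approximates Err.
set.seed(6)
f <- function(x) sin(2 * pi * x)
one_err_T <- function(n_train = 30, n_test = 5000) {
  x <- runif(n_train); y <- f(x) + rnorm(n_train, sd = 0.3)
  fit <- lm(y ~ poly(x, 5))
  xt <- runif(n_test); yt <- f(xt) + rnorm(n_test, sd = 0.3)
  mean((yt - predict(fit, data.frame(x = xt)))^2)     # approximates Err_T for this training set
}
err_T <- replicate(500, one_err_T())
err_T[1]      # the error of the model trained on one particular training set
mean(err_T)   # Monte Carlo estimate of Err = E[Err_T]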
{ "source": [ "https://stats.stackexchange.com/questions/536279", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/16319/" ] }
544,711
Is Wikipedia's page on the sigmoid function incorrect? It states that: A common example of a sigmoid function is the logistic function From my knowledge of machine learning, I thought that "the sigmoid function" is defined as the logistic function, $$\sigma(z) = \frac {1} {\left(1 + e^{-z}\right)}\text{.}$$ I have never seen or heard the phrasing that the logistic function is a type of sigmoid function . Furthermore, that Wikipedia page says that other examples of a sigmoid function are the tanh and arctan functions. Again, I've never seen tanh nor arctan described as a type of sigmoid function . These functions are considered to be peers, usually in a context like: We can use various non-linear functions in this neural network, such as the sigmoid, tanh, and ReLU activation functions. What am I missing here? Is the Wikipedia article correct or incorrect? I find that Wikipedia is usually accurate for math terms.
The unsatisfying answer is "It depends who you ask." "Sigmoid", if you break it into parts, just means "S-shaped". The logistic sigmoid function is so prevalent that people tend to gloss over the word "logistic". For machine learning folks, it's become the exemplar of the class, and most call it the sigmoid function. (Is it myopia to call it the sigmoid function?) Still, there are other communities that use S-shaped functions.
{ "source": [ "https://stats.stackexchange.com/questions/544711", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/29072/" ] }
549,012
I've read many times on this site that high order polynomials (generally more than third) shouldn't be used in linear regression, unless there is a substantial justification to do so. I understand the issues about extrapolation (and prediction at the boundaries). Since extrapolation isn't important to me... Are high order polynomials also a bad way of approximating the underlying function within the range of the data points ? (i.e. interpolation) If so, what problems are arising? I don't mind being redirected to a good book or paper about this. Thanks.
I cover this in some detail in Chapter 2 of RMS. Briefly, besides extrapolation problems, ordinary polynomials have these problems:
- The shape of the fit in one region of the data is influenced by far away points
- Polynomials cannot fit threshold effects, e.g., a nearly flat curve that suddenly accelerates
- Polynomials cannot fit logarithmic-looking relationships, e.g., ones that get progressively flatter over a long interval
- Polynomials can't have a very rapid turn
These are reasons that regression splines are so popular, i.e., segmented polynomials tend to work better than unsegmented polynomials. You can also relax a continuity assumption for a spline if you want to have a discontinuous change point in the fit.
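As an illustration of the threshold-effect point, here is a rough R sketch contrasting a global cubic polynomial with a natural cubic spline on a curve that is flat and then accelerates. I use ns() from the splines package that ships with R as a stand-in for the restricted cubic splines (rms::rcs) discussed in RMS; the data are simulated.
library(splines)
set.seed(7)
x <- seq(0, 10, length.out = 200)
y <- ifelse(x < 6, 0, (x - 6)^2) + rnorm(200, sd = 0.5)   # nearly flat, then accelerating
fit_poly   <- lm(y ~ poly(x, 3))                          # global cubic polynomial
fit_spline <- lm(y ~ ns(x, df = 5))                       # natural (restricted) cubic spline
plot(x, y, col = "grey")
lines(x, predict(fit_poly),   col = "red")                # wiggles in the flat region
lines(x, predict(fit_spline), col = "blue")               # follows the threshold shape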
{ "source": [ "https://stats.stackexchange.com/questions/549012", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/318559/" ] }
549,020
I have developed a deep learning model to predict whether an image is affected by a certain disease or not. Accuracies of 99.8%, 88.8%, and 89% have been achieved on the training set, testing set, and validation set respectively. I'm going to publish my research work in a journal, so which accuracy should I report as the accuracy of my deep model? If I say that 99.8% is the accuracy of my model, is that justified?
{ "source": [ "https://stats.stackexchange.com/questions/549020", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/338371/" ] }