idx | question | answer
---|---|---
1,401 | tanh activation function vs sigmoid activation function | Generally speaking, $\tanh$ has two main advantages over a sigmoid function:
It has a slightly bigger derivative than the sigmoid (at least for the area around 0), which helps it to cope a bit better with the “vanishing gradients” problem of deep neural networks. Here is a plot of the derivatives of both functions:
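The plot itself is not reproduced in this extract. Below is a minimal Python sketch (added here, not part of the original answer; assumes numpy and matplotlib are available) that generates a comparable figure. The key fact is that $\sigma'(0)=0.25$ while $\tanh'(0)=1$:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
sigmoid = 1.0 / (1.0 + np.exp(-x))
d_sigmoid = sigmoid * (1.0 - sigmoid)   # sigma'(x) = sigma(x) * (1 - sigma(x)); maximum 0.25 at x = 0
d_tanh = 1.0 - np.tanh(x) ** 2          # tanh'(x) = 1 - tanh(x)^2; maximum 1 at x = 0

plt.plot(x, d_sigmoid, label="sigmoid derivative")
plt.plot(x, d_tanh, label="tanh derivative")
plt.legend()
plt.title("Derivatives of sigmoid and tanh")
plt.show()
```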
It is symmetric around 0, which helps it to avoid the “bias shift” problem that sigmoids suffer from (which causes the weight vectors to move along diagonals, or “zig-zag”, which slows down learning).
The sigmoid has one main advantage over $\tanh$, which is that it can represent a binary probability - hence it can be used as the output of the final layer in binary classification problems.
You can check out this video I made on YouTube which explains a bit further about these problems.
Elaboration on the bias shift problem:
Consider activation functions, like the sigmoid, which only output positive values. Now let’s focus on a single layer with activations $a_l$, and look at the weight vector associated with the first neuron of the next layer: $z_{(l+1),1}=W_{l,1}\cdot a_l + b_{l,1}$.
The gradient of the loss w.r.t. this vector is (by the chain rule) $a_l \cdot \frac{\partial \mathcal L}{\partial z_{(l+1),1}}$: the gradient up to $z_{(l+1),1}$ (which is a scalar) times the gradient of $z_{(l+1),1}$ w.r.t. $W_{l,1}$, which is just $a_l$.
We know that the activations $a_l$ are all $\ge 0$, so the updates to the vector $W_{l,1}$ depend only on $\operatorname{sign}\left(\frac{\partial \mathcal L}{\partial z_{(l+1),1}}\right)$.
This means that all elements of the vector either increase together or decrease together $\Rightarrow$ it can only move along diagonals (zig-zag), which is not very efficient.
This is sometimes called the “bias shift” problem. It also happens when the activations output values which are far from 0 (though to a lesser extent).
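As a concrete illustration of the zig-zag effect (a sketch added here, not part of the original answer): with strictly positive activations, every component of the gradient with respect to the weight vector shares the sign of the upstream scalar gradient, so a gradient-descent step moves all weights in the same direction.

```python
import numpy as np

rng = np.random.default_rng(0)
a_l = rng.uniform(0.1, 1.0, size=5)   # strictly positive activations (e.g. sigmoid outputs)
upstream = -0.7                       # dL/dz for the next-layer neuron (a scalar)

grad_W = upstream * a_l               # chain rule: dL/dW = (dL/dz) * a_l
print(grad_W)                         # every component has the same sign as `upstream`
assert np.all(np.sign(grad_W) == np.sign(upstream))
```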
1,402 | Does an unbalanced sample matter when doing logistic regression? | Balance in the Training Set
For logistic regression models, unbalanced training data affects only the estimate of the model intercept (although this of course skews all the predicted probabilities, which in turn compromises your predictions). Fortunately the intercept correction is straightforward: provided you know, or can guess, the true proportion of 0s and 1s and know the proportions in the training set, you can apply a rare events correction to the intercept. Details are in King and Zeng (2001) [PDF].
These 'rare event corrections' were designed for case control research designs, mostly used in epidemiology, that select cases by choosing a fixed, usually balanced number of 0 cases and 1 cases, and then need to correct for the resulting sample selection bias. Indeed, you might train your classifier the same way. Pick a nice balanced sample and then correct the intercept to take into account the fact that you've selected on the dependent variable to learn more about rarer classes than a random sample would be able to tell you.
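A minimal sketch of that intercept correction (added here, not part of the original answer; assumes scikit-learn, and `tau`, the assumed true population proportion of 1s, is a hypothetical input you must supply):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def corrected_intercept(model, y_train, tau):
    """King-and-Zeng-style prior correction after training on a selected/balanced sample.

    tau     : assumed true proportion of 1s in the population
    y_train : binary labels the model was fitted on (their mean is the sample proportion)
    """
    ybar = np.mean(y_train)
    return model.intercept_ - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# usage sketch, where (X_bal, y_bal) is a deliberately balanced training sample:
# model = LogisticRegression().fit(X_bal, y_bal)
# model.intercept_ = corrected_intercept(model, y_bal, tau=0.02)
# model.predict_proba now reflects the assumed 2% population base rate
```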
Making Predictions
On a related but distinct topic: don't forget that you should be thresholding intelligently to make predictions. It is not always best to predict 1 when the model probability is greater than 0.5. Another threshold may be better. To this end you should look into the Receiver Operating Characteristic (ROC) curves of your classifier, not just its predictive success with a default probability threshold.
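A hedged sketch of choosing a threshold from the ROC curve (added here, not part of the original answer; assumes scikit-learn, and maximising Youden's J is just one reasonable criterion, not one the answer prescribes; `model`, `X_valid`, `y_valid` are hypothetical names):

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold_youden(y_true, y_score):
    """Return the score threshold maximising Youden's J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# usage sketch on a held-out validation set:
# scores = model.predict_proba(X_valid)[:, 1]
# threshold = best_threshold_youden(y_valid, scores)
# y_pred = (scores >= threshold).astype(int)
```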
1,403 | Does an unbalanced sample matter when doing logistic regression? | The problem is not that the classes are imbalanced per se, it is that there may not be sufficient patterns belonging to the minority class to adequately represent its distribution. This means that the problem can arise for any classifier (even if you have a synthetic problem and you know you have the true model), not just logistic regression. The good thing is that as more data become available, the "class imbalance" problem usually goes away. Having said which, 4:1 is not all that imbalanced.
If you use a balanced dataset, the important thing is to remember that the output of the model is now an estimate of the a-posteriori probability, assuming the classes are equally common, and so you may end up biasing the model too far. I would weight the patterns belonging to each class differently and choose the weights by minimising the cross-entropy on a test set with the correct operational class frequencies.
Alternatively (see the comments) it might be better to weight the positive and negative classes so that they contribute equally to the training criterion (so there isn't a class imbalance problem in the estimation of the model parameters), but afterwards to rescale the posterior probabilities estimated by the classifier in order to compensate for the difference between the (effective) training set class frequencies and those in operational conditions (see this answer to a related question).
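A minimal sketch of that posterior rescaling (added here, not code from the original answer): reweight each class posterior by the ratio of operational to training priors and renormalise.

```python
import numpy as np

def rescale_posterior(p, pi_train, pi_op):
    """Adjust P(y=1|x) estimated under training class prior pi_train to operational prior pi_op."""
    num = p * (pi_op / pi_train)
    den = num + (1 - p) * ((1 - pi_op) / (1 - pi_train))
    return num / den

# e.g. trained on a 50/50 balanced set, deployed where positives make up 5% of cases:
p_balanced = np.array([0.2, 0.5, 0.9])
print(rescale_posterior(p_balanced, pi_train=0.5, pi_op=0.05))
```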
1,404 | Does an unbalanced sample matter when doing logistic regression? | Think about the underlying distributions of the two samples. Do you have a large enough sample to measure both subpopulations without a massive amount of bias in the smaller sample?
See here for a longer explanation.
https://statisticalhorizons.com/logistic-regression-for-rare-events
1,405 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Ross describes three versions of this "paradox" in Example 6a of his textbook. In each version, 10 balls are added to the urn and 1 ball is removed at each step of the procedure.
In the first version, the $10n$-th ball is removed at the $n$-th step. There are infinitely many balls left after midnight because all balls with numbers not ending in zero are still in there.
In the second version, the $n$-th ball is removed at the $n$-th step. There are zero balls left after midnight because each ball is eventually going to be removed at the corresponding step.
In the third version, balls are removed uniformly at random. Ross computes the probability that each given ball has been removed by step $n$ and finds that it converges to $1$ as $n\to\infty$ (note that this is not evident! one actually has to perform the computation). This means, by Boole's inequality, that the probability of having zero balls in the end is also $1$.
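As a numerical illustration (added here, not part of Ross's text): the probability that ball 1 survives the first $n$ withdrawals works out to $\prod_{k=1}^{n} \frac{9k}{9k+1}$ (the product also quoted in the measure-theoretic answer further down), and it decays to zero only very slowly.

```python
import numpy as np

def survival_prob(n):
    """P(ball 1 is still in the urn after n withdrawals) = prod_{k=1}^{n} 9k / (9k + 1)."""
    k = np.arange(1, n + 1)
    return np.prod(9 * k / (9 * k + 1))

for n in (10, 100, 10_000, 1_000_000):
    print(n, survival_prob(n))   # decreases toward 0, roughly like n**(-1/9)
```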
You are saying that this last conclusion is not intuitive and hard to explain; this is wonderfully supported by many confused answers and comments in this very thread. However, the conclusion of the second version is exactly as un-intuitive! And it has absolutely nothing to do with probability or statistics. I think that after one accepts the second version, there is nothing particularly surprising about the third version anymore.
So whereas the "probabilistic" discussion must be about the third version [see very insightful answers by @paw88789, @Paul, and @ekvall], the "philosophical" discussion should rather focus on the second version, which is much easier and is similar in spirit to Hilbert's hotel.
The second version is known as the Ross-Littlewood paradox. I link to the Wikipedia page, but the discussion there is horribly confusing and I do not recommend reading it at all. Instead, take a look at this MathOverflow thread from years ago. It is closed by now but contains several very perceptive answers. A short summary of the answers that I find most crucial is as follows.
We can define a set $S_n$ of the balls present in the urn after step $n$. We have that $S_1=\{2,\ldots 10\}$, $S_2=\{3,\ldots 20\}$, etc. There is a mathematically well-defined notion of the limit of a sequence of sets and one can rigorously prove that the limit of this sequence exists and is the empty set $\varnothing$. Indeed, what balls can be in the limit set? Only the ones that are never removed. But every ball is eventually removed. So the limit is empty. We can write $S_n \to \varnothing$.
At the same time, the number $|S_n|$ of the balls in the set $S_n$, also known as the cardinality of this set, is equal to $10n-n=9n$. The sequence $9n$ is obviously diverging, meaning that the cardinality converges to the cardinality of $\mathbb N$, also known as aleph-zero $\aleph_0$. So we can write that $|S_n|\to \aleph_0$.
The "paradox" now is that these two statements seem to contradict each other:
\begin{align}
S_n &\to \varnothing \\
|S_n| &\to \aleph_0 \ne 0
\end{align}
But of course there is no real paradox and no contradiction. Nobody said that taking cardinality is a "continuous" operation on sets, so we cannot exchange it with the limit: $$\lim |S_n| \ne |\lim S_n|.$$ In other words, from the fact that $|S_n|=9n$ for all integers $n\in \mathbb N$ we cannot conclude that $|S_\omega|$ (the value at the first infinite ordinal) is equal to $\infty$. Instead, $|S_\omega|$ has to be computed directly and turns out to be zero.
So I think what we get out of this really is the conclusion that taking cardinalities is a discontinuous operation... [@HarryAltman]
So I think this paradox is just the human tendency to assume that "simple" operations are continuous. [@NateEldredge]
This is easier to understand with functions instead of sets. Consider a characteristic (aka indicator) function $f_n(x)$ of set $S_n$ which is defined to be equal to one on the $[n, 10n]$ interval and zero elsewhere. The first ten functions look like that (compare the ASCII art from @Hurkyl's answer):
[Figure omitted in this extract: step plots of the indicator functions $f_1,\dots,f_{10}$.]
Everybody will agree that for each point $a\in\mathbb R$, we have $\lim f_n(a) = 0$. This by definition means that the functions $f_n(x)$ converge pointwise to the function $g(x)=0$. Again, everybody will agree to that. However, observe that the integrals of these functions, $\int_0^\infty f_n(x)\,dx = 9n$, get larger and larger and the sequence of integrals diverges. In other words,
$$\lim\int f_n(x)dx \ne \int \lim f_n(x) dx.$$
This is a completely standard and familiar analysis result. But it is an exact reformulation of our paradox!
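A small numerical restatement (added here, not amoeba's code): at any fixed point the indicator values are eventually 0, while the integrals $9n$ grow without bound.

```python
def f(n, x):
    """Indicator of the interval [n, 10n]: the characteristic function of S_n (up to endpoints)."""
    return 1.0 if n <= x <= 10 * n else 0.0

a = 7.0                                  # any fixed point
print([f(n, a) for n in range(1, 12)])   # 1.0 while n <= a, then 0.0 forever: the pointwise limit is 0
print([9 * n for n in range(1, 12)])     # the integral of f_n is 9n, which diverges
```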
A good way to formalize the problem is to describe the state of the jug not as a set (a subset of $\mathbb N$), because those are hard to take limits of, but as its characteristic function. The first "paradox" is that pointwise limits are not the same as uniform limits. [@TheoJohnson-Freyd]
The crucial point is that "at noon" the whole infinite sequence has already passed, i.e. we made a "transfinite jump" and arrived at the transfinite state $f_\omega = \lim f_n(x)$. The value of the integral "at noon" has to be the value of the integral of $\lim f_n$, not the other way around.
Please note that some of the answers in this thread are misleading despite being highly upvoted.
In particular, @cmaster computes $\lim_{n\to\infty} \operatorname{ballCount}(S_n)$ which is indeed infinite, but this is not what the paradox asks about. The paradox asks about what happens after the whole infinite sequence of steps; this is a transfinite construction and so we need to be computing $\operatorname{ballCount}(S_\omega)$ which is equal to zero as explained above.
1,406 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Hurkyl (in an answer) and Dilip Sarwate (in a comment) give two common deterministic variants of this puzzle. In both variants, at step $k$, balls $10k-9$ through $10k$ are added to the pile ($k=1,2,...$).
In Hurkyl's variation, ball $k$ is removed. In this variant, it can be definitively argued that there are no balls left because ball $n$ is removed at step $n$.
In Dilip Sarwate's variation, ball $10k$ is removed at step $k$, and so in this variant, all balls that are not multiples of $10$ remain. In this variant, there are infinitely many balls in the urn at the end.
With these two variants as edge cases, we see that lots of different things can happen when doing this process. For instance, you could arrange to have any finite set of balls remaining at the end, by doing Hurkyl's process but skipping the removal of certain balls. In fact for any set $B$ with countably infinite complement (in the (positive) natural numbers), you can have that set of balls remaining at the end of the process.
We could look at the random variation of the problem (given in the original post) as selecting a function $f:\mathbb{N}\to\mathbb{N}$ with the conditions that (i) $f$ is one-to-one and (ii) $f(k)\le 10k$ for all $k\in \mathbb{N}$.
The argument given in the Sheldon Ross book (referenced in the post) shows that almost all (in the probabilistic sense) such functions are in fact onto functions (surjections).
I see this as being somewhat analogous to the situation of selecting a number $x$ from a uniform distribution on $[0,1]$ and asking for the probability that the number is in the Cantor set (I am using the Cantor set rather than, say, the rational numbers because the Cantor set is uncountable). The probability is $0$ even though there are many (uncountably many) numbers in the Cantor set that could have been chosen. In the ball-removing problem, the set of sequences in which there are any balls left is playing the role of the Cantor set.
Edit: BenMillwood correctly points out that there are some finite sets of balls that cannot be the remaining set. For instance, $1,2,...,10$ cannot be the remaining set. You can have at most $90\%$ of the first $10n$ balls remaining for $n=1,2,3,...$.
1,407 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Enumaris' answer is perfectly right on the diverging limits problem. Nevertheless, the question can actually be answered in an unambiguous way. So, my answer will show you precisely where the zero balls solution goes wrong, and why the intuitive solution is the correct one.
It is true that for any ball $n$, the probability of it being in the urn at the end, $P(n)$, is zero. To be precise, it's only the limit that's zero: $P(n) = \lim_{N\to\infty} P(n, N) = 0$.
Now, you try to compute the sum
$$\lim_{N\to\infty} \operatorname{ballCount}(N) = \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} P(n,N).$$
The broken calculation jumps right in to that $P(n,N)$ part, saying that's zero in the limit, so the sum contains only terms of zero, so the sum is zero itself:
$$\begin{align}
\lim_{N\to\infty}\operatorname{ballCount}(N)
&= \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} P(n,N) \\
\text{broken step here }\longrightarrow
&= \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} \lim_{N\to\infty} P(n,N) \\
&= \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} P(n) \\
&= \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} 0 \\
&= \lim_{N\to\infty} 10 N\times 0 \\
&= 0
\end{align}$$
However, this is illegally splitting the $\lim$ into two independent parts. You cannot simply move the $\lim$ into the sum if the bounds of the sum depend on the parameter of the $\lim$. You must solve the $\lim$ as a whole.
Thus, the only valid way to solve this $\lim$ is to solve the sum first, using the fact that $\sum_{n=1}^{n\leq 10N} P(n,N) = 9N$ for any finite $N$.
$$\begin{align}
\lim_{N\to\infty} \operatorname{ballCount}(N)
&= \lim_{N\to\infty} \sum_{n=1}^{n\leq 10N} P(n,N) \\
&= \lim_{N\to\infty} 9N \\
&= \infty
\end{align}$$
The intuitive solution did precisely that; it's the "clever" solution that's fundamentally broken.
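A numerical check of the identity used above, $\sum_{n=1}^{10N} P(n,N) = 9N$ (added here; the per-ball survival formula below is a reconstruction under the assumption that ball $n$ enters at step $\lceil n/10\rceil$ and survives each later withdrawal $k$ with probability $9k/(9k+1)$):

```python
import math

def p_survive(n, N):
    """P(ball n is still in the urn after N withdrawals), with ball n added at step ceil(n/10)."""
    m = math.ceil(n / 10)
    prob = 1.0
    for k in range(m, N + 1):
        prob *= 9 * k / (9 * k + 1)
    return prob

for N in (1, 5, 50):
    total = sum(p_survive(n, N) for n in range(1, 10 * N + 1))
    print(N, round(total, 10), 9 * N)   # the expected ball count equals 9N for every finite N
```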
1,408 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | This argument is focused on the tendency for infinite sets and sequences to behave in unintuitive ways. This is no more surprising than the Hilbert Hotel. In such a case, you will indeed have taken out an infinite number of balls, but you will have put an infinite number in. Consider the Hilbert Hotel in reverse. You can remove an infinite number of guests from the hotel, and still have an infinite number left.
Whether this is physically realizable is another question entirely.
As such, I would consider it not necessarily ill formed, but rather put in the wrong book. This sort of counting question belongs in a set theory course, not a probability course.
1,409 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | I think it helps to remove the superfluous temporal component of the problem.
The more basic variant of this paradox is to always remove the lowest numbered ball. For ease of drawing, I will also only add two balls at each step.
The procedure describes how to fill out an infinite two-dimensional grid:
.*........
..**......
...***.... ....
....****..
.....*****
: : :
: : :
where each row is formed from the previous by adding two asterisks on the right then removing the leftmost.
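A short generative sketch of this grid (added here, not Hurkyl's code): row $n$ has asterisks exactly in columns $n+1$ through $2n$, so any fixed column is an asterisk in only finitely many rows and therefore ends with repeated dots.

```python
def row(n, width=10):
    """Row n of the grid: dots in columns 1..n, asterisks in columns n+1..2n, dots afterwards."""
    return "".join("*" if n < c <= 2 * n else "." for c in range(1, width + 1))

for n in range(1, 6):
    print(row(n))
# column j is '*' only for rows with j/2 <= n < j, i.e. finitely many; every column ends in dots
```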
The question one then asks is:
How many columns end with repeated asterisks rather than repeated dots?
In my opinion, the temptation to mistakenly equate this result with "the limit of the number of asterisks in each row" is much less compelling.
1,410 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Several posters have been concerned that the computations in Ross may not be rigorous. This answer addresses that by proving the existence of a probability space where all sets of outcomes considered by Ross are indeed measurable, and then repeats the vital parts of Ross's computations.
Finding a suitable probability space
To make Ross's conclusion that there are no balls in the urn at 12 P.M., almost surely, rigorous, we need the existence of a probability space $(\Omega, \mathcal F, P)$ where the event "no balls in the urn at 12 P.M." can be constructed formally and shown to be measurable. To that end, we shall use Theorem 33 [Ionescu - Tulcea] in these lecture notes, slightly reworded, and a construction suggested by @NateEldredge in a comment to the question.
Theorem. (Ionescu-Tulcea Extension Theorem) Consider a sequence of measurable spaces $(\Xi_n, \mathcal X_n), n = 1, 2, \dots$. Suppose that for each $n$, there exists a probability kernel $\kappa_n$ from $(\Xi_1, \mathcal X_1) \times \dots \times(\Xi_{n-1}, \mathcal X_{n-1})$ to $(\Xi_n, \mathcal X_n)$ (taking $\kappa_1$ to be a kernel insensitive to its first argument, i.e., a probability measure). Then there exists a sequence of random variables $X_n, n = 1, 2, \dots$ taking values in the corresponding $\Xi_n$, such that, for every $n$, the joint distribution of $(X_1, \dots, X_n)$ is that implied by the kernels $\kappa_1, \dots, \kappa_n$.
We let $X_n$ denote the label of the ball removed at the $n$th withdrawal. It's clear that the (infinite) process $X = (X_1, X_2, \dots)$, if it exists, tells us everything we need to know to mimic Ross's arguments. For example, knowing $X_1, \dots, X_m$ for some integer $m \geq 0$ is the same as knowing the number of balls in the urn after withdrawal $m$: they are precisely the added balls with labels $\{1, 2, \dots, 10m\}$, minus the removed balls $\{X_1, \dots, X_m\}$. More generally, events describing which, and how many, balls are in the urn after any given withdrawal can be stated in terms of the process $X$.
To conform with Ross's experiment we need that, for every $n\geq 2$, the distribution of $X_n \mid X_{n-1}, \dots, X_{1}$ is uniform on $\{1, 2, \dots, 10n\} \setminus \{X_1, \dots, X_{n-1}\}$. We also need the distribution of $X_1$ to be uniform on $\{1,\dots, 10\}$. To prove that an infinite process $X = (X_1, X_2, \dots)$ with these finite-dimensional distributions indeed exists, we check the conditions of the Ionescu-Tulcea Extension Theorem. For any integer $n$, let $\mathcal I_n = \{1, 2, \dots, n\}$ and define the measurable spaces $(\Xi_n, \mathcal X_n) = (\mathcal I_{10n}, 2^{\mathcal I_{10n}})$, where $2^B$ denotes the power set of the set $B$. Define the measure $\kappa_1$ on $(\Xi_1, \mathcal X_1)$ to be the one that puts mass $1/10$ on all elements of $\Xi_1$. For any $n \geq 2$, and $(x_1, \dots, x_{n-1}) \in \Xi_1 \times \dots \times \Xi_{n-1}$ define $\kappa_n(x_1, \dots, x_{n-1}, \cdot)$ to be the probability kernel that puts equal mass on all points in $\Xi_n \setminus \{x_1, \dots, x_{n-1}\}$, and mass zero on all other points, i.e. on the integers $x_i \in \Xi_n, i = 1, \dots, n - 1$. By construction, the probability kernels agree with the uniform removal probability specified by Ross. Thus, the infinite process $X$ and the probability space $(\Omega, \mathcal F, P)$, the existence of which are given by the theorem, give us a way to formally carry out Ross's argument.
Let $E_{in}$ denote the set of outcomes such that ball $i$ is in the urn after withdrawal $n$. In terms of our stochastic process $X$ this means that, for all $i$ and $n$ such that $i \leq 10n$ we define $E_{in} = \cap_{j = 1}^n\{\omega: X_j(\omega) \neq i\}$, i.e. ball $i$ was not removed in any of the draws up to and including the $n$th. For $i > 10n$ we can clearly define $E_{in} = \emptyset$ since ball $i$ has not yet been added to the urn. For every $j$ and $i$, the set $\{\omega: X_j(\omega) \neq i\}$ is measurable since $X_j$ is a random variable (measurable). Thus, $E_{in}$ is measurable as the finite intersection of measurable sets.
We are interested in the set of outcomes such that there are no balls in the urn at 12 P.M. That is, the set of outcomes such that for every integer $i = 1, 2\dots$, ball $i$ is not in the urn at 12 P.M. For every $i$, let $E_i$ be the set of outcomes ($\omega \in \Omega$) such that ball $i$ is in the urn at 12 P.M. We can construct $E_i$ formally using our $E_{in}$ as follows. That $i$ is in the urn at 12 P.M. is equivalent to it being in the urn after every withdrawal made after it was added to the urn, so $E_i = \cap_{n:i\leq 10n}E_{in}$. The set of outcomes $E_i$ is now measurable as the countable intersection of measurable sets, for every $i$.
The outcomes for which there is at least one ball in the urn at 12 P.M. are those for which at least one of the $E_i$ happen, i.e. $E = \cup_{i = 1}^\infty E_i$. The set of outcomes $E$ is measurable as the countable union of measurable sets. Now, $\Omega \setminus E$ is the event that there are no balls in the urn at 12 P.M., which is indeed measurable as the complement of a measurable set. We conclude that all desired sets of outcomes are measurable and we can move on to computing their probabilities, as Ross does.
Computing the probability $P(\Omega \setminus E)$
We first note that, since the family of events $E_i, i = 1, 2, \dots$ is countable, we have by countable sub-additivity of measures that
$$
P(E) \leq \sum_{i = 1}^\infty P(E_i) = \lim_{N\to \infty}\sum_{i = 1}^N P(E_i).
$$
For ease of notation, let's denote the real number $P(E_i) = a_i$ for all $i$. Clearly, to show that $P(E) = 0$ it suffices to show that $\sum_{i = 1}^N a_i = 0$ for all $N$. This is equivalent to showing that $a_i = 0$ for every $i$, which we shall do now.
To that end, note that for all $n$ such that ball $i$ has been added to the urn, i.e. $10n \geq i$, $E_{in} \supseteq E_{i(n + 1)}$. This is so because if ball $i$ is in the urn at step $n + 1$, it is also in the urn at step $n$. In other words, the sets $E_{in}$ form a decreasing sequence for all $n$ such that $10n \geq i$. For ease of notation, let $a_{in} = P(E_{in})$. Ross proves that $a_{1n} \to 0$ as $n \to \infty$ and states that this can also be shown for all other $i$, which I will take as true. The proof consists of showing that $a_{in} = \prod_{k = i}^n[9k / (9k + 1)]$ and $\lim_{n \to \infty}a_{in} = 0$ for all $i$, an elementary but lengthy calculation I will not repeat here. Armed with this result, and the fact that the family of events $E_{in}$, $10n > i$ is countable for every $i$, continuity of measures gives
$$a_i = P(\cap_{n: 10n > i}E_{in}) = \lim_{n \to \infty} P(E_{in}) = \lim_{n \to \infty}a_{in} = 0.$$
We conclude that $P(E) = 0$, and thus $P(\Omega\setminus E) = 1$ as claimed. QED.
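A Monte Carlo sketch of this construction (added here, with a finite truncation $N$ standing in for the infinite process): simulate Ross's uniform removals and estimate $a_{1N}=P(E_{1N})$, which should shrink as $N$ grows.

```python
import random

def ball1_survives(N, rng):
    """Simulate N steps of 'add 10 balls, remove 1 uniformly at random'; True if ball 1 is never removed."""
    urn = []
    for n in range(1, N + 1):
        urn.extend(range(10 * (n - 1) + 1, 10 * n + 1))   # add balls 10(n-1)+1 .. 10n
        i = rng.randrange(len(urn))                       # X_n is uniform over the current urn
        removed = urn[i]
        urn[i] = urn[-1]                                  # O(1) removal: swap with last, then pop
        urn.pop()
        if removed == 1:
            return False
    return True

rng = random.Random(0)
trials = 2000
for N in (10, 100, 1000):
    est = sum(ball1_survives(N, rng) for _ in range(trials)) / trials
    print(N, est)   # Monte Carlo estimates of a_{1N}, decreasing toward 0 as N grows
```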
Some common misunderstandings:
One answer is concerned with the fact that (in my notation) $\lim_{N \to \infty}\sum_{i = 1}^N \lim_{n \to \infty }a_{in} \neq \lim_{N \to \infty}\sum_{i = 1}^N a_{iN}$. This, however, has no bearing on the validity of the solution because the quantity on the right hand side is not the one of interest per the provided argument.
There has been some concern that the limit cannot be moved inside the sum, or in other words cannot be interchanged with the sum in the sense that it may be the case that $\sum_{i = 1}^\infty \lim_{n \to \infty}a_{in} \neq \lim_{n \to \infty}\sum_{i = 1}^\infty a_{in}$. Like the previous remark, this is irrelevant to the solution because the quantity on the right hand side is not the one of interest.
1,411 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | This answer aims to do four things:
Review Ross's mathematical formulation of the problem, showing how it follows directly and unambiguously from the problem description.
Defend the position that Ross's paradoxical solution is both mathematically sound and relevant to our understanding of the physical world, whether or not it is 100% physically realizable.
Discuss certain fallacious arguments rooted in physical intuition, and show that the oft-stated "physical" solution of infinite balls at noon is not only in contradiction to mathematics, but to physics as well.
Describe a physical implementation of the problem which may make Ross's solution more intuitive. Start here for the answer to Carlos's original question.
1. How to Describe the Problem Mathematically
We will unpack the initial "infinite process modeling" step of Ross's argument (p. 46). Here is the statement we will focus on justifying:
Define $E_n$ to be the event that ball number 1 is still in the urn after the first n withdrawals have been made... The event that ball number 1 is in the urn at 12 P.M. is just the event $\bigcap_{n=1}^\infty E_n$.
Before we unpack Ross's statement, let's consider how it is even possible to understand the urn's contents at noon, after an infinite sequence of operations. How could we possibly know what is in the urn? Well, let's think about a specific ball $b$; you can imagine $b=1$ or $1000$ or whatever you want. If ball $b$ was taken out at some stage of the process before noon, certainly it won't be in the urn at noon. And conversely, if a given ball was in the urn at every single stage of the process up until noon (after it was added), then it was in the urn at noon. Let's write these statements out formally:
A ball $b$ is in the urn at noon if and only if it was in the urn at every stage $n \in \{n_b, n_b + 1, n_b + 2, ...\}$ before noon, where $n_b$ is the stage the ball was added to the urn.
Now let's unpack Ross's statement - what does $\bigcap_{n=1}^\infty E_n$ mean in plain English? Let's take a single realization $x$ of the urn process and talk it out:
$x \in E_1$ means that ball 1 is in the urn after stage 1 of the process.
$x \in E_1 \bigcap E_2$ means that ball 1 is in the urn after stages 1 and 2 of the process.
$x \in E_1 \bigcap E_2 \bigcap E_3$ means that ball 1 is in the urn after stages 1, 2, and 3 of the process.
For any $n \in \{1, 2, 3, ...\}$, $x \in \bigcap_{k=1}^n E_k$ means that ball 1 is in the urn after stages $1$ through $n$.
Clearly, then, $x \in \bigcap_{k \in \{1, 2, 3...\}} E_k$ means that, in realization $x$ of this urn process, ball 1 is in the urn after stages 1, 2, 3, et cetera: all finite stages $k$ before noon. The infinite intersection $\bigcap_{n = 1}^\infty E_n$ is just another way of writing that, so $\bigcap_{n = 1}^\infty E_n$ contains precisely the realizations of the process where ball 1 was in the urn at all stages before noon. An event is just a defined set of realizations of a process, so the last sentence is precisely equivalent to saying that $\bigcap_{n = 1}^\infty E_n$ is the event that ball 1 was in the urn at all stages before noon, for this random process.
Now, the punchline: by our "if and only if" statement above, this is exactly the same as saying that ball 1 was in the urn at noon! So $\bigcap_{n = 1}^\infty E_n$ is the event that ball 1 is in the urn at noon, just as Ross originally stated. QED
In the derivation above, everything we said is equally valid for both the deterministic and probabilistic versions, because deterministic modeling is a special case of probabilistic modeling in which the sample space has one element. No measure theoretic or probability concepts were even used, beyond the words "event" and "realization" (which are just jargon for "set" and "element").
2. The Paradoxical Solution is Mathematically Sound and Relevant to Physics
After this setup point, the deterministic and probabilistic variants diverge. In the deterministic variant (version 2 from amoeba's post), we know ball 1 is taken out on the first step, so $E_1 = \emptyset$ and the infinite intersection, of course, is also empty. Similarly, any other ball $b$ is taken out at stage $b$ and is not present at noon. Thus the urn cannot contain any numbered ball $b$ at noon and must therefore be empty.
In the probabilistic variant, the same phenomenon happens, just in a softer "in-expectation" sense. The probability of any given ball's being present dwindles to zero as we approach noon, and at the limiting time of noon, the ball is almost surely not present. Since each ball is present with probability zero, and the sum of infinitely many zeros is still zero, there are almost surely no balls in the urn at noon. All of this is shown completely rigorously by Ross; details can be filled in with a knowledge of graduate-level measure theory, as @ekvall's answer shows.
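To make the probabilistic claim concrete, here is a small Monte Carlo sketch (function and parameter names are my own) that estimates the probability that ball 1 is still in the urn after $n$ random withdrawals and compares it with Ross's closed-form product $\prod_{k=1}^n 9k/(9k+1)$; both shrink toward zero as $n$ grows:

import random
from math import prod

def ball1_survives(n_steps, trials=2000, seed=0):
    """Monte Carlo estimate of P(ball 1 is still in the urn after n_steps random withdrawals)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        urn = []
        alive = True
        for n in range(1, n_steps + 1):
            urn.extend(range(10 * (n - 1) + 1, 10 * n + 1))   # add balls 10(n-1)+1 .. 10n
            removed = urn.pop(rng.randrange(len(urn)))        # withdraw one ball uniformly at random
            if removed == 1:
                alive = False
                break
        survived += alive
    return survived / trials

n = 100
print("simulated:", ball1_survives(n))
print("analytic :", prod(9 * k / (9 * k + 1) for k in range(1, n + 1)))   # Ross's product formula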
If you accept the standard arguments about mathematical objects expressed as infinite sequences, for example $0.999... = 1$, the argument here should be just as acceptable, as it relies on the exact same principles. The only question remaining is whether the mathematical solution applies to the real world, or just the platonic world of mathematics. This question is complex and is discussed further in section 4.
That said, there is no reason to presuppose that the infinite urn problem is unphysical, or to reject it as irrelevant even if it is unphysical. Many physical insights have been gained from studying infinite structures and processes, for example, infinite wires and percolation lattices. Not all of these systems are necessarily physically realizable, but their theory shapes the rest of physics. Calculus itself is "unphysical" in some ways, because we don't know if it is possible to physically realize the arbitrarily small distances and times that are often its subject of study. That doesn't stop us from putting calculus to incredibly good use in the theoretical and applied sciences.
3. The Unphysicality of Solutions Based on "Physical Intuition"
For those who still believe that Ross's math is wrong or physically inaccurate in the deterministic variant, and the true physical solution is infinitely many balls: regardless of what you think happens at noon, it is impossible to deny the situation before noon: every numbered ball added to the urn eventually gets removed. So if you think there are somehow still infinitely many balls in the urn at noon, you must admit that not one of those balls can be a ball added before noon. So those balls must have come from somewhere else: you are asserting that infinitely many balls, unrelated to the original problem process, suddenly pop into existence at precisely noon to rescue the continuity of cardinality from being violated. As unphysical as the "empty set" solution might seem intuitively, this alternative is objectively and demonstrably unphysical. Infinite collections of objects do not pop into being in an instant just to satisfy poor human intuitions about infinity.
The common fallacy here seems to be that we can just look at the number of balls as time approaches noon, and assume that the divergent trend yields infinitely many balls at noon, without regard to exactly which balls are being taken in and out. There has even been an attempt to justify this with the "principle of indifference", which states that the answer shouldn't depend on whether the balls are labeled or not.
Indeed, the answer does not depend on whether the balls are labeled or not, but that is an argument for Ross's solution, not against it. From the perspective of classical physics, the balls are effectively labeled whether you think of them as labeled or not. They have distinct, permanent identities which are equivalent to labels, and a truly physical analysis must account for this, whether or not numbers are literally written on the balls. The labels themselves do not directly affect how the solution comes out, but they are needed to describe exactly how the balls are moved around. Some procedures leave balls in the urn forever, others provably remove every ball that is added, and labels are needed to even describe the difference between these procedures. Attempting to ignore the labels is not "physical", it's just neglecting to understand the physical problem precisely enough to solve it. (The same goes for complicated variants that reshuffle the labels at each stage. What matters is which balls are in the urn, not the labels someone has placed or replaced on them. This can be determined by ignoring the complicated relabeling scheme entirely and simply using a single unchanging labeling scheme, the one of Ross's original problem.)
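To see why the removal procedure, and hence the labels needed to describe it, matters, here is a small sketch (the policy functions are my own shorthand, not part of the original problem) contrasting two deterministic procedures: always remove the lowest-numbered ball versus always remove the ball just added. Both urns hold exactly $9n$ balls after $n$ steps, yet one provably loses every ball while the other keeps ball 1 forever:

def run(policy, n_steps):
    """Run n_steps of the urn process with a deterministic removal policy (a function: set of balls -> ball)."""
    urn = set()
    for n in range(1, n_steps + 1):
        urn.update(range(10 * (n - 1) + 1, 10 * n + 1))   # add balls 10(n-1)+1 .. 10n
        urn.remove(policy(urn))                           # remove the ball chosen by the policy
    return urn

lowest_first = run(min, 1000)   # always remove the lowest-numbered ball
newest_first = run(max, 1000)   # always remove the ball just added
print(len(lowest_first), min(lowest_first))   # 9000 1001 -> any fixed ball b is gone once n >= b
print(len(newest_first), min(newest_first))   # 9000 1    -> ball 1 (and most others) never leaves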
The only way distinguishability would fail to be true is if the "balls" were quantum mechanical particles. In this case, the indifference principle fails spectacularly. Quantum physics tells us that indistinguishable particles behave completely differently than distinguishable ones. This has incredibly fundamental consequences for the structure of our universe, such as the Pauli exclusion principle, which is perhaps the single most important principle of chemistry. No one has attempted to analyze a quantum version of this paradox yet.
4. Describing the Solution Physically
We have seen how vague "physical" intuitions can lead us astray on this problem. Conversely, it turns out that a more physically precise description of the problem helps us understand why the mathematical solution is actually the one that makes the most physical sense.
Consider an infinite Newtonian Universe governed by the laws of classical mechanics. This Universe contains two objects: an infinite Shelf and an infinite Urn, which start at the Origin of the Universe and run alongside one another, one foot apart, forever and ever. The Shelf lies on the line $y = 0$ feet, while the Urn lies on the line $y = 1$ foot. Along the Shelf are laid infinitely many identical balls, evenly spaced one foot apart, the first being one foot from the Origin (so ball $n$ is on the line $x = n$ feet). The Urn - which is really just like the Shelf, but a bit more ornate, closed over, and generally Urnish - is empty.
An Aisle connects the Shelf and Urn at the bottom, and on top of the Aisle, at the Origin, sits an Endeavor robot with an infinite power supply. Beginning at 11 AM, Endeavor activates and begins zooming back and forth in the Aisle, transferring balls between Urn and Shelf according to Ross-Littlewood's programmed instructions:
When the program commands ball $n$ to be inserted into the Urn, the ball $n$ feet from the Origin is transferred from the Shelf to the Urn.
When the program commands ball $n$ to be removed from the Urn, the ball $n$ feet from the Origin is transferred from the Urn to the Shelf.
In either case, the transfer is made straight across, so the ball remains $n$ feet from the Origin. The process unfolds as specified in the Ross-Littlewood problem:
At 11:00 AM, Endeavor transfers balls 1-10 from Shelf to Urn, then moves one of the Urn balls back to Shelf.
At 11:30 AM, Endeavor transfers balls 11-20 from Shelf to Urn, then moves one of the Urn balls back to Shelf.
At 11:45 AM, Endeavor transfers balls 21-30 from Shelf to Urn, then moves one of the Urn balls back to Shelf.
et cetera...
As the process continues, each new step requires longer trips up and down the Aisle, and only half the time to make the trips. Thus, Endeavor must move up and down the Aisle exponentially faster as noon closes in. But it always keeps up with the program, because it has an infinite power supply and can move as fast as needed. Eventually, noon arrives.
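Just how fast Endeavor must move can be estimated directly. The sketch below (a rough lower bound of my own, ignoring the multiple trips per step) only asks how fast the robot must travel to reach the farthest ball of step $n$ within that step's time slot of $60/2^n$ minutes:

# Rough lower bound on the speed Endeavor needs at step n: it must reach a ball
# about 10*n feet from the Origin within the n-th time slot of 60 / 2**n minutes.
for n in (1, 5, 10, 20, 30):
    farthest_feet = 10 * n
    slot_minutes = 60 / 2 ** n
    print(f"step {n:2d}: at least {farthest_feet / slot_minutes:.3g} feet per minute")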
What happens in this more vividly imagined version of the paradox? Watched from above, the approach towards noon is truly spectacular. Within the Urn, a Wave of balls appears to propagate outward from the Origin. The Wave's size and speed grow without bound as noon approaches. If we were to take pictures immediately after each step, what would the layout of balls look like? In the deterministic case, they would look exactly like the step functions in amoeba's answer. The ball positions $(x, y)$ would follow precisely the curves he has plotted. In the probabilistic case, it would look roughly similar, but with more straggling near the Origin.
When noon arrives, we take stock of what has happened. In the deterministic version, each ball was transferred from the Shelf to the Urn exactly once, then moved back at a later step, with both transfers happening before noon. At noon, the Universe must be back to its original 11 AM state. The Wave is no more. Each ball is back exactly where it started. Nothing has changed. The Urn is empty. In the probabilistic version the same thing happens, except now the result is only almost sure rather than sure.
In either case, "physical objections" and complaints about infinity seem to vanish into thin air. Of course the Urn is empty at noon. How could we have imagined otherwise?
The only remaining mystery is the fate of Endeavor. Its displacement from the Origin and its velocity became arbitrarily large as noon approached, so at noon, Endeavor is nowhere to be found in our infinite Newtonian Universe. The loss of Endeavor is the only violation of physics which has occurred during the process.
At this point, one could object that Endeavor is not physically possible, since its speed grows without bound and would eventually violate the relativistic limit, the speed of light. However, we can change the scenario slightly to resolve this issue. Instead of a single robot, we could have infinitely many robots, each responsible for a single ball. We could program them beforehand to ensure perfect coordination and timing according to Ross's instructions.
Is this variation 100% physical? Probably not, because the robots would have to operate with arbitrarily precise timing. As we approach noon, the precision demanded would eventually fall below the Planck time and create quantum mechanical issues. But ultimately, an infinite wire and an infinite percolation lattice might not be all that physical either. That doesn't stop us from studying infinite systems and processes and determining what would happen if the obstructing physical constraints were suspended.
4a. Why Count Monotonicity is Violated
A number of Ross skeptics have questioned how it is possible that the number of balls in the urn increases without bound as we approach noon, then is zero at noon. Ultimately we must believe in rigorous analysis over our own intuition, which is often wrong, but there is a variation of the paradox that helps illuminate this mystery.
Suppose that instead of infinitely many balls, we have $10N$ balls labeled 1, 2, 3, up to $10N$, and we issue the following addition to the rules for the ball mover:
If the instructions ask you to move a ball that does not exist, ignore that instruction.
Note that the original problem is unchanged if we add this instruction to it, since the instruction will never be triggered with infinitely many balls. Thus, we can think of the original problem and these new problems as members of the same family, with the same rules. Examining the finite-$N$ members, especially for very large $N$, can help us to understand the "$N = \infty$" case.
In this variation, the balls accumulate 9 per step as before, but only up to step $N$ of the process. After that, the numbers of the balls to be added no longer correspond to actual balls, so only the removal instructions can be carried out, and the process stops after $9N$ additional steps, for a total of $10N$ steps. If $N$ is very large, the removal-only phase occurs very close to noon, when the tasks are being done very rapidly, and the urn is emptied out very quickly.
Now suppose we do this variation of the experiment for each value of $N$ and graph the ball count over time, $f_N(t)$, where $t$ ranges from 0 to 1 hour after 11AM (i.e. 11AM to noon). Typically $f_N(t)$ rises for a while, then falls back to zero at or before $t=1$. In the limit as $N$ approaches infinity, the graph rises ever higher and the fall is ever more rapid. By noon the urn is always empty: $f_N(1) = 0$. In the limiting graph, $f(t) = \lim_{N \rightarrow \infty} f_N(t)$, the curve approaches infinity for $t < 1$ but $f(1) = 0$. This is precisely the result derived in Ross's proof: the ball count diverges to infinity before noon, but is zero at noon. In other words, Ross's solution preserves continuity with respect to N: the pointwise limit of the ball count as $N \rightarrow \infty$ matches the ball count in the infinite ball case.
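A minimal sketch of this family of finite experiments (my own bookkeeping, tracking only the count) shows the rise-then-crash shape: the count peaks at $9N$ and always returns to zero by the last step:

def finite_variant_counts(N):
    """Ball count after each of the 10N steps of the finite variation with balls 1..10N:
    additions stop after step N, after which only removals remain."""
    counts, in_urn = [], 0
    for step in range(1, 10 * N + 1):
        if step <= N:
            in_urn += 10          # balls 10(step-1)+1 .. 10*step still exist, so they are added
        in_urn -= 1               # one ball is removed at every step
        counts.append(in_urn)
    return counts

for N in (10, 100, 1000):
    c = finite_variant_counts(N)
    print(N, max(c), c[-1])       # peak of 9N at step N, and always 0 at the final step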
I do not consider this a primary argument for Ross's solution, but it may be helpful for those who are puzzled about why the ball count goes up forever, then crashes to zero at noon. While strange, it is the limiting behavior of the finite version of the problem as $N \rightarrow \infty$, and thus does not come as a "sudden shock" in the infinite case.
A Final Reflection
Why has this problem proven to be such a tar-pit for so many? My speculation is that our physical intuition is much vaguer than we think it is, and we often draw conclusions based on imprecise and incomplete mental conceptions. For example, if I ask you to think of a square that is also a circle, you may imagine something squarish and circlish, but it won't be precisely both of those things - that would be impossible. The human mind can easily mash together vague, contradictory concepts into a single mental picture. If the concepts are less familiar, like the Infinite, we can convince ourselves that these vague mental mashups are actually conceptions of the Real Thing.
This is precisely what happens in the urn problem. We do not really conceive of the whole thing at once; we think about bits and pieces of it, like how many balls there are over time. We wave away supposedly irrelevant technicalities, like what happens to each humble little ball over time, or how exactly an "urn" can hold infinitely many balls. We neglect to set out all the details precisely, not realizing that the result is a mashup of inconsistent, incompatible mental models.
Mathematics is designed to rescue us from this condition. It disciplines and steels us in the face of the unfamiliar and the exotic. It demands that we think twice about the things that "must" be true... right? It reminds us that no matter how strange things get, one and one is still two, a ball is either in an urn or it is not, and a statement is either true or false. If we persevere, these principles eventually bring clarity to most of our problems.
Those who subordinate mathematical analysis to "physical" or "common-sense" intuitions do so at their peril. Hand-waving about intuitions is only the start of physics. Historically, all successful branches of physics have eventually founded themselves on rigorous mathematics, which culls away incorrect physical intuitions, strengthens correct ones, and enables the rigorous study of ideal systems, such as the infinite current-carrying wire, which illuminate the behavior of the more complicated, messy real world. Ross-Littlewood is a physical problem, typically interpreted as one of classical mechanics, and classical mechanics has a completely mature and rigorous mathematical foundation. We should rely upon mathematical modeling and analysis for our intuitions about the world of classical physics, not the other way around.
1,412 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | On the one hand, you could try to explain it like this: "think of the
probability of any ball i being on the urn at 12 P.M. During the
infinite random draws, it will eventually be removed. Since this holds
for all balls, none of them can be there at the end".
I don't find this argument convincing. If this argument works, then the following argument works: Every year, some people are born (say a constant fraction of the total population), and some people die (suppose a constant fraction). Then, since in the limit any particular person is almost surely dead, then the human race must go extinct! Now, the human race may go extinct for other reasons, but this argument is garbage.
It doesn't make any sense for this problem to have one solution when the balls are numbered and for it to have a totally different answer when the balls are anonymous. By symmetry, arbitrary labels should not affect the solution. Jaynes called this argument the principle of indifference, which I accept.
In other words, if someone told you that they put ten balls into an urn and repeatedly remove one, and asked how full the urn is in the limit, would your answer be "It depends on whether the balls are numbered"? Of course not. That urn's contents diverge just like the urn's in this problem.
Therefore, I think the solution lies in how we formalize the problem. From the usual definition of set-theoretic limit, we have
$$\liminf_{n \to \infty} S_n = \bigcup_{n \ge 1} \bigcap_{j \geq n} S_j.$$
$$\limsup_{n \to \infty} S_n = \bigcap_{n \ge 1} \bigcup_{j \geq n} S_j$$
Let the limit of the cardinality of the set be
$$k\triangleq \lim_{n\to\infty}|S_n|$$
and the cardinality of the $\liminf$-limit of the set be
$$l \triangleq \left|\liminf_{n\to\infty} (S_n)\right|.$$
I propose that set-theoretic limit be redefined so that:
\begin{align}
\lim_{n\to\infty} S_n &\triangleq
\begin{cases}
\liminf_{n\to\infty} (S_n) &\text{if } \liminf_{n\to\infty} (S_n) = \limsup_{n\to\infty} (S_n), k \text{ exists, and }k=l \\
\alpha_k &\text{if }\liminf_{n\to\infty} (S_n) = \limsup_{n\to\infty} (S_n), k\text{ exists, and }k \ne l \\
\text{undefined} &\text{otherwise.}
\end{cases}
\end{align}
This special “anonymous set” $\alpha_k$ describes what happens at infinity. Just as $\infty$ stands in for the limiting behavior of numbers, $\alpha$ stands in for the limiting behavior of sets. Namely, we have $i \notin \alpha_k \forall i$, and $|\alpha_k| = k$. The benefit of this formalism is that it gives us continuity of cardinality and consistency with the principle of indifference.
For the urn problem, $S_n = \{n+1, \dotsc, 10n\}$ is the set of balls in the urn after step $n$. And under the proposed definition, $$\lim_{n\to\infty} S_n = \alpha_{\infty}.$$
Thus, the elements don't "fall off a cliff" at infinity, which doesn't make sense any more than it makes sense for humanity to go extinct merely because no man is immortal.
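For readers who want to see the tension this formalism is meant to resolve, here is a small sketch (the helper names are my own) using the standard definitions: every individual ball belongs to only finitely many of the sets $S_n$, so the ordinary $\liminf$ and $\limsup$ are both empty, even though $|S_n| = 9n$ diverges; this is exactly the $k \ne l$ case that triggers $\alpha_k$ above.

def S(n):
    """Balls in the urn after step n of the deterministic variant."""
    return set(range(n + 1, 10 * n + 1))

def steps_present(ball, max_step):
    """Steps n <= max_step at which `ball` is an element of S_n."""
    return [n for n in range(1, max_step + 1) if ball in S(n)]

print(steps_present(1, 200))    # []              -> ball 1 never survives a completed step
print(steps_present(25, 200))   # [3, 4, ..., 24] -> every fixed ball appears in only finitely many S_n
print(len(S(200)))              # 1800            -> yet |S_n| = 9n diverges, so k != l in the notation above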
Similarly, suppose we modify the problem so that at each step one ball is added and the lowest-numbered ball is removed. Then, how many balls are in the urn in the limit? Anonymous sets give the intuitive answer:
$$\lim_{n\to\infty}\{n\} = \alpha_1.$$
I recognize that mathematicians can disagree about resolutions to this paradox, but to me, this is the most intuitive resolution.
1,413 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | The problem is either ill-formed or not in first-order logic.
Root cause: execution of the "last" step will write an infinite number of digits on a ball, causing that step itself to take an infinite time to execute.
The ability to execute an infinite process with an infinite step implies the ability to solve all first-order logic problems (Gödel is therefore false) by execution of the following sequence H (for theorem X):
Z = asymptotic_coroutine(
FOR N = 1...∞
FOR P = 1...N
Convert number P to string S by characters.
IF S is a proof for theorem X
THEN
OUTPUT "yes" and HALT
) + asymptotic_coroutine(
FOR N = 1...∞
FOR P = 1...N
Convert number P to string S by characters.
IF S is a proof for theorem ¬X
THEN
OUTPUT "no" and HALT
)
IF Z = ""
THEN Z = "independent"
IF Z = "yesno" ∨ Z = "noyes"
THEN Z = "paradox"
OUTPUT Z
where the infinite step is unspooling the output
The program inside the asymptotic_coroutine is merely an exhaustive search for a proof of theorem X (or of its negation). Converting P to S results in "aa", "ab", "ac", ... "a∨", ... where every symbol that can appear in a theorem is generated. This results in generating all theorems of length up to $\log_{\mathrm{characters}}(N)$ (log base the size of the character set) in turn. Since N grows without limit in the outer loop, this will eventually generate all theorems.
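The enumeration step can be made concrete with a tiny sketch (the alphabet here is a toy placeholder of my own; a real system would use every symbol of the logic). It simply yields all finite strings in length order, which is all the "convert number P to string S" step needs to do:

from itertools import count, product

SYMBOLS = "ab∨¬()"   # a toy alphabet standing in for "every symbol that can appear in a theorem"

def all_strings():
    """Yield every finite string over SYMBOLS, shorter strings first (the 'convert P to S' enumeration)."""
    for length in count(1):
        for chars in product(SYMBOLS, repeat=length):
            yield "".join(chars)

gen = all_strings()
print([next(gen) for _ in range(10)])   # 'a', 'b', '∨', '¬', '(', ')', 'aa', 'ab', 'a∨', 'a¬'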
The side that is false will never terminate but we don't have to care about that because we are allowed to execute infinite steps. In fact we depend on being able to do this to detect independence as both sides will never finish. Except for one thing. We allowed an infinite number of steps to execute in a finite time by asymptotic increase of execution speed. This is the surprising part. The asymptotic_coroutine that never finishes and never generates output has "finished"* after the asymptotic time and still has never generated any output.
*If we placed an OUTPUT after the FOR N = 1...∞ it would not be reached but we are not going to do that.
The strong form of Gödel's Incompleteness Theorem may be stated: "For every first-order logic system $F$ there is a statement $G_F$ that is true in $F$ but cannot be proven to be true in $F$." But proof method H cannot fail to prove all must-be-true statements in $F(H)$.
Dilemma: ¬Gödel ∨ ¬(infinite steps are allowed)
Therefore:
Dilemma: ¬Gödel ∨ ¬(315502 is well formed in first order logic)
1,414 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | What's the best explanation we can give to them to solve these
conflicting intuitions?
Here's the best answer, and it has very little to do with probabilities. All balls have numbers; let's call them birth numbers. The birth numbers start from B1, B2, B3, ... and go to infinity, because we really never stop. We get closer to noon but keep adding and removing balls, which is why there is no final ball number. This is a very important consideration, btw.
We put the balls into the box in batches of 10, such as batch #7: B61, B62, ..., B70. Let's forget about these for a minute, and focus on the balls that are removed from the box. They come in a random order. I'll explain why randomness is important later, but for now all it means is that any ball with a birth number from B1 to B10K that's still in the box at step K can be drawn out. We're going to index the balls that we remove by the order in which they were removed; let's call these death numbers: D1, D2, D3, ..., DK.
By noon we have put an infinite number of balls into the box, and surely we never ran out of balls to remove from it. Why? Because we first put in 10 balls, and ONLY THEN remove one. So, there's always a ball to remove. This means that we also removed an infinite number of balls by noon.
This also means that the removed balls are indexed from 1 to infinity, i.e. we can pair each removed ball with a ball that was put in the box: B1 to D1, B2 to D2, etc. This means that we removed as many balls as we put in, because each birth number was paired with a death number.
Now, that was the solution. Why does it defeat our intuition? It's elementary, Dr Watson. The reason is that we surely know that for all K this holds:
$$K<10K$$
That's why after K steps, we should not be able to remove all balls from the box, because we put in 10K balls and removed only K of them. Right?
There is a little problem, though: when $K=\infty$, this is no longer true:
$$10\times\infty\nless\infty$$
That's why the intuition breaks down.
Now, what if the balls were not removed at random? Two things can happen, as in @amoeba's canonical answer. First, say we put in 10 balls and then immediately remove the last one added. It's as if we were putting only nine balls in. This will match our intuition, and at noon there will be an infinite number of balls. How come? Because we were not removing balls randomly; we were following the algorithm that pairs birth numbers to death numbers as $B_{10K}=D_K$ at the time of removal. So, we paired each removed ball to one of the balls that we put in: $B_{10}\to D_1, B_{20}\to D_2, B_{30}\to D_3,\dots$ This means a ton of balls were never ever paired: $B_1, B_2, \dots, B_9, B_{11}, \dots$, etc.
The second thing that can happen with non-random ball removal is also related to pairing at removal: we enforce $B_K=D_K$. We can do this by removing ball $B_K$ at each step $K$, which ensures that $B_K$ is paired to $D_K$. This way every ball that we put in is paired with a death number, i.e. the same end result as in the random draw of removed balls. Obviously, this means that there are no balls left in the box at noon.
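A small sketch of these two non-random schemes (my own bookkeeping; "last" removes the ball just added, "first" removes ball K at step K) makes the difference in pairings concrete: after any finite number of steps both boxes hold 9K balls, but under the first scheme ball 1 is never paired and never leaves, while under the second every fixed ball is paired after finitely many steps:

def run_scheme(scheme, n_steps):
    """Pair birth numbers to death numbers under a deterministic removal scheme.
    scheme="last": remove the ball just added (B_{10K} = D_K).
    scheme="first": remove ball K at step K (B_K = D_K)."""
    box, death_number = [], {}
    for k in range(1, n_steps + 1):
        box.extend(range(10 * (k - 1) + 1, 10 * k + 1))
        removed = box.pop() if scheme == "last" else box.pop(0)
        death_number[removed] = k
    return death_number, box

for scheme in ("last", "first"):
    paired, left = run_scheme(scheme, 1000)
    print(scheme, len(left), 1 in paired, min(left))
# last : 9000 balls left, ball 1 never paired, smallest remaining ball = 1 (it never leaves)
# first: 9000 balls left, ball 1 paired at step 1, smallest remaining ball = 1001 (every fixed ball eventually goes)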
I have just shown that the problem has very little to do with probabilities per se. It has everything to do with the cardinalities ("powers") of infinite countable sets. The only real problem that I avoided discussing is whether the sets are truly countable. You see, as you get closer to noon your rate of ball insertions increases rather quickly, to put it mildly. So, it's not so trivial to determine whether the number of balls that we put into the box is actually countable.
Unraveling
Now, I'm going to unravel this canonical solution of the paradox, and get back to our intuition.
How is it possible that we put 10 balls in, remove one, and still run out of all the balls by 12 o'clock? Here's what is really happening: 12 o'clock is unreachable.
Let us reformulate the problem. We don't halve the time intervals anymore; we put in and remove balls every minute. Isn't this exactly the same as the original problem? Yes and no.
Yes, because nowhere in my exposition above did I refer explicitly to time except at the very end. I was counting the steps $k$. So, we can keep counting the steps and the dead balls by $k$.
No, because now we're never going to stop. We'll keep adding and removing balls till the end of time, which never arrives, whereas in the original problem the end comes at 12 o'clock.
This explains how our intuition fails. Although we add ten balls for every one we remove, because time never ends every ball that we put in will eventually be removed! It may take an infinite number of minutes, but that's OK, because we have an infinite number of minutes remaining. That's the true resolution of the problem.
In this formulation, would you ask "how many balls are in the box after infinity is over?" No, because it's a nonsensical question. That's why the original question is nonsensical too; or you could call it ill-posed.
Now, if you go back to the original problem, the end of time apparently does happen: it's at 12 o'clock. The fact that we stop putting balls in means that time just ended, and we reached beyond the end of it. So, the true answer to the question is that 12 o'clock should never occur; it's unreachable.
What's the best explanation we can give to them to solve these
conflicting intuitions?
Here's the best answer, and it has very little to do with probabilities. All balls have numbers, let's call them birth numbers. The birth numbers start from B1, B2, B3... and go to infinity, because we really never stop. We get closer to 12:00AM but keep adding and removing balls, that's why there is not a final number of a ball. This is a very important consideration, btw.
We put the balls into a box in 10 ball batches, such as batch #7: B71, B72,...,B80. Let's forget about these for a minute, and focus on the balls that are removed from the box. They come at a random order. I'll explain why randomness is important later, but for now all it means is that any ball with a brith number from B1 to B10k that's still in the box at step K can be drawn out. We're going to index the balls that we remove by the order in which they were removed, let's call them death numbers: D1, D2, D3 ... DK.
By 12:00AM we put infinite number of balls into a box, and surely we never ran out of balls to remove from it. Why? Because we first put 10 balls, THEN ONLY remove one. So, there's always a ball to remove. This means that we also removed infinite number of balls by 12:00AM.
This also means that each removed ball was indexed from 1 to infinity, i.e. we could pair each removed ball to a ball that was put in the box: B1 to D1, B2 to D2, etc. This means that we removed as many balls as we put in, because each birth number was paired with each death number.
Now that was the solution. Why does it defeat our intuition? It's elementary, Dr Watson. The reason is because we surely know that for all K this holds:
$$K<10K$$
That's why after K steps, we should not be able to remove all ball from the box, because we put 10K balls and removed only K of them. Right?
There is a little problem. The matter is that when $K=\infty$, this is no longer true:
$$10\times\infty\nless\infty$$
That's why the intuition breaks down.
Now, if the balls were not removed at random. Two thing may happen as in @amoeba's canonical answer. First, let's say we were putting 10 balls then immediately removing the last one. It's as if we were putting only nine balls in. This will match our intuition, and at 12:00AM there will be infinite number of balls. How come? Because we were not removing balls randomly, we were following the algorithm where the birth numbers were paired to death numbers as $B10K=DK$ at the time of removal. So, we paired each removed ball to one of the balls that we put in: $B10\to D1,B20\to D2,B30\to D3,\dots$
, this means a ton of balls were never ever paired B1,B2,...,B9,B11,... etc.
The second thing that may happen with non random ball removal is also related to pairing at removal: we correlate BK=DK. We can do this by removing a ball with BK at each step K, which ensures that BK is paired to DK. This way each removed ball is paired with each ball that we put in, i.e. the same end result like in the random draw of removed balls. Obviously, this means that there are no balls left in the box after 12:00AM.
I just have shown that the problem has very little to do with probabilities per se. It has everything to do with powers of infinite countable (?) sets. The only real problem that I avoided discussing is whether the sets are truly countable. You see when you get closer to 12:00AM your rate of ball inserts is increasing rather quickly, to put it mildly. So, it's not so trivial to devise whether the number of balls that we put into the box is actually countable.
Unraveling
Now, I'm going to unravel this canonical solution of the paradox, and get back to our intuition.
How is is possible that we put 10 balls in, remove one and still run out of all the balls at 12 hour? Here's what really is happening. 12 hour is unreachable.
Let as reformulate the problem. We don't halve time intervals anymore. We put and remove balls every minute. Isn't this exactly the same as in the original problem? Yes and no.
Yes, because nowhere in my exposition above I referred explicitly to time but at the very end. I was counting the steps k. So, we can keep counting the steps and dead balls by k.
No, because now we're never going to stop. We'll keep adding and removing balls till the end of time, which never arrives. While in the original problem the end is at 12 hour.
This explains how our intuition fails. Although we put balls at 9x rate of removal, because time never ends, every ball that we put in will be removed eventually! It may take infinite number of minutes, but it's Ok, because we have infinite number of minutes remaining. That's the true solution of the problem.
In this formulation would you ask "how many balls are in the box after infinity is over?" No! Because it's a nonsensical question. That's why the original question is nonsensical too. Or you could call it ill-posed.
Now, if you go back to the original problem, then the end of time apparently happens. It's at 12. The fact that we stopped putting balls in means that time just ended, and we reached beyond the end of it. So, the true answer to the question is that 12 o'clock should never occur. It's unreachable. | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How ma
What's the best explanation we can give to them to solve these
conflicting intuitions?
Here's the best answer, and it has very little to do with probabilities. All balls have numbers, let's call th |
1,415 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | I want to make a reformulation that is as easy as possible to make the answer of 0 more intuitive, starting from the simplified example that balls are not removed randomly, but ball $n$ is removed at the $n$-th step.
Consider this: I put all balls into the urn at the beginning. In step 1, I take out ball 1. In step 2, I take out ball 2, and so on. Any doubt that the urn will be empty after infinite steps?
Okay. But if I don't put all balls into the urn at first, but only some balls, how could the urn be fuller in the end?
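To make the comparison concrete, here is a small sketch (my own illustration; the helper name is hypothetical) of the original scheme in which ball $n$ is removed at the $n$-th step: the urn after step $n$ holds exactly the balls $n+1,\dots,10n$, so the count grows without bound while every fixed ball still leaves eventually.

```python
def urn_after(n):
    """Original scheme: by step n, balls 1..10n have been put in and balls 1..n removed."""
    return set(range(1, 10 * n + 1)) - set(range(1, n + 1))

for n in (1, 10, 100):
    urn = urn_after(n)
    print(f"step {n:3d}: {len(urn)} balls in the urn, smallest label {min(urn)}")
# The count 9n grows without bound, but the smallest remaining label is n+1:
# any particular ball m is gone for good once n >= m, which is the sense in
# which the limit set is empty.
```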
1,416 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Let x be the number of balls that have been removed and y be the number of balls remaining. After each cycle y=9x. As x>0, y>0. There will be infinitely many balls in the urn at 12PM.
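The finite-step arithmetic behind this argument, as a quick sketch (my own illustration, not part of the original answer): after $x$ cycles, $10x$ balls have been put in and $x$ removed, so $y=9x$ remain at every finite step.

```python
for x in (1, 2, 3, 100):
    y = 10 * x - x        # balls added minus balls removed after x cycles
    print(f"after {x} cycles: {y} balls remain (y = 9x)")
```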
The reason that solutions based on probabilities lead to difficulties is that the probabilities from infinite series are tricky. ET Jaynes wrote about a few different apparent paradoxes of probability, like this one, in his book Probability Theory: The Logic of Science. I do not have my copy at hand, but the first part of the book is available online from Larry Bretthorst here. The following quote is from the preface.
Yet when all is said and done we find, to our own surprise, that
little more than a loose philosophical agreement remains; on many
technical issues we disagree strongly with de Finetti. It appears to
us that his way of treating infinite sets has opened up a Pandora’s
box of useless and unnecessary paradoxes; nonconglomerability and
finite additivity are examples discussed in Chapter 15.
Infinite set paradoxing has become a morbid infection that is today
spreading in a way that threatens the very life of probability theory,
and requires immediate surgical removal. In our system, after this
surgery, such paradoxes are avoided automatically; they cannot arise
from correct application of our basic rules, because those rules admit
only finite sets and infinite sets that arise as well-defined and
well-behaved limits of finite sets. The paradoxing was caused by (1)
jumping directly into an infinite set without specifying any limiting
process to define its properties; and then (2) asking questions whose
answers depend on how the limit was approached.
For example, the question: “What is the probability that an integer is
even?” can have any answer we please in (0, 1), depending on what
limiting process is to define the “set of all integers” (just as a
conditionally convergent series can be made to converge to any number
we please, depending on the order in which we arrange the terms).
In our view, an infinite set cannot be said to possess any “existence”
and mathematical properties at all—at least, in probability
theory—until we have specified the limiting process that is to
generate it from a finite set. In other words, we sail under the
banner of Gauss, Kronecker, and Poincaré rather than Cantor,
Hilbert, and Bourbaki. We hope that readers who are shocked by this
will study the indictment of Bourbakism by the mathematician Morris
Kline (1980), and then bear with us long enough to see the advantages
of our approach. Examples appear in almost every Chapter.
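To make the quoted point about the "probability that an integer is even" concrete, here is a small sketch (my own illustration, not from Jaynes): the running fraction of even numbers tends to 1/2 under the natural enumeration of the integers, but to 1/3 if we enumerate the same integers as two odds followed by one even.

```python
from itertools import count, islice

def natural_order():
    """1, 2, 3, 4, ..."""
    return count(1)

def two_odds_then_one_even():
    """1, 3, 2, 5, 7, 4, 9, 11, 6, ...: every positive integer exactly once, reordered."""
    odds, evens = count(1, 2), count(2, 2)
    while True:
        yield next(odds)
        yield next(odds)
        yield next(evens)

n = 300_000
for name, gen in [("natural order", natural_order()),
                  ("two odds, then one even", two_odds_then_one_even())]:
    evens = sum(1 for x in islice(gen, n) if x % 2 == 0)
    print(f"{name:25s} fraction of evens among first {n} terms: {evens / n:.4f}")
```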
The use of limits in the answer of @enumaris (+1) provides a way around the trickiness of infinities in probability.
1,417 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | It's worth reading amoeba's answer that is just excellent and clarifies the problem very much. I don't exactly disagree with his answer but want to point out that the solution of the problem is based on a certain convention. What is interesting is that this sort of problem shows that this convention, while often used, is questionable.
Just as he says, there is a technical point about proving that for each ball the probability of staying in the urn forever is 0. Apart from this point, the problem is not about probabilities. A deterministic equivalent may be given, and it is much easier to understand. The key idea is: since every ball is absent from the urn from some point in time on, the urn at the end is empty. If you represent the presence in the urn of each ball by a sequence of zeros and ones, each sequence is 0 from some point on, thus its limit is 0.
Now the problem can be simplified even more. I call the moments 1, 2, 3.... for simplicity:
moment 1: put ball 1 in the urn
moment 2: remove it
moment 3: put ball 2 in the urn
moment 4: remove it
moment 5: put ball 3 in the urn
...
Which balls are in the urn at the end (noon)? By the same idea, the same answer: none.
But fundamentally, there is no way to know, because the problem does not say what happens at noon. Actually, it is possible that at the end of time Pikachu suddenly appears in the urn. Or maybe the balls all suddenly collapse and merge into one big ball. Not that this is meant to be realistic; the point is that it is simply not specified.
The problem can only be answered if a certain convention tells us how to go to the limit: a continuity assumption. The state of the urn at noon is the limit of its states before. Where should we look for a continuity assumption that would help us answer to the question?
In physical laws? Physical laws ensure a certain continuity. I think of a simplistic classical model, not calling on to real modern physics. But fundamentally, physical laws would bring exactly the same questions as the mathematical ones: the way we choose to describe continuity for physical laws relies on asking the question mathematically: what is continuous, how?
We have to look for a continuity assumption in a more abstract way. The usual idea is to define the state of the urn as a function from the set of balls into $\{0;1\}$. 0 means absent, 1 means present. And to define continuity, we use product topology, aka pointwise convergence. We say that the state at noon, is the limit of the states before noon according to this topology. With this topology, there is a limit, and it is 0: an empty urn.
But now we modify the problem a little in order to challenge this topology:
moment 1: put ball 1 in the urn
moment 2: remove it
moment 3: put ball 1 in the urn
moment 4: remove it
moment 5: put ball 1 in the urn
...
For the same topology, the sequence of states has no limit. That's where I start to see the paradox as a true paradox. For me this modified problem is essentially the same. Imagine you are the urn. You see balls coming and going. If you can't read the number on it, whether it is the same ball or another one does not change what's happening to you. Instead of seeing balls as individual distinct elements, you see them as a quantity of matter coming in and out. The continuity could naturally be defined by looking at variations of the quantity of matter. And there is indeed no limit. In a way this problem is the same as the original problem where you decide to ignore the ball identity, thus leading to a different metric and a different notion of convergence. And even if you could see the number on the balls, the state could be seen as just a flickering presence with a growing number.
In one case, the limit of the sequence of your states is "empty", in the other case the limit is undefined.
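A small sketch of the two situations (my own encoding, not part of the original answer): representing the urn state as an indicator function over ball labels, scheme A (each ball enters once and then leaves) has every coordinate eventually 0, so the pointwise limit exists and is the empty urn, while in scheme B (the same ball going in and out) the coordinate of ball 1 flickers forever and has no pointwise limit.

```python
def state_scheme_a(t, n_balls=3):
    """Scheme A: ball k is put in at moment 2k-1 and removed at moment 2k."""
    return {b: int(t == 2 * b - 1) for b in range(1, n_balls + 1)}

def state_scheme_b(t, n_balls=3):
    """Scheme B: ball 1 is put in at every odd moment and removed at every even moment."""
    return {b: int(b == 1 and t % 2 == 1) for b in range(1, n_balls + 1)}

for name, state in [("A: each ball once", state_scheme_a), ("B: same ball forever", state_scheme_b)]:
    print(name)
    for b in range(1, 4):
        seq = [state(t)[b] for t in range(1, 13)]
        print(f"  ball {b}: {seq}")
# Scheme A: every row is eventually all zeros      -> pointwise limit = empty urn.
# Scheme B: the row for ball 1 alternates 1,0,1,... -> no pointwise limit.
```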
The formalization of the problem with the product topology fundamentally relies on separating what happens to each different ball, and thus creating a metric reflecting this "distinguishability". Only because of this separation can a limit be defined. The fact that this separation is so fundamental to the answer but not fundamental for describing "what's going on" in the urn (a point that is endlessly arguable) makes me think the solution is the consequence of a convention rather than a fundamental truth.
For me, the problem, when considered as purely abstract, has a solution as long as the missing information is provided: that the state at noon is the limit of the previous states, and in what sense that limit is taken. However, when thinking of this problem intuitively, the limit of the sequence of states is not something you can think of in a single way. Fundamentally, I think there is no way to answer.
1,418 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | The aim of this post is to argue for the OP's last option, that we need a better formulation. Or at least, Ross' proof is not as clear-cut as it may seem at first, and certainly the proof is not so intuitive that it is in a good position to be used in an introductory course on probability theory. It requires much explanation, both in understanding the paradoxical aspects and, once that has been cleared up, at the points where Ross' proof passes very quickly, making it difficult to see which axioms, theorems, and implicit interpretations the proof depends on.
Related to this aspect it is very amusing to read Teun Koetsier's final words in "Didactiek met oneindig veel pingpongballen?"
Als we niet oppassen dan wordt het 'Paradoxes a window to confusion'.
Translated "If we aren't carefull then it becomes 'Paradoxes a window to confusion'"
Below is a description of the "regular" arguments that may pass in discussions on supertasks, and more specifically the deterministic Ross-Littlewood paradox. After this, when we set all this discussion aside, a view is given of the special case of the probabilistic Ross-Littlewood paradox as providing additional elements, which, however, get lost and become confusing in the wider setting of supertasks.
Three deterministic cases and discussion on supertasks
The Ross-Littlewood paradox admits many different outcomes depending on the manner in which the balls are displaced from the urn. To investigate these, let's kick off with the exact problem description that Littlewood gives as the 5th problem in his 1953 manuscript.
Version 1 The set of balls remaining in the urn is empty
The Ross-Littlewood paradox, or Littlewood-Ross paradox, first appeared as the 5th problem in Littlewood's 1953 manuscript "a mathematician's miscellany"
An infinity paradox. Balls numbered 1, 2, ... (or for a mathematician the numbers themselves) are put into a box as follows. At 1 minute to
noon the numbers 1 to 10 are put in, and the number 1 is taken out. At
1/2 minute to noon numbers 11 to 20 are put in and the number 2 is
taken out and so on. How many are in the box at noon?
Littlewood is short about this problem, but gives a nice representation as the set of points:
$P_{1} + P_{2}+ ... + P_{10} - P_1 + P_{11} + ... + P_{20} - P_2 + ...$
for which it is easily noticed that it is 'null'.
Version 2 The set of balls remaining in the urn has infinite size
Ross (1976) adds two more versions to this paradox. First we look at the first addition:
Suppose that we possess an infinitely large urn and an infinite
collection of balls labeled ball number 1, number 2, number 3, and so
on. Consider an experiment performed as follows: At 1 minute to 12
P.M. , balls numbered 1 through 10 are placed in the urn and ball
number 10 is withdrawn. (Assume that the withdrawal takes no time.) At
1/2 minute to 12 P.M., balls numbered 11 through 20 are placed in the
urn and ball number 20 is withdrawn. At 1/4 minute to 12 P.M., balls
numbered 21 through 30 are placed in the urn and ball number 30 is
withdrawn. At 1/8 minute to 12 P.M., and so on. The question of
interest is, How many balls are in the urn at 12 P.M. ?
Obviously the answer is infinity since this procedure leaves all the balls with numbers $x \mod 10 \neq 0$ in the urn, which are infinitely many.
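A quick check of that count (my own sketch; the helper name is hypothetical): after $n$ steps the urn holds exactly the labels in $1,\dots,10n$ that are not multiples of 10, i.e. $9n$ balls, and this set only grows.

```python
def urn_version2(n):
    """Version 2: at step k, balls 10k-9..10k are put in and ball 10k is withdrawn."""
    return [b for b in range(1, 10 * n + 1) if b % 10 != 0]

for n in (1, 2, 5):
    urn = urn_version2(n)
    print(f"step {n}: {len(urn)} balls, e.g. {urn[:12]}")
```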
Before we get to Ross' second addition, which includes probabilities, let us look at another case.
Version 3 The set of balls remaining in the urn is a finite set of arbitrary size
The urn can have any number of balls at 12 p.m. depending on the procedure of displacing the balls. This variation has been described by Tymoczko and Henle (1995) as the tennis ball problem.
Tom is in a large box, empty except for himself. Jim is standing
outside the box with an infinite number of tennis balls (numbered 1,
2, 3, ....). Jim throws balls 1 and 2 into the box. Tom picks up a
tennis ball and throws it out. Next Jim throws in balls 3 and 4. Tom
picks up a ball and throws it out. Next Jim throws in balls 5 and 6.
Tom picks up a ball and throws it out. This process goes on an
infinite number of times until Jim has thrown all the balls in. Once
again, we ask you to accept accomplishing an infinite number of tasks
in a finite period of time. Here is the question: How many balls are
in the box with Tom when the action is over?
The answer is somewhat disturbing: It depends. Not enough information
has been given to answer the question. There might be an infinite
number of balls left, or there might be none.
In the textbook example they argue for the two cases, either infinite or finite (Tymoczko and Henle leave the intermediate case as an exercise); however, the problem is taken further in several journal articles in which it is generalized such that we can get any number depending on the procedure followed.
Especially interesting are the articles on the combinatorial aspects of the problem (where the focus is, however, not on the aspects at infinity), for instance counting the number of possible sets that we can have at any time. In the case of adding 2 balls and removing 1 at each step the results are simple: the number of possible sets at the n-th step is the (n+1)-th Catalan number. E.g. 2 possibilities {1},{2} in the first step, 5 possibilities {1,3},{1,4},{2,3},{2,4} and {3,4} in the second step, 14 in the third, 42 in the fourth, etcetera (see Merlini, Sprugnoli and Verri 2002, The tennis ball problem). This result has been generalized to different numbers of added and subtracted balls, but that goes too far for this post now.
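The 2, 5, 14, 42 pattern is easy to verify by brute force. Below is a small sketch (my own code, not from the cited article) that enumerates the reachable urn contents for the add-2-remove-1 scheme and compares the counts with the Catalan numbers:

```python
from math import comb

def catalan(n):
    """n-th Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

states = {frozenset()}          # possible urn contents, starting from the empty urn
for k in range(1, 6):
    new_states = set()
    for s in states:
        grown = s | {2 * k - 1, 2 * k}          # balls 2k-1 and 2k are thrown in
        for ball in grown:                      # one of them is thrown out
            new_states.add(grown - {ball})
    states = new_states
    print(f"step {k}: {len(states)} possible urn contents, Catalan C_{k + 1} = {catalan(k + 1)}")
```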
Arguments based on the concept of supertasks
Before getting to the theory of probability, many arguments can already be made against the deterministic cases and the possibility of completing the supertask. Also, one can question whether the set theoretic treatment is a valid representation of the kinematic representation of the supertask. I do not wish to argue whether these arguments are good or bad. I mention them to highlight that the probabilistic case can be contrasted with these 'supertask' arguments and can be seen as containing additional elements that have nothing to do with supertasks. The probabilistic case has a unique and separate element (the reasoning with the theory of probability) that is neither proven nor refuted by arguing against or for the case of supertasks.
Continuity arguments: These arguments are often more conceptual. For instance the idea that the supertask can not be finished such as Aksakal and Joshua argue in their answers, and a clear demonstration of these notions is Thomson's lamp, which in the case of the Ross Littlewood paradox would be like asking, was the last removed number odd or even?
Physical arguments: There also exist arguments that challenge the mathematical construction as being relevant to the physical realization of the problem. We can have a rigorous mathematical treatment of a problem, but a question remains whether this really has bearing on a mechanistic execution of the task (beyond simplistic notions such as breaking certain barriers of the physical world, like speed limits or energy/space requirements).
One argument might be that the set-theoretic limit is a mathematical
concept that not necessarily describes the physical reality
For example consider the following different problem: The urn has a ball inside which we do not move. Each step we erase the number previously written on the ball and rewrite a new, lower, number on it. Will the urn be empty after infinitely many steps? In this case it seems a bit more absurd to use the set theoretic limit, which is the empty set. This limit is nice as a mathematical reasoning, but does it represent the physical nature of the problem? If we allow balls to disappear from urns because of abstract mathematical reasoning (which, maybe should be considered more as a different problem) then just as well we might make the entire urn disappear?
Also, the differentiation of the balls and assigning them an ordering seems "unphysical" (it is relevant to the mathematical treatment of sets, but do the balls in the urn behave like those sets?). If we would reshuffle the balls at each step (e.g. each step randomly switch a ball from the discarded pile with a ball from the remaining pile of infinite balls), thus forgetting the numbering based on either when they enter the urn or the number they got from the beginning, then the arguments based on set theoretic limits make no sense anymore because the sets do not converge (there is no stable solution: once a ball has been discarded from the urn, it can return again).
From the perspective of performing the physical tasks of
filling and emptying the urn it seems like it should not matter
whether or not we have numbers on the balls. This makes the set
theoretic reasoning more like a mathematical thought about infinite
sets rather than the actual process.
Anyway, if we insist on using these infinite paradoxes for didactic purposes, and thus, before we get to the theory of probability, first need to fight to get an acceptable idea of (certain) supertasks accepted by the most skeptical/stubborn thinkers, then it may be interesting to use the correspondence between Zeno's paradox and the Ross-Littlewood paradox described by Allis and Koetsier (1995) and briefly sketched below.
In their analogy Achilles is trying to catch up with the turtle while both of them cross flags that are placed at distances $$F(n)=2^{-\log_{10} n}$$ such that the distance of Achilles at $n$ flags is twice the distance of the turtle at $10n$ flags, namely $F(n) = 2 F(10n)$. Then, until 12 p.m., the difference in the number of flags that the turtle and Achilles will have passed keeps growing. But eventually, at 12 p.m., nobody except the Eleatics would argue that Achilles and the turtle have reached the same point and (thus) have zero flags between them.
The probabilistic case and how it adds new aspects to the problem.
The second version added by Ross (in his textbook), removes the balls based on random selection
Let us now suppose that whenever a ball is to be withdrawn, that ball
is randomly selected from among those present. That is, suppose that
at 1 minute to 12 P.M. balls numbered 1 through 10 are placed in the
urn and a ball is randomly selected and withdrawn, and so on. In this
case, how many balls are in the urn at 12 P.M. ?
Ross' solution is that the probability that the urn is empty is 1. However, while Ross' argumentation seems sound and rigorous, one might wonder what kind of axioms are necessary for this, and which of the theorems used might be placed under stress by implicit assumptions that might not be founded in those axioms (for instance the presupposition that the events at noon can be assigned probabilities).
Ross' calculation is in short a combination of two elements that divides the event of a non-empty urn into countably many subsets/events and proves that for each of these events the probability is zero:
For $F_i$, the event that ball number $i$ is in the urn at 12 p.m., we have $P(F_i) = 0$
For, $P(\bigcup_1^\infty F_i)$, the probability that the urn is not empty at 12 p.m. we have
$P(\bigcup_1^\infty F_i) \leq \sum_1^\infty P(F_i) = 0$
The probabilistic case of the Ross-Littlewood paradox, without reasoning about supertasks
In the most naked form of the paradox, stripping it from any problems with the performance of supertasks, we may wonder about the "simpler" problem of subtracting infinite sets. For instance in the three versions we get:
$$\begin{aligned}
S_{added} &= \lbrace 1,2,3,4,5,6,7,8,9,10 \rbrace + \lbrace 10k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,1} &= \lbrace k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,2} &= \lbrace 10k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,3} &= \lbrace k \text{ with } k \in \mathbb{N} \rbrace \setminus \lbrace a_1,a_2,a_3,\dots \text{ with } a_i \in \mathbb{N} \rbrace
\end{aligned}$$
and the problem reduces to a set subtraction like $S_{added}-S_{removed,1} = \emptyset$.
Any infinite sequence $S_{RL} =\lbrace a_k \text{ without repetitions and } a_k \le 10k \rbrace$ is an (equally) possible sequence that describes the order in which the balls can be removed in a probabilistic realization of the Ross-Littlewood problem. Let's call these infinite sequences RL-sequences.
Now, the more general question, without the paradoxical reasoning about supertasks, is about the density of RL-sequences that do not contain the entire set $\mathbb{N}$.
A graphical view of the problem.
[Figure: nested, fractal structure]
Before the edited version of this answer I had made an argument that used the existence of an injective map from 'the infinite sequences that empty the urn' to 'the infinite sequences that do not contain the number 1'.
That is not a valid argument. Compare for instance with the density of the set of squares. There are infinitely many squares (and there is the bijective relation $n \mapsto n^2$ and $n^2 \mapsto n$), yet the set of squares has density zero in $\mathbb{N}$.
The image below gives a better view of how, with each extra step, the probability that ball 1 is in the urn decreases (and we can argue the same for all other balls). This holds even though the cardinality of the subset of RL-sequences that keep ball 1 in the urn (the sequences of displaced balls) is equal to the cardinality of all RL-sequences (the image displays a sort of fractal structure, and the tree contains infinitely many copies of itself).
[Figure: growth of sample space, number of paths]
The image shows all the possible realizations for the first five steps, using the scheme of the tennis ball problem (the tennis ball problem, with each step adding 2 balls and removing 1, grows less fast and is easier to display). The turquoise and purple lines display all possible paths that may unfold (imagine that at each step $n$ we throw a die with $n+1$ faces and, based on its result, select one of the $n+1$ paths; in other words, based on the result we remove one of the $n+1$ balls in the urn).
The number of possible urn compositions (the boxes) grows as the $(n+1)$-th Catalan number $C_{n+1}$, and the total number of paths grows as the factorial $(n+1)!$. For the urn compositions with ball number 1 inside (colored dark gray) and the paths leading to these boxes (purple), the numbers unfold in exactly the same way, but this time it is the $n$-th Catalan number $C_n$ and the factorial $n!$.
[Figure: density of paths that leave ball $n$ inside]
So, for the paths that lead to an urn with ball number 1 inside, the density is $\frac{n!}{(n+1)!}=\frac{1}{n+1}$, which decreases as $n$ becomes larger. While there are many realizations that lead to finding ball number $n$ in the box, the probability approaches zero (I would argue that this does not make it impossible, just almost surely not happening; the main trick in Ross' argument is that the union of countably many null events is also a null event).
[Figure: example of paths for the first five steps in the tennis ball problem (each step: add 2, remove 1)]
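The $n!/(n+1)!$ density claim can be checked directly for small $n$ by enumerating the removal paths of the add-2-remove-1 tree (my own sketch; the function name is hypothetical):

```python
from math import factorial

def count_paths(n):
    """Count all removal paths of length n, and those that leave ball 1 in the urn."""
    total = with_ball1 = 0
    def step(k, urn):
        nonlocal total, with_ball1
        if k > n:
            total += 1
            with_ball1 += (1 in urn)
            return
        grown = urn | {2 * k - 1, 2 * k}   # add balls 2k-1 and 2k
        for ball in grown:                 # branch over the removed ball
            step(k + 1, grown - {ball})
    step(1, frozenset())
    return total, with_ball1

for n in (1, 2, 3, 4, 5):
    total, with_ball1 = count_paths(n)
    print(f"n={n}: total paths {total} = (n+1)! = {factorial(n + 1)}, "
          f"paths keeping ball 1: {with_ball1} = n! = {factorial(n)}, fraction 1/{n + 1}")
```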
Ross' arguments for a certainly empty urn.
Ross defines the events (subsets of the sample space), $E_{in}$, that a ball numbered $i$ is in the urn at step $n$. (in his textbook he actually leaves out the subscript $i$ and argues for ball 1).
Proof step 1)
Ross uses his proposition 6.1. for increasing or decreasing sequences of events (e.g. decreasing is equivalent to $E_1 \supset E_2 \supset E_3 \supset E_4 \supset ...$).
Proposition 6.1: If $\lbrace E_n, n\geq 1 \rbrace$ is either an increasing or a decreasing sequence of events, then $$\lim_{n \to \infty} P(E_n) = P(\lim_{n \to \infty} E_n) $$
Using this proposition Ross states that the probability of observing ball $i$ at 12 p.m. (which is the event $\lim_{n \to \infty} E_{in}$) is equal to
$$\lim_{n \to \infty} P(E_{in})$$
Allis and Koetsier argue that this is one of those implicit assumptions. The supertask itself does not (logically) imply what happens at 12 p.m., and solutions to the problem have to make implicit assumptions, which in this case is that we can use a principle of continuity on the set of balls inside the urn to state what happens at infinity. If a (set-theoretic) limit to infinity is a particular value, then at infinity we will have that particular value (there can be no sudden jump).
An interesting variant of the Ross-Littlewood paradox is when we also randomly return balls that had been discarded before. In that case there won't be convergence (as with Thomson's lamp) and we cannot as easily define the limit of the sequences $E_{in}$ (which are not decreasing anymore).
Proof step 2)
The limit is calculated. This is a simple algebraic step.
$$\lim_{n \to \infty} P(E_{in}) = \prod_{k=i}^{\infty} \frac{9k}{9k+1} = 0$$
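One way to see why this infinite product is indeed $0$ (a sketch of the standard argument, not Ross' own wording): using $\ln(1+x) \ge \frac{x}{1+x}$ for $x>0$,
$$\prod_{k=i}^{n} \frac{9k}{9k+1} = \exp\left(-\sum_{k=i}^{n} \ln\left(1+\frac{1}{9k}\right)\right) \le \exp\left(-\sum_{k=i}^{n} \frac{1}{9k+1}\right) \to 0 \quad (n\to\infty),$$
because $\sum_k \frac{1}{9k+1}$ diverges like the harmonic series.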
Proof step 3)
It is argued that step 1 and 2 works for all $i$ by a simple statement
"Similarly, we can show that $P(F_i)=0$ for all $i$"
where $F_i$ is the event that ball $i$ is still in the urn when we have reached 12 p.m.
While this may be true, we may wonder about the product expression whose lower index now goes to infinity:
$$\lim_{i \to \infty}\left(\lim_{n \to \infty} P(E_{in})\right) = \lim_{i \to \infty}\prod_{k=i}^{\infty} \frac{9k}{9k+1} = \,... ?$$
I have not so much to say about it except that I hope that someone can explain to me whether it works.
It would also be nice to obtain better intuitive examples about the notion that the decreasing sequences $E_{in}, E_{in+1}, E_{in+2}, ...$, which are required for proposition 6.1, can not all start with the step number index, $n$, being equal to 1. This index should be increasing to infinity (which is not just the number of steps becoming infinite, but also the random selection of the ball that is to be discarded becomes infinite and the number of balls for which we observe the limit becomes infinite). While this technicality might be tackled (and maybe already has been done in the other answers, either implicitly or explicitly), a thorough and intuitive, explanation might be very helpful.
In this step 3 it becomes rather technical, while Ross is very short about it. Ross presupposes the existence of a probability space (or at least is not explicit about it) in which we can apply these operations at infinity, just the same way as we can apply the operations in finite subspaces.
The answer by ekvall provides a construction, using the extension theorem due to Ionescu-Tulcea, resulting in an infinite product space $\left(\prod_{k=0}^\infty \Omega_k, \bigotimes_{k=0}^\infty \mathcal{A}_k\right)$ in which we can express the probabilities $P(E_i)$ as infinite products of probability kernels, resulting in $P=0$.
However it is not spelled out in an intuitive sense. How can we show intuitively that the event space $E_{i}$ works? That its complement is a null set (and not a number 1 with infinitely many zeros, such as is the solution in the adjusted version of the Ross-Littlewood problem by Allis and Koetsier), and that it is a probability space?
Proof step 4)
Boole's inequality is used to finalize the proof.
$$P\left( \bigcup_1^\infty F_i \right) \leq \sum_1^\infty P(F_i) = 0$$
The inequality is proven for sets of events which are finite or infinite countable. This is true for the $F_i$.
This proof by Ross is not a proof in a constructivist sense. Instead of proving that the probability is almost 1 for the urn to be empty at 12 p.m., it proves that the probability is almost 0 for the urn to contain any ball with a finite number on it.
Recollection
The deterministic Ross-Littlewood paradox explicitly contains the empty set (this is how this post started). This makes it less surprising that the probabilistic version ends up with the empty set, and the result (whether it is true or not) is not much more paradoxical than the non-probabilistic RL versions. An interesting thought experiment is the following version of the RL problem:
Imagine starting with an urn that is full with infinitely many balls, and randomly discarding balls from it. This supertask, if it ends, must logically empty the urn, since if it were not empty we could have continued. (This thought experiment, however, stretches the notion of a supertask and has a vaguely defined end. Is it when the urn is empty or when we reach 12 p.m.?)
There is something unsatisfying about the technique of Ross' proof, or at least some better intuition and explanation with other examples might be needed in order to be able to fully appreciate the beauty of the proof. The 4 steps together form a mechanism that can be generalized and possibly applied to generate many other paradoxes (Although I have tried I did not succeed).
We may be able to formulate a theorem for any other suitable sample space that increases in size towards infinity (the sample space of the RL problem has cardinality $card(2^\mathbb{N})$): if we can define a countable set of events $E_{ij}$ which form a decreasing sequence with limit 0 as the step $j$ increases, then the probability of the event that is the union of those events goes to zero as we approach infinity. If we can make the union of the events the entire space (in the RL example the empty urn was not included in the union whose probability goes to zero, so no severe paradox occurred), then we can make a more severe paradox which challenges the consistency of the axioms in combination with transfinite deduction.
One such example (or an attempt to create one) is splitting a bread infinitely often into smaller pieces (in order to fulfill the mathematical conditions, let's say we only make splits into pieces whose size is a positive rational number). For this example we can define events (at step $x$ we have a piece of size $x$), which form decreasing sequences, and the limit of the probability of the events goes to zero (just as in the RL paradox, the decreasing sequences only occur further and further in time, and there is pointwise but not uniform convergence).
We would have to conclude that when we finish this supertask that
the bread has disappeared. We can go into different directions
here. 1) We could say that the solution is the empty set (although
this solution is much less pleasant than in the RL paradox, because
the empty set is not part of the sample space) 2) We could say there
are infinitely many undefined pieces (e.g. the size of infinitely
small) 3) or maybe we would have to conclude (after performing Ross'
proof and finding empty) that this is not a supertask that can be
completed? That the notion of finishing such a supertask can be made
but does not necessarily "exist" (a sort of Russell's paradox).
A quote from Besicovitch printed in Littlewood's miscellany:
"a mathematician's reputation rests on the number of bad proofs he has given".
Allis, V., Koetsier, T. (1995), On Some Paradoxes of the Infinite II, The British Journal for the Philosophy of Science, pp. 235-247
Koetsier, T. (2012), Didactiek met oneindig veel pingpongballen, Nieuw Archief voor Wiskunde, 5/13 nr4, pp. 258-261 (dutch original, translation is possible via google and other methods)
Littlewood, J.E. (1953), A mathematician's Miscellany, pp. 5 (free link via archive.org)
Merlini, D., Sprugnoli, R., and Verri, M.C. (2002), The tennis ball problem, Journal of Combinatorial Theory, pp. 307-344
Ross, S.M. (1976), A first course in probability, (section 2.7)
Tymoczko, T. and Henle, J. (1995 original) (1999 2nd edition reference on google), Sweet Reason: a field guide to modern logic
The aim of this post is to argue for the OPs last option that we need a better formulation. Or at least, Ross proof is not as clear cut as it may seem at first, and certainly, the proof is not so intuitive that is in a good position to be in an introduction course for theory of probability. It requires much explanation both in understanding the paradoxical aspects, and once that has been cleared explanation at the points where Ross' proof passes very quickly, making it difficult to see which axioms, theorems, and implicit interpretations that the proof depends on.
Related to this aspect it is very amusing to read Teun Koetsier's final words in "Didactiek met oneindig veel pingpongballen?"
Als we niet oppassen dan wordt het 'Paradoxes a window to confusion'.
Translated "If we aren't carefull then it becomes 'Paradoxes a window to confusion'"
Below is a description of the "regular" arguments that may pass in discussions on supertasks, and more specifically the deterministic Ross-Littlewood paradox. After this, when we set all this discussion aside, a view is given of the special case of the probabilistic Ross-Littlewood paradox as providing additional elements, which however get lost and confusing in the wider setting with supertasks.
Three deterministic cases and discussion on supertasks
The Ross-Littlewood paradox knows many different outcomes depending on the manner in which the balls are displaced from the urn. To investigate these, let's kick off by using the exact problem description as Littlewood describes as the 5th problem in his 1953 manuscript
Version 1 The set of balls remaining in the urn is empty
The Ross-Littlewood paradox, or Littlewood-Ross paradox, first appeared as the 5th problem in Littlewood's 1953 manuscript "a mathematician's miscellany"
An infinity paradox. Balls numbered 1, 2, ... (or for a mathematician the numbers themselves) are put into a box as follows. At 1 minute to
noon the numbers 1 to 10 are put in, and the number 1 is taken out. At
1/2 minute to noon numbers 11 to 20 are put in and the number 2 is
taken out and so on. How many are in the box at noon?
Littlewood is short about this problem, but gives a nice representation as the set of points:
$P_{1} + P_{2}+ ... + P_{10} - P_1 + P_{11} + ... + P_{20} - P_2 + ...$
for which it is easily noticed that it is 'null'.
Version 2 The set of balls remaining in the urn has infinite size
Ross (1976) adds two more versions to this paradox. First we look at the first addition:
Suppose that we possess an infinitely large urn and an infinite
collection of balls labeled ball number 1, number 2, number 3, and so
on. Consider an experiment performed as follows: At 1 minute to 12
P.M. , balls numbered 1 through 10 are placed in the urn and ball
number 10 is withdrawn. (Assume that the withdrawal takes no time.) At
12 minute to 12 P.M. , balls numbered 11 through 20 are placed in the
urn and ball number 20 is withdrawn. At 14 minute to 12 P.M. , balls
numbered 21 through 30 are placed in the urn and ball number 30 is
withdrawn. At 18 minute to 12 P.M. , and so on. The question of
interest is, How many balls are in the urn at 12 P.M. ?
Obviously the answer is infinity since this procedure leaves all the balls with numbers $x \mod 10 \neq 0$ in the urn, which are infinitely many.
Before we move on to Ross' second addition, which included probabilities, we move on to another case.
Version 3 The set of balls remaining in the urn is a finite set of arbitrary size
The urn can have any number of balls at 12 p.m. depending on the procedure of displacing the balls. This variation has been described in by Tymoczko and Henle (1995) as the tennis ball problem.
Tom is in a large box, empty except for himself. Jim is standing
outside the box with an infinite number of tennis balls (numbered 1,
2, 3, ....). Jim throws balls 1 and 2 into the box. Tom picks up a
tennis ball and throws it out. Next Jim throws in balls 3 and 4. Tom
picks up a ball and throws it out. Next Jim throws in balls 5 and 6.
Tom picks up a ball and throws it out. This process goes on an
infinite number of times until Jim has thrown all the balls in. Once
again, we ask you to accept accomplishing an infinite number of tasks
in a finite period of time. Here is the question: How many balls are
in the box with Tom when the action is over?
The answer is somewhat disturbing: It depends. Not enough information
has been given to answer the question. There might be an infinite
number of balls left, or there might be none.
In the textbook example they argue for the two cases, either infinite or finite (Tymoczko and Henle, leave the intermediate case as an exercise), however the problem is taken further in several journal articles in which the problem is generalized such that we can get any number depending on the procedure followed.
Especially interesting are the articles on the combinatorial aspects of the problem (where the focus is, however, not on the aspects at infinity). For instance counting the number of possible sets that we can have at any time. In the case of adding 2 balls and removing 1 each step the results are simple and there the number of possible sets in the n-th step is the n+1-th catalan number. E.g. 2 possibilties {1},{2} in the first step, 5 possibilities {1,3}{1,4}{2,3}{2,4} and {3,4} in the second step, 14 in the third, 42 in the fourth, etcetera (see Merlin, Sprugnoli and Verri 2002, The tennis ball problem). This result has been generalized to different numbers of adding and substracting balls but this goes too far for this post now.
Arguments based on the concept of supertasks
Before getting to the theory of probability, many arguments can already be made against the deterministic cases and the possibility of completing the supertask. Also, one can question whether the set theoretic treatment is a valid representation of the kinematic representation of the supertask. I do not wish to argue whether these arguments are good or bad. I mention them to highlight that the probabilistic case can be contrasted with these 'supertask'-arguments and can be seen as containing additional elements that have nothing to do with supertasks. The probabilistic case has a unique and separate element (the reasoning with theory of probability) that is neither proven or refuted by arguing against or for the case of supertasks.
Continuity arguments: These arguments are often more conceptual. For instance the idea that the supertask can not be finished such as Aksakal and Joshua argue in their answers, and a clear demonstration of these notions is Thomson's lamp, which in the case of the Ross Littlewood paradox would be like asking, was the last removed number odd or even?
Physical arguments: There exist also arguments that challenge the mathematical construction as being relevant to the physical realization of the problem. We can have a rigorous mathematical treatement of a problem, but a question remains whether this really has bearing on a mechanistic execution of the task (beyond the simplistic notions such as breaking certain barriers of the physical world as speed limits or energy/space requirements).
One argument might be that the set-theoretic limit is a mathematical
concept that not necessarily describes the physical reality
For example consider the following different problem: The urn has a ball inside which we do not move. Each step we erase the number previously written on the ball and rewrite a new, lower, number on it. Will the urn be empty after infinitely many steps? In this case it seems a bit more absurd to use the set theoretic limit, which is the empty set. This limit is nice as a mathematical reasoning, but does it represent the physical nature of the problem? If we allow balls to disappear from urns because of abstract mathematical reasoning (which, maybe should be considered more as a different problem) then just as well we might make the entire urn disappear?
Also, the differentiation of the balls and assigning them an ordering seems "unphysical" (it is relevant to the mathematical treatment of sets, but do the balls in the urn behave like those sets?). If we would reshuffle the balls at each step (e.g. each step randomly switch a ball from the discarded pile with a ball from the remaining pile of infinite balls), thus forgetting the numbering based on either when they enter the urn or the number they got from the beginning, then the arguments based on set theoretic limits makes no sense anymore because the sets do not converge (there is no stable solution once a ball has been discarded from the urn, it can return again).
From the perspective of performing the physical tasks of
filling and emptying the urn it seems like it should not matter
whether or not we have numbers on the balls. This makes the set
theoretic reasoning more like a mathematical thought about infinite
sets rather than the actual process.
Anyway, If we insist on the use of these infinite paradoxes for didactic purposes, and thus, before we get to the theory of probability, we first need to fight for getting an acceptable idea of (certain) supertasks accepted by the most skeptical/stubborn thinkers, then it may be interesting to use the correspondence between the Zeno's paradox and the Ross-Littlewood paradox described by Allis and Koetsier (1995) and shortly described below.
In their analogy Achilles is trying to catch up the turtle while both of them cross flags that are placed in such a way, with distance $$F(n)=2^{-10 \log n}$$ such that the distance of Achilles with $n$ flags is twice the distance of the turtle with $10n$ flags, namely $F(n) = 2 F(10n)$. Then until 12.pm. the difference in the flags that the turtle and Achilles will have past is growing. But, eventually at 12 p.m. nobody except the Eleatics would argue that they Achilles and the turtle reached the same point and (thus) have zero flags in between them.
The probabilistic case and how it adds new aspects to the problem.
The second version, added by Ross in his textbook, removes the balls based on random selection:
Let us now suppose that whenever a ball is to be withdrawn, that ball
is randomly selected from among those present. That is, suppose that
at 1 minute to 12 P.M. balls numbered 1 through 10 are placed in the
urn and a ball is randomly selected and withdrawn, and so on. In this
case, how many balls are in the urn at 12 P.M. ?
Ross' solution is that the probability is 1 for the urn being empty. However, while Ross' argumentation seems sound and rigorous, one might wonder what kind of axioms are necessary for this and which of the used theorems might be placed under stress by implicit assumptions that might not be founded in those axioms (for instance the presupposition that the events at noon can be assigned probabilities).
Ross' calculation is in short a combination of two elements that divides the event of a non-empty urn into countably many subsets/events and proves that for each of these events the probability is zero:
For $F_i$, the event that ball number $i$ is in the urn at 12 p.m., we have $P(F_i) = 0$
For $P(\bigcup_1^\infty F_i)$, the probability that the urn is not empty at 12 p.m., we have
$P(\bigcup_1^\infty F_i) \leq \sum_1^\infty P(F_i) = 0$
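As a numerical illustration of the first element (my own sketch, not part of Ross' proof): the partial products behind $P(F_1)$ do go to zero, but only very slowly, roughly like $N^{-1/9}$.

```python
# Partial products P(ball 1 still in the urn after step N) = prod_{k=1}^{N} 9k/(9k+1).
# They decrease towards 0, but only at a rate of roughly N^(-1/9).
import math

def prob_ball1_present(N):
    return math.exp(sum(math.log(9 * k / (9 * k + 1)) for k in range(1, N + 1)))

for N in [10, 100, 1_000, 10_000, 100_000]:
    p = prob_ball1_present(N)
    print(f"N = {N:>7,d}   P = {p:.5f}   P * N^(1/9) = {p * N**(1/9):.5f}")
```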
The probabilistic case of the Ross-Littlewood paradox, without reasoning about supertasks
In the most naked form of the paradox, stripped of any problems with the performance of supertasks, we may wonder about the "simpler" problem of subtracting infinite sets. For instance, in the three versions we get:
$$\begin{array}{rl}
S_{added} &= \lbrace 1,2,3,4,5,6,7,8,9,10 \rbrace + \lbrace 10k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,1} &= \lbrace k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,2} &= \lbrace 10k \text{ with } k \in \mathbb{N} \rbrace \\
S_{removed,3} &= \lbrace k \text{ with } k \in \mathbb{N} \rbrace \setminus \lbrace a_1,a_2,a_3,... \text{ with } a_i \in \mathbb{N} \rbrace
\end{array}$$
and the problem reduces to a set subtraction like $S_{added}-S_{removed,1} = \emptyset$.
Any infinite sequence $S_{RL} =\lbrace a_k \text{ without repetitions and } a_k \leq 10k \rbrace$ is an (equally) possible sequence that describes the order in which the balls can be removed in a probabilistic realization of the Ross-Littlewood problem. Let's call these infinite sequences RL-sequences.
Now, the more general question, without the paradoxical reasoning about supertasks, is about the density of RL-sequences that do not contain the entire set $\mathbb{N}$.
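For intuition, one can simulate a random realization of the process for finitely many steps and look at which balls remain; this is my own illustrative sketch, not part of the original answer.

```python
# Simulate the random Ross-Littlewood process for a finite number of steps:
# at step n the balls 10(n-1)+1 .. 10n are added and one ball is removed
# uniformly at random from the urn.
import random

def simulate(steps, seed=0):
    rng = random.Random(seed)
    urn = []
    for n in range(1, steps + 1):
        urn.extend(range(10 * (n - 1) + 1, 10 * n + 1))  # add ten balls
        urn.pop(rng.randrange(len(urn)))                 # remove one at random
    return urn

urn = simulate(1_000)
print("balls left:", len(urn))                 # always 9 * steps
print("smallest remaining label:", min(urn))
print("ball 1 still present:", 1 in urn)
```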
A graphical view of the problem.
nested, fractal, structure
Before the edited version of this answer I had made an argument that used the existence of an injective map from 'the infinite sequences that empty the urn' to 'the infinite sequences that do not contain the number 1'.
That is not a valid argument. Compare for instance with the density of the set of squares. There are infinitely many squares (and there is the bijective relation $n \mapsto n^2$ and $n^2 \mapsto n$), yet the set of squares has density zero in $\mathbb{N}$.
The image below gives a better view of how, with each extra step, the probability of ball 1 being in the urn decreases (and we can argue the same for all other balls), even though the cardinality of the subset of RL-sequences in which ball 1 is displaced is equal to the cardinality of all RL-sequences (the image displays a sort of fractal structure and the tree contains infinitely many copies of itself).
growth of sample space, number of paths
The image shows all the possible realizations for the first five steps, using the scheme of the tennis ball problem (the tennis ball problem, each step: add 2, remove 1, grows less fast and is easier to display). The turquoise and purple lines display all possible paths that may unfold (imagine that at each step $n$ we throw a die with $n+1$ faces and based on its result we select one of the $n+1$ paths, or in other words, based on the result we remove one of the $n+1$ balls in the urn).
The number of possible urn compositions (the boxes) increases as the $(n+1)$-th Catalan number $C_{n+1}$, and the total number of paths increases as the factorial $(n+1)!$. For the urn compositions with ball number 1 inside (colored dark gray) and the paths leading to these boxes (purple), the numbers unfold in exactly the same way, except that this time it is the $n$-th Catalan number $C_n$ and the factorial $n!$.
density of paths that leave ball $n$ inside
So, for the paths that lead to an urn with ball number 1 inside, the density is $\frac{n!}{(n+1)!} = \frac{1}{n+1}$, which decreases as $n$ becomes larger. While there are many realizations that lead to finding ball number $n$ in the box, the probability approaches zero (I would argue that this does not make it impossible, but just almost surely not happening; the main trick in Ross' argument is that the union of countably many null events is also a null event).
Example of paths for the first five steps in tennis ball problem (each step: add 2 remove 1)
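The counts quoted above (Catalan numbers for the urn compositions and factorials for the paths) can be checked by brute force for the first few steps. The following sketch is my own illustration; it assumes the tennis-ball scheme (add balls $2n-1, 2n$, remove one) described in the text.

```python
# Brute-force check of the composition/path counts for the tennis ball scheme.
from math import factorial

def catalan(n):
    return factorial(2 * n) // (factorial(n) * factorial(n + 1))

def enumerate_urns(steps):
    """Return (number of distinct urn compositions, total number of paths)."""
    paths = [frozenset()]
    for n in range(1, steps + 1):
        new_paths = []
        for urn in paths:
            urn = urn | {2 * n - 1, 2 * n}   # add the two new balls
            for ball in urn:                  # every possible removal is a new path
                new_paths.append(urn - {ball})
        paths = new_paths
    return len(set(paths)), len(paths)

for n in range(1, 6):
    compositions, total = enumerate_urns(n)
    print(n, compositions, catalan(n + 1), total, factorial(n + 1))
```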
Ross' arguments for a certainly empty urn.
Ross defines the events (subsets of the sample space) $E_{in}$ that a ball numbered $i$ is in the urn at step $n$. (In his textbook he actually leaves out the subscript $i$ and argues for ball 1.)
Proof step 1)
Ross uses his Proposition 6.1 for increasing or decreasing sequences of events (e.g. decreasing is equivalent to $E_1 \supset E_2 \supset E_3 \supset E_4 \supset ...$).
Proposition 6.1: If $\lbrace E_n, n\geq 1 \rbrace$ is either an increasing or a decreasing sequence of events, then $$\lim_{n \to \infty} P(E_n) = P(\lim_{n \to \infty} E_n) $$
Using this proposition Ross states that the probability for observing ball $i$ at 12 p.m. (which is the event $\lim_{n \to \infty} E_{in}$) is equal to
$$\lim_{n \to \infty} P (E_{in})$$
Allis and Koetsier argue that this is one of those implicit assumptions. The supertask itself does not (logically) imply what happens at 12 p.m., and solutions to the problem have to make implicit assumptions, which in this case is that we can use the principle of continuity on the set of balls inside the urn to state what happens at infinity. If a (set-theoretic) limit to infinity is a particular value, then at infinity we will have that particular value (there can be no sudden jump).
An interesting variant of the Ross-Littlewood paradox arises when we also randomly return balls that had been discarded before. In that case there won't be convergence (like Thomson's lamp) and we cannot as easily define the limit of the sequences $E_{in}$ (which are not decreasing anymore).
Proof step 2)
The limit is calculated. This is a simple algebraic step.
$$ \lim_{n \to \infty} P (E_{in}) = \prod_{k=i}^{\infty} \frac{9k}{9k+1} = 0$$
Proof step 3)
It is argued that step 1 and 2 works for all $i$ by a simple statement
"Similarly, we can show that $P(F_i)=0$ for all $i$"
where $F_i$ is the event that ball $i$ is still in the urn when we have reached 12 p.m.
While this may be true, we may wonder about the product expression whose lower index now goes to infinity:
$$\lim_{i \to \infty}\left(\lim_{n \to \infty} P (E_{in})\right) = \lim_{i \to \infty}\prod_{k=i}^{\infty} \frac{9k}{9k+1} = \, ... ?$$
I do not have much to say about it, except that I hope someone can explain to me whether it works.
It would also be nice to obtain better intuitive examples for the notion that the decreasing sequences $E_{in}, E_{in+1}, E_{in+2}, ...$, which are required for Proposition 6.1, cannot all start with the step number index, $n$, being equal to 1. This index should be increasing to infinity (which means not just that the number of steps becomes infinite, but also that the random selection of the ball to be discarded and the number of balls for which we observe the limit become infinite). While this technicality might be tackled (and maybe already has been, in the other answers, either implicitly or explicitly), a thorough and intuitive explanation might be very helpful.
In this step 3 things become rather technical, while Ross is very brief about it. Ross presupposes the existence of a probability space (or at least is not explicit about it) in which we can apply these operations at infinity, in just the same way as we can apply them in finite subspaces.
The answer by ekvall provides a construction, using the extension theorem due to Ionescu-Tulcea, resulting in an infinite product space $\left(\prod_{k=0}^\infty \Omega_k, \bigotimes_{k=0}^\infty \mathcal{A}_k\right)$ in which we can compute the probabilities $P(E_i)$ by the infinite product of probability kernels, resulting in $P=0$.
However it is not spelled out in an intuitive sense. How can we show intuitively that the event space $E_{i}$ works? That its complement is the null set (and not a number 1 with infinitely many zeros, such as is the solution in the adjusted version of the Ross-Littlewood problem by Allis and Koetsier), and that it is a probability space?
Proof step 4)
Boole's inequality is used to finalize the proof.
$$P\left( \bigcup_1^\infty F_i \right) \leq \sum_1^\infty P(F_i) = 0$$
The inequality is proven for sets of events which are finite or countably infinite. This is true for the $F_i$.
This proof by Ross is not a proof in a constructivist sense. Instead of directly proving that the probability is 1 for the urn to be empty at 12 p.m., it proves that the probability is 0 for the urn to contain any ball with a finite number on it.
Recollection
The deterministic Ross-Littlewood paradox explicitly contains the empty set (this is how this post started). This makes it less surprising that the probabilistic version ends up with the empty set, and the result (whether it is true or not) is not much more paradoxical than the non-probabilistic RL versions. An interesting thought experiment is the following version of the RL problem:
Imagine starting with an urn that is full of infinitely many balls, and randomly discarding balls from it. This supertask, if it ends, must logically empty the urn, since, if it were not empty, we could have continued. (This thought experiment, however, stretches the notion of a supertask and has a vaguely defined end. Is it when the urn is empty or when we reach 12 p.m.?)
There is something unsatisfying about the technique of Ross' proof, or at least some better intuition and explanation with other examples might be needed in order to fully appreciate the beauty of the proof. The 4 steps together form a mechanism that can be generalized and possibly applied to generate many other paradoxes (although I have tried, I did not succeed).
We may be able to formulate a theorem for any other suitable sample space that increases in size towards infinity (the sample space of the RL problem has cardinality $card(2^\mathbb{N})$): if we can define a countable set of events $E_{ij}$ that form a decreasing sequence with limit 0 as the step $j$ increases, then the probability of the event that is the union of those events goes to zero as we approach infinity. If we can make the union of the events be the entire space (in the RL example the empty vase was not included in the union whose probability goes to zero, so no severe paradox occurred), then we can make a more severe paradox which challenges the consistency of the axioms in combination with transfinite deduction.
One such example (or an attempt to create one) is the infinitely repeated splitting of a loaf of bread into smaller pieces (in order to fulfill the mathematical conditions, let's say we only make splits into pieces whose size is a positive rational number). For this example we can define events (at step $x$ we have a piece of size $x$), which are decreasing sequences, and the limit of the probability of the events goes to zero (just as in the RL paradox, the decreasing sequences only start further and further in time, and there is pointwise but not uniform convergence).
We would have to conclude that when we finish this supertask the bread has disappeared. We can go in different directions here: 1) we could say that the solution is the empty set (although this solution is much less pleasant than in the RL paradox, because the empty set is not part of the sample space); 2) we could say there are infinitely many undefined pieces (e.g. of infinitely small size); 3) or maybe we would have to conclude (after performing Ross' proof and finding emptiness) that this is not a supertask that can be completed? That the notion of finishing such a supertask can be made but does not necessarily "exist" (a sort of Russell's paradox).
A quote from Besicovitch printed in Littlewood's miscellany:
"a mathematician's reputation rests on the number of bad proofs he has given".
Allis, V., Koetsier, T. (1995), On Some Paradoxes of the Infinite II, The British Journal for the Philosophy of Science, pp. 235-247
Koetsier, T. (2012), Didactiek met oneindig veel pingpongballen, Nieuw Archief voor Wiskunde, 5/13 nr4, pp. 258-261 (dutch original, translation is possible via google and other methods)
Littlewood, J.E. (1953), A Mathematician's Miscellany, p. 5 (free link via archive.org)
Merlini, D., Sprugnoli, R., and Verri, M.C. (2002), The tennis ball problem, Journal of Combinatorial Theory, pp. 307-344
Ross, S.M. (1976), A first course in probability, (section 2.7)
Tymoczko, T. and Henle, J. (1995 original, 2nd edition 1999), Sweet Reason: a field guide to modern logic
1,419 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | OK, I'll try again.
The answer is that the paradox is purely mathematical. Enumaris's and cmaster's answers tell what is going on in one way, but this is another way to see the problem. The problem is how we deal with probabilities involving infinities, as Jaynes has written about (see my other attempted answer for details).
An infinite series is usually treated as if it has no end, but in this problem there is an end time (12 PM) and so logically, even if not mathematically, there is a last cycle of addition and removal of balls: the one that happens infinitesimally prior to 12 PM. The existence of a 'last' cycle allows us to look at the probabilities backwards as well as forwards through time.
Consider the ten balls last added. For each of them the probability of being removed is zero because each is just one of infinitely many balls that might be removed. Thus the probability that there will be at least ten balls remaining at 12 PM is unity.
QED. A probabilistic argument that does not lead to nonsense.
1,420 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | I believe that this example supports "if the premise is false then the conditional is true"
In this universe, there are no infinite urns and no infinite collection of balls. It is impossible to split time into arbitrarily small pieces.
Thus Sheldon Ross is right to say that the urn is empty at 12:00. Students who say that the urn has infinitely many balls at 12:00 are just as right.
If you answered the urn has 50 balls then you are also correct.
I have not rigorously proved that this universe does not contain infinite urns and infinite balls and that time is not atomic - I just believe those things. If you believe those three assertions are wrong, then you believe Ross's problem is empirically falsifiable. I am waiting for your experimental results.
1,421 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | I support the opinion that the problem is ill-posed. When we consider something transfinite we often have to use a limit. It seems that here it is the only way. Since we distinguish different balls, we have an infinite-dimensional process $$(X_{t,1}, X_{t,2},...),$$
where $t=-1,-1/2,-1/4,...$ stands for the time, $X_{t,j}=1$ if ball $j$ is in the urn at time $t+0$, and $X_{t,j}=0$ otherwise.
Now it is at everyone's discretion which convergence to use: uniform, componentwise, $l_p$, etc. Needless to say, the answer depends on the choice.
The misunderstanding in this problem goes from neglecting the fact that metric issues are crucial when we consider convergence of infinite-dimensional vectors. Without choosing the type of convergence, no correct answer can be given.
(There is componentwise convergence to the zero vector, while the $l_1$ norm counts the number of balls, so in that norm the process explodes.)
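As a small sketch of this norm dependence (my own illustration, using the deterministic variant "remove ball $n$ at step $n$" for concreteness): every component $X_{t,j}$ eventually becomes $0$, while the $l_1$ norm, the ball count, grows without bound.

```python
# Componentwise vs l1 behaviour for the deterministic variant
# (add balls 10(n-1)+1..10n at step n, remove ball n).
def state_after(steps):
    present = set()
    for n in range(1, steps + 1):
        present.update(range(10 * (n - 1) + 1, 10 * n + 1))  # add ten balls
        present.discard(n)                                    # remove ball n
    return present

for steps in [10, 100, 1000]:
    s = state_after(steps)
    print(f"steps={steps:5d}  X_1={int(1 in s)}  X_5={int(5 in s)}  l1 norm={len(s)}")
```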
1,422 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Recently several comments by Wilhelm (Wolfgang Mückenheim) caused me to reconsider certain formulations in my answer. I am posting this as a new answer mainly because of its different approach: not arguing about the teaching of this problem, but instead about the paradox being invalid.
Wilhelm discusses in his lengthy manuscript that
Transactions are only possible
at finite steps $n$ (there is no action possible "between all $n$ and $\omega$").
This reminded me of the term
$$\sum_{k=1}^\infty \prod_{n=k}^\infty \left( \frac{9n}{9n+1} \right)$$
which is derived from Ross' work. This term is indeterminate when the path to infinity is not defined for the following limit.
$$\lim_\limits{(l,m)\to(\infty,\infty)}\sum_{k=1}^l\prod_{n=k}^m\left(\frac{9n}{9n+1}\right)$$
This seems to resemble the point that Wilhelm discusses and is also mentioned in aksakal's answer. The steps in time become infinitely small, so we will be able to reach 12 p.m. in that sense, but we will at the same time need to add and remove an (unphysical) infinite number of balls. It is a false idea to attach this supertask to a process like Zeno's arrow, just as the switch of Thomson's paradoxical lamp cannot have a definite position at the end of a supertask.
In terms of the limit we can say that the physical path to infinity that we take is
$$\lim_\limits{l\to \infty}\sum_{k=1}^l\prod_{n=k}^l\left(\frac{9n}{9n+1}\right) = \lim_\limits{l\to \infty} \frac{9l}{10}$$
so not zero but infinite.
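The diagonal limit above can be checked numerically; this is my own sketch. Note that $10$ times this sum is the number of balls actually in the urn after $l$ steps (which is deterministically $9l$), so the partial sums equal $\frac{9l}{10}$ exactly.

```python
# Numerical check of the diagonal partial sums: sum_{k=1}^{l} prod_{n=k}^{l} 9n/(9n+1).
def diagonal_sum(l):
    total = 0.0
    for k in range(1, l + 1):
        p = 1.0
        for n in range(k, l + 1):
            p *= 9 * n / (9 * n + 1)
        total += p
    return total

for l in [1, 10, 100, 1000]:
    print(l, round(diagonal_sum(l), 6), 9 * l / 10)
```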
1,423 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | The problem as stated is a variant of a conditionally convergent sum. That is, the sum is indeterminate depending on how the addition is performed, i.e., in what order the terms are summed. In general, if $\Sigma_{n=0}^\infty x_n$ converges but $\Sigma_{n=0}^\infty |x_n|$ is divergent, then convergence is conditional and not absolute. Although this is most often applied to series with alternating signs, the point is that either the order of the signed terms is irrelevant or the series is only conditionally convergent. As such, there is no unique answer; see the MathWorld commentary.
Finally, the only paradox inherent in finding different answers for a conditionally convergent sum is how one can expect it to be otherwise. This is worse than asking a pseudo-random number generator to create equal valued answers for different seeds, because in that latter case one would not typically obtain $\pm\infty$ as admissible answers as well.
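As a concrete illustration of order dependence (my own sketch, not from the answer): a conditionally convergent series such as the alternating harmonic series can be rearranged to converge to essentially any target, by greedily taking positive or negative terms.

```python
# Riemann-rearrangement sketch: steer the alternating harmonic series
# 1 - 1/2 + 1/3 - 1/4 + ...  (natural sum ln 2) towards an arbitrary target.
import math

def rearranged_sum(target, n_terms=100_000):
    pos, neg = 1, 2        # next unused odd (positive) / even (negative) denominator
    total = 0.0
    for _ in range(n_terms):
        if total < target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print("natural order   ->", round(math.log(2), 4))
print("rearranged to 1 ->", round(rearranged_sum(1.0), 4))
print("rearranged to 3 ->", round(rearranged_sum(3.0), 4))
```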
1,424 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | More intuition than formal education, but:
If the intervals to midnight are halving, we never reach midnight... we only approach asymptotically; so one could argue that there is no solution.
Alternatively, depending on the phrasing:
as there are infinitely many intervals of +10 balls, the answer is infinite
as there are infinitely many intervals of (+10 balls - 1 ball), the answer is 10*infinite - 1*infinite = 0?
as there are infinitely many intervals of (+9 balls) + 1, the answer is infinite + 1
1,425 | At each step of a limiting infinite process, put 10 balls in an urn and remove one at random. How many balls are left? | Rewrite: Jan 16, 2018
Section 1: Outline
The fundamental results of this post are as follows:
The halfway ball has a probability of about $0.93$ of remaining in the limit as the step number goes to $\infty$ - this is both a real-world observation and a mathematically derived result.
The derived function has a domain of the rationals in $(0,1]$. For example, the probability in the limit of the halfway ball remaining corresponds to the domain value $1/2$.
This function can compute the probability of remaining for any fraction of the step size.
Ross' analysis is not wrong but is incomplete because it attempts to iterate the rationals in order of magnitude $\left(i,\infty\right), i=1..\infty$.
The rationals cannot be iterated in order of magnitude. Hence, Ross's analysis cannot access the full domain and can only offer a limited view of the total behavior.
Ross's analysis does however account for one particular observable behavior: in the limit
it is not possible through serial iteration from 1 to reach the first remaining ballset.
Ross' limit sequences have some nice convincing properties that seem intuitively unique.
However, we show another set of limit sequences which satisfy the same nice properties and give the values for our function.
Section 2 "Notation and terminology" covers notation and terminology used in this post.
Section 3 "The Halfway Ballset" introduces a real world observation - the converge in the limit of the probability of remaining of a ball
whose index is a halfway through all the inserted balls. This limit value is about 91%. The case of the halfway ballset is generalized to any rational in $(0,1]$,
which all have non-zero limit values.
Section 4 "Resolution of the Paradox" presents a unified framework for including both Ross' result and the 'rational-domain' results (described herein).
As already noted, Ross' analysis can only offer a limited view of the total behavior. Hence, the source of the paradox is identified and resolved.
In the appendix some other, less important results are discussed:
"Expectations in the limit" calculates the expected number of balls remaining up to and including any fraction of the step size.
A corollary of this result is determining the index of the first ball which has an expectation of remaining greater than one.
Section 2: Notation and terminology
We label the ball indexes inserted at step $n$ as $\{n.1, n.2, n.3, ..... n.10\}$
and call this set the $n$th "ballset". Ballset is one word, created for this post.
This terminology regrettably deviates from Ross' terminology, but it also makes the text much clearer and shorter.
The notation $E(a,b)$ refers to the event that ball $a.1$ in ballset $a$ remains at step $b$, ignoring the other balls in the ballset.
The notation $P(a,b)$ is an abbreviation for $P(E(a,b))$ and it refers to the probability of $E(a,b)$.
Note that all balls $a.i$ in ballset $a$ have the same probability of remaining.
-- The value of $P(E(a,b))$ is $\prod_{k=a}^{b} \frac{9k}{(9k+1)}$.
The Ross limit $P_{lim1}(a)$ is the probability $P(a,b)$ as $b$ goes to infinity:
-- $P_{lim1}(a) = \lim_{b\rightarrow\infty} P(a,b)$
The rational-limit is defined as the limit as both the ball index $a$ and the step $b$ go to infinity while maintaining a constant ratio:
-- $P_{lim2}(a,b) = \lim_{k\rightarrow\infty} P(ka,kb)$
Section 3: The halfway ballset
At every even step $2n$, the halfway ballset is defined as the $n$th ballset.
At every even step $2n$, the halfway probability of remaining is defined as $P(n,2n)$.
In the limit as $n\rightarrow\infty$,
the halfway probability of remaining is therefore
$\lim_{n\rightarrow\infty} P(1*n,2*n)$.
Theorem 1 below gives a numerical value for the halfway probability of remaining.
Theorem 1 - Limit of probability of elements in a ratio-preserving domain sequence
\begin{equation}
\lim_{n\rightarrow\infty} P(a*n,b*n) = (\frac{a}{b})^\frac{1}{9}
\end{equation}
The proof is given below just before the appendix.
By Theorem 1, the halfway probability of remaining in the limit is
$(\frac{1}{2})^\frac{1}{9}$
which evaluates to an approximate decimal value of $0.925875$.
Sanity Check
Let's do a sanity check to see if the numerical limit for the halfway probability "looks right".
\begin{array}{|l|rcl|}
\hline
n & P(n/2,n) &=& \text{trunc decimal val} \\
\hline
1000 & P(500,1000) &=& 0.92572614082 \\
\hline
10000 & P(5000,10000) &=& 0.9258598528 \\
\hline
100000 & P(50000,100000) &=& 0.925873226 \\
\hline
1000000 & P(500000,1000000) &=& 0.92587456 \\
\hline
\hline
\infty & \lim_{n\rightarrow\infty} P(n,2n) &=& 0.925875 \\
\hline
\end{array}
The first 4 rows are the halfway probabilities of remaining for the step number values of
$10^3$, $10^4$, $10^5$, and $10^6$, respectively.
The final row is the limit.
It seems the halfway probabilities are indeed converging to the predicted limit.
This real world observation, which does not fit within Ross's framework, needs to be explained.
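The table can be reproduced with a few lines of code; this sketch is my own, using $P(a,b)=\prod_{k=a}^{b}\frac{9k}{9k+1}$ as defined in Section 2.

```python
# Reproduce the sanity-check table: the halfway probability P(n/2, n).
import math

def P(a, b):
    return math.exp(sum(math.log(9 * k / (9 * k + 1)) for k in range(a, b + 1)))

for n in [1_000, 10_000, 100_000, 1_000_000]:
    print(f"n = {n:>9,d}   P(n/2, n) = {P(n // 2, n):.8f}")
print("limit (1/2)^(1/9) =", 0.5 ** (1 / 9))
```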
** Section 4 "Resolution of the Paradox" **
This section explains a unified framework for both Ross' analysis and the rational-domain analysis. By viewing them together, the paradox is resolved.
The rational limit $P_{lim2}(a,b)$ is reducible to a function from the rationals $(0,1]$ to the reals $(0,1]$:
\begin{array}{rcl}
P_{lim2}(a,b) &=& \lim_{k\rightarrow\infty} P(ka,kb) \\
&=& (\frac{a'}{b'})^\frac{1}{9}
\end{array}
where $gcd(a',b')=1$ and $\frac{a'}{b'} = \frac{a}{b}$. Here $gcd()$ indicates greatest common divisor.
Equivalent statements are "$a'$ and $b'$ are mutually prime" and "$\frac{a'}{b'}$ is the reduced fraction of $\frac{a}{b}$".
The Ross limit can be written as the limit of a sequence of rational limits:
\begin{array}{rclr}
P_{lim1}(a) &=& \lim_{k\rightarrow\infty} P(a,k) & \\
&=& \lim_{i,k\rightarrow\infty} P(ka/i,kb) & \text{for some } b \\
&=& \lim_{i\rightarrow\infty} P_{lim2}(a/i,b) & \\
&=& \lim_{i\rightarrow\infty} P_{lim2}(0,b) &
\end{array}
The tuple $(0,b)$ is not a member of the rationals in $(0,1]$; it belongs to $[0,0]$.
Therefore the Ross limit is isomorphic to the function $P_{lim2}(a,b)$ on domain $[0,0]$ and its image is always the unique real $0$.
The Ross limit and the rational-limit are the same function on two disjoint domains $[0,0]$ and $(0,1]$ respectively.
The Ross limit only considers the case of ballset indexes which have been demoted to be infinitely small relative to the stepsize.
The Ross-limit analysis predicts that in the limit, accessing the values $P_{lim1}(i)$ sequentially for $i=1,2,...\infty$ will never reach a non-zero value.
This is correct and corresponds to real-world observation.
The rational-limit analysis accounts for real-world observations, such as the halfway ballset, which the Ross limit does not account for.
The function is the same $P_{lim2}(a,b)$, but the domain is $(0,1]$ instead of $[0,0]$.
The diagram below depicts both the Ross limit sequences and the rational limit sequences.
It is probably fair to say that Ross' analysis includes an implicit assumption that the Ross-limit and its domain is the entire domain of interest.
The intuition implicitly underlying Ross' assumption is likely due to the four conditions below, even if they are not explicitly recognized:
Let $S_i = \{ P(i,n) \}_{n=1,...,\infty}$ be the $i$th Ross limit sequence.
Let $S = \cup_{i=(1...\infty)} S_i$ be the union of Ross limit sequences.
(1) The sequences $S_i$ are disjoint and each sequence converges.
(2) The union of elements of all sequences $S$ cover exactly the set of all (ball,step) tuples coming into play: $\{ (i,n)\ |\ i\leq n\ \land \ i,n\in Q \}$
(3) All of the sequences $S_i$ are infinite in $n$, the step index, so they don't terminate "early".
(4) The sequences $S_i$ themselves form a super-sequence $\{ S_i \}_{i \in (1...\infty)}$. Therefore that super-sequence can be "created" iteratively, i.e., they are countable.
It's not immediately apparent that another system of limit sequences could satisfy the above points (1) - (4).
However, we will now discuss another system of limit sequences which do indeed satisfy the above points (1) - (4).
Let $S_{p,q}$, where $gcd(p,q)=1$, represent the rational-limit sequence
\begin{equation}
S_{p,q} = \{ (kp,kq) \}_{k\in (1...\infty)}
\end{equation}
Let $D^*$ be the mutually prime tuples of $D$: $D^* = \{ (p,q)\in D \mid \gcd(p,q)=1 \}$.
Let $S^*$ be the union of said rational limit sequences: $S^* = \cup_{d\in D^*} S_{p,q}$
Clearly the sequences $S_{p,q}$ whose union is $S^*$ satisfy the above properties (1) - (3).
The indexes $(p,q)$ are exactly the rationals on $(0,1]$.
To satisfy condition (4) we need to show that the rationals on $(0,1]$ are countable.
The Farey sequence of order $n$ is the sequence of completely reduced fractions between 0 and 1 which, when in lowest terms, have denominators less than or equal to $n$, arranged in order of increasing size. Here are the first eight Farey sequences:
F1 = {0/1, 1/1}
F2 = {0/1, 1/2, 1/1}
F3 = {0/1, 1/3, 1/2, 2/3, 1/1}
F4 = {0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1}
F5 = {0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1}
F6 = {0/1, 1/6, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 1/1}
F7 = {0/1, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 2/5, 3/7, 1/2, 4/7, 3/5, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 1/1}
F8 = {0/1, 1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8, 1/1}
Let $F^*_n$ represent the $n$th Farey sequence without the first element $0/1$.
Let $S^*_n$ be the union of rational limit sequences which have at least one element up to and including step $n$:
\begin{equation}
S^*_n = \{ S_{p,q}\ |\ \exists (a,b) \in S_{p,q} \text{ with } b \leq n \}
\end{equation}
The elements of $F^*_n$, converted from fractions to tuples, exactly index the elements of $S^*_n$.
The following table compares the grouping of the limit sequences in the Ross analysis and the rational limit analysis:
\begin{array}{|c|c|c|}
\hline
& \text{Ross} & \text{rational} \\
\hline
\text{num new seq per step } & 1 & \text{multiple (generally)} \\
\hline
\text{new seq at step } n & S_n & F^*_n - F^*_{n-1} \\
\hline
\text{tot num seq up to step }n & n & \| F^*_n \| \\
\hline
\text{super-seq up to step }n & \{ S_m \}_{m=1}^{n} & F^*_n \\
\hline
\end{array}
Finally, since methods exist [3],[4] for iteratively creating the super sequence $ F^*_n $, the condition (4) is also satisfied.
One of those methods, a variant of the Stern-Brocot tree, is as follows:
The mediant of two rationals $a/c$ and $b/d$ is defined as $\frac{a+b}{c+d}$
Set $F^*_n = \emptyset$
Append $1/n$ to $F^*_n$
Loop for $i$ in $1...(\|F^*_{n-1}\|-1)$:
  Append $F^*_{n-1}[i]$ to $F^*_n$
  Let $x = \text{mediant}(F^*_{n-1}[i], F^*_{n-1}[i+1])$
  If $\text{denom}(x) \leq n$, append $x$ to $F^*_n$
End loop
Append the last element of $F^*_{n-1}$ (i.e. $1/1$) to $F^*_n$
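A runnable translation of this pseudocode (my own sketch; `Fraction` keeps the fractions in lowest terms):

```python
from fractions import Fraction

def mediant(x, y):
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

def next_farey(prev, n):
    """Build F*_n (the Farey sequence of order n without 0/1) from F*_{n-1}."""
    out = [Fraction(1, n)]
    for a, b in zip(prev, prev[1:]):
        out.append(a)
        m = mediant(a, b)
        if m.denominator <= n:
            out.append(m)
    out.append(prev[-1])           # the last element, 1/1
    return out

F = [Fraction(1, 1)]               # F*_1
for n in range(2, 9):
    F = next_farey(F, n)
print([f"{f.numerator}/{f.denominator}" for f in F])   # F*_8: F8 above without 0/1
```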
The paradox has been resolved.
Proof of Theorem 1
First note that:
\begin{eqnarray}
P(E_{a,b})
&=&
\prod_{k=a}^{b} \frac{9k}{(9k+1)}
\\
&=&
\frac{\Gamma \left(a+\frac{1}{9}\right) \Gamma (b+1)}{\Gamma (a) \Gamma\left(b+\frac{10}{9}\right)}
\\
&\approx&
(a-1)^{\frac{1}{2}-a} \left(a-\frac{8}{9}\right)^{a-\frac{7}{18}} b^{b+\frac{1}{2}}
\left(b+\frac{1}{9}\right)^{-b-\frac{11}{18}}
\end{eqnarray}
where the last step uses Stirling's approximation.
Then, substituting $a\rightarrow a M$ and $b\rightarrow b M$ into the last (Stirling form) expression and letting $M \rightarrow \infty$ we get
\begin{eqnarray}
\lim_{M\rightarrow\infty} P(E_{aM,bM}) &=& \lim_{M\rightarrow\infty}
(a M-1)^{\frac{1}{2}-a M} \left(a M-\frac{8}{9}\right)^{a M-\frac{7}{18}} (b M)^{b
M+\frac{1}{2}} \left(b M+\frac{1}{9}\right)^{-b M-\frac{11}{18}}
\\
&=& \left(\frac{a}{b}\right)^\frac{1}{9}
\end{eqnarray}
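The Gamma-function form and the limit in Theorem 1 can be cross-checked numerically; the following is my own sketch (the log-Gamma function is used to avoid overflow of the large Gamma values).

```python
from math import lgamma, exp

def P(a, b):
    """prod_{k=a}^{b} 9k/(9k+1), computed via the Gamma identity above."""
    return exp(lgamma(a + 1/9) + lgamma(b + 1) - lgamma(a) - lgamma(b + 10/9))

a, b = 1, 2
for M in [10, 100, 1_000, 10_000]:
    print(M, round(P(a * M, b * M), 6))
print("limit (a/b)^(1/9) =", round((a / b) ** (1 / 9), 6))
```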
Appendix: Other results
Expectations in the limit
This section gives a closed expression for the expected number of balls remaining up to and including any fraction of the step size.
A corollary of this result is a numerical approximation of the index of the first ball which has an expectation of remaining greater than one.
( To be continued )
1,426 | Why does the Lasso provide Variable Selection? | Let's consider a very simple model: $y = \beta x + e$, with an L1 penalty on $\hat{\beta}$ and a least-squares loss function on $\hat{e}$. We can expand the expression to be minimized as:
$\min y^Ty -2 y^Tx\hat{\beta} + \hat{\beta} x^Tx\hat{\beta} + 2\lambda|\hat{\beta}|$
Keep in mind this is a univariate example, with $\beta$ and $x$ being scalars, to show how LASSO can send a coefficient to zero. This can be generalized to the multivariate case.
Let us assume the least-squares solution is some $\hat{\beta} > 0$, which is equivalent to assuming that $y^Tx > 0$, and see what happens when we add the L1 penalty. With $\hat{\beta}>0$, $|\hat{\beta}| = \hat{\beta}$, so the penalty term is equal to $2\lambda\hat{\beta}$. The derivative of the objective function w.r.t. $\hat{\beta}$ is:
$-2y^Tx +2x^Tx\hat{\beta} + 2\lambda$
which evidently has solution $\hat{\beta} = (y^Tx - \lambda)/(x^Tx)$.
Obviously by increasing $\lambda$ we can drive $\hat{\beta}$ to zero (at $\lambda = y^Tx$). However, once $\hat{\beta} = 0$, increasing $\lambda$ won't drive it negative, because, writing loosely, the instant $\hat{\beta}$ becomes negative, the derivative of the objective function changes to:
$-2y^Tx +2x^Tx\hat{\beta} - 2\lambda$
where the flip in the sign of $\lambda$ is due to the absolute value nature of the penalty term; when $\hat{\beta}$ becomes negative, the penalty term becomes equal to $-2\lambda\hat{\beta}$, and taking the derivative w.r.t. $\hat{\beta}$ results in $-2\lambda$. This leads to the solution $\hat{\beta} = (y^Tx + \lambda)/(x^Tx)$, which is obviously inconsistent with $\hat{\beta} < 0$ (given that the least squares solution is $> 0$, which implies $y^Tx > 0$, and $\lambda > 0$). There is an increase in the L1 penalty AND an increase in the squared error term (as we are moving farther from the least squares solution) when moving $\hat{\beta}$ from $0$ to a value $< 0$, so we don't; we just stick at $\hat{\beta}=0$.
It should be intuitively clear that the same logic applies, with appropriate sign changes, for a least squares solution with $\hat{\beta} < 0$.
With the least squares penalty $\lambda\hat{\beta}^2$, however, the derivative becomes:
$-2y^Tx +2x^Tx\hat{\beta} + 2\lambda\hat{\beta}$
which evidently has solution $\hat{\beta} = y^Tx/(x^Tx + \lambda)$. Obviously no increase in $\lambda$ will drive this all the way to zero. So the L2 penalty can't act as a variable selection tool without some mild ad-hockery such as "set the parameter estimate equal to zero if it is less than $\epsilon$".
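To make this concrete, here is a small R sketch (my own illustration, with simulated data) of the two closed-form solutions derived above; note how the L1 estimate hits exactly zero once $\lambda$ is large enough, while the L2 estimate only shrinks:
set.seed(1)
x <- rnorm(100); y <- 0.3 * x + rnorm(100)
lasso_beta <- function(lam) {                        # soft-thresholded solution from above
  b <- (abs(sum(y * x)) - lam) / sum(x * x)
  sign(sum(y * x)) * max(b, 0)
}
ridge_beta <- function(lam) sum(y * x) / (sum(x * x) + lam)
lams <- c(0, 10, 20, 40, 80)
rbind(lasso = sapply(lams, lasso_beta),              # reaches exactly 0 for large lambda
      ridge = sapply(lams, ridge_beta))              # shrinks but never reaches 0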
Obviously things can change when you move to multivariate models, for example, moving one parameter estimate around might force another one to change sign, but the general principle is the same: the L2 penalty function can't get you all the way to zero, because, writing very heuristically, it in effect adds to the "denominator" of the expression for $\hat{\beta}$, but the L1 penalty function can, because it in effect adds to the "numerator". | Why does the Lasso provide Variable Selection? | Let's consider a very simple model: $y = \beta x + e$, with an L1 penalty on $\hat{\beta}$ and a least-squares loss function on $\hat{e}$. We can expand the expression to be minimized as:
$\min y^Ty | Why does the Lasso provide Variable Selection?
Let's consider a very simple model: $y = \beta x + e$, with an L1 penalty on $\hat{\beta}$ and a least-squares loss function on $\hat{e}$. We can expand the expression to be minimized as:
$\min y^Ty -2 y^Tx\hat{\beta} + \hat{\beta} x^Tx\hat{\beta} + 2\lambda|\hat{\beta}|$
Keep in mind this is a univariate example, with $\beta$ and $x$ being scalars, to show how LASSO can send a coefficient to zero. This can be generalized to the multivariate case.
Let us assume the least-squares solution is some $\hat{\beta} > 0$, which is equivalent to assuming that $y^Tx > 0$, and see what happens when we add the L1 penalty. With $\hat{\beta}>0$, $|\hat{\beta}| = \hat{\beta}$, so the penalty term is equal to $2\lambda\hat{\beta}$. The derivative of the objective function w.r.t. $\hat{\beta}$ is:
$-2y^Tx +2x^Tx\hat{\beta} + 2\lambda$
which evidently has solution $\hat{\beta} = (y^Tx - \lambda)/(x^Tx)$.
Obviously by increasing $\lambda$ we can drive $\hat{\beta}$ to zero (at $\lambda = y^Tx$). However, once $\hat{\beta} = 0$, increasing $\lambda$ won't drive it negative, because, writing loosely, the instant $\hat{\beta}$ becomes negative, the derivative of the objective function changes to:
$-2y^Tx +2x^Tx\hat{\beta} - 2\lambda$
where the flip in the sign of $\lambda$ is due to the absolute value nature of the penalty term; when $\beta$ becomes negative, the penalty term becomes equal to $-2\lambda\beta$, and taking the derivative w.r.t. $\beta$ results in $-2\lambda$. This leads to the solution $\hat{\beta} = (y^Tx + \lambda)/(x^Tx)$, which is obviously inconsistent with $\hat{\beta} < 0$ (given that the least squares solution $> 0$, which implies $y^Tx > 0$, and $\lambda > 0$). There is an increase in the L1 penalty AND an increase in the squared error term (as we are moving farther from the least squares solution) when moving $\hat{\beta}$ from $0$ to $ < 0$, so we don't, we just stick at $\hat{\beta}=0$.
It should be intuitively clear the same logic applies, with appropriate sign changes, for a least squares solution with $\hat{\beta} < 0$.
With the least squares penalty $\lambda\hat{\beta}^2$, however, the derivative becomes:
$-2y^Tx +2x^Tx\hat{\beta} + 2\lambda\hat{\beta}$
which evidently has solution $\hat{\beta} = y^Tx/(x^Tx + \lambda)$. Obviously no increase in $\lambda$ will drive this all the way to zero. So the L2 penalty can't act as a variable selection tool without some mild ad-hockery such as "set the parameter estimate equal to zero if it is less than $\epsilon$".
Obviously things can change when you move to multivariate models, for example, moving one parameter estimate around might force another one to change sign, but the general principle is the same: the L2 penalty function can't get you all the way to zero, because, writing very heuristically, it in effect adds to the "denominator" of the expression for $\hat{\beta}$, but the L1 penalty function can, because it in effect adds to the "numerator". | Why does the Lasso provide Variable Selection?
Let's consider a very simple model: $y = \beta x + e$, with an L1 penalty on $\hat{\beta}$ and a least-squares loss function on $\hat{e}$. We can expand the expression to be minimized as:
$\min y^Ty |
1,427 | Why does the Lasso provide Variable Selection? | Suppose we have a data set with y = 1 and x = [1/10 1/10] (one data point, two features). One solution is to pick one of the features, another is to weight both features. I.e. we can either pick w = [5 5] or w = [10 0].
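For concreteness, a quick check of the two candidates (an added sketch; the numbers come straight from the example above):
x <- c(1/10, 1/10); y <- 1
w_spread <- c(5, 5); w_sparse <- c(10, 0)
c(fit_spread = sum(w_spread * x), fit_sparse = sum(w_sparse * x))   # both fit y = 1 exactly
c(L1_spread = sum(abs(w_spread)), L1_sparse = sum(abs(w_sparse)))   # both 10
c(L2_spread = sum(w_spread^2),    L2_sparse = sum(w_sparse^2))      # 50 vs 100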
Note that for the L1 norm both have the same penalty, but the more spread out weight has a lower penalty for the L2 norm. | Why does the Lasso provide Variable Selection? | Suppose we have a data set with y = 1 and x = [1/10 1/10] (one data point, two features). One solution is to pick one of the features, another feature is to weight both features. I.e. we can either | Why does the Lasso provide Variable Selection?
Suppose we have a data set with y = 1 and x = [1/10 1/10] (one data point, two features). One solution is to pick one of the features, another is to weight both features. I.e. we can either pick w = [5 5] or w = [10 0].
Note that for the L1 norm both have the same penalty, but the more spread out weight has a lower penalty for the L2 norm. | Why does the Lasso provide Variable Selection?
Suppose we have a data set with y = 1 and x = [1/10 1/10] (one data point, two features). One solution is to pick one of the features, another feature is to weight both features. I.e. we can either |
1,428 | Why does the Lasso provide Variable Selection? | I think there are excellent answers already but just to add some intuition concerning the geometric interpretation:
"The lasso performs $L1$ shrinkage, so that there are "corners" in the constraint, which in two dimensions corresponds to a diamond. If the sum of squares "hits'' one of these corners, then the coefficient corresponding to the axis is shrunk to zero.
As $p$ increases, the multidimensional diamond has an increasing number of corners, and so it is highly likely that some coefficients will be set equal to zero. Hence, the lasso performs shrinkage and (effectively) subset selection.
In contrast with subset selection, ridge performs a soft thresholding: as the smoothing parameter is varied, the sample path of the estimates moves continuously to zero."
Source: https://onlinecourses.science.psu.edu/stat857/book/export/html/137
The effect can nicely be visualized where the colored lines are the paths of regression coefficients shrinking towards zero.
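Such coefficient-path plots can be reproduced roughly as follows (an added sketch, assuming the glmnet package is installed; the data here are simulated purely for illustration):
library(glmnet)
set.seed(42)
X <- matrix(rnorm(100 * 10), 100, 10)
y <- drop(X[, 1:3] %*% c(3, -2, 1)) + rnorm(100)
par(mfrow = c(1, 2))
plot(glmnet(X, y, alpha = 1), xvar = "lambda")   # lasso: paths hit exactly zero
plot(glmnet(X, y, alpha = 0), xvar = "lambda")   # ridge: paths shrink but stay nonzero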
"Ridge regression shrinks all regression coefficients towards zero; the lasso tends to give a set of zero regression coefficients and leads to a sparse solution."
Source: https://onlinecourses.science.psu.edu/stat857/node/158 | Why does the Lasso provide Variable Selection? | I think there are excellent anwers already but just to add some intuition concerning the geometric interpretation:
"The lasso performs $L1$ shrinkage, so that there are "corners" in the constraint, wh | Why does the Lasso provide Variable Selection?
I think there are excellent answers already but just to add some intuition concerning the geometric interpretation:
"The lasso performs $L1$ shrinkage, so that there are "corners" in the constraint, which in two dimensions corresponds to a diamond. If the sum of squares "hits'' one of these corners, then the coefficient corresponding to the axis is shrunk to zero.
As $p$ increases, the multidimensional diamond has an increasing number of corners, and so it is highly likely that some coefficients will be set equal to zero. Hence, the lasso performs shrinkage and (effectively) subset selection.
In contrast with subset selection, ridge performs a soft thresholding: as the smoothing parameter is varied, the sample path of the estimates moves continuously to zero."
Source: https://onlinecourses.science.psu.edu/stat857/book/export/html/137
The effect can nicely be visualized where the colored lines are the paths of regression coefficients shrinking towards zero.
"Ridge regression shrinks all regression coefficients towards zero; the lasso tends to give a set of zero regression coefficients and leads to a sparse solution."
Source: https://onlinecourses.science.psu.edu/stat857/node/158 | Why does the Lasso provide Variable Selection?
I think there are excellent anwers already but just to add some intuition concerning the geometric interpretation:
"The lasso performs $L1$ shrinkage, so that there are "corners" in the constraint, wh |
1,429 | Why does the Lasso provide Variable Selection? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
I recently created a blog post to compare ridge and lasso using a toy data frame of shark attacks. It helped me understand the behaviors of the algorithms especially when correlated variables are present. Take a look and also see this SO question to explain the shrinkage toward zero. | Why does the Lasso provide Variable Selection? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Why does the Lasso provide Variable Selection?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
I recently created a blog post to compare ridge and lasso using a toy data frame of shark attacks. It helped me understand the behaviors of the algorithms especially when correlated variables are present. Take a look and also see this SO question to explain the shrinkage toward zero. | Why does the Lasso provide Variable Selection?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
1,430 | Calculating optimal number of bins in a histogram | The Freedman-Diaconis rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$. So the number of bins is $(\max-\min)/h$, where $n$ is the number of observations, max is the maximum value and min is the minimum value.
In base R, you can use:
hist(x, breaks="FD")
For other plotting libraries without this option (e.g., ggplot2), you can calculate binwidth as:
bw <- 2 * IQR(x) / length(x)^(1/3)
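# (Added illustration, not in the original answer) the implied number of bins
# from the (max - min)/h formula above, if you need it explicitly:
nbins <- ceiling(diff(range(x)) / bw)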
### for example #####
ggplot() + geom_histogram(aes(x), binwidth = bw) | Calculating optimal number of bins in a histogram | The Freedman-Diaconis rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$. So the number of bins is $(\max-\min)/h$, where $n$ is the number of | Calculating optimal number of bins in a histogram
The Freedman-Diaconis rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$. So the number of bins is $(\max-\min)/h$, where $n$ is the number of observations, max is the maximum value and min is the minimum value.
In base R, you can use:
hist(x, breaks="FD")
For other plotting libraries without this option (e.g., ggplot2), you can calculate binwidth as:
bw <- 2 * IQR(x) / length(x)^(1/3)
### for example #####
ggplot() + geom_histogram(aes(x), binwidth = bw) | Calculating optimal number of bins in a histogram
The Freedman-Diaconis rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$. So the number of bins is $(\max-\min)/h$, where $n$ is the number of |
1,431 | Calculating optimal number of bins in a histogram | If you use too few bins, the histogram doesn't really portray the data very well. If you have too many bins, you get a broken comb look, which also doesn't give a sense of the distribution.
One solution is to create a graph that shows every value. Either a dot plot, or a cumulative frequency distribution, which doesn't require any bins.
If you want to create a frequency distribution with equally spaced bins, you need to decide how many bins (or the width of each). The decision clearly depends on the number of values. If you have lots of values, your graph will look better and be more informative if you have lots of bins. This wikipedia page lists several methods for deciding bin width from the number of observations. The simplest method is to set the number of bins equal to the square root of the number of values you are binning.
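For example, the square-root rule can be applied in R roughly like this (an added sketch with made-up data; note that R treats a single number passed to breaks only as a suggestion):
x <- rnorm(150)
k <- ceiling(sqrt(length(x)))   # number of bins by the square-root rule
hist(x, breaks = k)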
This page from Hideaki Shimazaki explains an alternative method. It is a bit more complicated to calculate, but seems to do a great job. The top part of the page is a Java app. Scroll past that to see the theory and explanation, then keep scrolling to find links to the papers that explain the method. | Calculating optimal number of bins in a histogram | If you use too few bins, the histogram doesn't really portray the data very well. If you have too many bins, you get a broken comb look, which also doesn't give a sense of the distribution.
One soluti | Calculating optimal number of bins in a histogram
If you use too few bins, the histogram doesn't really portray the data very well. If you have too many bins, you get a broken comb look, which also doesn't give a sense of the distribution.
One solution is to create a graph that shows every value. Either a dot plot, or a cumulative frequency distribution, which doesn't require any bins.
If you want to create a frequency distribution with equally spaced bins, you need to decide how many bins (or the width of each). The decision clearly depends on the number of values. If you have lots of values, your graph will look better and be more informative if you have lots of bins. This wikipedia page lists several methods for deciding bin width from the number of observations. The simplest method is to set the number of bins equal to the square root of the number of values you are binning.
This page from Hideaki Shimazaki explains an alternative method. It is a bit more complicated to calculate, but seems to do a great job. The top part of the page is a Java app. Scroll past that to see the theory and explanation, then keep scrolling to find links to the papers that explain the method. | Calculating optimal number of bins in a histogram
If you use too few bins, the histogram doesn't really portray the data very well. If you have too many bins, you get a broken comb look, which also doesn't give a sense of the distribution.
One soluti |
1,432 | Calculating optimal number of bins in a histogram | Maybe the paper "Variations on the histogram" by Denby and Mallows will be of interest:
This new display which we term "dhist" (for diagonally-cut histogram) preserves the desirable features of both the equal-width hist and the equal-area hist. It will show tall narrow bins like the e-a hist when there are spikes in the data and will show isolated outliers just like the usual histogram.
They also mention that code in R is available on request. | Calculating optimal number of bins in a histogram | Maybe the paper "Variations on the histogram" by Denby and Mallows will be of interest:
This new display which we term "dhist" (for diagonally-cut histogram) preserves the desirable features of both | Calculating optimal number of bins in a histogram
Maybe the paper "Variations on the histogram" by Denby and Mallows will be of interest:
This new display which we term "dhist" (for diagonally-cut histogram) preserves the desirable features of both the equal-width hist and the equal-area hist. It will show tall narrow bins like the e-a hist when there are spikes in the data and will show isolated outliers just like the usual histogram.
They also mention that code in R is available on request. | Calculating optimal number of bins in a histogram
Maybe the paper "Variations on the histogram" by Denby and Mallows will be of interest:
This new display which we term "dhist" (for diagonally-cut histogram) preserves the desirable features of both |
1,433 | Calculating optimal number of bins in a histogram | Did you see the Shimazaki-Shinomoto method?
Although it seems to be computationally expensive, it may give you good results. It's worth giving it a try if computational time is not your problem. There are some implementations of this method in java, MATLAB, etc, in the following link, which runs fast enough:
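For reference, a minimal R sketch of the underlying idea (my own illustration, not one of the linked implementations): for each candidate bin count, compute the cost $C(h) = (2\bar{k} - v)/h^2$ from the mean $\bar{k}$ and biased variance $v$ of the bin counts, and keep the minimizer.
ss_bins <- function(x, n_range = 2:100) {
  cost <- sapply(n_range, function(n) {
    h  <- diff(range(x)) / n                      # bin width
    ct <- as.vector(table(cut(x, breaks = n)))    # counts in n equal-width bins
    m  <- mean(ct)
    v  <- mean((ct - m)^2)                        # biased variance of the counts
    (2 * m - v) / h^2                             # Shimazaki-Shinomoto cost
  })
  n_range[which.min(cost)]                        # bin count minimizing the cost
}
ss_bins(rnorm(500))                               # suggested number of bins for this sample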
web-interface | Calculating optimal number of bins in a histogram | Did you see the Shimazaki-Shinomoto method?
Although it seems to be computationally expensive, it may give you good results. It's worth giving it a try if computational time is not your problem. There | Calculating optimal number of bins in a histogram
Did you see the Shimazaki-Shinomoto method?
Although it seems to be computationally expensive, it may give you good results. It's worth giving it a try if computational time is not your problem. There are some implementations of this method in java, MATLAB, etc, in the following link, which runs fast enough:
web-interface | Calculating optimal number of bins in a histogram
Did you see the Shimazaki-Shinomoto method?
Although it seems to be computationally expensive, it may give you good results. It's worth giving it a try if computational time is not your problem. There |
1,434 | Calculating optimal number of bins in a histogram | I'm not sure this counts as strictly good practice, but I tend to produce more than one histogram with different bin widths and pick which histogram to use based on which one fits the interpretation I'm trying to communicate best. Whilst this introduces some subjectivity into the choice of histogram, I justify it on the basis that I have had much more time to understand the data than the person I'm giving the histogram to, so I need to give them a very concise message.
I'm also a big fan of presenting histograms with the same number of points in each bin rather than the same bin width. I usually find these represent the data far better then the constant bin width although they are more difficult to produce. | Calculating optimal number of bins in a histogram | I'm not sure this counts as strictly good practice, but I tend to produce more than one histogram with different bin widths and pick the histogram which histogram to use based on which histogram fits | Calculating optimal number of bins in a histogram
I'm not sure this counts as strictly good practice, but I tend to produce more than one histogram with different bin widths and pick which histogram to use based on which one fits the interpretation I'm trying to communicate best. Whilst this introduces some subjectivity into the choice of histogram, I justify it on the basis that I have had much more time to understand the data than the person I'm giving the histogram to, so I need to give them a very concise message.
I'm also a big fan of presenting histograms with the same number of points in each bin rather than the same bin width. I usually find these represent the data far better then the constant bin width although they are more difficult to produce. | Calculating optimal number of bins in a histogram
I'm not sure this counts as strictly good practice, but I tend to produce more than one histogram with different bin widths and pick the histogram which histogram to use based on which histogram fits |
1,435 | Calculating optimal number of bins in a histogram | If I need to determine the number of bins programmatically I usually start out with a histogram that has way more bins than needed. Once the histogram is filled I then combine bins until I have enough entries per bin for the method I am using, e.g. if I want to model Poisson-uncertainties in a counting experiment with uncertainties from a normal distribution until I have more than something like 10 entries. | Calculating optimal number of bins in a histogram | If I need to determine the number of bins programmatically I usually start out with a histogram that has way more bins than needed. Once the histogram is filled I then combine bins until I have enough | Calculating optimal number of bins in a histogram
If I need to determine the number of bins programmatically I usually start out with a histogram that has way more bins than needed. Once the histogram is filled I then combine bins until I have enough entries per bin for the method I am using, e.g. if I want to model Poisson-uncertainties in a counting experiment with uncertainties from a normal distribution until I have more than something like 10 entries. | Calculating optimal number of bins in a histogram
If I need to determine the number of bins programmatically I usually start out with a histogram that has way more bins than needed. Once the histogram is filled I then combine bins until I have enough |
1,436 | Calculating optimal number of bins in a histogram | Please see this answer as a complement to Mr. Rob Hyndman's answer.
In order to create histogram plots with exactly the same intervals or 'binwidths' under the Freedman–Diaconis rule, in either base R or the ggplot2 package, we can use one of the values returned by the hist() function, namely breaks. Suppose we want to create a histogram of qsec from the mtcars data using the Freedman–Diaconis rule. In base R we use
x <- mtcars$qsec
hist(x, breaks = "FD")
Meanwhile, in ggplot2 package we use
h <- hist(x, breaks = "FD", plot = FALSE)
qplot(x, geom = "histogram", breaks = h$breaks, fill = I("red"), col = I("white"))
Or, alternatively
ggplot(mtcars, aes(x)) + geom_histogram(breaks = h$breaks, col = "white")
All of them generate histogram plots with exact same intervals and number of bins as intended. | Calculating optimal number of bins in a histogram | Please see this answer as a complementary of Mr. Rob Hyndman's answer.
In order to create histogram plots with exact same intervals or 'binwidths' using the Freedman–Diaconis rule either with basic R | Calculating optimal number of bins in a histogram
Please see this answer as a complement to Mr. Rob Hyndman's answer.
In order to create histogram plots with exact same intervals or 'binwidths' using the Freedman–Diaconis rule either with basic R or ggplot2 package, we can use one of the values of hist() function namely breaks. Suppose we want to create a histogram of qsec from mtcars data using the Freedman–Diaconis rule. In basic R we use
x <- mtcars$qsec
hist(x, breaks = "FD")
Meanwhile, in ggplot2 package we use
h <- hist(x, breaks = "FD", plot = FALSE)
qplot(x, geom = "histogram", breaks = h$breaks, fill = I("red"), col = I("white"))
Or, alternatively
ggplot(mtcars, aes(x)) + geom_histogram(breaks = h$breaks, col = "white")
All of them generate histogram plots with exact same intervals and number of bins as intended. | Calculating optimal number of bins in a histogram
Please see this answer as a complementary of Mr. Rob Hyndman's answer.
In order to create histogram plots with exact same intervals or 'binwidths' using the Freedman–Diaconis rule either with basic R |
1,437 | Calculating optimal number of bins in a histogram | Conventional wisdom dictates that a "broken look' resulting from a histogram with many bins is undesirable. This clashes with the need to show individual outliers, digit preference, bimodality, data gaps, and other features. I believe that histograms need to be both summary and descriptive measures. For that reason I use either m=100 or 200 bins regardless of the sample size, with modifications to (1) have unequally spaced bins when the number of distinct data values is not huge and (2) to pool such unequally spaced bins when two distinct data values are closer together than, say, 1/5m of the data span. The result is "spike histograms" which I've implemented in many functions in the R Hmisc package. The key algorithm is here in for example the histboxp function.
Many examples are shown here including interactive spike histograms where data values can be viewed in hover text. You'll also see an example where horizontal lines are added underneath the histogram to show various quantiles as in a box plot.
Spike histograms are similar to rug plots, and the human eye is quite good at summarizing distributional shapes from examining tick mark density in the rug. No significant binning is needed. | Calculating optimal number of bins in a histogram | Conventional wisdom dictates that a "broken look' resulting from a histogram with many bins is undesirable. This clashes with the need to show individual outliers, digit preference, bimodality, data | Calculating optimal number of bins in a histogram
Conventional wisdom dictates that a "broken look' resulting from a histogram with many bins is undesirable. This clashes with the need to show individual outliers, digit preference, bimodality, data gaps, and other features. I believe that histograms need to be both summary and descriptive measures. For that reason I use either m=100 or 200 bins regardless of the sample size, with modifications to (1) have unequally spaced bins when the number of distinct data values is not huge and (2) to pool such unequally spaced bins when two distinct data values are closer together than, say, 1/5m of the data span. The result is "spike histograms" which I've implemented in many functions in the R Hmisc package. The key algorithm is here in for example the histboxp function.
Many examples are shown here including interactive spike histograms where data values can be viewed in hover text. You'll also see an example where horizontal lines are added underneath the histogram to show various quantiles as in a box plot.
Spike histograms are similar to rug plots, and the human eye is quite good at summarizing distributional shapes from examining tick mark density in the rug. No significant binning is needed. | Calculating optimal number of bins in a histogram
Conventional wisdom dictates that a "broken look' resulting from a histogram with many bins is undesirable. This clashes with the need to show individual outliers, digit preference, bimodality, data |
1,438 | Calculating optimal number of bins in a histogram | With so few data, what approaches should I take to calculating the number of bins to use?
FD or Doane methods (see below) might be more suitable. In my experience, this also depends on the upstream task: if binning changes the inference or results of the upstream task, then one should find a stable/robust method that does not change with changing data.
numpy's manual page on histogram_bin_edges provides a nice list of the pros and cons of each approach. The 'auto' option partially follows Rob Hyndman's recommendation. Here is the full list:
‘auto’
Maximum of the ‘sturges’ and ‘fd’ estimators. Provides good all around performance.
‘fd’ (Freedman Diaconis Estimator)
Robust (resilient to outliers) estimator that takes into account data variability and data size.
‘doane’
An improved version of Sturges’ estimator that works better with non-normal datasets.
‘scott’
Less robust estimator that takes into account data variability and data size.
‘stone’
Estimator based on leave-one-out cross-validation estimate of the integrated squared error. Can be regarded as a generalization of Scott’s rule.
‘rice’
Estimator does not take variability into account, only data size. Commonly overestimates number of bins required.
‘sturges’
R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets.
‘sqrt’
Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity. | Calculating optimal number of bins in a histogram | With so few data, what approaches should I take to calculating the number of bins to use?
FD or doane methods (see below) might be more suitable. Experience tells, this depends on the upstream task a | Calculating optimal number of bins in a histogram
With so few data, what approaches should I take to calculating the number of bins to use?
FD or doane methods (see below) might be more suitable. Experience tells, this depends on the upstream task as well. If binning changes the inference or results of the upstream task. Then, one should find a stable/robust method that does not change with changing data.
numpy's manual page on histogram_bin_edges provides nice list and pros and cons of each approach. The 'auto' option follows partially Rob Hyndman's recommendation Here is the full list
‘auto’
Maximum of the ‘sturges’ and ‘fd’ estimators. Provides good all around performance.
‘fd’ (Freedman Diaconis Estimator)
Robust (resilient to outliers) estimator that takes into account data variability and data size.
‘doane’
An improved version of Sturges’ estimator that works better with non-normal datasets.
‘scott’
Less robust estimator that takes into account data variability and data size.
‘stone’
Estimator based on leave-one-out cross-validation estimate of the integrated squared error. Can be regarded as a generalization of Scott’s rule.
‘rice’
Estimator does not take variability into account, only data size. Commonly overestimates number of bins required.
‘sturges’
R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets.
‘sqrt’
Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity. | Calculating optimal number of bins in a histogram
With so few data, what approaches should I take to calculating the number of bins to use?
FD or doane methods (see below) might be more suitable. Experience tells, this depends on the upstream task a |
1,439 | Calculating optimal number of bins in a histogram | Another method is Bayesian Blocks from Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations by Scargle et al.
Bayesian Blocks is a dynamic histogramming method which optimizes one
of several possible fitness functions to determine an optimal binning
for data, where the bins are not necessarily uniform width.
Bayesian Blocks for Histograms | Calculating optimal number of bins in a histogram | Another method is Bayesian Blocks from Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations by Scargle et al.
Bayesian Blocks is a dynamic histogramming method which optim | Calculating optimal number of bins in a histogram
Another method is Bayesian Blocks from Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations by Scargle et al.
Bayesian Blocks is a dynamic histogramming method which optimizes one
of several possible fitness functions to determine an optimal binning
for data, where the bins are not necessarily uniform width.
Bayesian Blocks for Histograms | Calculating optimal number of bins in a histogram
Another method is Bayesian Blocks from Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations by Scargle et al.
Bayesian Blocks is a dynamic histogramming method which optim |
1,440 | Calculating optimal number of bins in a histogram | The MDL histogram density estimation method has the following features:
variable width; the method is not constrained to histograms with fixed bin widths.
adaptive; the number of bins, and bin widths are determined based on data. Very few input parameters are required, and the parameters have little impact on the resulting aesthetics.
principled; the resulting histogram is the normalized maximum likelihood distribution (constrained to histograms). | Calculating optimal number of bins in a histogram | The MDL histogram density estimation method has the following features:
variable width; the method is not constrained to histograms with fixed bin widths.
adaptive; the number of bins, and bin widths | Calculating optimal number of bins in a histogram
The MDL histogram density estimation method has the following features:
variable width; the method is not constrained to histograms with fixed bin widths.
adaptive; the number of bins, and bin widths are determined based on data. Very few input parameters are required, and the parameters have little impact on the resulting aesthetics.
principled; the resulting histogram is the normalized maximum likelihood distribution (constrained to histograms). | Calculating optimal number of bins in a histogram
The MDL histogram density estimation method has the following features:
variable width; the method is not constrained to histograms with fixed bin widths.
adaptive; the number of bins, and bin widths |
1,441 | Is it possible to train a neural network without backpropagation? | The first two algorithms you mention (Nelder-Mead and Simulated Annealing) are generally considered pretty much obsolete in optimization circles, as there are much better alternatives which are both more reliable and less costly. Genetic algorithms covers a wide range, and some of these can be reasonable.
However, in the broader class of derivative-free optimization (DFO) algorithms, there are many which are significantly better than these "classics", as this has been an active area of research in recent decades. So, might some of these newer approaches be reasonable for deep learning?
A relatively recent paper comparing the state of the art is the following:
Rios, L. M., & Sahinidis, N. V. (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization.
This is a nice paper which has many interesting insights into recent techniques. For example, the results clearly show that the best local optimizers are all "model-based", using different forms of sequential quadratic programming (SQP).
However, as noted in their abstract "We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size." To give an idea of the numbers, for all problems the solvers were given a budget of 2500 function evaluations, and problem sizes were a maximum of ~300 parameters to optimize. Beyond O[10] parameters, very few of these optimizers performed very well, and even the best ones showed a noticeable decay in performance as problem size was increased.
So for very high dimensional problems, DFO algorithms just are not competitive with derivative based ones. To give some perspective, PDE (partial differential equation)-based optimization is another area with very high dimensional problems (e.g. several parameters for each cell of a large 3D finite element grid). In this realm, the "adjoint method" is one of the most used methods. This is also a gradient-descent optimizer based on automatic differentiation of a forward model code.
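As a toy illustration of this scaling (my own sketch, not from the cited paper), compare derivative-free Nelder-Mead with gradient-based BFGS on a simple quadratic as the dimension grows; the loss and settings are arbitrary:
f <- function(w) sum((w - 1)^2)
g <- function(w) 2 * (w - 1)
for (d in c(10, 50, 200)) {
  nm   <- optim(rep(0, d), f, method = "Nelder-Mead", control = list(maxit = 5000))
  bfgs <- optim(rep(0, d), f, gr = g, method = "BFGS")
  cat("d =", d, " Nelder-Mead loss:", signif(nm$value, 3),
      " BFGS loss:", signif(bfgs$value, 3), "\n")
}
# For a fixed budget, the derivative-free result typically degrades as d grows,
# while the gradient-based one stays essentially exact.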
The closest to a high-dimensional DFO optimizer is perhaps the Ensemble Kalman Filter, used for assimilating data into complex PDE simulations, e.g. weather models. Interestingly, this is essentially an SQP approach, but with a Bayesian-Gaussian interpretation (so the quadratic model is positive definite, i.e. no saddle points). But I do not think that the number of parameters or observations in these applications is comparable to what is seen in deep learning.
Side note (local minima): From the little I have read on deep learning, I think the consensus is that it is saddle points rather than local minima, which are most problematic for high dimensional NN-parameter spaces.
For example, the recent review in Nature says "Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder."
A related concern is about local vs. global optimization (for example this question pointed out in the comments). While I do not do deep learning, in my experience overfitting is definitely a valid concern. In my opinion, global optimization methods are most suited for engineering design problems that do not strongly depend on "natural" data. In data assimilation problems, any current global minima could easily change upon addition of new data (caveat: My experience is concentrated in geoscience problems, where data is generally "sparse" relative to model capacity).
An interesting perspective is perhaps
O. Bousquet & L. Bottou (2008) The tradeoffs of large scale learning. NIPS.
which provides semi-theoretical arguments on why and when approximate optimization may be preferable in practice.
End note (meta-optimization): While gradient based techniques seem likely to be dominant for training networks, there may be a role for DFO in associated meta-optimization tasks.
One example would be hyper-parameter tuning. (Interestingly, the successful model-based DFO optimizers from Rios & Sahinidis could be seen as essentially solving a sequence of design-of-experiments/response-surface problems.)
Another example might be designing architectures, in terms of the set-up of layers (e.g. number, type, sequence, nodes/layer). In this discrete-optimization context genetic-style algorithms may be more appropriate. Note that here I am thinking of the case where connectivity is determined implicitly by these factors (e.g. fully-connected layers, convolutional layers, etc.). In other words the $\mathrm{O}[N^2]$ connectivity is $not$ meta-optimized explicitly. (The connection strength would fall under training, where e.g. sparsity could be promoted by $L_1$ regularization and/or ReLU activations ... these choices could be meta-optimized however.) | Is it possible to train a neural network without backpropagation? | The first two algorithms you mention (Nelder-Mead and Simulated Annealing) are generally considered pretty much obsolete in optimization circles, as there are much better alternatives which are both m | Is it possible to train a neural network without backpropagation?
The first two algorithms you mention (Nelder-Mead and Simulated Annealing) are generally considered pretty much obsolete in optimization circles, as there are much better alternatives which are both more reliable and less costly. Genetic algorithms covers a wide range, and some of these can be reasonable.
However, in the broader class of derivative-free optimization (DFO) algorithms, there are many which are significantly better than these "classics", as this has been an active area of research in recent decades. So, might some of these newer approaches be reasonable for deep learning?
A relatively recent paper comparing the state of the art is the following:
Rios, L. M., & Sahinidis, N. V. (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization.
This is a nice paper which has many interesting insights into recent techniques. For example, the results clearly show that the best local optimizers are all "model-based", using different forms of sequential quadratic programming (SQP).
However, as noted in their abstract "We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size." To give an idea of the numbers, for all problems the solvers were given a budget of 2500 function evaluations, and problem sizes were a maximum of ~300 parameters to optimize. Beyond O[10] parameters, very few of these optimizers performed very well, and even the best ones showed a noticable decay in performance as problem size was increased.
So for very high dimensional problems, DFO algorithms just are not competitive with derivative based ones. To give some perspective, PDE (partial differential equation)-based optimization is another area with very high dimensional problems (e.g. several parameter for each cell of a large 3D finite element grid). In this realm, the "adjoint method" is one of the most used methods. This is also a gradient-descent optimizer based on automatic differentiation of a forward model code.
The closest to a high-dimensional DFO optimizer is perhaps the Ensemble Kalman Filter, used for assimilating data into complex PDE simulations, e.g. weather models. Interestingly, this is essentially an SQP approach, but with a Bayesian-Gaussian interpretation (so the quadratic model is positive definite, i.e. no saddle points). But I do not think that the number of parameters or observations in these applications is comparable to what is seen in deep learning.
Side note (local minima): From the little I have read on deep learning, I think the consensus is that it is saddle points rather than local minima, which are most problematic for high dimensional NN-parameter spaces.
For example, the recent review in Nature says "Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder."
A related concern is about local vs. global optimization (for example this question pointed out in the comments). While I do not do deep learning, in my experience overfitting is definitely a valid concern. In my opinion, global optimization methods are most suited for engineering design problems that do not strongly depend on "natural" data. In data assimilation problems, any current global minima could easily change upon addition of new data (caveat: My experience is concentrated in geoscience problems, where data is generally "sparse" relative to model capacity).
An interesting perspective is perhaps
O. Bousquet & L. Bottou (2008) The tradeoffs of large scale learning. NIPS.
which provides semi-theoretical arguments on why and when approximate optimization may be preferable in practice.
End note (meta-optimization): While gradient based techniques seem likely to be dominant for training networks, there may be a role for DFO in associated meta-optimization tasks.
One example would be hyper-parameter tuning. (Interestingly, the successful model-based DFO optimizers from Rios & Sahinidis could be seen as essentially solving a sequence of design-of-experiments/response-surface problems.)
Another example might be designing architectures, in terms of the set-up of layers (e.g. number, type, sequence, nodes/layer). In this discrete-optimization context genetic-style algorithms may be more appropriate. Note that here I am thinking of the case where connectivity is determined implicitly by these factors (e.g. fully-connected layers, convolutional layers, etc.). In other words the $\mathrm{O}[N^2]$ connectivity is $not$ meta-optimized explicitly. (The connection strength would fall under training, where e.g. sparsity could be promoted by $L_1$ regularization and/or ReLU activations ... these choices could be meta-optimized however.) | Is it possible to train a neural network without backpropagation?
The first two algorithms you mention (Nelder-Mead and Simulated Annealing) are generally considered pretty much obsolete in optimization circles, as there are much better alternatives which are both m |
1,442 | Is it possible to train a neural network without backpropagation? | Well, the original neural networks, before the backpropagation revolution in the 70s, were "trained" by hand. :)
That being said:
There is a "school" of machine learning called extreme learning machine that does not use backpropagation.
What they do do is to create a neural network with many, many, many nodes --with random weights-- and then train the last layer using least squares (like a linear regression). They then either prune the neural network afterwards or they apply regularization in the last step (like lasso) to avoid overfitting. I have seen this applied to neural networks with a single hidden layer only. There is no iterative training, so it's super fast. I did some tests and surprisingly, these neural networks "trained" this way are quite accurate.
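A minimal sketch of that recipe in R (my own toy illustration; the sizes, activation and small ridge term are arbitrary choices, not a reference implementation):
set.seed(1)
n <- 200; p <- 5; h <- 100                       # samples, inputs, hidden nodes
X <- matrix(rnorm(n * p), n, p)
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.1)
W <- matrix(rnorm(p * h), p, h); b <- rnorm(h)   # random hidden weights, never trained
H <- tanh(sweep(X %*% W, 2, b, "+"))             # hidden-layer activations
beta <- solve(t(H) %*% H + 0.01 * diag(h), t(H) %*% y)   # last layer by (ridge) least squares
mean((drop(H %*% beta) - y)^2)                   # training error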
Most people, at least the ones I work with, treat this machine learning "school" with derision and they are an outcast group with their own conferences and so on, but I actually think it's kind of ingenious.
One other point: within backpropagation, there are alternatives that are seldom mentioned like resilient backproagation, which are implemented in R in the neuralnet package, which only use the magnitude of the derivative. The algorithm is made of if-else conditions instead of linear algebra. They have some advantages over traditional backpropagation, namely you do not need to normalize your data because they do not suffer from the vanishing gradient problem. | Is it possible to train a neural network without backpropagation? | Well, the original neural networks, before the backpropagation revolution in the 70s, were "trained" by hand. :)
That being said:
There is a "school" of machine learning called extreme learning machin | Is it possible to train a neural network without backpropagation?
Well, the original neural networks, before the backpropagation revolution in the 70s, were "trained" by hand. :)
That being said:
There is a "school" of machine learning called extreme learning machine that does not use backpropagation.
What they do do is to create a neural network with many, many, many nodes --with random weights-- and then train the last layer using minimum squares (like a linear regression). They then either prune the neural network afterwards or they apply regularization in the last step (like lasso) to avoid overfitting. I have seen this applied to neural networks with a single hidden layer only. There is no training, so it's super fast. I did some tests and surprisingly, these neural networks "trained" this way are quite accurate.
Most people, at least the ones I work with, treat this machine learning "school" with derision and they are an outcast group with their own conferences and so on, but I actually think it's kind of ingenuous.
One other point: within backpropagation, there are alternatives that are seldom mentioned like resilient backproagation, which are implemented in R in the neuralnet package, which only use the magnitude of the derivative. The algorithm is made of if-else conditions instead of linear algebra. They have some advantages over traditional backpropagation, namely you do not need to normalize your data because they do not suffer from the vanishing gradient problem. | Is it possible to train a neural network without backpropagation?
Well, the original neural networks, before the backpropagation revolution in the 70s, were "trained" by hand. :)
That being said:
There is a "school" of machine learning called extreme learning machin |
1,443 | Is it possible to train a neural network without backpropagation? | There are all sorts of local search algorithms you could use, backpropagation has just proved to be the most efficient for more complex tasks in general; there are circumstances where other local searches are better.
You could use random-start hill climbing on a neural network to find an ok solution quickly, but it wouldn't be feasible to find a near optimal solution.
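For example, a bare-bones random-restart hill climber over a parameter vector might look like this (an added sketch; the loss, step size and budgets are placeholders):
hill_climb <- function(loss, n_par, restarts = 5, iters = 2000, step = 0.1) {
  best <- NULL
  for (r in seq_len(restarts)) {
    w <- rnorm(n_par); f <- loss(w)
    for (i in seq_len(iters)) {
      w_new <- w + rnorm(n_par, sd = step)   # propose a random local move
      f_new <- loss(w_new)
      if (f_new < f) { w <- w_new; f <- f_new }
    }
    if (is.null(best) || f < best$f) best <- list(w = w, f = f)
  }
  best
}
hill_climb(function(w) sum(w^2), n_par = 10)$f   # toy loss; a network loss would go here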
Wikipedia (I know, not the greatest source, but still) says
For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
source
As for genetic algorithms, I would see Backpropagation vs Genetic Algorithm for Neural Network training
The main case I would make for backprop is that it is very widely used and has had a lot of great improvements. These images really show some of the incredible advancements to vanilla backpropagation.
I wouldn't think of backprop as one algorithm, but a class of algorithms.
I'd also like to add that for neural networks, 10k parameters is small beans. Another search would work great, but on a deep network with millions of parameters, it's hardly practical. | Is it possible to train a neural network without backpropagation? | There are all sorts of local search algorithms you could use, backpropagation has just proved to be the most efficient for more complex tasks in general; there are circumstances where other local sear | Is it possible to train a neural network without backpropagation?
There are all sorts of local search algorithms you could use, backpropagation has just proved to be the most efficient for more complex tasks in general; there are circumstances where other local searches are better.
You could use random-start hill climbing on a neural network to find an ok solution quickly, but it wouldn't be feasible to find a near optimal solution.
Wikipedia (I know, not the greatest source, but still) says
For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent.
source
As for genetic algorithms, I would see Backpropagation vs Genetic Algorithm for Neural Network training
The main case I would make for backprop is that it is very widely used and has had a lot of great improvements. These images really show some of the incredible advancements to vanilla backpropagation.
I wouldn't think of backprop as one algorithm, but a class of algorithms.
I'd also like to add that for neural networks, 10k parameters is small beans. Another search would work great, but on a deep network with millions of parameters, it's hardly practical. | Is it possible to train a neural network without backpropagation?
There are all sorts of local search algorithms you could use, backpropagation has just proved to be the most efficient for more complex tasks in general; there are circumstances where other local sear |
1,444 | Is it possible to train a neural network without backpropagation? | You can use pretty much any numerical optimization algorithm to optimize weights of a neural network. You can also use mixed continuous-discrete optimization algorithms to optimize not only weights, but layout itself (number of layers, number of neurons in each layer, even type of the neuron).
However there's no optimization algorithm that do not suffer from "curse of dimensionality" and local optimas in some manner | Is it possible to train a neural network without backpropagation? | You can use pretty much any numerical optimization algorithm to optimize weights of a neural network. You can also use mixed continous-discrete optimization algorithms to optimize not only weights, bu | Is it possible to train a neural network without backpropagation?
You can use pretty much any numerical optimization algorithm to optimize weights of a neural network. You can also use mixed continuous-discrete optimization algorithms to optimize not only weights, but layout itself (number of layers, number of neurons in each layer, even type of the neuron).
However there's no optimization algorithm that do not suffer from "curse of dimensionality" and local optimas in some manner | Is it possible to train a neural network without backpropagation?
You can use pretty much any numerical optimization algorithm to optimize weights of a neural network. You can also use mixed continous-discrete optimization algorithms to optimize not only weights, bu |
1,445 | Is it possible to train a neural network without backpropagation? | You can also use another network to advise how the parameters should be updated.
There is the Decoupled Neural Interfaces (DNI) from Google Deepmind. Instead of using backpropagation, it uses another set of neural networks to predict how to update the parameters, which allows for parallel and asynchronous parameter update.
The paper shows that DNI increases the training speed and model capacity of RNNs, and gives comparable results for both RNNs and FFNNs on various tasks.
The paper also listed and compared many other non-backpropagation methods
Our synthetic gradient model is most analogous to a value function
which is used for gradient ascent [2] or a value function used for
bootstrapping. Most other works that aim to remove backpropagation do
so with the goal of performing biologically plausible credit
assignment, but this doesn’t eliminate update locking between layers.
E.g. target propagation [3, 15] removes the reliance on passing
gradients between layers, by instead generating target activations
which should be fitted to. However these targets must still be
generated sequentially, propagating backwards through the network and
layers are therefore still update- and backwardslocked. Other
algorithms remove the backwards locking by allowing loss or rewards to
be broadcast directly to each layer – e.g. REINFORCE [21] (considering
all activations are actions), Kickback [1], and Policy Gradient
Coagent Networks [20] – but still remain update locked since they
require rewards to be generated by an output (or a global critic).
While Real-Time Recurrent Learning [22] or approximations such as [17]
may seem a promising way to remove update locking, these methods
require maintaining the full (or approximate) gradient of the current
state with respect to the parameters. This is inherently not scalable
and also requires the optimiser to have global knowledge of the
network state. In contrast, by framing the interaction between layers
as a local communication problem with DNI, we remove the need for
global knowledge of the learning system. Other works such as [4, 19]
allow training of layers in parallel without backpropagation, but in
practice are not scalable to more complex and generic network
architectures. | Is it possible to train a neural network without backpropagation? | You can also use another network to advise how the parameters should be updated.
There is the Decoupled Neural Interfaces (DNI) from Google Deepmind. Instead of using backpropagation, it uses another | Is it possible to train a neural network without backpropagation?
You can also use another network to advise how the parameters should be updated.
There is the Decoupled Neural Interfaces (DNI) from Google Deepmind. Instead of using backpropagation, it uses another set of neural networks to predict how to update the parameters, which allows for parallel and asynchronous parameter update.
The paper shows that DNI increases the training speed and model capacity of RNNs, and gives comparable results for both RNNs and FFNNs on various tasks.
The paper also listed and compared many other non-backpropagation methods
Our synthetic gradient model is most analogous to a value function
which is used for gradient ascent [2] or a value function used for
bootstrapping. Most other works that aim to remove backpropagation do
so with the goal of performing biologically plausible credit
assignment, but this doesn’t eliminate update locking between layers.
E.g. target propagation [3, 15] removes the reliance on passing
gradients between layers, by instead generating target activations
which should be fitted to. However these targets must still be
generated sequentially, propagating backwards through the network and
layers are therefore still update- and backwardslocked. Other
algorithms remove the backwards locking by allowing loss or rewards to
be broadcast directly to each layer – e.g. REINFORCE [21] (considering
all activations are actions), Kickback 1, and Policy Gradient
Coagent Networks [20] – but still remain update locked since they
require rewards to be generated by an output (or a global critic).
While Real-Time Recurrent Learning [22] or approximations such as [17]
may seem a promising way to remove update locking, these methods
require maintaining the full (or approximate) gradient of the current
state with respect to the parameters. This is inherently not scalable
and also requires the optimiser to have global knowledge of the
network state. In contrast, by framing the interaction between layers
as a local communication problem with DNI, we remove the need for
global knowledge of the learning system. Other works such as [4, 19]
allow training of layers in parallel without backpropagation, but in
practice are not scalable to more complex and generic network
architectures. | Is it possible to train a neural network without backpropagation?
You can also use another network to advise how the parameters should be updated.
There is the Decoupled Neural Interfaces (DNI) from Google Deepmind. Instead of using backpropagation, it uses another |
1,446 | Is it possible to train a neural network without backpropagation? | As long as this is a community question , I thought I would add another response. "Back Propagation" is simply the gradient descent algorithm. It involves using only the first derivative of the function for which one is trying to find the local minima or maxima. There is another method called Newton's method or Newton-Raphson which involves calculating the Hessian and so uses second derivatives. It can succeed in instances in which gradient descent fails. I am told by others more knowledgeable than me, and yes this is a second hand appeal to authority, that it is not used in neural nets because calculating all the second derivatives is too costly in terms of computation. | Is it possible to train a neural network without backpropagation? | As long as this is a community question , I thought I would add another response. "Back Propagation" is simply the gradient descent algorithm. It involves using only the first derivative of the fun | Is it possible to train a neural network without backpropagation?
As long as this is a community question , I thought I would add another response. "Back Propagation" is simply the gradient descent algorithm. It involves using only the first derivative of the function for which one is trying to find the local minima or maxima. There is another method called Newton's method or Newton-Raphson which involves calculating the Hessian and so uses second derivatives. It can succeed in instances in which gradient descent fails. I am told by others more knowledgeable than me, and yes this is a second hand appeal to authority, that it is not used in neural nets because calculating all the second derivatives is too costly in terms of computation. | Is it possible to train a neural network without backpropagation?
As long as this is a community question , I thought I would add another response. "Back Propagation" is simply the gradient descent algorithm. It involves using only the first derivative of the fun |
1,447 | Assessing approximate distribution of data based on a histogram | The difficulty with using histograms to infer shape
While histograms are often handy and mostly useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of the bin boundaries.
This problem has long been known*, though perhaps not as widely as it should be -- you rarely see it mentioned in elementary-level discussions (though there are exceptions).
* for example, Paul Rubin[1] put it this way: "it's well known that changing the endpoints in a histogram can significantly alter its appearance".
I think it's an issue that should be more widely discussed when introducing histograms. I'll give some examples and discussion.
Why you should be wary of relying on a single histogram of a data set
Take a look at these four histograms:
That's four very different looking histograms.
If you paste the following data in (I'm using R here):
Annie <- c(3.15,5.46,3.28,4.2,1.98,2.28,3.12,4.1,3.42,3.91,2.06,5.53,
5.19,2.39,1.88,3.43,5.51,2.54,3.64,4.33,4.85,5.56,1.89,4.84,5.74,3.22,
5.52,1.84,4.31,2.01,4.01,5.31,2.56,5.11,2.58,4.43,4.96,1.9,5.6,1.92)
Brian <- c(2.9, 5.21, 3.03, 3.95, 1.73, 2.03, 2.87, 3.85, 3.17, 3.66,
1.81, 5.28, 4.94, 2.14, 1.63, 3.18, 5.26, 2.29, 3.39, 4.08, 4.6,
5.31, 1.64, 4.59, 5.49, 2.97, 5.27, 1.59, 4.06, 1.76, 3.76, 5.06,
2.31, 4.86, 2.33, 4.18, 4.71, 1.65, 5.35, 1.67)
Chris <- c(2.65, 4.96, 2.78, 3.7, 1.48, 1.78, 2.62, 3.6, 2.92, 3.41, 1.56,
5.03, 4.69, 1.89, 1.38, 2.93, 5.01, 2.04, 3.14, 3.83, 4.35, 5.06,
1.39, 4.34, 5.24, 2.72, 5.02, 1.34, 3.81, 1.51, 3.51, 4.81, 2.06,
4.61, 2.08, 3.93, 4.46, 1.4, 5.1, 1.42)
Zoe <- c(2.4, 4.71, 2.53, 3.45, 1.23, 1.53, 2.37, 3.35, 2.67, 3.16,
1.31, 4.78, 4.44, 1.64, 1.13, 2.68, 4.76, 1.79, 2.89, 3.58, 4.1,
4.81, 1.14, 4.09, 4.99, 2.47, 4.77, 1.09, 3.56, 1.26, 3.26, 4.56,
1.81, 4.36, 1.83, 3.68, 4.21, 1.15, 4.85, 1.17)
Then you can generate them yourself:
opar<-par()
par(mfrow=c(2,2))
hist(Annie,breaks=1:6,main="Annie",xlab="V1",col="lightblue")
hist(Brian,breaks=1:6,main="Brian",xlab="V2",col="lightblue")
hist(Chris,breaks=1:6,main="Chris",xlab="V3",col="lightblue")
hist(Zoe,breaks=1:6,main="Zoe",xlab="V4",col="lightblue")
par(opar)
Now look at this strip chart:
x<-c(Annie,Brian,Chris,Zoe)
g<-rep(c('A','B','C','Z'),each=40)
stripchart(x~g,pch='|')
abline(v=(5:23)/4,col=8,lty=3)
abline(v=(2:5),col=6,lty=3)
(If it's still not obvious, see what happens when you subtract Annie's data from each set: head(matrix(x-Annie,nrow=40)))
The data has simply been shifted left each time by 0.25.
Yet the impressions we get from the histograms - right skew, uniform, left skew and bimodal - were utterly different. Our impression was entirely governed by the location of the first bin-origin relative to the minimum.
So not just 'exponential' vs 'not-really-exponential' but 'right skew' vs 'left skew' or 'bimodal' vs 'uniform' just by moving where your bins start.
Edit: If you vary the binwidth, you can get things like this happening:
That's the same 34 observations in both cases, just different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$.
x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62)
hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE)
hist(x,breaks=0:8,col="aquamarine",freq=FALSE)
Nifty, eh?
Yes, those data were deliberately generated to do that... but the lesson is clear - what you think you see in a histogram may not be a particularly accurate impression of the data.
What can we do?
Histograms are widely used, frequently convenient to obtain and sometimes expected. What can we do to avoid or mitigate such problems?
As Nick Cox points out in a comment to a related question: The rule of thumb always should be that details robust to variations in bin width and bin origin are likely to be genuine; details fragile to such are likely to be spurious or trivial.
At the least, you should always do histograms at several different binwidths or bin-origins, or preferably both.
Alternatively, check a kernel density estimate at not-too-wide a bandwidth.
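In R, acting on that advice might look something like this, reusing Annie's data from above (the particular widths and origins are arbitrary choices; any similar grid will do):
par(mfrow = c(2, 2))
hist(Annie, breaks = seq(1.5, 6.0, by = 0.75), main = "width 0.75, origin 1.5",
     xlab = "V1", col = "lightblue")
hist(Annie, breaks = seq(1.8, 6.3, by = 0.75), main = "width 0.75, origin 1.8",
     xlab = "V1", col = "lightblue")
hist(Annie, breaks = seq(1.5, 6.0, by = 0.5),  main = "width 0.5, origin 1.5",
     xlab = "V1", col = "lightblue")
hist(Annie, breaks = seq(1.8, 5.8, by = 0.5),  main = "width 0.5, origin 1.8",
     xlab = "V1", col = "lightblue")
par(mfrow = c(1, 1))
Features that survive every panel are more likely to be real; features that appear in only one are suspect.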
One other approach that reduces the arbitrariness of histograms is averaged shifted histograms,
(that's one on that most recent set of data) but if you go to that effort, I think you might as well use a kernel density estimate.
If I am doing a histogram (I use them in spite of being acutely aware of the issue), I almost always prefer to use considerably more bins than typical program defaults tend to give and very often I like to do several histograms with varying bin width (and, occasionally, origin). If they're reasonably consistent in impression, you're not likely to have this problem, and if they're not consistent, you know to look more carefully, perhaps try a kernel density estimate, an empirical CDF, a Q-Q plot or something similar.
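For completeness, the binning-free checks mentioned above take one line each in R (the normal reference in the Q-Q plot is just an example choice, not a claim about these data):
plot(ecdf(Annie), main = "Empirical CDF of Annie")
plot(density(Annie), main = "Kernel density estimate of Annie")
qqnorm(Annie); qqline(Annie)   # Q-Q plot against a normal, purely as an illustrative reference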
While histograms may sometimes be misleading, boxplots are even more prone to such problems; with a boxplot you don't even have the ability to say "use more bins". See the four very different data sets in this post, all with identical, symmetric boxplots, even though one of the data sets is quite skew.
[1]: Rubin, Paul (2014) "Histogram Abuse!",
Blog post, OR in an OB world, Jan 23 2014
link ... (alternate link) | Assessing approximate distribution of data based on a histogram | The difficulty with using histograms to infer shape
While histograms are often handy and mostly useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of | Assessing approximate distribution of data based on a histogram
The difficulty with using histograms to infer shape
While histograms are often handy and mostly useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of the bin boundaries.
This problem has long been known*, though perhaps not as widely as it should be -- you rarely see it mentioned in elementary-level discussions (though there are exceptions).
* for example, Paul Rubin[1] put it this way: "it's well known that changing the endpoints in a histogram can significantly alter its appearance".
I think it's an issue that should be more widely discussed when introducing histograms. I'll give some examples and discussion.
Why you should be wary of relying on a single histogram of a data set
Take a look at these four histograms:
That's four very different looking histograms.
If you paste the following data in (I'm using R here):
Annie <- c(3.15,5.46,3.28,4.2,1.98,2.28,3.12,4.1,3.42,3.91,2.06,5.53,
5.19,2.39,1.88,3.43,5.51,2.54,3.64,4.33,4.85,5.56,1.89,4.84,5.74,3.22,
5.52,1.84,4.31,2.01,4.01,5.31,2.56,5.11,2.58,4.43,4.96,1.9,5.6,1.92)
Brian <- c(2.9, 5.21, 3.03, 3.95, 1.73, 2.03, 2.87, 3.85, 3.17, 3.66,
1.81, 5.28, 4.94, 2.14, 1.63, 3.18, 5.26, 2.29, 3.39, 4.08, 4.6,
5.31, 1.64, 4.59, 5.49, 2.97, 5.27, 1.59, 4.06, 1.76, 3.76, 5.06,
2.31, 4.86, 2.33, 4.18, 4.71, 1.65, 5.35, 1.67)
Chris <- c(2.65, 4.96, 2.78, 3.7, 1.48, 1.78, 2.62, 3.6, 2.92, 3.41, 1.56,
5.03, 4.69, 1.89, 1.38, 2.93, 5.01, 2.04, 3.14, 3.83, 4.35, 5.06,
1.39, 4.34, 5.24, 2.72, 5.02, 1.34, 3.81, 1.51, 3.51, 4.81, 2.06,
4.61, 2.08, 3.93, 4.46, 1.4, 5.1, 1.42)
Zoe <- c(2.4, 4.71, 2.53, 3.45, 1.23, 1.53, 2.37, 3.35, 2.67, 3.16,
1.31, 4.78, 4.44, 1.64, 1.13, 2.68, 4.76, 1.79, 2.89, 3.58, 4.1,
4.81, 1.14, 4.09, 4.99, 2.47, 4.77, 1.09, 3.56, 1.26, 3.26, 4.56,
1.81, 4.36, 1.83, 3.68, 4.21, 1.15, 4.85, 1.17)
Then you can generate them yourself:
opar<-par()
par(mfrow=c(2,2))
hist(Annie,breaks=1:6,main="Annie",xlab="V1",col="lightblue")
hist(Brian,breaks=1:6,main="Brian",xlab="V2",col="lightblue")
hist(Chris,breaks=1:6,main="Chris",xlab="V3",col="lightblue")
hist(Zoe,breaks=1:6,main="Zoe",xlab="V4",col="lightblue")
par(opar)
Now look at this strip chart:
x<-c(Annie,Brian,Chris,Zoe)
g<-rep(c('A','B','C','Z'),each=40)
stripchart(x~g,pch='|')
abline(v=(5:23)/4,col=8,lty=3)
abline(v=(2:5),col=6,lty=3)
(If it's still not obvious, see what happens when you subtract Annie's data from each set: head(matrix(x-Annie,nrow=40)))
The data has simply been shifted left each time by 0.25.
Yet the impressions we get from the histograms - right skew, uniform, left skew and bimodal - were utterly different. Our impression was entirely governed by the location of the first bin-origin relative to the minimum.
So not just 'exponential' vs 'not-really-exponential' but 'right skew' vs 'left skew' or 'bimodal' vs 'uniform' just by moving where your bins start.
Edit: If you vary the binwidth, you can get things like this happening:
That's the same 34 observations in both cases, just different breakpoints, one with binwidth $1$ and the other with binwidth $0.8$.
x <- c(1.03, 1.24, 1.47, 1.52, 1.92, 1.93, 1.94, 1.95, 1.96, 1.97, 1.98,
1.99, 2.72, 2.75, 2.78, 2.81, 2.84, 2.87, 2.9, 2.93, 2.96, 2.99, 3.6,
3.64, 3.66, 3.72, 3.77, 3.88, 3.91, 4.14, 4.54, 4.77, 4.81, 5.62)
hist(x,breaks=seq(0.3,6.7,by=0.8),xlim=c(0,6.7),col="green3",freq=FALSE)
hist(x,breaks=0:8,col="aquamarine",freq=FALSE)
Nifty, eh?
Yes, those data were deliberately generated to do that... but the lesson is clear - what you think you see in a histogram may not be a particularly accurate impression of the data.
What can we do?
Histograms are widely used, frequently convenient to obtain and sometimes expected. What can we do to avoid or mitigate such problems?
As Nick Cox points out in a comment to a related question: The rule of thumb always should be that details robust to variations in bin width and bin origin are likely to be genuine; details fragile to such are likely to be spurious or trivial.
At the least, you should always do histograms at several different binwidths or bin-origins, or preferably both.
Alternatively, check a kernel density estimate at not-too-wide a bandwidth.
One other approach that reduces the arbitrariness of histograms is averaged shifted histograms,
(that's one on that most recent set of data) but if you go to that effort, I think you might as well use a kernel density estimate.
If I am doing a histogram (I use them in spite of being acutely aware of the issue), I almost always prefer to use considerably more bins than typical program defaults tend to give and very often I like to do several histograms with varying bin width (and, occasionally, origin). If they're reasonably consistent in impression, you're not likely to have this problem, and if they're not consistent, you know to look more carefully, perhaps try a kernel density estimate, an empirical CDF, a Q-Q plot or something similar.
While histograms may sometimes be misleading, boxplots are even more prone to such problems; with a boxplot you don't even have the ability to say "use more bins". See the four very different data sets in this post, all with identical, symmetric boxplots, even though one of the data sets is quite skew.
[1]: Rubin, Paul (2014) "Histogram Abuse!",
Blog post, OR in an OB world, Jan 23 2014
link ... (alternate link) | Assessing approximate distribution of data based on a histogram
The difficulty with using histograms to infer shape
While histograms are often handy and mostly useful, they can be misleading. Their appearance can alter quite a lot with changes in the locations of |
1,448 | Assessing approximate distribution of data based on a histogram | Cumulative distribution plots [MATLAB, R] – where you plot the fraction of data values less than or equal to a range of values – are by far the best way to look at distributions of empirical data. Here, for example, are the ECDFs of this data, produced in R:
This can be generated with the following R input (with the above data):
plot(ecdf(Annie),xlim=c(min(Zoe),max(Annie)),col="red",main="ECDFs")
lines(ecdf(Brian),col="blue")
lines(ecdf(Chris),col="green")
lines(ecdf(Zoe),col="orange")
As you can see, it's visually obvious that these four distributions are simply translations of each other. In general, the benefits of ECDFs for visualizing empirical distributions of data are:
They simply present the data as it actually occurs with no transformation other than accumulation, so there's no possibility of accidentally deceiving yourself, as there is with histograms and kernel density estimates, because of how you're processing the data.
They give a clear visual sense of the distribution of the data since each point is buffered by all the data before and after it. Compare this with non-cumulative density visualizations, where the accuracy of each density is naturally unbuffered, and thus must be estimated either by binning (histograms) or smoothing (KDEs).
They work equally well regardless of whether the data follows a nice parametric distribution, some mixture, or a messy non-parametric distribution.
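As a concrete illustration of how an ECDF can be used to judge a candidate distribution, one can overlay a fitted theoretical CDF (the normal fit below is only an example, not a claim about these data):
plot(ecdf(Annie), main = "ECDF of Annie with a fitted normal CDF")
curve(pnorm(x, mean = mean(Annie), sd = sd(Annie)), add = TRUE, col = "red", lwd = 2)
Systematic departures of the step function from the smooth curve are easier to see than disagreements between a histogram and a density.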
The only trick is learning how to read ECDFs properly: shallow sloped areas mean sparse distribution, steep sloped areas mean dense distribution. Once you get the hang of reading them, however, they're a wonderful tool for looking at distributions of empirical data. | Assessing approximate distribution of data based on a histogram | Cumulative distribution plots [MATLAB, R] – where you plot the fraction of data values less than or equal to a range of values – are by far the best way to look at distributions of empirical data. Her | Assessing approximate distribution of data based on a histogram
Cumulative distribution plots [MATLAB, R] – where you plot the fraction of data values less than or equal to a range of values – are by far the best way to look at distributions of empirical data. Here, for example, are the ECDFs of this data, produced in R:
This can be generated with the following R input (with the above data):
plot(ecdf(Annie),xlim=c(min(Zoe),max(Annie)),col="red",main="ECDFs")
lines(ecdf(Brian),col="blue")
lines(ecdf(Chris),col="green")
lines(ecdf(Zoe),col="orange")
As you can see, it's visually obvious that these four distributions are simply translations of each other. In general, the benefits of ECDFs for visualizing empirical distributions of data are:
They simply present the data as it actually occurs with no transformation other than accumulation, so there's no possibility of accidentally deceiving yourself, as there is with histograms and kernel density estimates, because of how you're processing the data.
They give a clear visual sense of the distribution of the data since each point is buffered by all the data before and after it. Compare this with non-cumulative density visualizations, where the accuracy of each density is naturally unbuffered, and thus must be estimated either by binning (histograms) or smoothing (KDEs).
They work equally well regardless of whether the data follows a nice parametric distribution, some mixture, or a messy non-parametric distribution.
The only trick is learning how to read ECDFs properly: shallow sloped areas mean sparse distribution, steep sloped areas mean dense distribution. Once you get the hang of reading them, however, they're a wonderful tool for looking at distributions of empirical data. | Assessing approximate distribution of data based on a histogram
Cumulative distribution plots [MATLAB, R] – where you plot the fraction of data values less than or equal to a range of values – are by far the best way to look at distributions of empirical data. Her |
1,449 | Assessing approximate distribution of data based on a histogram | A kernel density or logspline plot may be a better option compared to a histogram. There are still some options that can be set with these methods, but they are less fickle than histograms. There are qqplots as well. A nice tool for seeing if data is close enough to a theoretical distribution is detailed in:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exploratory
data analysis and model diagnostics Phil. Trans. R. Soc. A 2009
367, 4361-4383 doi: 10.1098/rsta.2009.0120
The short version of the idea (still read the paper for details) is that you generate data from the null distribution and create several plots one of which is the original/real data and the rest are simulated from the theoretical distribution. You then present the plots to someone (possibly yourself) that has not seen the original data and see if they can pick out the real data. If they cannot identify the real data then you don't have evidence against the null.
The vis.test function in the TeachingDemos package for R helps implement a form of this test.
Here is a quick example. One of the plots below is 25 points generated from a t distribution with 10 degrees of freedom, the other 8 are generated from a normal distribution with the same mean and variance.
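If you want to see the mechanics without the package, a rough hand-rolled lineup might look like this (the seed, the 3 by 3 layout, and the sample size are arbitrary, and this is not the vis.test implementation):
set.seed(42)
real <- rt(25, df = 10)                      # the real data: t with 10 df
pos  <- sample(9, 1)                         # panel in which the real data is hidden
par(mfrow = c(3, 3))
for (i in 1:9) {
  dat <- if (i == pos) real else rnorm(25, mean = mean(real), sd = sd(real))
  hist(dat, main = paste("panel", i), xlab = "", col = "lightblue")
}
par(mfrow = c(1, 1))
# only look at `pos` after making your guess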
The vis.test function created this plot and then prompts the user to choose which of the plots they think is different, then repeats the process 2 more times (3 total). | Assessing approximate distribution of data based on a histogram | A kernel density or logspline plot may be a better option compared to a histogram. There are still some options that can be set with these methods, but they are less fickle than histograms. There ar | Assessing approximate distribution of data based on a histogram
A kernel density or logspline plot may be a better option compared to a histogram. There are still some options that can be set with these methods, but they are less fickle than histograms. There are qqplots as well. A nice tool for seeing if data is close enough to a theoretical distribution is detailed in:
Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne,
D.F and Wickham, H. (2009) Statistical Inference for exploratory
data analysis and model diagnostics Phil. Trans. R. Soc. A 2009
367, 4361-4383 doi: 10.1098/rsta.2009.0120
The short version of the idea (still read the paper for details) is that you generate data from the null distribution and create several plots one of which is the original/real data and the rest are simulated from the theoretical distribution. You then present the plots to someone (possibly yourself) that has not seen the original data and see if they can pick out the real data. If they cannot identify the real data then you don't have evidence against the null.
The vis.test function in the TeachingDemos package for R helps implement a form of this test.
Here is a quick example. One of the plots below is 25 points generated from a t distribution with 10 degrees of freedom, the other 8 are generated from a normal distribution with the same mean and variance.
The vis.test function created this plot and then prompts the user to choose which of the plots they think is different, then repeats the process 2 more times (3 total). | Assessing approximate distribution of data based on a histogram
A kernel density or logspline plot may be a better option compared to a histogram. There are still some options that can be set with these methods, but they are less fickle than histograms. There ar |
1,450 | Assessing approximate distribution of data based on a histogram | Suggestion: Histograms usually only assign the x-axis data to have occurred at the midpoint of the bin and omit x-axis measures of location of greater accuracy. The effect this has on the derivatives of fit can be quite large. Let us take a trivial example. Suppose we take the classical derivation of a Dirac delta but modify it so that we start with a Cauchy distribution at some arbitrary median location with a finite scale (full width half-maximum). Then we take the limit as the scale goes to zero. If we use the classical definition of a histogram and do not change bin sizes we will capture neither the location or the scale. If however, we use a median location within bins of even of fixed width, we will always capture the location, if not the scale when the scale is small relative to the bin width.
For fitting values where the data is skewed, using fixed bin midpoints will x-axis shift the entire curve segment in that region, which I believe relates to the question above.
STEP 1
Here is an almost solution. I used $n=8$ in each histogram category, and just displayed these as the mean x-axis value from each bin. Since each histogram bin has a value of 8, the distributions all look uniform, and I had to offset them vertically to show them. The display is not the correct answer, but it is not without information. It correctly tells us that there is an x-axis offset between groups. It also tells us that the actual distribution appears to be slightly U shaped. Why? Note that the distance between mean values is further apart in the centers, and closer at the edges. So, to make this a better representation, we should borrow whole samples and fractional amounts of each bin boundary sample to make all the mean bin values on the x-axis equidistant. Fixing this and displaying it properly would require a bit of programming. But, it may just be a way to make histograms so that they actually display the underlying data in some logical format. The shape will still change if we change the total number of bins covering the range of the data, but the idea is to resolve some of the problems created by binning arbitrarily.
STEP 2 So let's start borrowing between bins to try to make the means more evenly spaced.
Now, we can see the shape of the histograms beginning to emerge. But the difference between means is not perfect as we only have whole numbers of samples to swap between bins. To remove the restriction of integer values on the y-axis and complete the process of making equidistant x-axis mean values, we have to start sharing fractions of a sample between bins.
Step 3 The sharing of values and parts of values.
As one can see, the sharing of parts of a value at a bin boundary can improve the uniformity of distance between mean values. I managed to do this to three decimal places with the data given. However, I do not think one can make the distance between mean values exactly equal in general, as the coarseness of the data will not permit that.
One can, however, do other things like use kernel density estimation.
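For instance, base R's density() gives a quick look at a few bandwidths (the from and to arguments below merely clip the evaluation grid to the data range; they are not the boundary correction used for the figure described next):
plot(density(Annie, bw = 0.1, from = min(Annie), to = max(Annie)),
     main = "KDEs of Annie", xlab = "V1")
lines(density(Annie, bw = 0.2, from = min(Annie), to = max(Annie)), col = "red")
lines(density(Annie, bw = 0.4, from = min(Annie), to = max(Annie)), col = "blue")
legend("topright", c("bw = 0.1", "bw = 0.2", "bw = 0.4"),
       col = c("black", "red", "blue"), lty = 1)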
Here we see Annie's data as a bounded kernel density using Gaussian smoothings of 0.1, 0.2, and 0.4. The other subjects will have shifted functions of the same type, provided one does the same thing as I did, namely use the lower and upper bounds of each data set. So, this is no longer a histogram, but a PDF, and it serves the same role as a histogram without some of the warts. | Assessing approximate distribution of data based on a histogram | Suggestion: Histograms usually only assign the x-axis data to have occurred at the midpoint of the bin and omit x-axis measures of location of greater accuracy. The effect this has on the derivatives | Assessing approximate distribution of data based on a histogram
Suggestion: Histograms usually only assign the x-axis data to have occurred at the midpoint of the bin and omit x-axis measures of location of greater accuracy. The effect this has on the derivatives of fit can be quite large. Let us take a trivial example. Suppose we take the classical derivation of a Dirac delta but modify it so that we start with a Cauchy distribution at some arbitrary median location with a finite scale (full width half-maximum). Then we take the limit as the scale goes to zero. If we use the classical definition of a histogram and do not change bin sizes we will capture neither the location or the scale. If however, we use a median location within bins of even of fixed width, we will always capture the location, if not the scale when the scale is small relative to the bin width.
For fitting values where the data is skewed, using fixed bin midpoints will x-axis shift the entire curve segment in that region, which I believe relates to the question above.
STEP 1
Here is an almost solution. I used $n=8$ in each histogram category, and just displayed these as the mean x-axis value from each bin. Since each histogram bin has a value of 8, the distributions all look uniform, and I had to offset them vertically to show them. The display is not the correct answer, but it is not without information. It correctly tells us that there is an x-axis offset between groups. It also tells us that the actual distribution appears to be slightly U shaped. Why? Note that the distance between mean values is further apart in the centers, and closer at the edges. So, to make this a better representation, we should borrow whole samples and fractional amounts of each bin boundary sample to make all the mean bin values on the x-axis equidistant. Fixing this and displaying it properly would require a bit of programming. But, it may just be a way to make histograms so that they actually display the underlying data in some logical format. The shape will still change if we change the total number of bins covering the range of the data, but the idea is to resolve some of the problems created by binning arbitrarily.
STEP 2 So let's start borrowing between bins to try to make the means more evenly spaced.
Now, we can see the shape of the histograms beginning to emerge. But the difference between means is not perfect as we only have whole numbers of samples to swap between bins. To remove the restriction of integer values on the y-axis and complete the process of making equidistant x-axis mean values, we have to start sharing fractions of a sample between bins.
Step 3 The sharing of values and parts of values.
As one can see, the sharing of parts of a value at a bin boundary can improve the uniformity of distance between mean values. I managed to do this to three decimal places with the data given. However, I do not think one can make the distance between mean values exactly equal in general, as the coarseness of the data will not permit that.
One can, however, do other things like use kernel density estimation.
Here we see Annie's data as a bounded kernel density using Gaussian smoothings of 0.1, 0.2, and 0.4. The other subjects will have shifted functions of the same type, provided one does the same thing as I did, namely use the lower and upper bounds of each data set. So, this is no longer a histogram, but a PDF, and it serves the same role as a histogram without some of the warts. | Assessing approximate distribution of data based on a histogram
Suggestion: Histograms usually only assign the x-axis data to have occurred at the midpoint of the bin and omit x-axis measures of location of greater accuracy. The effect this has on the derivatives |
1,451 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blue is basically 100%.
Matthew Drury already gave the right answer but I'd just like to add to that with some numerical examples, because you chose your numbers such that you actually get pretty similar answers for a wide range of different parameter settings. For example, let's assume, as you said in one of your comments, that the probability that people judge the color of a car correctly is 0.9. That is:
$$p(\text{say it's blue}|\text{car is blue})=0.9=1-p(\text{say it isn't blue}|\text{car is blue})$$
and also
$$p(\text{say it isn't blue}|\text{car isn't blue})=0.9=1-p(\text{say it is blue}|\text{car isn't blue})$$
Having defined that, the remaining thing we have to decide is: what is the prior probability that the car is blue? Let's pick a very low probability just to see what happens, and say that $p(\text{car is blue})=0.001$, i.e. only 0.1% of all cars are blue. Then the posterior probability that the car is blue can be calculated as:
\begin{align*}
&p(\text{car is blue}|\text{answers})\\
&=\frac{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})}{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})+p(\text{answers}|\text{car isn't blue})\,p(\text{car isn't blue})}\\
&=\frac{0.9^{900}\times 0.1^{100}\times0.001}{0.9^{900}\times 0.1^{100}\times0.001+0.1^{900}\times0.9^{100}\times0.999}
\end{align*}
If you look at the denominator, it's pretty clear that the second term in that sum will be negligible, since the relative size of the terms in the sum is dominated by the ratio of $0.9^{900}$ to $0.1^{900}$, which is on the order of $10^{858}$. And indeed, if you do this calculation on a computer (taking care to avoid numerical underflow issues) you get an answer that is equal to 1 (within machine precision).
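For instance, in R the safe way to do that computation is on the log scale (numbers as in the example above; the only thing added here is the code itself):
log_lik_blue     <- 900 * log(0.9) + 100 * log(0.1)   # log P(answers | car is blue)
log_lik_not_blue <- 900 * log(0.1) + 100 * log(0.9)   # log P(answers | car isn't blue)
log_post_num     <- log_lik_blue + log(0.001)         # prior P(blue) = 0.001
log_post_other   <- log_lik_not_blue + log(0.999)
posterior_blue   <- 1 / (1 + exp(log_post_other - log_post_num))
posterior_blue   # 1, up to machine precision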
The reason the prior probabilities don't really matter much here is because you have so much evidence for one possibility (the car is blue) versus another. This can be quantified by the likelihood ratio, which we can calculate as:
$$
\frac{p(\text{answers}|\text{car is blue})}{p(\text{answers}|\text{car isn't blue})}=\frac{0.9^{900}\times 0.1^{100}}{0.1^{900}\times 0.9^{100}}\approx 10^{763}
$$
So before even considering the prior probabilities, the evidence suggests that one option is already astronomically more likely than the other, and for the prior to make any difference, blue cars would have to be unreasonably, stupidly rare (so rare that we would expect to find 0 blue cars on earth).
So what if we change how accurate people are in their descriptions of car color? Of course, we could push this to the extreme and say they get it right only 50% of the time, which is no better than flipping a coin. In this case, the posterior probability that the car is blue is simply equal to the prior probability, because the people's answers told us nothing. But surely people do at least a little better than that, and even if we say that people are accurate only 51% of the time, the likelihood ratio still works out such that it is roughly $10^{13}$ times more likely for the car to be blue.
This is all a result of the rather large numbers you chose in your example. If it had been 9/10 people saying the car was blue, it would have been a very different story, even though the same ratio of people were in one camp vs. the other. Because statistical evidence doesn't depend on this ratio, but rather on the numerical difference between the opposing factions. In fact, in the likelihood ratio (which quantifies the evidence), the 100 people who say the car isn't blue exactly cancel 100 of the 900 people who say it is blue, so it's the same as if you had 800 people all agreeing it was blue. And that's obviously pretty clear evidence.
(Edit: As Silverfish pointed out, the assumptions I made here actually implied that whenever a person describes a non-blue car incorrectly, they will default to saying it's blue. This isn't realistic of course, because they could really say any color, and will say blue only some of the time. This makes no difference to the conclusions though, since the less likely people are to mistake a non-blue car for a blue one, the stronger the evidence that it is blue when they say it is. So if anything, the numbers given above are actually only a lower bound on the pro-blue evidence.) | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blu | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blue is basically 100%.
Matthew Drury already gave the right answer but I'd just like to add to that with some numerical examples, because you chose your numbers such that you actually get pretty similar answers for a wide range of different parameter settings. For example, let's assume, as you said in one of your comments, that the probability that people judge the color of a car correctly is 0.9. That is:
$$p(\text{say it's blue}|\text{car is blue})=0.9=1-p(\text{say it isn't blue}|\text{car is blue})$$
and also
$$p(\text{say it isn't blue}|\text{car isn't blue})=0.9=1-p(\text{say it is blue}|\text{car isn't blue})$$
Having defined that, the remaining thing we have to decide is: what is the prior probability that the car is blue? Let's pick a very low probability just to see what happens, and say that $p(\text{car is blue})=0.001$, i.e. only 0.1% of all cars are blue. Then the posterior probability that the car is blue can be calculated as:
\begin{align*}
&p(\text{car is blue}|\text{answers})\\
&=\frac{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})}{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})+p(\text{answers}|\text{car isn't blue})\,p(\text{car isn't blue})}\\
&=\frac{0.9^{900}\times 0.1^{100}\times0.001}{0.9^{900}\times 0.1^{100}\times0.001+0.1^{900}\times0.9^{100}\times0.999}
\end{align*}
If you look at the denominator, it's pretty clear that the second term in that sum will be negligible, since the relative size of the terms in the sum is dominated by the ratio of $0.9^{900}$ to $0.1^{900}$, which is on the order of $10^{858}$. And indeed, if you do this calculation on a computer (taking care to avoid numerical underflow issues) you get an answer that is equal to 1 (within machine precision).
The reason the prior probabilities don't really matter much here is because you have so much evidence for one possibility (the car is blue) versus another. This can be quantified by the likelihood ratio, which we can calculate as:
$$
\frac{p(\text{answers}|\text{car is blue})}{p(\text{answers}|\text{car isn't blue})}=\frac{0.9^{900}\times 0.1^{100}}{0.1^{900}\times 0.9^{100}}\approx 10^{763}
$$
So before even considering the prior probabilities, the evidence suggests that one option is already astronomically more likely than the other, and for the prior to make any difference, blue cars would have to be unreasonably, stupidly rare (so rare that we would expect to find 0 blue cars on earth).
So what if we change how accurate people are in their descriptions of car color? Of course, we could push this to the extreme and say they get it right only 50% of the time, which is no better than flipping a coin. In this case, the posterior probability that the car is blue is simply equal to the prior probability, because the people's answers told us nothing. But surely people do at least a little better than that, and even if we say that people are accurate only 51% of the time, the likelihood ratio still works out such that it is roughly $10^{13}$ times more likely for the car to be blue.
This is all a result of the rather large numbers you chose in your example. If it had been 9/10 people saying the car was blue, it would have been a very different story, even though the same ratio of people were in one camp vs. the other. Because statistical evidence doesn't depend on this ratio, but rather on the numerical difference between the opposing factions. In fact, in the likelihood ratio (which quantifies the evidence), the 100 people who say the car isn't blue exactly cancel 100 of the 900 people who say it is blue, so it's the same as if you had 800 people all agreeing it was blue. And that's obviously pretty clear evidence.
(Edit: As Silverfish pointed out, the assumptions I made here actually implied that whenever a person describes a non-blue car incorrectly, they will default to saying it's blue. This isn't realistic of course, because they could really say any color, and will say blue only some of the time. This makes no difference to the conclusions though, since the less likely people are to mistake a non-blue car for a blue one, the stronger the evidence that it is blue when they say it is. So if anything, the numbers given above are actually only a lower bound on the pro-blue evidence.) | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blu |
1,452 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The correct answer depends on information not specified in the problem, you will have to make some more assumptions to derive a single, definitive answer:
The prior probability the car is blue, i.e. your belief that the car is blue given you have not yet asked anyone.
The probability someone tells you the car is blue when it actually is blue, and the probability they tell you the car is blue when it actually is not blue.
The probability that the car actually is blue when someone says it is, and the probability that the car is not blue, when someone says it is blue.
With these pieces of information, we can break the whole thing down with Bayes's formula to derive a posterior probability that the car is blue. I'll focus on the case where we only ask one person, but the same reasoning can be applied to the case where you ask $1000$ people.
$$\begin{align*}
P_{post} (\text{car is blue}) &= P(\text{car is blue} \mid \text{say is blue}) P(\text{say is blue}) \\
& \ \ \ \ + P(\text{car is blue} \mid \text{say is not blue}) P(\text{say is not blue})
\end{align*}$$
We need to continue to further break down $P(\text{say is blue})$, this is where the prior comes in:
$$\begin{align*}
P(\text{say is blue}) = \ &P(\text{say is blue} \mid \text{car is blue}) P_{prior}(\text{car is blue}) \\
&+ P(\text{say is blue} \mid \text{car is not blue}) P_{prior}(\text{car is not blue})
\end{align*}$$
So two applications of Bayes's rule get you there. You'll need to determine the unspecified parameters based on either information you have about the specific situation, or by making some reasonable assumptions.
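As a concrete sketch for a single respondent, with placeholder numbers standing in for the three assumptions (none of these values come from the question):
p_say_blue_given_blue     <- 0.9    # assumption
p_say_blue_given_not_blue <- 0.1    # assumption
p_prior_blue              <- 0.3    # assumption
p_say_blue <- p_say_blue_given_blue * p_prior_blue +
  p_say_blue_given_not_blue * (1 - p_prior_blue)
p_blue_given_say_blue <- p_say_blue_given_blue * p_prior_blue / p_say_blue
p_blue_given_say_blue   # about 0.79 under these made-up numbers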
There are some other combinations of what assumptions you can make, based on:
$$ P(\text{say is blue} \mid \text{car is blue}) P(\text{car is blue}) = P(\text{car is blue} \mid \text{say is blue}) P(\text{say is blue}) $$
At the outset, you don't know any of these things. So you must make some reasonable assumptions about three of them, and then the fourth is determined from there. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The correct answer depends on information not specified in the problem, you will have to make some more assumptions to derive a single, definitive answer:
The prior probability the car is blue, i.e. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The correct answer depends on information not specified in the problem, you will have to make some more assumptions to derive a single, definitive answer:
The prior probability the car is blue, i.e. your belief that the car is blue given you have not yet asked anyone.
The probability someone tells you the car is blue when it actually is blue, and the probability they tell you the car is blue when it actually is not blue.
The probability that the car actually is blue when someone says it is, and the probability that the car is not blue, when someone says it is blue.
With these pieces of information, we can break the whole thing down with Bayes's formula to derive a posterior probability that the car is blue. I'll focus on the case where we only ask one person, but the same reasoning can be applied to the case where you ask $1000$ people.
$$\begin{align*}
P_{post} (\text{car is blue}) &= P(\text{car is blue} \mid \text{say is blue}) P(\text{say is blue}) \\
& \ \ \ \ + P(\text{car is blue} \mid \text{say is not blue}) P(\text{say is not blue})
\end{align*}$$
We need to continue to further break down $P(\text{say is blue})$, this is where the prior comes in:
$$\begin{align*}
P(\text{say is blue}) = \ &P(\text{say is blue} \mid \text{car is blue}) P_{prior}(\text{car is blue}) \\
&+ P(\text{say is blue} \mid \text{car is not blue}) P_{prior}(\text{car is not blue})
\end{align*}$$
So two applications of Bayes's rule get you there. You'll need to determine the unspecified parameters based on either information you have about the specific situation, or by making some reasonable assumptions.
There are some other combinations of what assumptions you can make, based on:
$$ P(\text{say is blue} \mid \text{car is blue}) P(\text{car is blue}) = P(\text{car is blue} \mid \text{say is blue}) P(\text{say is blue}) $$
At the outset, you don't know any of these things. So you must make some reasonable assumptions about three of them, and then the fourth is determined from there. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The correct answer depends on information not specified in the problem, you will have to make some more assumptions to derive a single, definitive answer:
The prior probability the car is blue, i.e. |
1,453 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | There's an important assumption that your 1000 opinions don't share a systematic bias. Which is a reasonable assumption here, but could be important in other cases.
Examples might be:
they all share a similar colorblindness (genetics in a population for example),
they all saw the car at night under orange sodium street lighting,
they all share a common culture in which blue is taboo or magically associated (which biases whether or not they describe any object as blue or use a cultural euphemism or whatever instead),
they have all been told (or share a common belief) that if they do/don't answer some specific way, something good/bad will happen to them.....
It isn't likely in this case but its a significant implied assumption in other cases. It doesn't have to be that extreme either - transpose your question to some other domain and this will be a real factor.
Examples for each where your answer may be affected by a shared bias:
ask if a tall thin glass holds more than an actually-identical short fat glass, but your 1000 respondents are very young children (shared misperception).
ask 1000 people if walking under a ladder is dangerous (common cultural belief)
ask 1000 married people if they love their partner/have had an affair, in circumstances where they believe their partner will know of their answer. The context might be a TV show, or partner present when asked etc (common belief about consequences)
It wouldn't be hard to imagine some structurally identical questions where the 900:100 response was a measure of beliefs and honesty, or something else, and doesn't point to the correct answer. Not likely in this case but in other cases - yes. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | There's an important assumption that your 1000 opinions don't share a systematic bias. Which is a reasonable assumption here, but could be important in other cases.
Examples might be:
they all share | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
There's an important assumption that your 1000 opinions don't share a systematic bias. Which is a reasonable assumption here, but could be important in other cases.
Examples might be:
they all share a similar colorblindness (genetics in a population for example),
they all saw the car at night under orange sodium street lighting,
they all share a common culture in which blue is taboo or magically associated (which biases whether or not they describe any object as blue or use a cultural euphemism or whatever instead),
they have all been told (or share a common belief) that if they do/don't answer some specific way, something good/bad will happen to them.....
It isn't likely in this case but its a significant implied assumption in other cases. It doesn't have to be that extreme either - transpose your question to some other domain and this will be a real factor.
Examples for each where your answer may be affected by a shared bias:
ask if a tall thin glass holds more than an actually-identical short fat glass, but your 1000 respondents are very young children (shared misperception).
ask 1000 people if walking under a ladder is dangerous (common cultural belief)
ask 1000 married people if they love their partner/have had an affair, in circumstances where they believe their partner will know of their answer. The context might be a TV show, or partner present when asked etc (common belief about consequences)
It wouldn't be hard to imagine some structurally identical questions where the 900:100 response was a measure of beliefs and honesty, or something else, and doesn't point to the correct answer. Not likely in this case but in other cases - yes. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
There's an important assumption that your 1000 opinions don't share a systematic bias. Which is a reasonable assumption here, but could be important in other cases.
Examples might be:
they all share |
1,454 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | One reason you're getting different answers from different people is that the question can be interpreted in different ways, and it isn't clear what you mean by "probability" here. One way to make sense of the question is to assign priors and reason using Bayes' rule as in Matthew's answer.
Before asking for probabilities, you have to decide what's modeled as random and what's not. It's not universally accepted that unknown but fixed quantities should be assigned priors. Here's a similar experiment to yours that highlights the problem with the question:
Assume $X_i$, $i = 1, \dots, 1000$ are i.i.d. Bernoulli random variables with success probability (mean) $p = 0.5$. For interpretability, let's think of the $X_i$ as coin flips. Suppose you observe (the sufficient statistic) $\sum_{i = 1}^{1000}X_i = 900$. What is the probability that the coin is fair?
From a frequentist perspective the question is either nonsensical or the answer is "one". If you're Bayesian maybe you want to assign a prior distribution to $p$, in which case the question makes sense. The fundamental difference between my example and the question is that $p$ is unknown in the question, and the question disguises the fact that the actual randomness is whether a (presumably randomly sampled) person answers that the car is blue or not. The car's color is not randomly assigned and thus it's uninteresting to speak of the probability of it being blue from a frequentist perspective. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | One reason you're getting different answers from different people is that the question can be interpreted in different ways, and it isn't clear what you mean by "probability" here. One way to make sen | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
One reason you're getting different answers from different people is that the question can be interpreted in different ways, and it isn't clear what you mean by "probability" here. One way to make sense of the question is to assign priors and reason using Bayes' rule as in Matthew's answer.
Before asking for probabilities, you have to decide what's modeled as random and what's not. It's not universally accepted that unknown but fixed quantities should be assigned priors. Here's a similar experiment to yours that highlights the problem with the question:
Assume $X_i$, $i = 1, \dots, 1000$ are i.i.d. Bernoulli random variables with success probability (mean) $p = 0.5$. For interpretability, let's think of the $X_i$ as coin flips. Suppose you observe (the sufficient statistic) $\sum_{i = 1}^{1000}X_i = 900$. What is the probability that the coin is fair?
From a frequentist perspective the question is either nonsensical or the answer is "one". If you're Bayesian maybe you want to assign a prior distribution to $p$, in which case the question makes sense. The fundamental difference between my example and the question is that $p$ is unknown in the question, and the question disguises the fact that the actual randomness is whether a (presumably randomly sampled) person answers that the car is blue or not. The car's color is not randomly assigned and thus it's uninteresting to speak of the probability of it being blue from a frequentist perspective. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
One reason you're getting different answers from different people is that the question can be interpreted in different ways, and it isn't clear what you mean by "probability" here. One way to make sen |
1,455 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | Simple practical answer:
The probability can easily range from 0% to 100% depending on your assumptions
Though I really like the existing answers, in practice it basically boils down to these two simple scenarios:
Scenario 1: People are assumed to be very good at recognizing blue when it is blue ... 0%
In this case, there are so many people stating that the car is not blue, that it is very unlikely that the car is actually blue. Hence, the probability approaches 0%.
Scenario 2: People are assumed to be very good at recognizing not-blue when it is not-blue ... 100%
In this case, there are so many people stating that the car is blue, that it is very likely that it is indeed blue. Hence the probability approaches 100%.
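To see how hard the answer swings between those extremes, here is a small R sketch (the accuracy numbers and the 50/50 prior are invented purely to make the point; in scenario 1 the 900 "blue" votes still need to be explainable by a non-blue car, hence the 0.9 second argument):
# Posterior P(blue) for 900 "blue" / 100 "not blue" answers, computed on the log scale
post_blue <- function(p_blue_vote_if_blue, p_blue_vote_if_not, prior = 0.5) {
  la <- 900 * log(p_blue_vote_if_blue) + 100 * log(1 - p_blue_vote_if_blue) + log(prior)
  lb <- 900 * log(p_blue_vote_if_not)  + 100 * log(1 - p_blue_vote_if_not)  + log(1 - prior)
  1 / (1 + exp(lb - la))
}
post_blue(0.999, 0.9)    # scenario 1: near-perfect at spotting blue -> essentially 0
post_blue(0.9, 0.001)    # scenario 2: near-perfect at spotting not-blue -> essentially 1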
Of course coming at this from a mathematical angle you would start by something generic like 'let us assume that the relevant probabilities are ...', which is quite meaningless as such things are typically not known for any random circumstance. Hence I advocate looking at the extremes to grasp the idea that both percentages can easily be justified with simple and realistic assumptions, and that there is therefore no single meaningful answer. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | Simple practical answer:
The probability can easily range from 0% to 100% depending on your assumptions
Though I really like the existing answers, in practice it basically boils down to these two simp | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
Simple practical answer:
The probability can easily range from 0% to 100% depending on your assumptions
Though I really like the existing answers, in practice it basically boils down to these two simple scenarios:
Scenario 1: People are assumed to be very good at recognizing blue when it is blue ... 0%
In this case, there are so many people stating that the car is not blue, that it is very unlikely that the car is actually blue. Hence, the probability approaches 0%.
Scenario 2: People are assumed to be very good at recognizing not-blue when it is not-blue ... 100%
In this case, there are so many people stating that the car is blue, that it is very likely that it is indeed blue. Hence the probability approaches 100%.
Of course coming at this from a mathematical angle you would start by something generic like 'let us assume that the relevant probabilities are ...', which is quite meaningless as such things are typically not known for any random circumstance. Hence I advocate looking at the extremes to grasp the idea that both percentages can easily be justified with simple and realistic assumptions, and that there is therefore no single meaningful answer. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
Simple practical answer:
The probability can easily range from 0% to 100% depending on your assumptions
Though I really like the existing answers, in practice it basically boils down to these two simp |
1,456 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | You need to develop some framework of estimation. Some questions you might ask are
How many colors are there? Are we talking two colors? Or all the colors of the rainbow?
How distinct are the colors? Are we talking blue and orange? Or blue, cyan, and turquoise?
What does it mean to be blue? Are cyan and/or turquoise blue? Or just blue itself?
How good are these people at estimating color? Are they all graphic designers? Or are they color blind?
From a purely statistical standpoint, we can make some guesses as to the last one. First, we know that at least 10% of the people are choosing an incorrect response. If there are only two colors (from the first question), then we might say that there is
Probability says blue and is blue = 90% say is blue * 90% correct = 81%
Probability says blue and is not = 90% * 10% incorrect = 9%
Probability says not but is blue = 10% * 90% incorrect = 9%
Probability says not and is not = 10% * 10% = 1%
As a quick check, if we add those together, we get 100%. You can see a more mathematical notation of this at the @MatthewDrury answer.
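A quick numerical check of that table in R:
cells <- c(says_blue_is_blue  = 0.9 * 0.9,
           says_blue_not_blue = 0.9 * 0.1,
           says_not_is_blue   = 0.1 * 0.9,
           says_not_not_blue  = 0.1 * 0.1)
sum(cells)                                               # 1
cells["says_blue_is_blue"] + cells["says_not_is_blue"]   # 0.9, the "90% blue" discussed below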
How do we get the 90% in the third line? It is the chance that the car is blue when someone says it is not; because there are only two colors, this mirrors the 90% in the first line, so the two cases are symmetric. If there were more than two colors, then the chance that the car is blue when someone names some other color would be lower.
Anyway, this method of estimation gives us 90% blue. This includes an 81% chance of people saying blue when it is and a 9% chance of people saying that it isn't when it is. This is probably the closest we can come to answering the original question, and it requires us to rely on the data to estimate two different things. And to assume that the chance of blue being chosen is the same as the chance of blue being correct.
If there are more than two colors, then the logic is going to change a bit. The first two lines stay the same, but we lose the symmetry in the last two lines. In that case, we need more input. We might conceivably estimate the chance of correctly saying blue as 81% again, but we have no idea what the chances are that the color is blue when someone says that it is not.
We could also improve upon even the two color estimate. Given a statistically significant number of cars of each color, we could have a statistically significant number of people view and categorize them. Then we could count how often people are right when they make each color choice and how often they are right for each color choice. Then we could estimate more accurately given people's actual choices.
You might ask how 90% could be wrong. Consider what happens if there are three colors: azure, blue, and sapphire. Someone might reasonably consider all three of these to be blue. But we want more. We want the exact shade. But who remembers the names of the other shades? Many might guess blue because it is the only matching shade they know. And still be wrong when it turns out to be azure. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | You need to develop some framework of estimation. Some questions you might ask are
How many colors are there? Are we talking two colors? Or all the colors of the rainbow?
How distinct are the c | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
You need to develop some framework of estimation. Some questions you might ask are
How many colors are there? Are we talking two colors? Or all the colors of the rainbow?
How distinct are the colors? Are we talking blue and orange? Or blue, cyan, and turquoise?
What does it mean to be blue? Are cyan and/or turquoise blue? Or just blue itself?
How good are these people at estimating color? Are they all graphic designers? Or are they color blind?
From a purely statistical standpoint, we can make some guesses as to the last one. First, we know that at least 10% of the people are choosing an incorrect response. If there are only two colors (from the first question), then we might say that there is
Probability says blue and is blue = 90% say is blue * 90% correct = 81%
Probability says blue and is not = 90% * 10% incorrect = 9%
Probability says not but is blue = 10% * 90% incorrect = 9%
Probability says not and is not = 10% * 10% = 1%
As a quick check, if we add those together, we get 100%. You can see a more mathematical notation of this at the @MatthewDrury answer.
How do we get the 90% in the third line? It is the chance of being wrong when saying "not blue", which we take to equal the 90% who said blue. Because there are only two colors, these are symmetric. If there were more than two colors, then the chance of the true color being blue when they said something else would be lower.
Anyway, this method of estimation gives us 90% blue. This includes an 81% chance of people saying blue when it is and a 9% chance of people saying that it isn't when it is. This is probably the closest we can come to answering the original question, and it requires us to rely on the data to estimate two different things. And to assume that the chance of blue being chosen is the same as the chance of blue being correct.
If there are more than two colors, then the logic is going to change a bit. The first two lines stay the same, but we lose the symmetry in the last two lines. In that case, we need more input. We might conceivably estimate the chance of correctly saying blue as 81% again, but we have no idea what the chances are that the color is blue when someone says that it is not.
We could also improve upon even the two color estimate. Given a statistically significant number of cars of each color, we could have a statistically significant number of people view and categorize them. Then we could count how often people are right when they make each color choice and how often they are right for each color choice. Then we could estimate more accurately given people's actual choices.
You might ask how 90% could be wrong. Consider what happens if there are three colors: azure, blue, and sapphire. Someone might reasonably consider all three of these to be blue. But we want more. We want the exact shade. But who remembers the names of the other shades? Many might guess blue because it is the only matching shade they know. And still be wrong when it turns out to be azure. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
You need to develop some framework of estimation. Some questions you might ask are
How many colors are there? Are we talking two colors? Or all the colors of the rainbow?
How distinct are the c |
1,457 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | An exact, mathematical, true/false probability cannot be computed with the information you provide.
However, in real life such information is never available with certainty. Therefore, using our intuition (and where all my money would go if we were betting), the car is definitely blue. (some believe this is not statistics anymore, but well, black/white views of science are not very helpful)
The reasoning is simple. Assume the car is not blue. Then 90% of the people (!) were wrong. They could only be wrong because of a list of issues including:
color blindness
pathological lying
being under the influence of substances like alcohol, LSD, etc.
not understanding the question
other form of mental disorder
a combination of the above
Since the above is clearly not likely to affect 90% of an average random population (e.g. colour blindness affects around 8% of males and 0.6% of females, that is 43 people out of 1000), it is necessarily the case that the car is blue. (That is where all my money would go anyway). | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | An exact, mathematical, true/false probability cannot be computed with the information you provide.
However, in real life such information is never available with certainty. Therefore, using our intu | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
An exact, mathematical, true/false probability cannot be computed with the information you provide.
However, in real life such information is never available with certainty. Therefore, using our intuition (and where all my money would go if we were betting), the car is definitely blue. (some believe this is not statistics anymore, but well, black/white views of science are not very helpful)
The reasoning is simple. Assume the car is not blue. Then 90% of the people (!) were wrong. They could only be wrong because of a list of issues including:
color blindness
pathological lying
being under the influence of substances like alcohol, LSD, etc.
not understanding the question
other form of mental disorder
a combination of the above
Since the above is clearly not likely to affect 90% of an average random population (e.g. colour blindness affects around 8% of males and 0.6% of females, that is 43 people out of 1000), it is necessarily the case that the car is blue. (That is where all my money would go anyway). | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
An exact, mathematical, true/false probability cannot be computed with the information you provide.
However, in real life such information is never available with certainty. Therefore, using our intu |
1,458 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | I would not eat feces based on the fact that billions of flies can't be wrong. There might be dozens of other reasons why 900 people out of 1000 might have been tricked into thinking the car is blue. After all, that's the basis of magic tricks: luring people into thinking something removed from reality.
If 900 people out of 1000 see a magician stabbing his/her assistant, they will promptly answer that the assistant was stabbed, however improbable it is that a homicide happened on the stage.
A blue light on a reflective car paint, anyone? | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | I would not eat feces based on the fact that billion of flies can't be wrong. There might dozens of other reasons why 900 people out of 1000 might have been cheated to think the car is blue. After all | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
I would not eat feces based on the fact that billions of flies can't be wrong. There might be dozens of other reasons why 900 people out of 1000 might have been tricked into thinking the car is blue. After all, that's the basis of magic tricks: luring people into thinking something removed from reality.
If 900 people out of 1000 see a magician stabbing his/her assistant, they will promptly answer that the assistant was stabbed, however improbable it is that a homicide happened on the stage.
A blue light on a reflective car paint, anyone? | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
I would not eat feces based on the fact that billion of flies can't be wrong. There might dozens of other reasons why 900 people out of 1000 might have been cheated to think the car is blue. After all |
1,459 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The questionee knows too little about how the poll was carried out in order to answer the question accurately. As far as he's concerned, the poll can suffer from several problems:
The people taking the poll could have been biased:
The car looked blue because of an optical illusion.
The color of the car was for some reason difficult to observe, and the people had for some reason been shown a lot of blue cars before this one, making most of them believe this car was probably blue, too.
You had paid them to say that the car is blue.
You had someone hypnotize all of them into believing that the car is blue.
They had made a pact to lie and sabotage the poll.
There may have been correlations among the people taking the poll because of how they were selected or because they affected each other:
You accidentally carried out the poll at a mass meeting for people with the same kind of color blindness.
You carried out the poll at kindergartens; the girls were not interested in the car and most of the boys had blue as their favorite color, making them imagine that the car was blue.
The first person who was shown the car was drunk and thought it looked blue, shouted "IT IS BLUE", influencing everyone else into thinking that the car was blue.
So while the probability that the car is blue if the poll was completely correctly carried out is extremely high (as explained in Ruben van Bergen's answer), the reliability of the poll may have been compromised which makes the chance that the car is not blue not insignificant. How big the questionee estimates this chance to be ultimately depends on his estimations of how likely it is that circumstances have screwed with the poll and of how good you are at carrying out polls (and how mischievous he thinks you are). | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The questionee knows too little about how the poll was carried out in order to answer the question accurately. As far as he's concerned, the poll can suffer from several problems:
The people taking th | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The questionee knows too little about how the poll was carried out in order to answer the question accurately. As far as he's concerned, the poll can suffer from several problems:
The people taking the poll could have been biased:
The car looked blue because of an optical illusion.
The color of the car was for some reason difficult to observe, and the people had for some reason been shown a lot of blue cars before this one, making most of them believe this car was probably blue, too.
You had paid them to say that the car is blue.
You had someone hypnotize all of them into believing that the car is blue.
They had made a pact to lie and sabotage the poll.
There may have been correlations among the people taking the poll because of how they were selected or because they affected each other:
You accidentally carried out the poll at a mass meeting for people with the same kind of color blindness.
You carried out the poll at kindergartens; the girls were not interested in the car and most of the boys had blue as their favorite color, making them imagine that the car was blue.
The first person who was shown the car was drunk and thought it looked blue, shouted "IT IS BLUE", influencing everyone else into thinking that the car was blue.
So while the probability that the car is blue if the poll was completely correctly carried out is extremely high (as explained in Ruben van Bergen's answer), the reliability of the poll may have been compromised which makes the chance that the car is not blue not insignificant. How big the questionee estimates this chance to be ultimately depends on his estimations of how likely it is that circumstances have screwed with the poll and of how good you are at carrying out polls (and how mischievous he thinks you are). | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The questionee knows too little about how the poll was carried out in order to answer the question accurately. As far as he's concerned, the poll can suffer from several problems:
The people taking th |
1,460 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | What is the definition of "blue"?
Different cultures and languages have different notions of blue. IIRC, some cultures include green within their notion of blue!
Like any natural language word, you can only assume there is some cultural convention on when (and when not) to call things "blue".
Overall, color in language is surprisingly subjective (link from the comments below, thanks @Count Ibilis) | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | What is the definition of "blue"?
Different cultures and languages have different notions of blue. IIRC, some cultures include green within their notion of blue!
Like any natural language word, you ca | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
What is the definition of "blue"?
Different cultures and languages have different notions of blue. IIRC, some cultures include green within their notion of blue!
Like any natural language word, you can only assume there is some cultural convention on when (and when not) to call things "blue".
Overall, color in language is surprisingly subjective (link from the comments below, thanks @Count Ibilis) | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
What is the definition of "blue"?
Different cultures and languages have different notions of blue. IIRC, some cultures include green within their notion of blue!
Like any natural language word, you ca |
1,461 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The likelihood could, depending on more refined preconditions, be several different values, but 99.995% is the one that makes the most sense to me.
We know, by definition, that the car is blue (that's 100%), but it is not well-specified what this actually means (that would get somewhat philosophical). I will assume something is blue in the sense of can-indeed-be-seen-as-blue.
We also know that 90% of test subjects reported it as blue.
We do not know what was asked or how the evaluation was done, and what lighting conditions the car was in. Being asked to name the color, some subjects might e.g. have said "greenish-blue" due to lighting conditions, and the assessor might not have counted that as "blue". The same people might have replied "yes" if the question had been "Is this blue?". I will assume that you did not intend to maliciously deceive your test subjects.
We know that the incidence of tritanopia is about 0.005%, which means that if the car could actually be seen as blue, then 99.995% of the test subjects indeed did see the color as blue. That, however, means that 9.995% of the test subjects did not report blue when they clearly saw blue. They were lying about what they saw. This is close to what your life experience tells you as well: people are not always being honest (but, unless there's a motive, they usually are).
Thus, the non-observing person can assume with overwhelming certitude that the car is blue. That would be 100%
Except... except if the non-observing person herself suffers from tritanopia, in which case she would not see the car as blue even though everybody else (or rather, 90% of them) says so. Here it gets philosophical again: If everybody else heard a tree fall, but I didn't, did it fall?
I daresay that the most reasonable, practical answer would be: If the non-observing person happens to be a tritanope (0.005% chance), then verifying whether the predicted color and the real color as-seen are the same would yield false. Thus, the likelihood is 99.995% rather than 100%.
Further, as a bonus, since we found out that 9.995% of the test subjects are liars, and it is known that all Cretans are liars, we can conclude that we are not in Crete! | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The likelihood could, depending on more refined preconditions, be several different values, but 99.995% is the one that makes the most sense to me.
We know, by definition, that the car is blue (that's | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The likelihood could, depending on more refined preconditions, be several different values, but 99.995% is the one that makes the most sense to me.
We know, by definition, that the car is blue (that's 100%), but it is not well-specified what this actually means (that would get somewhat philosophical). I will assume something is blue in the sense of can-indeed-be-seen-as-blue.
We also know that 90% of test subjects reported it as blue.
We do not know what was asked or how the evaluation was done, and what lighting conditions the car was in. Being asked to name the color, some subjects might e.g. have said "greenish-blue" due to lighting conditions, and the assessor might not have counted that as "blue". The same people might have replied "yes" if the question had been "Is this blue?". I will assume that you did not intend to maliciously deceive your test subjects.
We know that the incidence of tritanopia is about 0.005%, which means that if the car could actually be seen as blue, then 99.995% of the test subjects indeed did see the color as blue. That, however, means that 9.995% of the test subjects did not report blue when they clearly saw blue. They were lying about what they saw. This is close to what your life experience tells you as well: people are not always being honest (but, unless there's a motive, they usually are).
Thus, the non-observing person can assume with overwhelming certitude that the car is blue. That would be 100%
Except... except if the non-observing person herself suffers from tritanopia, in which case she would not see the car as blue even though everybody else (or rather, 90% of them) says so. Here it gets philosophical again: If everybody else heard a tree fall, but I didn't, did it fall?
I daresay that the most reasonable, practical answer would be: If the non-observing person happens to be a tritanope (0.005% chance), then verifying whether the predicted color and the real color as-seen are the same would yield false. Thus, the likelihood is 99.995% rather than 100%.
Further, as a bonus, since we found out that 9.995% of the test subjects are liars, and it is known that all Cretans are liars, we can conclude that we are not in Crete! | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The likelihood could, depending on more refined preconditions, be several different values, but 99.995% is the one that makes the most sense to me.
We know, by definition, that the car is blue (that's |
1,462 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | You have a blue car (by some objective scientific measure - it is
blue).
...
"What is the probability that the car is blue?"
It is 100% blue.
All they know is that 900 people said it was blue, and 100 did not. You know nothing more about these people (the 1000).
Using these numbers (without any context) is utter nonsense. It all boils down to personal interpretation of the question. We should not go down this path and use Wittgenstein's: "Wovon man nicht sprechen kann, darüber muss man schweigen." ("Whereof one cannot speak, thereof one must be silent.")
Imagine the following question for comparison:
All they know is that 0 people said it was blue, and 0 did not.
You know nothing more about these people (the 0).
This is basically the same (information less) problem, but it is much more clear that what we think of the color of the car is mostly (if not completely) circumstantial.
In the long run, when we get multiple associated questions, then we are able to start guessing answers to such incomplete questions. This is the same for the tit-for-tat algorithm that doesn't work for a single case, but it does work in the long run. In the same sense Wittgenstein came back from his earlier work with his Philosophical Investigations. We are able to answer these questions, but we need more information/trials/questions. It is a process. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | You have a blue car (by some objective scientific measure - it is
blue).
...
"What is the probability that the car is blue?"
It is 100% blue.
All they know is that 900 people said it was blue, an | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
You have a blue car (by some objective scientific measure - it is
blue).
...
"What is the probability that the car is blue?"
It is 100% blue.
All they know is that 900 people said it was blue, and 100 did not. You know nothing more about these people (the 1000).
Using these numbers (without any context) is utter nonsense. It all boils down to personal interpretation of the question. We should not go down this path and use Wittgenstein's: "Wovon man nicht sprechen kann, darüber muss man schweigen." ("Whereof one cannot speak, thereof one must be silent.")
Imagine the following question for comparison:
All they know is that 0 people said it was blue, and 0 did not.
You know nothing more about these people (the 0).
This is basically the same (information less) problem, but it is much more clear that what we think of the color of the car is mostly (if not completely) circumstantial.
In the long run, when we get multiple associated questions, then we are able to start guessing answers to such incomplete questions. This is the same for the tit-for-tat algorithm that doesn't work for a single case, but it does work in the long run. In the same sense Wittgenstein came back from his earlier work with his Philosophical Investigations. We are able to answer these questions, but we need more information/trials/questions. It is a process. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
You have a blue car (by some objective scientific measure - it is
blue).
...
"What is the probability that the car is blue?"
It is 100% blue.
All they know is that 900 people said it was blue, an |
1,463 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | If we assume the car is blue, then 100 out of 1,000 saying it's not blue implies an extreme sample bias of some kind. Perhaps you were sampling only colour-blind people. If we assume the car is not blue, then the sample bias is even worse. So all we can conclude from the data given is that the sample is very biased, and since we don't know how it was biased, we can't conclude anything about the colour of the car. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | If we assume the car is blue, then 100 out of 1,000 saying it's not blue implies an extreme sample bias of some kind. Perhaps you were sampling only colour-blind people. If we assume the car is not bl | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
If we assume the car is blue, then 100 out of 1,000 saying it's not blue implies an extreme sample bias of some kind. Perhaps you were sampling only colour-blind people. If we assume the car is not blue, then the sample bias is even worse. So all we can conclude from the data given is that the sample is very biased, and since we don't know how it was biased, we can't conclude anything about the colour of the car. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
If we assume the car is blue, then 100 out of 1,000 saying it's not blue implies an extreme sample bias of some kind. Perhaps you were sampling only colour-blind people. If we assume the car is not bl |
1,464 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | There have been some answers. I'm by no means a mathematics guru, but well, here is mine.
There can only be 4 possibilities:
case 1) Person says car is blue and is correct
case 2) Person says car is blue and is incorrect
case 3) Person says car is not blue and is correct
case 4) Person says car is not blue and is incorrect
From the question, you know that the sum of case 1 and case 4 is 900 people (90%), and the sum of case 2 and case 3 is 100 people (10%). However, here is the catch: what you don't know is the distribution within these 2 case pairs. Maybe the sum of case 1 and 4 is made up completely of case 1 (which means the car is blue), or perhaps the whole sum is made up of case 4 (which means the car is not blue). The same goes for the sum of case 2+3. So... What you need is to come up with some way to predict the distribution within the case sums. With no other indication in the question (nowhere does it say people are 80% certain to know their colors or anything like that) there is no way you can come up with a certain, definite answer.
Having said this... I do suspect the expected answer is something along the lines of:
P(Blue) = (case 1 + case 4) * 900 / 1000 = (1/4 + 1/4) * 900 / 1000 = 45 %
P(non-Blue) = (case 2 + case 3) * 100 / 1000 = (1/4 + 1/4) * 100 / 1000 = 5%
where remaining 50% is simply unknown, call it the error margin. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | There have been some answers. I'm by no means a mathematics guru, but well, here is mine.
There can only be 4 possibilities:
case 1) Persons says car is blue and is correct
case 2) Person says car is | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
There have been some answers. I'm by no means a mathematics guru, but well, here is mine.
There can only be 4 possibilities:
case 1) Person says car is blue and is correct
case 2) Person says car is blue and is incorrect
case 3) Person says car is not blue and is correct
case 4) Person says car is not blue and is incorrect
From the question, you know that the sum of case 1 and case 4 is 900 people (90%), and the sum of case 2 and case 3 is 100 people (10%). However, here is the catch: what you don't know is the distribution within these 2 case pairs. Maybe the sum of case 1 and 4 is made up completely of case 1 (which means the car is blue), or perhaps the whole sum is made up of case 4 (which means the car is not blue). The same goes for the sum of case 2+3. So... What you need is to come up with some way to predict the distribution within the case sums. With no other indication in the question (nowhere does it say people are 80% certain to know their colors or anything like that) there is no way you can come up with a certain, definite answer.
Having said this... I do suspect the expected answer is something along the lines of:
P(Blue) = (case 1 + case 4) * 900 / 1000 = (1/4 + 1/4) * 900 / 1000 = 45 %
P(non-Blue) = (case 2 + case 3) * 100 / 1000 = (1/4 + 1/4) * 100 / 1000 = 5%
where remaining 50% is simply unknown, call it the error margin. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
There have been some answers. I'm by no means a mathematics guru, but well, here is mine.
There can only be 4 possibilities:
case 1) Persons says car is blue and is correct
case 2) Person says car is |
1,465 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | Let $X,Y_1,Y_2,\ldots,Y_{1000} \in \{0,1\}$ denote the true color, and the responses, respectively. "Blue" is coded as a $1$, and vice versa. Assume that $p(x)$ is Bernoulli with parameter $p_x$. Assume that each $Y_i|X=1$ is Bernoull with parameter $p_1$, and assume $Y_i|X=0$ is Bernoulli with parameter $p_0$. Also, pick a prior for the parameters $\theta = (p_x,p_0,p_1)$.
You're looking for $p(\theta,x|y_{1:1000}) \propto p(\theta)p(x|\theta)\prod_{i=1}^{1000}p(y_i|x)$. Formulating it like this highlights the fact that (at least if you're a Bayesian) you need to choose priors for these three parameters. The Bayesian viewpoint is nice because you could take advantage of what you know about how often cars are blue, and what you know about peoples' tendencies to agree with reality.
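As a minimal numerical sketch of this model (the specific Beta priors and the prior probability that the car is blue are my own illustrative choices, not part of the original formulation), one can integrate the Bernoulli likelihood against the priors analytically and compare the two values of $x$:
import numpy as np
from scipy.special import betaln

n_say_blue, n_say_not = 900, 100
prior_blue = 0.2        # assumed marginal P(X = 1), e.g. share of blue cars
a1, b1 = 9.0, 1.0       # assumed Beta prior for p1 = P(Y_i = 1 | X = 1)
a0, b0 = 1.0, 9.0       # assumed Beta prior for p0 = P(Y_i = 1 | X = 0)

# Log marginal likelihood of the 900/100 split under each value of X,
# assuming the Y_i are conditionally independent given X.
log_m1 = betaln(a1 + n_say_blue, b1 + n_say_not) - betaln(a1, b1)
log_m0 = betaln(a0 + n_say_blue, b0 + n_say_not) - betaln(a0, b0)

log_post = np.array([np.log(1 - prior_blue) + log_m0,
                     np.log(prior_blue) + log_m1])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"P(car is blue | responses) = {post[1]:.6f}")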
Also you can generalize this model. For example what if the car changes colors, or if you're looking at a sequence of cars (then you have a sequence $\{x_i\}$), or if people are equipped differently to evaluate car colors ($\{y_i|x\}$ are not identically distributed), or if people are basing their decisions on what other people are saying, etc. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | Let $X,Y_1,Y_2,\ldots,Y_{1000} \in \{0,1\}$ denote the true color, and the responses, respectively. "Blue" is coded as a $1$, and vice versa. Assume that $p(x)$ is Bernoulli with parameter $p_x$. Assu | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
Let $X,Y_1,Y_2,\ldots,Y_{1000} \in \{0,1\}$ denote the true color, and the responses, respectively. "Blue" is coded as a $1$, and vice versa. Assume that $p(x)$ is Bernoulli with parameter $p_x$. Assume that each $Y_i|X=1$ is Bernoulli with parameter $p_1$, and assume $Y_i|X=0$ is Bernoulli with parameter $p_0$. Also, pick a prior for the parameters $\theta = (p_x,p_0,p_1)$.
You're looking for $p(\theta,x|y_{1:1000}) \propto p(\theta)p(x|\theta)\prod_{i=1}^{1000}p(y_i|x)$. Formulating it like this highlights the fact that (at least if you're a Bayesian) you need to choose priors for these three parameters. The Bayesian viewpoint is nice because you could take advantage of what you know about how often cars are blue, and what you know about peoples' tendencies to agree with reality.
Also you can generalize this model. For example what if the car changes colors, or if you're looking at a sequence of cars (then you have a sequence $\{x_i\}$), or if people are equipped differently to evaluate car colors ($\{y_i|x\}$ are not identically distributed), or if people are basing their decisions on what other people are saying, etc. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
Let $X,Y_1,Y_2,\ldots,Y_{1000} \in \{0,1\}$ denote the true color, and the responses, respectively. "Blue" is coded as a $1$, and vice versa. Assume that $p(x)$ is Bernoulli with parameter $p_x$. Assu |
1,466 | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The person that cannot see the car does not know it is scientifically proven to be blue. The probability to he/she that the car is blue is 50/50 (it is blue, or it isn't). Polling other people may influence this person's opinion but it does not change the probability that an unseen car is either blue, or not.
All of the above math determines the probability that your sample set can determine if it is blue. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue? | The person that cannot see the car does not know it is scientifically proven to be blue. The probability to he/she that the car is blue is 50/50 (it is blue, or it isn't). Polling other people may i | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The person that cannot see the car does not know it is scientifically proven to be blue. The probability for him/her that the car is blue is 50/50 (it is blue, or it isn't). Polling other people may influence this person's opinion but it does not change the probability that an unseen car is either blue, or not.
All of the above math determines the probability that your sample set can determine if it is blue. | If 900 out of 1000 people say a car is blue, what is the probability that it is blue?
The person that cannot see the car does not know it is scientifically proven to be blue. The probability to he/she that the car is blue is 50/50 (it is blue, or it isn't). Polling other people may i |
1,467 | ASA discusses limitations of $p$-values - what are the alternatives? | I will focus this answer on the specific question of what are the alternatives to $p$-values.
There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naomi Altman, Douglas Altman,
Daniel J. Benjamin, Yoav Benjamini, Jim Berger, Don Berry, John Carlin, George Cobb, Andrew Gelman, Steve Goodman, Sander Greenland, John Ioannidis, Joseph Horowitz, Valen
Johnson, Michael Lavine, Michael Lew, Rod Little, Deborah Mayo, Michele Millar, Charles
Poole, Ken Rothman, Stephen Senn, Dalene Stangl, Philip Stark and Steve Ziliak (some of them wrote together; I list all for future searches). These people probably cover all existing opinions about $p$-values and statistical inference.
I have looked through all 21 papers.
Unfortunately, most of them do not discuss any real alternatives, even though the majority are about the limitations, misunderstandings, and various other problems with $p$-values (for a defense of $p$-values, see Benjamini, Mayo, and Senn). This already suggests that alternatives, if any, are not easy to find and/or to defend.
So let us look at the list of "other approaches" given in the ASA statement itself (as quoted in your question):
[Other approaches] include methods that
emphasize estimation over testing, such as confidence, credibility, or prediction intervals;
Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors;
and other approaches such as decision-theoretic modeling and false discovery rates.
Confidence intervals
Confidence intervals are a frequentist tool that goes hand-in-hand with $p$-values; reporting a confidence interval (or some equivalent, e.g., mean $\pm$ standard error of the mean) together with the $p$-value is almost always a good idea.
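For instance, here is a small Python sketch (with simulated data, purely for illustration) of reporting an interval estimate alongside the $p$-value rather than the $p$-value alone:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.5, scale=1.0, size=30)   # hypothetical group A
b = rng.normal(loc=0.0, scale=1.0, size=30)   # hypothetical group B

t, p = stats.ttest_ind(a, b)                  # equal-variance two-sample t-test

# 95% CI for the mean difference, consistent with the pooled-variance test
diff = a.mean() - b.mean()
df = len(a) + len(b) - 2
sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
half_width = stats.t.ppf(0.975, df) * se
print(f"difference = {diff:.2f}, "
      f"95% CI = [{diff - half_width:.2f}, {diff + half_width:.2f}], p = {p:.4f}")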
Some people (not among the ASA disputants) suggest that confidence intervals should replace the $p$-values. One of the most outspoken proponents of this approach is Geoff Cumming who calls it new statistics (a name that I find appalling). See e.g. this blog post by Ulrich Schimmack for a detailed critique: A Critical Review of Cumming’s (2014) New Statistics: Reselling Old Statistics as New Statistics. See also We cannot afford to study effect size in the lab blog post by Uri Simonsohn for a related point.
See also this thread (and my answer therein) about the similar suggestion by Norm Matloff where I argue that when reporting CIs one would still like to have the $p$-values reported as well: What is a good, convincing example in which p-values are useful?
Some other people (not among the ASA disputants either), however, argue that confidence intervals, being a frequentist tool, are as misguided as $p$-values and should also be disposed of. See, e.g., Morey et al. 2015, The Fallacy of Placing Confidence in Confidence Intervals linked by @Tim here in the comments. This is a very old debate.
Bayesian methods
(I don't like how the ASA statement formulates the list. Credible intervals and Bayes factors are listed separately from "Bayesian methods", but they are obviously Bayesian tools. So I count them together here.)
There is a huge and very opinionated literature on the Bayesian vs. frequentist debate. See, e.g., this recent thread for some thoughts: When (if ever) is a frequentist approach substantively better than a Bayesian? Bayesian analysis makes total sense if one has good informative priors, and everybody would be only happy to compute and report $p(\theta|\text{data})$ or $p(H_0:\theta=0|\text{data})$ instead of $p(\text{data at least as extreme}|H_0)$—but alas, people usually do not have good priors. An experimenter records 20 rats doing something in one condition and 20 rats doing the same thing in another condition; the prediction is that the performance of the former rats will exceed the performance of the latter rats, but nobody would be willing or indeed able to state a clear prior over the performance differences. (But see @FrankHarrell's answer where he advocates using "skeptical priors".)
Die-hard Bayesians suggest to use Bayesian methods even if one does not have any informative priors. One recent example is Krushke, 2012, Bayesian estimation supersedes the $t$-test, humbly abbreviated as BEST. The idea is to use a Bayesian model with weak uninformative priors to compute the posterior for the effect of interest (such as, e.g., a group difference). The practical difference with frequentist reasoning seems usually to be minor, and as far as I can see this approach remains unpopular. See What is an "uninformative prior"? Can we ever have one with truly no information? for the discussion of what is "uninformative" (answer: there is no such thing, hence the controversy).
An alternative approach, going back to Harold Jeffreys, is based on Bayesian testing (as opposed to Bayesian estimation) and uses Bayes factors. One of the more eloquent and prolific proponents is Eric-Jan Wagenmakers, who has published a lot on this topic in recent years. Two features of this approach are worth emphasizing here. First, see Wetzels et al., 2012, A Default Bayesian Hypothesis Test for ANOVA Designs for an illustration of just how strongly the outcome of such a Bayesian test can depend on the specific choice of the alternative hypothesis $H_1$ and the parameter distribution ("prior") it posits. Second, once a "reasonable" prior is chosen (Wagenmakers advertises Jeffreys' so called "default" priors), resulting Bayes factors often turn out to be quite consistent with the standard $p$-values, see e.g. this figure from this preprint by Marsman & Wagenmakers:
So while Wagenmakers et al. keep insisting that $p$-values are deeply flawed and Bayes factors are the way to go, one cannot but wonder... (To be fair, the point of Wetzels et al. 2011 is that for $p$-values close to $0.05$ Bayes factors only indicate very weak evidence against the null; but note that this can be easily dealt with in a frequentist paradigm simply by using a more stringent $\alpha$, something that a lot of people are advocating anyway.)
One of the more popular papers by Wagenmakers et al. in the defense of Bayes factors is 2011, Why psychologists must change the way they analyze their data: The case of psi where he argues that infamous Bem's paper on predicting the future would not have reached their faulty conclusions if only they had used Bayes factors instead of $p$-values. See this thoughtful blog post by Ulrich Schimmack for a detailed (and IMHO convincing) counter-argument: Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior.
See also The Default Bayesian Test is Prejudiced Against Small Effects blog post by Uri Simonsohn.
For completeness, I mention that Wagenmakers 2007, A practical solution to the pervasive
problems of $p$-values suggested to use BIC as an approximation to Bayes factor to replace the $p$-values. BIC does not depend on the prior and hence, despite its name, is not really Bayesian; I am not sure what to think about this proposal. It seems that more recently Wagenmakers is more in favour of Bayesian tests with uninformative Jeffreys' priors, see above.
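For what it is worth, here is a rough Python sketch of how such a BIC-based approximation is usually applied (the toy data and models are my own; the approximation takes the Bayes factor in favour of $H_0$ as roughly $\exp((\mathrm{BIC}_1-\mathrm{BIC}_0)/2)$, as I understand the proposal):
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)          # weak simulated effect

def bic_from_rss(rss, n, k):
    # Gaussian-likelihood BIC up to an additive constant shared by both models
    return n * np.log(rss / n) + k * np.log(n)

rss0 = np.sum((y - y.mean()) ** 2)        # H0: intercept only
beta = np.polyfit(x, y, 1)                # H1: slope + intercept
rss1 = np.sum((y - np.polyval(beta, x)) ** 2)

bic0 = bic_from_rss(rss0, n, k=1)
bic1 = bic_from_rss(rss1, n, k=2)
bf01 = np.exp((bic1 - bic0) / 2)          # approximate Bayes factor for H0
print(f"approximate BF_01 = {bf01:.2f} (values below 1 favour H1)")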
For further discussion of Bayes estimation vs. Bayesian testing, see Bayesian parameter estimation or Bayesian hypothesis testing? and links therein.
Minimum Bayes factors
Among the ASA disputants, this is explicitly suggested by Benjamin & Berger and by Valen Johnson (the only two papers that are all about suggesting a concrete alternative). Their specific suggestions are a bit different but they are similar in spirit.
The ideas of Berger go back to the Berger & Sellke 1987 and there is a number of papers by Berger, Sellke, and collaborators up until last year elaborating on this work. The idea is that under a spike and slab prior where point null $\mu=0$ hypothesis gets probability $0.5$ and all other values of $\mu$ get probability $0.5$ spread symmetrically around $0$ ("local alternative"), then the minimal posterior $p(H_0)$ over all local alternatives, i.e. the minimal Bayes factor, is much higher than the $p$-value. This is the basis of the (much contested) claim that $p$-values "overstate the evidence" against the null. The suggestion is to use a lower bound on Bayes factor in favour of the null instead of the $p$-value; under some broad assumptions this lower bound turns out to be given by $-ep\log(p)$, i.e., the $p$-value is effectively multiplied by $-e\log(p)$ which is a factor of around $10$ to $20$ for the common range of $p$-values. This approach has been endorsed by Steven Goodman too.
Later update: See a nice cartoon explaining these ideas in a simple way.
Even later update: See Held & Ott, 2018, On $p$-Values and Bayes Factors for a comprehensive review and further analysis of converting $p$-values to minimum Bayes factors. Here is one table from there:
Valen Johnson suggested something similar in his PNAS 2013 paper; his suggestion approximately boils down to multiplying $p$-values by $\sqrt{-4\pi\log(p)}$ which is around $5$ to $10$.
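To get a feel for the size of these corrections, here is a tiny Python sketch evaluating both formulas as quoted above for a few $p$-values (note that the $-ep\log(p)$ bound only makes sense for $p<1/e$, and the two proposals are not interpreted in exactly the same way):
import numpy as np

def min_bf_bound(p):
    # Lower bound on the Bayes factor in favour of the null: -e * p * ln(p)
    return -np.e * p * np.log(p)

def johnson_adjusted(p):
    # p-value multiplied by sqrt(-4*pi*ln(p)), as described above
    return p * np.sqrt(-4 * np.pi * np.log(p))

for p in [0.05, 0.01, 0.005, 0.001]:
    print(f"p = {p:<6}  -e*p*log(p) = {min_bf_bound(p):.3f}  "
          f"p*sqrt(-4*pi*log(p)) = {johnson_adjusted(p):.3f}")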
For a brief critique of Johnson's paper, see Andrew Gelman's and @Xi'an's reply in PNAS. For the counter-argument to Berger & Sellke 1987, see Casella & Berger 1987 (different Berger!). Among the ASA discussion papers, Stephen Senn argues explicitly against any of these approaches:
Error probabilities are not posterior probabilities. Certainly, there is much more to statistical analysis than $P$-values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities.
See also references in Senn's paper, including the ones to Mayo's blog.
The ASA statement lists "decision-theoretic modeling and false discovery rates" as another alternative. I have no idea what they are talking about, and I was happy to see this stated in the discussion paper by Stark:
The "other approaches" section ignores the fact that the assumptions of
some of those methods are identical to those of $p$-values. Indeed, some of
the methods use $p$-values as input (e.g., the False Discovery Rate).
I am highly skeptical that there is anything that can replace $p$-values in actual scientific practice such that the problems that are often associated with $p$-values (replication crisis, $p$-hacking, etc.) would go away. Any fixed decision procedure, e.g. a Bayesian one, can probably be "hacked" in the same way as $p$-values can be $p$-hacked (for some discussion and demonstration of this see this 2014 blog post by Uri Simonsohn).
To quote from Andrew Gelman's discussion paper:
In summary, I agree with most of the ASA’s statement on $p$-values but I feel that the problems
are deeper, and that the solution is not to reform $p$-values or to replace them with some other
statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty
and embracing of variation.
And from Stephen Senn:
In short, the problem is less with $P$-values per se but with making an idol of them. Substituting another false god will not help.
And here is how Cohen put it into his well-known and highly-cited (3.5k citations) 1994 paper The Earth is round ($p<0.05$) where he argued very strongly against $p$-values:
[...] don't look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn't exist. | ASA discusses limitations of $p$-values - what are the alternatives? | I will focus this answer on the specific question of what are the alternatives to $p$-values.
There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naom | ASA discusses limitations of $p$-values - what are the alternatives?
I will focus this answer on the specific question of what are the alternatives to $p$-values.
There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naomi Altman, Douglas Altman,
Daniel J. Benjamin, Yoav Benjamini, Jim Berger, Don Berry, John Carlin, George Cobb, Andrew Gelman, Steve Goodman, Sander Greenland, John Ioannidis, Joseph Horowitz, Valen
Johnson, Michael Lavine, Michael Lew, Rod Little, Deborah Mayo, Michele Millar, Charles
Poole, Ken Rothman, Stephen Senn, Dalene Stangl, Philip Stark and Steve Ziliak (some of them wrote together; I list all for future searches). These people probably cover all existing opinions about $p$-values and statistical inference.
I have looked through all 21 papers.
Unfortunately, most of them do not discuss any real alternatives, even though the majority are about the limitations, misunderstandings, and various other problems with $p$-values (for a defense of $p$-values, see Benjamini, Mayo, and Senn). This already suggests that alternatives, if any, are not easy to find and/or to defend.
So let us look at the list of "other approaches" given in the ASA statement itself (as quoted in your question):
[Other approaches] include methods that
emphasize estimation over testing, such as confidence, credibility, or prediction intervals;
Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors;
and other approaches such as decision-theoretic modeling and false discovery rates.
Confidence intervals
Confidence intervals are a frequentist tool that goes hand-in-hand with $p$-values; reporting a confidence interval (or some equivalent, e.g., mean $\pm$ standard error of the mean) together with the $p$-value is almost always a good idea.
Some people (not among the ASA disputants) suggest that confidence intervals should replace the $p$-values. One of the most outspoken proponents of this approach is Geoff Cumming who calls it new statistics (a name that I find appalling). See e.g. this blog post by Ulrich Schimmack for a detailed critique: A Critical Review of Cumming’s (2014) New Statistics: Reselling Old Statistics as New Statistics. See also We cannot afford to study effect size in the lab blog post by Uri Simonsohn for a related point.
See also this thread (and my answer therein) about the similar suggestion by Norm Matloff where I argue that when reporting CIs one would still like to have the $p$-values reported as well: What is a good, convincing example in which p-values are useful?
Some other people (not among the ASA disputants either), however, argue that confidence intervals, being a frequentist tool, are as misguided as $p$-values and should also be disposed of. See, e.g., Morey et al. 2015, The Fallacy of Placing Confidence in Confidence Intervals linked by @Tim here in the comments. This is a very old debate.
Bayesian methods
(I don't like how the ASA statement formulates the list. Credible intervals and Bayes factors are listed separately from "Bayesian methods", but they are obviously Bayesian tools. So I count them together here.)
There is a huge and very opinionated literature on the Bayesian vs. frequentist debate. See, e.g., this recent thread for some thoughts: When (if ever) is a frequentist approach substantively better than a Bayesian? Bayesian analysis makes total sense if one has good informative priors, and everybody would be only happy to compute and report $p(\theta|\text{data})$ or $p(H_0:\theta=0|\text{data})$ instead of $p(\text{data at least as extreme}|H_0)$—but alas, people usually do not have good priors. An experimenter records 20 rats doing something in one condition and 20 rats doing the same thing in another condition; the prediction is that the performance of the former rats will exceed the performance of the latter rats, but nobody would be willing or indeed able to state a clear prior over the performance differences. (But see @FrankHarrell's answer where he advocates using "skeptical priors".)
Die-hard Bayesians suggest to use Bayesian methods even if one does not have any informative priors. One recent example is Krushke, 2012, Bayesian estimation supersedes the $t$-test, humbly abbreviated as BEST. The idea is to use a Bayesian model with weak uninformative priors to compute the posterior for the effect of interest (such as, e.g., a group difference). The practical difference with frequentist reasoning seems usually to be minor, and as far as I can see this approach remains unpopular. See What is an "uninformative prior"? Can we ever have one with truly no information? for the discussion of what is "uninformative" (answer: there is no such thing, hence the controversy).
An alternative approach, going back to Harold Jeffreys, is based on Bayesian testing (as opposed to Bayesian estimation) and uses Bayes factors. One of the more eloquent and prolific proponents is Eric-Jan Wagenmakers, who has published a lot on this topic in recent years. Two features of this approach are worth emphasizing here. First, see Wetzels et al., 2012, A Default Bayesian Hypothesis Test for ANOVA Designs for an illustration of just how strongly the outcome of such a Bayesian test can depend on the specific choice of the alternative hypothesis $H_1$ and the parameter distribution ("prior") it posits. Second, once a "reasonable" prior is chosen (Wagenmakers advertises Jeffreys' so called "default" priors), resulting Bayes factors often turn out to be quite consistent with the standard $p$-values, see e.g. this figure from this preprint by Marsman & Wagenmakers:
So while Wagenmakers et al. keep insisting that $p$-values are deeply flawed and Bayes factors are the way to go, one cannot but wonder... (To be fair, the point of Wetzels et al. 2011 is that for $p$-values close to $0.05$ Bayes factors only indicate very weak evidence against the null; but note that this can be easily dealt with in a frequentist paradigm simply by using a more stringent $\alpha$, something that a lot of people are advocating anyway.)
One of the more popular papers by Wagenmakers et al. in the defense of Bayes factors is 2011, Why psychologists must change the way they analyze their data: The case of psi where he argues that infamous Bem's paper on predicting the future would not have reached their faulty conclusions if only they had used Bayes factors instead of $p$-values. See this thoughtful blog post by Ulrich Schimmack for a detailed (and IMHO convincing) counter-argument: Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior.
See also The Default Bayesian Test is Prejudiced Against Small Effects blog post by Uri Simonsohn.
For completeness, I mention that Wagenmakers 2007, A practical solution to the pervasive
problems of $p$-values suggested to use BIC as an approximation to Bayes factor to replace the $p$-values. BIC does not depend on the prior and hence, despite its name, is not really Bayesian; I am not sure what to think about this proposal. It seems that more recently Wagenmakers is more in favour of Bayesian tests with uninformative Jeffreys' priors, see above.
For further discussion of Bayes estimation vs. Bayesian testing, see Bayesian parameter estimation or Bayesian hypothesis testing? and links therein.
Minimum Bayes factors
Among the ASA disputants, this is explicitly suggested by Benjamin & Berger and by Valen Johnson (the only two papers that are all about suggesting a concrete alternative). Their specific suggestions are a bit different but they are similar in spirit.
The ideas of Berger go back to the Berger & Sellke 1987 and there is a number of papers by Berger, Sellke, and collaborators up until last year elaborating on this work. The idea is that under a spike and slab prior where point null $\mu=0$ hypothesis gets probability $0.5$ and all other values of $\mu$ get probability $0.5$ spread symmetrically around $0$ ("local alternative"), then the minimal posterior $p(H_0)$ over all local alternatives, i.e. the minimal Bayes factor, is much higher than the $p$-value. This is the basis of the (much contested) claim that $p$-values "overstate the evidence" against the null. The suggestion is to use a lower bound on Bayes factor in favour of the null instead of the $p$-value; under some broad assumptions this lower bound turns out to be given by $-ep\log(p)$, i.e., the $p$-value is effectively multiplied by $-e\log(p)$ which is a factor of around $10$ to $20$ for the common range of $p$-values. This approach has been endorsed by Steven Goodman too.
Later update: See a nice cartoon explaining these ideas in a simple way.
Even later update: See Held & Ott, 2018, On $p$-Values and Bayes Factors for a comprehensive review and further analysis of converting $p$-values to minimum Bayes factors. Here is one table from there:
Valen Johnson suggested something similar in his PNAS 2013 paper; his suggestion approximately boils down to multiplying $p$-values by $\sqrt{-4\pi\log(p)}$ which is around $5$ to $10$.
For a brief critique of Johnson's paper, see Andrew Gelman's and @Xi'an's reply in PNAS. For the counter-argument to Berger & Sellke 1987, see Casella & Berger 1987 (different Berger!). Among the ASA discussion papers, Stephen Senn argues explicitly against any of these approaches:
Error probabilities are not posterior probabilities. Certainly, there is much more to statistical analysis than $P$-values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities.
See also references in Senn's paper, including the ones to Mayo's blog.
The ASA statement lists "decision-theoretic modeling and false discovery rates" as another alternative. I have no idea what they are talking about, and I was happy to see this stated in the discussion paper by Stark:
The "other approaches" section ignores the fact that the assumptions of
some of those methods are identical to those of $p$-values. Indeed, some of
the methods use $p$-values as input (e.g., the False Discovery Rate).
I am highly skeptical that there is anything that can replace $p$-values in actual scientific practice such that the problems that are often associated with $p$-values (replication crisis, $p$-hacking, etc.) would go away. Any fixed decision procedure, e.g. a Bayesian one, can probably be "hacked" in the same way as $p$-values can be $p$-hacked (for some discussion and demonstration of this see this 2014 blog post by Uri Simonsohn).
To quote from Andrew Gelman's discussion paper:
In summary, I agree with most of the ASA’s statement on $p$-values but I feel that the problems
are deeper, and that the solution is not to reform $p$-values or to replace them with some other
statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty
and embracing of variation.
And from Stephen Senn:
In short, the problem is less with $P$-values per se but with making an idol of them. Substituting another false god will not help.
And here is how Cohen put it into his well-known and highly-cited (3.5k citations) 1994 paper The Earth is round ($p<0.05$) where he argued very strongly against $p$-values:
[...] don't look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn't exist. | ASA discusses limitations of $p$-values - what are the alternatives?
I will focus this answer on the specific question of what are the alternatives to $p$-values.
There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naom |
1,468 | ASA discusses limitations of $p$-values - what are the alternatives? | Here is my two cents.
I think that at some point, many applied scientists stated the following "theorem":
Theorem 1: $p\text{-value}<0.05\Leftrightarrow \text{my hypothesis is true}.$
and most of the bad practices come from here.
The $p$-value and scientific induction
I used to work with people using statistics without really understanding it and here is some of the stuff I see:
running many possible tests/reparametrisations (without looking once at the distribution of the data) until finding the "good" one: the one giving $p<0.05$;
trying different preprocessing (e.g. in medical imaging) to get the data to analyse until getting the one giving $p<0.05$;
reaching $p<0.05$ by applying a one-tailed t-test in the positive direction for the data with a positive effect and in the negative direction for the data with a negative effect (!!).
All of this is done by well-versed, honest scientists who have no strong sense of cheating. Why? IMHO, because of Theorem 1.
At a given moment, applied scientists may believe strongly in their hypothesis. I even suspect that they believe they know it is true, and the fact is that in many situations they have seen the data for years, have thought about them while working, walking, sleeping... and they are the best placed to say something about the answer to this question. The fact is, in their mind (sorry, I realize I sound a bit arrogant here), by Theorem 1, if the hypothesis is true then the $p$-value must be lower than $0.05$; no matter what the amount of data is, how they are distributed, the alternative hypothesis, the effect size, or the quality of the data acquisition. If the $p$-value is not $<0.05$ and the hypothesis is true, then something is not correct: the preprocessing, the choice of test, the distribution, the acquisition protocol... so we change them... $p$-value $<0.05$ is just the ultimate key of scientific induction.
To this point, I agree with the two previous answers that confidence intervals or credible intervals make the statistical answer better suited to the discussion and to the interpretation. While the $p$-value is difficult to interpret (IMHO) and ends the discussion, interval estimates can serve a scientific induction that is illustrated by objective statistics but led by expert arguments.
The $p$-value and the alternative hypothesis
Another consequence of Theorem 1 is the belief that if the $p$-value is $>0.05$, then the alternative hypothesis is false. Again, this is something I have encountered many times:
comparing two groups (just because we have the data) with a null hypothesis of the type $H_0: \mu_1 = \mu_2$: take 10 random data points from each of the two groups, compute the $p$-value for $H_0$, find $p=0.2$, and conclude that in some part of the brain there is no difference between the two groups.
A main issue with the $p$-value is that the alternative is never mentioned, while I think in many cases this could help a lot. A typical example is the comparison above (point 4), where I proposed to a colleague to compute the posterior ratio of $p(\mu_1>\mu_2|x)$ vs. $p(\mu_1<\mu_2|x)$ and got something like 3 (I know this figure is ridiculously low). The researcher asked me whether this means that the probability that $\mu_1>\mu_2$ is 3 times that of $\mu_2>\mu_1$. I answered that this is one way to interpret it, and she found this amazing, said she should look at more data and write a paper... My point is not that this "3" helps her understand that there is something in the data (again, 3 is clearly anecdotal), but that it underlines that she misinterprets the $p$-value as "$p$-value $>0.05$ means nothing interesting / equivalent groups". So in my opinion, always discussing at least one alternative hypothesis (or several!) is mandatory; it helps avoid oversimplification and gives elements for debate.
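To make the anecdote concrete, here is a minimal sketch of how such a posterior ratio can be approximated (hypothetical data, a normal likelihood with vague priors, and plain Monte Carlo; the numbers are placeholders, not my colleague's data):

```python
# Sketch: posterior probability that mu1 > mu2 under a normal model with
# vague priors, approximated by sampling from the two posterior t distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(0.3, 1.0, size=10)   # hypothetical group 1
x2 = rng.normal(0.0, 1.0, size=10)   # hypothetical group 2

def posterior_mean_draws(x, n_draws=100_000):
    # With a flat prior on the mean and a Jeffreys prior on the variance,
    # the posterior of the mean is a scaled, shifted Student-t.
    n = len(x)
    t = stats.t.rvs(df=n - 1, size=n_draws, random_state=rng)
    return x.mean() + t * x.std(ddof=1) / np.sqrt(n)

mu1, mu2 = posterior_mean_draws(x1), posterior_mean_draws(x2)
p_greater = np.mean(mu1 > mu2)
print("P(mu1 > mu2 | x) =", p_greater)
print("posterior odds   =", p_greater / (1 - p_greater))
```

The output is a direct probability statement about the alternative, which is exactly what makes it easier to discuss than a bare $p$-value.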
Another related case is when experts want to:
test $\mu_1>\mu_2>\mu_3$. To do so, they test and reject $\mu_1=\mu_2=\mu_3$, then conclude $\mu_1>\mu_2>\mu_3$ from the fact that the ML estimates are ordered.
Mentioning the alternative hypothesis is the only way to address this case properly.
So using posterior odds, Bayes factors, or likelihood ratios jointly with confidence/credible intervals seems to reduce the main issues involved.
The common misinterpretation of $p$-value / confidence intervals is a relatively minor flaw (in practice)
While I am a Bayesian enthusiast, I really think that the common misinterpretations of the $p$-value and the CI (i.e. the $p$-value is not the probability that the null hypothesis is false, and the CI is not an interval that contains the parameter value with 95% probability) are not the main concern for this question (though I am sure this is a major point from a philosophical point of view). The Bayesian and frequentist views both have pertinent answers to help practitioners in this "crisis".
My two cents conclusion
Using credible intervals and Bayes factors or posterior odds is what I try to do in my practice with experts (but I am also enthusiastic about CIs + likelihood ratios). I came to statistics a few years ago, mainly by self-study from the web (so many thanks to Cross Validated!), and so grew up amid the numerous agitations around $p$-values. I do not know whether my practice is a good one, but it is what I pragmatically find to be a good compromise between being efficient and doing my job properly.
1,469 | ASA discusses limitations of $p$-values - what are the alternatives? | The only reasons I continue to use $P$-values are
More software is available for frequentist methods than Bayesian methods.
Currently, some Bayesian analyses take a long time to run.
Bayesian methods require more thinking and more time investment. I don't mind the thinking part but time is often short so we take shortcuts.
The bootstrap is a highly flexible and useful everyday technique that is more connected to the frequentist world than to the Bayesian.
$P$-values, analogous to highly problematic sensitivity and specificity as accuracy measures, are highly deficient in my humble opinion. The problem with all three of these measures is that they reverse the flow of time and information. When you turn a question from "what is the probability of getting evidence like this if the defendant is innocent" to "what is the probability of guilt of the defendant based on the evidence", things become more coherent and less arbitrary. Reasoning in reverse time makes you have to consider "how did we get here?" as opposed to "what is the evidence now?". $P$-values require consideration of what could have happened instead of what did happen. What could have happened makes one have to do arbitrary multiplicity adjustments, even adjusting for data looks that might have made an impact but actually didn't.
When $P$-values are coupled with highly arbitrary decision thresholds, things get worse. Thresholds almost always invite gaming.
Except for Gaussian linear models and the exponential distribution, almost everything we do with frequentist inference is approximate (a good example is the binary logistic model which causes problems because its log likelihood function is very non-quadratic). With Bayesian inference, everything is exact to within simulation error (and you can always do more simulations to get posterior probabilities/credible intervals).
I've written a more detailed accounting of my thinking and evolution at http://www.fharrell.com/2017/02/my-journey-from-frequentist-to-bayesian.html
1,470 | ASA discusses limitations of $p$-values - what are the alternatives? | In this thread, there is already a good amount of illuminating discussion on this subject. But let me ask you: "Alternatives to what exactly?" The damning thing about p-values is that they're forced to live between two worlds: decision theoretic inference and distribution free statistics. If you are looking for an alternative to "p<0.05" as a decision theoretic rule to dichotomize studies as positive/negative or significant/non-significant then I tell you: the premise of the question is flawed. You can contrive and find many branded alternatives to $p$-value based inference which have the exact same logical shortcomings.
I'll point out that the way we conduct modern testing in no way agrees with the theory and perspectives of Fisher and Neyman-Pearson who both contributed greatly to modern methods. Fisher's original suggestion was that scientists should qualitatively compare the $p$-value to the power of the study and draw conclusions there. I still think this is an adequate approach, which leaves the question of scientific applicability of the findings in the hands of those content experts. Now, the error we find in modern applications is in no way a fault of statistics as a science. Also at play is fishing, extrapolation, and exaggeration. Indeed, if (say) a cardiologist should lie and claim that a drug which lowers average blood pressure 0.1mmHg is "clinically significant" no statistics will ever protect us from that kind of dishonesty.
We need an end to decision theoretic statistical inference. We should endeavor to think beyond the hypothesis. The growing gap between the clinical utility and hypothesis driven investigation compromises scientific integrity. The "significant" study is extremely suggestive but rarely promises any clinically meaningful findings.
This is evident if we inspect the attributes of hypothesis driven inference:
The null hypothesis stated is contrived, does not agree with current knowledge, and defies reason or expectation.
Hypotheses may be tangential to the point the author is trying to make. Statistics rarely align with much of the ensuing discussion in articles, with authors making far-reaching claims that, for instance, their observational study has implications for public policy and outreach.
Hypotheses tend to be incomplete in the sense that they do not adequately define the population of interest, and they tend to lead to overgeneralization.
To me, the alternative is a meta-analytic approach, at least a qualitative one. All results should be rigorously vetted against other "similar" findings, and differences described very carefully, especially inclusion/exclusion criteria, units or scales used for exposures/outcomes, as well as effect sizes and uncertainty intervals (which are best summarized with 95% CIs).
We also need to conduct independent confirmatory trials. Many people are swayed by one seemingly significant trial, but without replication we cannot trust that the study was done ethically. Many have made scientific careers out of falsification of evidence.
1,471 | ASA discusses limitations of $p$-values - what are the alternatives? | A brilliant forecaster, Scott Armstrong from Wharton, published an article almost 10 years ago titled Significance Tests Harm Progress in Forecasting in the International Journal of Forecasting, a journal that he co-founded. Even though this is about forecasting, it could be generalized to any data analysis or decision making. In the article he states that:
"tests of statistical significance harms scientific progress. Efforts
to find exceptions to this conclusion have, to date, turned up none."
This is an excellent read for anyone interested in an antithetical view of significance testing and P values.
The reason I like this article is that Armstrong provides alternatives to significance testing that are succinct and easily understood, especially for a non-statistician like me. In my opinion, this is much better than the ASA article cited in the question:
All of which I continue to embrace, and I have since stopped using significance testing or looking at P values, except when I do randomized experimental studies or quasi-experiments. I must add that randomized experiments are very rare in practice, except in the pharmaceutical industry/life sciences and in some fields of engineering.
1,472 | ASA discusses limitations of $p$-values - what are the alternatives? | What is preferred and why must depend on the field of study. About 30 years ago articles started appearing in medical journals suggesting that $p$-values should be replaced by estimates with confidence intervals. The basic reasoning was that $p$-values just tell you the effect was there whereas the estimate with its confidence interval tells you how big it was and how precisely it has been estimated. The confidence interval is particularly important when the $p$-value fails to reach the conventional level of significance because it enables the reader to tell whether this is likely due to there genuinely being no difference or the study being inadequate to find a clinically meaningful difference.
Two references from the medical literature are (1) Langman, M. J. S., Towards estimation and confidence intervals,
and (2) Gardner, M. J. and Altman, D. G., Confidence intervals rather than P values: estimation rather than hypothesis testing.
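As a minimal sketch of the reporting style these papers advocate (hypothetical data; a Welch $t$-test reported together with the estimated difference and its 95% confidence interval, computed by hand from the $t$ distribution):

```python
# Sketch: report the estimated difference and its 95% CI, not just the p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
drug = rng.normal(5.0, 8.0, size=25)      # hypothetical blood-pressure changes
placebo = rng.normal(0.0, 8.0, size=25)

diff = drug.mean() - placebo.mean()
se = np.sqrt(drug.var(ddof=1) / drug.size + placebo.var(ddof=1) / placebo.size)
# Welch-Satterthwaite degrees of freedom
df = se**4 / ((drug.var(ddof=1) / drug.size) ** 2 / (drug.size - 1)
              + (placebo.var(ddof=1) / placebo.size) ** 2 / (placebo.size - 1))
tcrit = stats.t.ppf(0.975, df)

t_stat, p_value = stats.ttest_ind(drug, placebo, equal_var=False)
print(f"difference = {diff:.1f}, "
      f"95% CI ({diff - tcrit * se:.1f}, {diff + tcrit * se:.1f}), p = {p_value:.3f}")
```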
1,473 | ASA discusses limitations of $p$-values - what are the alternatives? | Decision theoretic modeling is superior to $p$-values because it requires the researcher to
develop a more sophisticated model that is capable of simulating outcomes in a target population
identify and measure attributes of a target population in whom a proposed decision, treatment, or policy could be implemented
estimate by way of simulation an expected loss in raw units of a target quantity such as life years, quality-adjusted life years, dollars, crop output, etc., and to assess the uncertainty of that estimate.
By all means, this doesn't preclude the usual null hypothesis significance testing, but it underscores that statistically significant findings are very early, intermediary steps on the path to real discovery, and we should expect researchers to do much more with their findings.
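As a rough sketch of point 3 above (all inputs are hypothetical placeholders), a decision-theoretic summary propagates uncertainty about the effect into an expected gain or loss in the target units, rather than stopping at a significance statement:

```python
# Sketch: expected net benefit of a treatment in dollars per patient,
# propagating uncertainty in the effect estimate via Monte Carlo.
# All numbers below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_draws = 100_000

# Posterior (or bootstrap) uncertainty about the absolute risk reduction
risk_reduction = rng.normal(loc=0.04, scale=0.02, size=n_draws)
qaly_if_event_avoided = 1.5          # assumed QALY gain per event avoided
cost_per_patient = 1200.0            # assumed treatment cost (dollars)
value_per_qaly = 50_000.0            # assumed willingness to pay per QALY

net_benefit = risk_reduction * qaly_if_event_avoided * value_per_qaly - cost_per_patient
print(f"expected net benefit per patient: ${net_benefit.mean():,.0f}")
print(f"probability the treatment is net-harmful: {np.mean(net_benefit < 0):.2f}")
```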
1,474 | ASA discusses limitations of $p$-values - what are the alternatives? | There is a collection of alternatives in the special issue Statistical Inference in the 21st Century: A World Beyond p < 0.05 of The American Statistician that, I think, deserves special mention. I cannot possibly and won't try to list all the ideas about alternatives and/or additions to p-values that are given in the 43 papers of this special issue, but I warmly recommend reading the editorial Moving to a World Beyond "p < 0.05" to get a solid overview.
1,475 | ASA discusses limitations of $p$-values - what are the alternatives? | My choice would be to continue using p-values, but simply adding confidence/credible intervals, and possibly, for the primary outcomes, prediction intervals. There is a very nice book by Douglas Altman (Statistics with Confidence, Wiley), and thanks to bootstrap and MCMC approaches, you can always build reasonably robust intervals.
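A minimal sketch of the bootstrap idea mentioned above (hypothetical data; a simple percentile interval for a difference in means):

```python
# Sketch: percentile bootstrap confidence interval for a difference in means.
import numpy as np

rng = np.random.default_rng(2)
treated = rng.normal(1.2, 2.0, size=40)   # hypothetical outcomes
control = rng.normal(0.5, 2.0, size=40)

boot = []
for _ in range(10_000):
    t = rng.choice(treated, size=treated.size, replace=True)
    c = rng.choice(control, size=control.size, replace=True)
    boot.append(t.mean() - c.mean())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated difference: {treated.mean() - control.mean():.2f}")
print(f"95% bootstrap CI: ({lo:.2f}, {hi:.2f})")
```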
1,476 | ASA discusses limitations of $p$-values - what are the alternatives? | The statistical community's responses to the problem tend to assume that the answer lies in statistics. (The applied research community's preferred response is to ignore the problem entirely.)
In a forthcoming comment, colleagues and I argue that purely statistical standard error underestimates uncertainty, and that behavioral researchers should commit to estimating all material components of uncertainty associated with each measurement, as metrologists do in some physical sciences and in legal forensics. When statistical means to estimate some components are unavailable, researchers must rely on nonstatistical means.
Across the broad metrology community, there are research centers devoted to the quantification of uncertainty. Across many other fields, of course, there are no such centers, yet.
Rigdon, E. E., Sarstedt, M., & Becker, J.-M. (in press), Quantify uncertainty in behavioral research. Nature Human Behaviour.
1,477 | What skills are required to perform large scale statistical analyses? | Good answers have already appeared. I will therefore just share some thoughts based on personal experience: adapt the relevant ones to your own situation as needed.
For background and context--so you can account for any personal biases that might creep in to this message--much of my work has been in helping people make important decisions based on relatively small datasets. They are small because the data can be expensive to collect (10K dollars for the first sample of a groundwater monitoring well, for instance, or several thousand dollars for analyses of unusual chemicals). I'm used to getting as much as possible out of any data that are available, to exploring them to death, and to inventing new methods to analyze them if necessary. However, in the last few years I have been engaged to work on some fairly large databases, such as one of socioeconomic and engineering data covering the entire US at the Census block level (8.5 million records, 300 fields) and various large GIS databases (which nowadays can run from gigabytes to hundreds of gigabytes in size).
With very large datasets one's entire approach and mindset change. There are now too much data to analyze. Some of the immediate (and, in retrospect) obvious implications (with emphasis on regression modeling) include
Any analysis you think about doing can take a lot of time and computation. You will need to develop methods of subsampling and working on partial datasets so you can plan your workflow when computing with the entire dataset. (Subsampling can be complicated, because you need a representative subset of the data that is as rich as the entire dataset. And don't forget about cross-validating your models with the held-out data.)
Because of this, you will spend more time documenting what you do and scripting everything (so that it can be repeated).
As @dsimcha has just noted, good programming skills are useful. Actually, you don't need much in the way of experience with programming environments, but you need a willingness to program, the ability to recognize when programming will help (at just about every step, really) and a good understanding of basic elements of computer science, such as design of appropriate data structures and how to analyze computational complexity of algorithms. That's useful for knowing in advance whether code you plan to write will scale up to the full dataset.
Some datasets are large because they have many variables (thousands or tens of thousands, all of them different). Expect to spend a great deal of time just summarizing and understanding the data. A codebook or data dictionary, and other forms of metadata, become essential.
Much of your time is spent simply moving data around and reformatting them. You need skills with processing large databases and skills with summarizing and graphing large amounts of data. (Tufte's Small Multiple comes to the fore here.)
Some of your favorite software tools will fail. Forget spreadsheets, for instance. A lot of open source and academic software will just not be up to handling large datasets: the processing will take forever or the software will crash. Expect this and make sure you have multiple ways to accomplish your key tasks.
Almost any statistical test you run will be so powerful that it's almost sure to identify a "significant" effect. You have to focus much more on practical importance, such as effect size, rather than on statistical significance (a short demonstration follows this list).
Similarly, model selection is troublesome because almost any variable and any interaction you might contemplate is going to look significant. You have to focus more on the meaningfulness of the variables you choose to analyze.
There will be more than enough information to identify appropriate nonlinear transformations of the variables. Know how to do this.
You will have enough data to detect nonlinear relationships, changes in trends, nonstationarity, heteroscedasticity, etc.
You will never be finished. There are so much data you could study them forever. It's important, therefore, to establish your analytical objectives at the outset and constantly keep them in mind.
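To demonstrate the point about practical importance made earlier in this list (a simulated, hypothetical example): with millions of records even a negligible difference produces a small $p$-value, so the effect size is what deserves attention.

```python
# Sketch: with n in the millions, a trivially small effect gives a small p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 2_000_000
a = rng.normal(100.00, 15.0, size=n)      # hypothetical group A
b = rng.normal(100.05, 15.0, size=n)      # group B: a 0.05-unit difference

t_stat, p = stats.ttest_ind(a, b)
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p-value   = {p:.2g}")          # well below 0.05
print(f"Cohen's d = {cohens_d:.4f}")   # about 0.003: practically negligible
```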
I'll end with a short anecdote which illustrates one unexpected difference between regression modeling with a large dataset and with a smaller one. At the end of that project with the Census data, a regression model I had developed needed to be implemented in the client's computing system, which meant writing SQL code in a relational database. This is a routine step but the code generated by the database programmers involved thousands of lines of SQL. This made it almost impossible to guarantee it was bug free--although we could detect the bugs (it gave different results on test data), finding them was another matter. (All you need is one typographical error in a coefficient...) Part of the solution was to write a program that generated the SQL commands directly from the model estimates. This assured that what came out of the statistics package was exactly what went into the RDBMS. As a bonus, a few hours spent on writing this script replaced possibly several weeks of SQL coding and testing. This is a small part of what it means for the statistician to be able to communicate their results.
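A minimal sketch of that generate-the-SQL-from-the-fit idea (hypothetical table, column names, and coefficients, not the code from the project described above):

```python
# Sketch: emit a SQL scoring expression directly from fitted coefficients,
# so the statistics package and the RDBMS cannot drift apart.
coefficients = {            # hypothetical fitted linear model
    "(Intercept)": -1.2345,
    "median_income": 0.000031,
    "pct_urban": 0.0172,
    "housing_age": -0.0054,
}

terms = [f"{value:.10g} * {name}" for name, value in coefficients.items()
         if name != "(Intercept)"]
expression = " + ".join([f"{coefficients['(Intercept)']:.10g}"] + terms)

sql = f"SELECT block_id, {expression} AS predicted_score\nFROM census_blocks;"
print(sql)
```

The design point is simply that the scoring code is derived mechanically from the estimates, so a typo in a hand-transcribed coefficient cannot occur.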
1,478 | What skills are required to perform large scale statistical analyses? | Your question should yield some good answers. Here are some starting points.
An ability to work with the tradeoffs between precision and the demands placed on computing power.
Facility with data mining techniques that can be used as preliminary screening tools before conducting regression. E.g., CHAID, CART, or neural networks.
A deep understanding of the relationship between statistical significance and practical significance. A wide repertoire of methods for variable selection.
The instinct to cross-validate.
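A minimal sketch of that instinct in practice (scikit-learn with synthetic data; the model and settings are placeholders):

```python
# Sketch: k-fold cross-validation as a routine check on out-of-sample error.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=5000, n_features=50, noise=10.0, random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print("per-fold R^2:", scores.round(3), "mean:", scores.mean().round(3))
```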
1,479 | What skills are required to perform large scale statistical analyses? | Good programming skills are a must. You need to be able to write efficient code that can deal with huge amounts of data without choking, and maybe be able to parallelize said code to get it to run in a reasonable amount of time.
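A minimal sketch of the parallelization point (standard library plus NumPy; the per-chunk summary is a stand-in for whatever computation the analysis actually needs):

```python
# Sketch: split a large array into chunks and summarise them in parallel,
# then combine the partial sums into an overall mean and SD.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_summary(chunk):
    # Any per-chunk computation goes here; sums are easy to combine later.
    return chunk.sum(), (chunk ** 2).sum(), chunk.size

if __name__ == "__main__":
    data = np.random.default_rng(6).normal(size=10_000_000)  # stand-in for real data
    chunks = np.array_split(data, 16)
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(chunk_summary, chunks))
    s, ss, n = map(sum, zip(*partials))
    print("mean:", s / n, "sd:", ((ss - s**2 / n) / (n - 1)) ** 0.5)
```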
1,480 | What skills are required to perform large scale statistical analyses? | I would also add that large-scale data also introduces the problem of potential "bad data": not only missing data, but data errors and inconsistent definitions introduced by every piece of a system that ever touched the data. So, in addition to statistical skills, you need to become an expert data cleaner, unless someone else is doing it for you.
-Ralph Winters
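A small sketch of the kind of first-pass audit this implies (pandas; the file and column names are hypothetical):

```python
# Sketch: first-pass data-quality audit before any modelling.
import pandas as pd

df = pd.read_csv("measurements.csv")        # hypothetical extract

print(df.isna().mean().sort_values(ascending=False).head(10))   # worst missingness
print("duplicate rows:", df.duplicated().sum())

# Inconsistent category labels ("NY", "ny", " New York ") are a classic symptom
# of data passing through several systems.
if "state" in df.columns:
    df["state"] = df["state"].str.strip().str.upper()
    print(df["state"].value_counts().head())

# Impossible values betray unit or definition changes upstream.
if "age" in df.columns:
    print("out-of-range ages:", ((df["age"] < 0) | (df["age"] > 120)).sum())
```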
1,481 | What skills are required to perform large scale statistical analyses? | Framing the problem in the Map-reduce framework.
The engineering side of the problem, e.g., how much does it hurt to use lower precision for the parameters, or model selection based not only on generalization but on storage and computation costs as well.
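A toy sketch of the map-reduce framing (pure Python; a per-key mean expressed as independent map outputs combined in a reduce step):

```python
# Sketch: a group mean expressed as map (emit partial sums) + reduce (combine).
from collections import defaultdict

records = [("NY", 3.0), ("CA", 5.0), ("NY", 4.0), ("CA", 1.0)]  # toy data

def map_phase(record):
    key, value = record
    return key, (value, 1)            # emit (sum, count) contributions

def reduce_phase(pairs):
    totals = defaultdict(lambda: [0.0, 0])
    for key, (s, c) in pairs:
        totals[key][0] += s
        totals[key][1] += c
    return {k: s / c for k, (s, c) in totals.items()}

print(reduce_phase(map(map_phase, records)))   # {'NY': 3.5, 'CA': 3.0}
```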
1,482 | What is an ablation study? And is there a systematic way to perform it? | The original meaning of “Ablation” is the surgical removal of body tissue. The term “Ablation study” has its roots in the field of experimental neuropsychology of the 1960s and 1970s, where parts of animals’ brains were removed to study the effect that this had on their behaviour.
In the context of machine learning, and especially complex deep neural networks, “ablation study” has been adopted to describe a procedure where certain parts of the network are removed, in order to gain a better understanding of the network’s behaviour.
The term has received attention since a tweet by Francois Chollet, primary author of the Keras deep learning framework, in June 2018:
Ablation studies are crucial for deep learning research -- can't stress this enough. Understanding causality in your system is the most straightforward way to generate reliable knowledge (the goal of any research). And ablation is a very low-effort way to look into causality.
If you take any complicated deep learning experimental setup, chances are you can remove a few modules (or replace some trained features with random ones) with no loss of performance. Get rid of the noise in the research process: do ablation studies.
Can't fully understand your system? Many moving parts? Want to make sure the reason it's working is really related to your hypothesis? Try removing stuff. Spend at least ~10% of your experimentation time on an honest effort to disprove your thesis.
As an example, Girshick and colleagues (2014) describe an object detection system that consists of three “modules”: The first proposes regions of an image within which to search for an object using the Selective Search algorithm (Uijlings and colleagues 2012), which feeds in to a large convolutional neural network (with 5 convolutional layers and 2 fully connected layers) that performs feature extraction, which in turn feeds into a set of support vector machines for classification. In order to better understand the system, the authors performed an ablation study where different parts of the system were removed - for instance removing one or both of the fully connected layers of the CNN resulted in surprisingly little performance loss, which allowed the authors to conclude
Much of the CNN’s representational power comes from its convolutional layers, rather than from the much larger densely connected layers.
The OP asks for details of how to perform an ablation study, and for comprehensive references. I don't believe there is a "one size fits all" answer to this. Metrics are likely to differ, depending on the application and types of model. If we narrow the problem down simply to one deep neural network, then it is relatively straightforward to see that we can remove layers in a principled way and explore how this changes the performance of the network. Beyond this, in practice, every situation is different, and in the world of large complex machine learning applications this will mean that a unique approach is likely to be needed for each situation.
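As a minimal sketch of that idea (Keras on synthetic data; the architecture, sizes, and epochs are placeholders rather than the models from the paper above): train the full network and an ablated variant with one hidden layer removed, then compare held-out accuracy.

```python
# Sketch: a toy ablation study -- train the same model with and without
# one hidden layer and compare held-out accuracy.
import tensorflow as tf
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def build_model(include_second_block=True):
    layers = [tf.keras.Input(shape=(20,)),
              tf.keras.layers.Dense(64, activation="relu")]
    if include_second_block:
        layers.append(tf.keras.layers.Dense(64, activation="relu"))
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

for ablated in (False, True):
    tf.keras.utils.set_random_seed(0)          # same initial conditions
    model = build_model(include_second_block=not ablated)
    model.fit(X_tr, y_tr, epochs=20, batch_size=64, verbose=0)
    _, acc = model.evaluate(X_te, y_te, verbose=0)
    print(("ablated " if ablated else "full    ") + f"model accuracy: {acc:.3f}")
```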
In the context of the example in the OP - linear regression - an ablation study does not make sense, because all that can be "removed" from a linear regression model are some of the predictors. Doing this in a "principled" fashion is simply a reverse stepwise selection procedure, which is generally frowned upon - see here, here and here for details. A regularization procedure such as the Lasso is a much better option for linear regression.
Refs:
Girshick, R., Donahue, J., Darrell, T. and Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
Uijlings, J.R., Van De Sande, K.E., Gevers, T. and Smeulders, A.W., 2013. Selective search for object recognition. International journal of computer vision, 104(2), pp.154-171.
The original meaning of “Ablation” is the surgical removal of body tissue. The term “Ablation study” has its roots in the field of experimental neuropsychology of the 1960s and 1970s, where parts of animals’ brains were removed to study the effect that this had on their behaviour.
In the context of machine learning, and especially complex deep neural networks, “ablation study” has been adopted to describe a procedure where certain parts of the network are removed, in order to gain a better understanding of the network’s behaviour.
The term has received attention since a tweet by Francois Chollet, primary author of the Keras deep learning framework, in June 2018:
Ablation studies are crucial for deep learning research -- can't stress this enough. Understanding causality in your system is the most straightforward way to generate reliable knowledge (the goal of any research). And ablation is a very low-effort way to look into causality.
If you take any complicated deep learning experimental setup, chances are you can remove a few modules (or replace some trained features with random ones) with no loss of performance. Get rid of the noise in the research process: do ablation studies.
Can't fully understand your system? Many moving parts? Want to make sure the reason it's working is really related to your hypothesis? Try removing stuff. Spend at least ~10% of your experimentation time on an honest effort to disprove your thesis.
As an example, Girshick and colleagues (2014) describe an object detection system that consists of three “modules”: The first proposes regions of an image within which to search for an object using the Selective Search algorithm (Uijlings and colleagues 2012), which feeds in to a large convolutional neural network (with 5 convolutional layers and 2 fully connected layers) that performs feature extraction, which in turn feeds into a set of support vector machines for classification. In order to better understand the system, the authors performed an ablation study where different parts of the system were removed - for instance removing one or both of the fully connected layers of the CNN resulted in surprisingly little performance loss, which allowed the authors to conclude
Much of the CNN’s representational power comes from its convolutional layers, rather than from the much larger densely connected layers.
The OP asks for details of /how/ to perform an ablation study, and for comprehensive references. I don't believe there is a "one size fits all" answer to this. Metrics are likely to differ depending on the application and the type of model. If we narrow the problem down to a single deep neural network, then it is relatively straightforward to see that we can remove layers in a principled way and explore how this changes the performance of the network. Beyond this, in practice every situation is different, and in the world of large, complex machine learning applications this means that a unique approach is likely to be needed for each situation.
In the context of the example in the OP - linear regression - an ablation study does not make sense, because all that can be "removed" from a linear regression model are some of the predictors. Doing this in a "principled" fashion is simply a reverse stepwise selection procedure, which is generally frowned upon - see here, here and here for details. A regularization procedure such as the Lasso is a much better option for linear regression.
Refs:
Girshick, R., Donahue, J., Darrell, T. and Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 580-587).
Uijlings, J.R., Van De Sande, K.E., Gevers, T. and Smeulders, A.W., 2013. Selective search for object recognition. International journal of computer vision, 104(2), pp.154-171. | What is an ablation study? And is there a systematic way to perform it?
The original meaning of “Ablation” is the surgical removal of body tissue. The term “Ablation study” has its roots in the field of experimental neuropsychology of the 1960s and 1970s, where parts of |
1,483 | Why isn't Logistic Regression called Logistic Classification? | Logistic regression is emphatically not a classification algorithm on its own. It is only a classification algorithm in combination with a decision rule that makes dichotomous the predicted probabilities of the outcome. Logistic regression is a regression model because it estimates the probability of class membership as a (transformation of a) multilinear function of the features.
Frank Harrell has posted a number of answers on this website enumerating the pitfalls of regarding logistic regression as a classification algorithm. Among them:
Classification is a decision. To make an optimal decision, you need to assess a utility function, which implies that you need to account for the uncertainty in the outcome, i.e., a probability.
The costs of misclassification are not uniform across all units.
Don't use cutoffs.
Use proper scoring rules (a minimal example follows after this list).
The problem is actually risk estimation, not classification.
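To make the "proper scoring rules" point concrete, here is a tiny sketch with made-up predicted probabilities and outcomes: scores such as the Brier score and the log loss evaluate the predicted probabilities directly, with no cutoff involved.
import numpy as np
from sklearn.metrics import brier_score_loss, log_loss

y_true = np.array([0, 0, 1, 1, 1])
p_hat = np.array([0.1, 0.4, 0.35, 0.8, 0.9])   # predicted P(Y = 1)

print(brier_score_loss(y_true, p_hat))   # mean squared error of the probabilities
print(log_loss(y_true, p_hat))           # negative average log-likelihood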
If I recall correctly, he once pointed me to his book on regression strategies for more elaboration on these (and more!) points, but I can't seem to find that particular post. | Why isn't Logistic Regression called Logistic Classification? | Logistic regression is emphatically not a classification algorithm on its own. It is only a classification algorithm in combination with a decision rule that makes dichotomous the predicted probabilit | Why isn't Logistic Regression called Logistic Classification?
Logistic regression is emphatically not a classification algorithm on its own. It is only a classification algorithm in combination with a decision rule that makes dichotomous the predicted probabilities of the outcome. Logistic regression is a regression model because it estimates the probability of class membership as a (transformation of a) multilinear function of the features.
Frank Harrell has posted a number of answers on this website enumerating the pitfalls of regarding logistic regression as a classification algorithm. Among them:
Classification is a decision. To make an optimal decision, you need to assess a utility function, which implies that you need to account for the uncertainty in the outcome, i.e., a probability.
The costs of misclassification are not uniform across all units.
Don't use cutoffs.
Use proper scoring rules.
The problem is actually risk estimation, not classification.
If I recall correctly, he once pointed me to his book on regression strategies for more elaboration on these (and more!) points, but I can't seem to find that particular post. | Why isn't Logistic Regression called Logistic Classification?
Logistic regression is emphatically not a classification algorithm on its own. It is only a classification algorithm in combination with a decision rule that makes dichotomous the predicted probabilit |
1,484 | Why isn't Logistic Regression called Logistic Classification? | Abstractly, regression is the problem of calculating a conditional expectation $E[Y|X=x]$. The form taken by this expectation is different depending on the assumptions of how the data were generated:
Assuming $(Y|X=x)$ to be normally distributed yields classical linear regression.
Assuming a Poisson distribution yields Poisson regression.
Assuming a Bernoulli distribution yields logistic regression.
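Here is a minimal sketch of these three cases using statsmodels' GLM interface (the synthetic data and coefficient values are made up purely for illustration):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = sm.add_constant(x)                                    # intercept plus one feature

y_gauss = 1.0 + 2.0 * x + rng.normal(size=500)            # continuous outcome
y_pois = rng.poisson(np.exp(0.2 + 0.5 * x))               # count outcome
y_bern = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.5 * x))))  # binary outcome

linear = sm.GLM(y_gauss, X, family=sm.families.Gaussian()).fit()
poisson = sm.GLM(y_pois, X, family=sm.families.Poisson()).fit()
logistic = sm.GLM(y_bern, X, family=sm.families.Binomial()).fit()

# In each case, .predict(X) estimates E[Y|X=x] under the stated distributional assumption.
print(linear.params, poisson.params, logistic.params)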
The term "regression" has also been used more generally than this, including approaches like quantile regression, which estimates a given quantile of $(Y|X=x)$. | Why isn't Logistic Regression called Logistic Classification? | Abstractly, regression is the problem of calculating a conditional expectation $E[Y|X=x]$. The form taken by this expectation is different depending on the assumptions of how the data were generated:
| Why isn't Logistic Regression called Logistic Classification?
Abstractly, regression is the problem of calculating a conditional expectation $E[Y|X=x]$. The form taken by this expectation is different depending on the assumptions of how the data were generated:
Assuming $(Y|X=x)$ to be normally distributed yields classical linear regression.
Assuming a Poisson distribution yields Poisson regression.
Assuming a Bernoulli distribution yields logistic regression.
The term "regression" has also been used more generally than this, including approaches like quantile regression, which estimates a given quantile of $(Y|X=x)$. | Why isn't Logistic Regression called Logistic Classification?
Abstractly, regression is the problem of calculating a conditional expectation $E[Y|X=x]$. The form taken by this expectation is different depending on the assumptions of how the data were generated:
|
1,485 | Why isn't Logistic Regression called Logistic Classification? | Blockquote
The U.S. Weather Service has always phrased rain forecasts as probabilities. I do not want a classification of “it will rain today.” There is a slight loss/disutility of carrying an umbrella, and I want to be the one to make the tradeoff.
Dr. Frank Harrell, https://www.fharrell.com/post/classification/
Classification is when you make a concrete determination of what category something is a part of. Binary classification involves two categories, and by the law of the excluded middle, that means binary classification is for determining whether something “is” or “is not” part of a single category. There either are children playing in the park today (1), or there are not (0).
Although the variable you are targeting in logistic regression is a classification, logistic regression does not actually individually classify things for you: it just gives you probabilities (or log odds ratios in the logit form). The only way logistic regression can actually classify stuff is if you apply a rule to the probability output. For example, you may round probabilities greater than or equal to 50% to 1, and probabilities less than 50% to 0, and that’s your classification.
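As a small scikit-learn sketch of that point (toy synthetic data; the 0.5 and 0.9 cutoffs are arbitrary choices, not part of the model): the fitted model returns probabilities, and the 0/1 "classification" only appears once you impose a cutoff, which is a separate decision.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

probs = model.predict_proba(X)[:, 1]      # what logistic regression actually estimates
labels_50 = (probs >= 0.5).astype(int)    # one possible decision rule
labels_90 = (probs >= 0.9).astype(int)    # a different rule for a different cost trade-off
print(probs[:5], labels_50[:5], labels_90[:5])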
If you want to read more, please check this link for more detail:
https://ryxcommar.com/2020/06/27/why-do-so-many-practicing-data-scientists-not-understand-logistic-regression/ | Why isn't Logistic Regression called Logistic Classification? | Blockquote
The U.S. Weather Service has always phrased rain forecasts as probabilities. I do not want a classification of “it will rain today.” There is a slight loss/disutility of carrying an umbrell | Why isn't Logistic Regression called Logistic Classification?
The U.S. Weather Service has always phrased rain forecasts as probabilities. I do not want a classification of “it will rain today.” There is a slight loss/disutility of carrying an umbrella, and I want to be the one to make the tradeoff.
Dr. Frank Harrell, https://www.fharrell.com/post/classification/
Classification is when you make a concrete determination of what category something is a part of. Binary classification involves two categories, and by the law of the excluded middle, that means binary classification is for determining whether something “is” or “is not” part of a single category. There either are children playing in the park today (1), or there are not (0).
Although the variable you are targeting in logistic regression is a classification, logistic regression does not actually individually classify things for you: it just gives you probabilities (or log odds ratios in the logit form). The only way logistic regression can actually classify stuff is if you apply a rule to the probability output. For example, you may round probabilities greater than or equal to 50% to 1, and probabilities less than 50% to 0, and that’s your classification.
If you want to read more, please check this link for more detail:
https://ryxcommar.com/2020/06/27/why-do-so-many-practicing-data-scientists-not-understand-logistic-regression/ | Why isn't Logistic Regression called Logistic Classification?
The U.S. Weather Service has always phrased rain forecasts as probabilities. I do not want a classification of “it will rain today.” There is a slight loss/disutility of carrying an umbrell |
1,486 | Why isn't Logistic Regression called Logistic Classification? | Apart from already provided good answers, another view is that Logistic regression predicts probabilities (which is continuous value) that have got range from 0 to 1. | Why isn't Logistic Regression called Logistic Classification? | Apart from already provided good answers, another view is that Logistic regression predicts probabilities (which is continuous value) that have got range from 0 to 1. | Why isn't Logistic Regression called Logistic Classification?
Apart from the already provided good answers, another view is that logistic regression predicts probabilities, which are continuous values ranging from 0 to 1. | Why isn't Logistic Regression called Logistic Classification?
Apart from the already provided good answers, another view is that logistic regression predicts probabilities, which are continuous values ranging from 0 to 1. |
1,487 | How do you calculate precision and recall for multiclass classification using confusion matrix? | In a 2-hypothesis case, the confusion matrix is usually:
         Declare H1   Declare H0
Is H1    TP           FN
Is H0    FP           TN
where I've used something similar to your notation:
TP = true positive (declare H1 when, in truth, H1),
FN = false negative (declare H0 when, in truth, H1),
FP = false positive
TN = true negative
From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need.
Edit
The generalization to multi-class problems is to sum over rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that
a given row of the matrix corresponds to a specific value for the "truth", we have:
$\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$
$\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$
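As a quick numerical check of these formulas (a made-up 3-class confusion matrix, oriented as above with rows corresponding to the truth and columns to the declared class):
import numpy as np

M = np.array([[50,  3,  2],
              [10, 40,  5],
              [ 4,  6, 30]])

precision = np.diag(M) / M.sum(axis=0)   # column sums: everything the algorithm declared as class i
recall = np.diag(M) / M.sum(axis=1)      # row sums: everything that truly is class i
print(precision, recall)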
That is, precision is the fraction of events where we correctly declared $i$
out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$. | How do you calculate precision and recall for multiclass classification using confusion matrix? | In a 2-hypothesis case, the confusion matrix is usually:
Declare H1
Declare H0
Is H1
TP
FN
Is H0
FP
TN
where I've used something similar to your notation:
TP = true positive (declare H | How do you calculate precision and recall for multiclass classification using confusion matrix?
In a 2-hypothesis case, the confusion matrix is usually:
         Declare H1   Declare H0
Is H1    TP           FN
Is H0    FP           TN
where I've used something similar to your notation:
TP = true positive (declare H1 when, in truth, H1),
FN = false negative (declare H0 when, in truth, H1),
FP = false positive
TN = true negative
From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need.
Edit
The generalization to multi-class problems is to sum over rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that
a given row of the matrix corresponds to a specific value for the "truth", we have:
$\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$
$\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$
That is, precision is the fraction of events where we correctly declared $i$
out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$. | How do you calculate precision and recall for multiclass classification using confusion matrix?
In a 2-hypothesis case, the confusion matrix is usually:
Declare H1
Declare H0
Is H1
TP
FN
Is H0
FP
TN
where I've used something similar to your notation:
TP = true positive (declare H |
1,488 | How do you calculate precision and recall for multiclass classification using confusion matrix? | Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing and Management, 45, p. 427-437. (pdf)
The abstract reads:
This paper presents a systematic analysis of twenty four performance
measures used in the complete spectrum of Machine Learning
classification tasks, i.e., binary, multi-class, multi-labelled, and
hierarchical. For each classification task, the study relates a set of
changes in a confusion matrix to specific characteristics of data.
Then the analysis concentrates on the type of changes to a confusion
matrix that do not change a measure, therefore, preserve a
classifier’s evaluation (measure invariance). The result is the
measure invariance taxonomy with respect to all relevant label
distribution changes in a classification problem. This formal analysis
is supported by examples of applications where invariance properties
of measures lead to a more reliable evaluation of classifiers. Text
classification supplements the discussion with several case studies. | How do you calculate precision and recall for multiclass classification using confusion matrix? | Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Proce | How do you calculate precision and recall for multiclass classification using confusion matrix?
Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing and Management, 45, p. 427-437. (pdf)
The abstract reads:
This paper presents a systematic analysis of twenty four performance
measures used in the complete spectrum of Machine Learning
classification tasks, i.e., binary, multi-class, multi-labelled, and
hierarchical. For each classification task, the study relates a set of
changes in a confusion matrix to specific characteristics of data.
Then the analysis concentrates on the type of changes to a confusion
matrix that do not change a measure, therefore, preserve a
classifier’s evaluation (measure invariance). The result is the
measure invariance taxonomy with respect to all relevant label
distribution changes in a classification problem. This formal analysis
is supported by examples of applications where invariance properties
of measures lead to a more reliable evaluation of classifiers. Text
classification supplements the discussion with several case studies. | How do you calculate precision and recall for multiclass classification using confusion matrix?
Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Proce |
1,489 | How do you calculate precision and recall for multiclass classification using confusion matrix? | Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confusion_matrix(labels, predictions)
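# rows of cm index the true labels, columns the predicted labels (sklearn's convention)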
recall = np.diag(cm) / np.sum(cm, axis = 1)
precision = np.diag(cm) / np.sum(cm, axis = 0)
To get overall (macro-averaged) measures of precision and recall, then use
np.mean(recall)
np.mean(precision) | How do you calculate precision and recall for multiclass classification using confusion matrix? | Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confu | How do you calculate precision and recall for multiclass classification using confusion matrix?
Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confusion_matrix(labels, predictions)
recall = np.diag(cm) / np.sum(cm, axis = 1)
precision = np.diag(cm) / np.sum(cm, axis = 0)
To get overall (macro-averaged) measures of precision and recall, then use
np.mean(recall)
np.mean(precision) | How do you calculate precision and recall for multiclass classification using confusion matrix?
Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confu |
1,490 | How do you calculate precision and recall for multiclass classification using confusion matrix? | @Cristian Garcia code can be reduced by sklearn.
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='micro') | How do you calculate precision and recall for multiclass classification using confusion matrix? | @Cristian Garcia code can be reduced by sklearn.
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, ave | How do you calculate precision and recall for multiclass classification using confusion matrix?
@Cristian Garcia's code can be reduced by using sklearn directly.
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='micro') | How do you calculate precision and recall for multiclass classification using confusion matrix?
@Cristian Garcia code can be reduced by sklearn.
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, ave |
1,491 | How do you calculate precision and recall for multiclass classification using confusion matrix? | Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to understand what a confusion matrix is telling us in general. Let $Y$ represent a class label and $\hat Y$ represent a class prediction. In the binary case, let the two possible values for $Y$ and $\hat Y$ be $0$ and $1$, which represent the classes. Next, suppose that the confusion matrix for $Y$ and $\hat Y$ is:
          $\hat Y = 0$   $\hat Y = 1$
$Y = 0$   10             20
$Y = 1$   30             40
Next, let us normalize this confusion matrix so that the sum of all of its elements is $1$. Currently, the sum of all elements of the confusion matrix is $10 + 20 + 30 + 40 = 100$, which is our normalization factor. After dividing each element of the confusion matrix by this normalization factor, we get the following normalized confusion matrix:
          $\hat Y = 0$     $\hat Y = 1$
$Y = 0$   $\frac{1}{10}$   $\frac{2}{10}$
$Y = 1$   $\frac{3}{10}$   $\frac{4}{10}$
With this formulation of the confusion matrix, we can interpret $Y$ and $\hat Y$ slightly differently. We can interpret them as jointly Bernoulli (binary) random variables, where their normalized confusion matrix represents their joint probability mass function. When we interpret $Y$ and $\hat Y$ this way, the definitions of precision and recall are much easier to remember using Bayes' rule and the law of total probability:
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 0 , \hat Y = 1)} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 1 , \hat Y = 0)}
\end{align}
How do we determine these probabilities? We can estimate them using the normalized confusion matrix. From the table above, we see that
\begin{align}
P(Y = 0 , \hat Y = 0) &\approx \frac{1}{10} \\
P(Y = 0 , \hat Y = 1) &\approx \frac{2}{10} \\
P(Y = 1 , \hat Y = 0) &\approx \frac{3}{10} \\
P(Y = 1 , \hat Y = 1) &\approx \frac{4}{10}
\end{align}
Therefore, the precision and recall for this specific example are
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{2}{10}} = \frac{4}{4 + 2} = \frac{2}{3} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{3}{10}} = \frac{4}{4 + 3} = \frac{4}{7}
\end{align}
Note that, from the calculations above, we didn't really need to normalize the confusion matrix before computing the precision and recall. The reason for this is that, because of Bayes' rule, we end up dividing one value that is normalized by another value that is normalized, which means that the normalization factor can be cancelled out.
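As a short numerical companion (the same counts as in the example above, just NumPy):
import numpy as np

counts = np.array([[10, 20],     # rows: Y = 0, 1; columns: Y_hat = 0, 1
                   [30, 40]])
joint = counts / counts.sum()    # estimated joint pmf of (Y, Y_hat)

precision = joint[1, 1] / joint[:, 1].sum()   # P(Y = 1 | Y_hat = 1) = 2/3
recall = joint[1, 1] / joint[1, :].sum()      # P(Y_hat = 1 | Y = 1) = 4/7
print(precision, recall)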
A nice thing about this interpretation is that it can be generalized to confusion matrices of any size. In the case where there are more than 2 classes, $Y$ and $\hat Y$ are no longer considered to be jointly Bernoulli, but rather jointly categorical. Moreover, we would need to specify which class we are computing the precision and recall for. In fact, the definitions above may be interpreted as the precision and recall for class $1$. We can also compute the precision and recall for class $0$, but these have different names in the literature. | How do you calculate precision and recall for multiclass classification using confusion matrix? | Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to under | How do you calculate precision and recall for multiclass classification using confusion matrix?
Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to understand what a confusion matrix is telling us in general. Let $Y$ represent a class label and $\hat Y$ represent a class prediction. In the binary case, let the two possible values for $Y$ and $\hat Y$ be $0$ and $1$, which represent the classes. Next, suppose that the confusion matrix for $Y$ and $\hat Y$ is:
          $\hat Y = 0$   $\hat Y = 1$
$Y = 0$   10             20
$Y = 1$   30             40
Next, let us normalize this confusion matrix so that the sum of all of its elements is $1$. Currently, the sum of all elements of the confusion matrix is $10 + 20 + 30 + 40 = 100$, which is our normalization factor. After dividing each element of the confusion matrix by this normalization factor, we get the following normalized confusion matrix:
          $\hat Y = 0$     $\hat Y = 1$
$Y = 0$   $\frac{1}{10}$   $\frac{2}{10}$
$Y = 1$   $\frac{3}{10}$   $\frac{4}{10}$
With this formulation of the confusion matrix, we can interpret $Y$ and $\hat Y$ slightly differently. We can interpret them as jointly Bernoulli (binary) random variables, where their normalized confusion matrix represents their joint probability mass function. When we interpret $Y$ and $\hat Y$ this way, the definitions of precision and recall are much easier to remember using Bayes' rule and the law of total probability:
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 0 , \hat Y = 1)} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 1 , \hat Y = 0)}
\end{align}
How do we determine these probabilities? We can estimate them using the normalized confusion matrix. From the table above, we see that
\begin{align}
P(Y = 0 , \hat Y = 0) &\approx \frac{1}{10} \\
P(Y = 0 , \hat Y = 1) &\approx \frac{2}{10} \\
P(Y = 1 , \hat Y = 0) &\approx \frac{3}{10} \\
P(Y = 1 , \hat Y = 1) &\approx \frac{4}{10}
\end{align}
Therefore, the precision and recall for this specific example are
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{2}{10}} = \frac{4}{4 + 2} = \frac{2}{3} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{3}{10}} = \frac{4}{4 + 3} = \frac{4}{7}
\end{align}
Note that, from the calculations above, we didn't really need to normalize the confusion matrix before computing the precision and recall. The reason for this is that, because of Bayes' rule, we end up dividing one value that is normalized by another value that is normalized, which means that the normalization factor can be cancelled out.
A nice thing about this interpretation is that it can be generalized to confusion matrices of any size. In the case where there are more than 2 classes, $Y$ and $\hat Y$ are no longer considered to be jointly Bernoulli, but rather jointly categorical. Moreover, we would need to specify which class we are computing the precision and recall for. In fact, the definitions above may be interpreted as the precision and recall for class $1$. We can also compute the precision and recall for class $0$, but these have different names in the literature. | How do you calculate precision and recall for multiclass classification using confusion matrix?
Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to under |
1,492 | What's a real-world example of "overfitting"? | Here's a nice example of presidential election time series models from xkcd:
There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predictor space expands to include things like having false teeth and the Scrabble point value of names, it's pretty easy for the model to go from fitting the generalizable features of the data (the signal) to matching the noise instead. When this happens, the fit on the historical data may improve, but the model will fail miserably when used to make inferences about future presidential elections. | What's a real-world example of "overfitting"? | Here's a nice example of presidential election time series models from xkcd:
There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predic | What's a real-world example of "overfitting"?
Here's a nice example of presidential election time series models from xkcd:
There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predictor space expands to include things like having false teeth and the Scrabble point value of names, it's pretty easy for the model to go from fitting the generalizable features of the data (the signal) to matching the noise instead. When this happens, the fit on the historical data may improve, but the model will fail miserably when used to make inferences about future presidential elections. | What's a real-world example of "overfitting"?
Here's a nice example of presidential election time series models from xkcd:
There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predic |
1,493 | What's a real-world example of "overfitting"? | My favorite was the Matlab example of US census population versus time:
A linear model is pretty good
A quadratic model is closer
A quartic model predicts total annihilation starting next year
(At least I sincerely hope this is an example of overfitting)
http://www.mathworks.com/help/curvefit/examples/polynomial-curve-fitting.html#zmw57dd0e115 | What's a real-world example of "overfitting"? | My favorite was the Matlab example of US census population versus time:
A linear model is pretty good
A quadratic model is closer
A quartic model predicts total annihilation starting next year
(At l | What's a real-world example of "overfitting"?
My favorite was the Matlab example of US census population versus time:
A linear model is pretty good
A quadratic model is closer
A quartic model predicts total annihilation starting next year
(At least I sincerely hope this is an example of overfitting)
http://www.mathworks.com/help/curvefit/examples/polynomial-curve-fitting.html#zmw57dd0e115 | What's a real-world example of "overfitting"?
My favorite was the Matlab example of US census population versus time:
A linear model is pretty good
A quadratic model is closer
A quartic model predicts total annihilation starting next year
(At l |
1,494 | What's a real-world example of "overfitting"? | The study of Chen et al. (2013) fits two cubics to a supposed discontinuity in life expectancy as a function of latitude.
Chen Y., Ebenstein, A., Greenstone, M., and Li, H. 2013. Evidence on the impact of sustained
exposure to air pollution on life expectancy from China's Huai River policy. Proceedings of the National Academy of Sciences 110: 12936–12941. abstract
Despite its publication in an outstanding journal, etc., its tacit endorsement by distinguished people, etc., I would still present this as a prima facie example of over-fitting.
A tell-tale sign is the implausibility of cubics. Fitting a cubic implicitly assumes there is some reason why life expectancy would vary as a third-degree polynomial of the latitude where you live. That seems rather implausible: it is not easy to imagine a plausible physical mechanism that would cause such an effect.
See also the following blog post for a more detailed analysis of this paper: Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people). | What's a real-world example of "overfitting"? | The study of Chen et al. (2013) fits two cubics to a supposed discontinuity in life expectancy as a function of latitude.
Chen Y., Ebenstein, A., Greenstone, M., and Li, H. 2013. Evidence on the impa | What's a real-world example of "overfitting"?
The study of Chen et al. (2013) fits two cubics to a supposed discontinuity in life expectancy as a function of latitude.
Chen Y., Ebenstein, A., Greenstone, M., and Li, H. 2013. Evidence on the impact of sustained
exposure to air pollution on life expectancy from China's Huai River policy. Proceedings of the National Academy of Sciences 110: 12936–12941. abstract
Despite its publication in an outstanding journal, etc., its tacit endorsement by distinguished people, etc., I would still present this as a prima facie example of over-fitting.
A tell-tale sign is the implausibility of cubics. Fitting a cubic implicitly assumes there is some reason why life expectancy would vary as a third-degree polynomial of the latitude where you live. That seems rather implausible: it is not easy to imagine a plausible physical mechanism that would cause such an effect.
See also the following blog post for a more detailed analysis of this paper: Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people). | What's a real-world example of "overfitting"?
The study of Chen et al. (2013) fits two cubics to a supposed discontinuity in life expectancy as a function of latitude.
Chen Y., Ebenstein, A., Greenstone, M., and Li, H. 2013. Evidence on the impa |
1,495 | What's a real-world example of "overfitting"? | In a March 14, 2014 article in Science, David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani identified problems in Google Flu Trends that they attribute to overfitting.
Here is how they tell the story, including their explanation of the nature of the overfitting and why it caused the algorithm to fail:
In February 2013, ...
Nature reported that GFT was predicting
more than double the proportion
of doctor visits for influenza-like illness (ILI) than the Centers
for Disease Control and Prevention
(CDC) ... . This happened despite the fact
that GFT was built to predict CDC
reports.
...
Essentially, the methodology
was to find the best matches among 50 million
search terms to fit 1152 data points. The odds of finding search terms that
match the propensity of the flu but are structurally
unrelated, and so do not predict the
future, were quite high. GFT developers,
in fact, report weeding out seasonal search
terms unrelated to the flu but strongly correlated
to the CDC data, such as those regarding
high school basketball. This should
have been a warning that the big data were
overfitting the small number of cases—a
standard concern in data analysis. This ad
hoc method of throwing out peculiar search
terms failed when GFT completely missed
the nonseasonal 2009 influenza A–H1N1
pandemic.
[Emphasis added.] | What's a real-world example of "overfitting"? | In a March 14, 2014 article in Science, David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani identified problems in Google Flu Trends that they attribute to overfitting.
Here is how they t | What's a real-world example of "overfitting"?
In a March 14, 2014 article in Science, David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani identified problems in Google Flu Trends that they attribute to overfitting.
Here is how they tell the story, including their explanation of the nature of the overfitting and why it caused the algorithm to fail:
In February 2013, ...
Nature reported that GFT was predicting
more than double the proportion
of doctor visits for influenza-like illness (ILI) than the Centers
for Disease Control and Prevention
(CDC) ... . This happened despite the fact
that GFT was built to predict CDC
reports.
...
Essentially, the methodology
was to find the best matches among 50 million
search terms to fit 1152 data points. The odds of finding search terms that
match the propensity of the flu but are structurally
unrelated, and so do not predict the
future, were quite high. GFT developers,
in fact, report weeding out seasonal search
terms unrelated to the flu but strongly correlated
to the CDC data, such as those regarding
high school basketball. This should
have been a warning that the big data were
overfitting the small number of cases—a
standard concern in data analysis. This ad
hoc method of throwing out peculiar search
terms failed when GFT completely missed
the nonseasonal 2009 influenza A–H1N1
pandemic.
[Emphasis added.] | What's a real-world example of "overfitting"?
In a March 14, 2014 article in Science, David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani identified problems in Google Flu Trends that they attribute to overfitting.
Here is how they t |
1,496 | What's a real-world example of "overfitting"? | I saw this image a few weeks ago and thought it was rather relevant to the question at hand.
Instead of linearly fitting the sequence, it was fitted with a quartic polynomial, which had perfect fit, but resulted in a clearly ridiculous answer. | What's a real-world example of "overfitting"? | I saw this image a few weeks ago and thought it was rather relevant to the question at hand.
Instead of linearly fitting the sequence, it was fitted with a quartic polynomial, which had perfect fit, | What's a real-world example of "overfitting"?
I saw this image a few weeks ago and thought it was rather relevant to the question at hand.
Instead of linearly fitting the sequence, it was fitted with a quartic polynomial, which had perfect fit, but resulted in a clearly ridiculous answer. | What's a real-world example of "overfitting"?
I saw this image a few weeks ago and thought it was rather relevant to the question at hand.
Instead of linearly fitting the sequence, it was fitted with a quartic polynomial, which had perfect fit, |
1,497 | What's a real-world example of "overfitting"? | To me the best example is Ptolemaic system in astronomy. Ptolemy assumed that Earth is at the center of the universe, and created a sophisticated system of nested circular orbits, which would explain movements of object on the sky pretty well. Astronomers had to keep adding circles to explain deviation, until one day it got so convoluted that folks started doubting it. That's when Copernicus came up with a more realistic model.
This is the best example of overfitting to me. You can't overfit data generating process (DGP) to the data. You can only overfit misspecified model. Almost all our models in social sciences are misspecified, so the key is to remember this, and keep them parsimonious. Not to try to catch every aspect of the data set, but try to capture the essential features through simplification. | What's a real-world example of "overfitting"? | To me the best example is Ptolemaic system in astronomy. Ptolemy assumed that Earth is at the center of the universe, and created a sophisticated system of nested circular orbits, which would explain | What's a real-world example of "overfitting"?
To me the best example is the Ptolemaic system in astronomy. Ptolemy assumed that the Earth is at the center of the universe, and created a sophisticated system of nested circular orbits, which explained the movements of objects in the sky pretty well. Astronomers had to keep adding circles to explain every deviation, until one day the system got so convoluted that people started doubting it. That's when Copernicus came up with a more realistic model.
This is the best example of overfitting to me. You can't overfit the data-generating process (DGP) to the data; you can only overfit a misspecified model. Almost all our models in the social sciences are misspecified, so the key is to remember this and keep them parsimonious. Don't try to catch every aspect of the data set; instead, try to capture the essential features through simplification. | What's a real-world example of "overfitting"?
To me the best example is Ptolemaic system in astronomy. Ptolemy assumed that Earth is at the center of the universe, and created a sophisticated system of nested circular orbits, which would explain |
1,498 | What's a real-world example of "overfitting"? | Let's say you have 100 dots on a graph.
You could say: hmm, I want to predict the next one.
with a line
with a 2nd order polynomial
with a 3rd order polynomial
...
with a 100th order polynomial
Here you can see a simplified illustration for this example:
The higher the polynomial order, the better it will fit the existing dots.
However, the high-order polynomials, despite looking like better models for the dots, are actually overfitting them. They model the noise rather than the true data distribution.
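A minimal NumPy sketch of this effect (my own toy data, not from the original post): the in-sample error keeps shrinking as the degree grows, while the prediction at a held-out dot typically gets worse, dramatically so for the high-degree fits.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)   # the noisy "dots"
x_new, y_new = 1.1, np.sin(2 * np.pi * 1.1)                      # a new dot just beyond the data

for degree in (1, 3, 9):
    coefs = np.polyfit(x, y, degree)
    fit_mse = np.mean((np.polyval(coefs, x) - y) ** 2)           # error on the observed dots
    new_err = abs(np.polyval(coefs, x_new) - y_new)              # error at the new dot
    print(f"degree {degree}: fit MSE = {fit_mse:.3f}, error at new dot = {new_err:.2f}")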
As a consequence, if you add a new dot to the graph with your perfectly fitting curve, it'll probably be further away from the curve than if you used a simpler low order polynomial. | What's a real-world example of "overfitting"? | Let's say you have 100 dots on a graph.
You could say: hmm, I want to predict the next one.
with a line
with a 2nd order polynomial
with a 3rd order polynomial
...
with a 100th order polynomial
Here | What's a real-world example of "overfitting"?
Let's say you have 100 dots on a graph.
You could say: hmm, I want to predict the next one.
with a line
with a 2nd order polynomial
with a 3rd order polynomial
...
with a 100th order polynomial
Here you can see a simplified illustration for this example:
The higher the polynomial order, the better it will fit the existing dots.
However, the high-order polynomials, despite looking like better models for the dots, are actually overfitting them. They model the noise rather than the true data distribution.
As a consequence, if you add a new dot to the graph with your perfectly fitting curve, it'll probably be further away from the curve than if you used a simpler low order polynomial. | What's a real-world example of "overfitting"?
Let's say you have 100 dots on a graph.
You could say: hmm, I want to predict the next one.
with a line
with a 2nd order polynomial
with a 3rd order polynomial
...
with a 100th order polynomial
Here |
1,499 | What's a real-world example of "overfitting"? | The analysis that may have contributed to the Fukushima disaster is an example of overfitting. There is a well known relationship in Earth Science that describes the probability of earthquakes of a certain size, given the observed frequency of "lesser" earthquakes. This is known as the Gutenberg-Richter relationship, and it provides a straight-line log fit over many decades. Analysis of the earthquake risk in the vicinity of the reactor (this diagram from Nate Silver's excellent book "The Signal and the Noise") show a "kink" in the data. Ignoring the kink leads to an estimate of the annualized risk of a magnitude 9 earthquake as about 1 in 300 - definitely something to prepare for. However, overfitting a dual slope line (as was apparently done during the initial risk assessment for the reactors) reduces the risk prediction to about 1 in 13,000 years. One could not fault the engineers for not designing the reactors to withstand such an unlikely event - but one should definitely fault the statisticians who overfitted (and then extrapolated) the data... | What's a real-world example of "overfitting"? | The analysis that may have contributed to the Fukushima disaster is an example of overfitting. There is a well known relationship in Earth Science that describes the probability of earthquakes of a ce | What's a real-world example of "overfitting"?
The analysis that may have contributed to the Fukushima disaster is an example of overfitting. There is a well known relationship in Earth Science that describes the probability of earthquakes of a certain size, given the observed frequency of "lesser" earthquakes. This is known as the Gutenberg-Richter relationship, and it provides a straight-line log fit over many decades. Analysis of the earthquake risk in the vicinity of the reactor (this diagram from Nate Silver's excellent book "The Signal and the Noise") shows a "kink" in the data. Ignoring the kink leads to an estimate of the annualized risk of a magnitude 9 earthquake as about 1 in 300 - definitely something to prepare for. However, overfitting a dual slope line (as was apparently done during the initial risk assessment for the reactors) reduces the risk prediction to about 1 in 13,000 years. One could not fault the engineers for not designing the reactors to withstand such an unlikely event - but one should definitely fault the statisticians who overfitted (and then extrapolated) the data... | What's a real-world example of "overfitting"?
The analysis that may have contributed to the Fukushima disaster is an example of overfitting. There is a well known relationship in Earth Science that describes the probability of earthquakes of a ce |
1,500 | What's a real-world example of "overfitting"? | "Agh! Pat is leaving the company. How are we ever going to find a replacement?"
Job Posting:
Wanted: Electrical Engineer.
42 year old androgynous person with degrees in Electrical Engineering, mathematics, and animal husbandry. Must be 68 inches tall with brown hair, a mole over the left eye, and prone to long winded diatribes against geese and misuse of the word 'counsel'.
In a mathematical sense, overfitting often refers to making a model with more parameters than are necessary, resulting in a better fit for a specific data set, but without capturing relevant details necessary to fit other data sets from the class of interest.
In the above example, the poster is unable to differentiate the relevant from irrelevant characteristics. The resulting qualifications are likely only met by the one person that they already know is right for the job (but no longer wants it). | What's a real-world example of "overfitting"? | "Agh! Pat is leaving the company. How are we ever going to find a replacement?"
Job Posting:
Wanted: Electrical Engineer.
42 year old androgynous person with degrees in Electrical Engineering, mathem | What's a real-world example of "overfitting"?
"Agh! Pat is leaving the company. How are we ever going to find a replacement?"
Job Posting:
Wanted: Electrical Engineer.
42 year old androgynous person with degrees in Electrical Engineering, mathematics, and animal husbandry. Must be 68 inches tall with brown hair, a mole over the left eye, and prone to long winded diatribes against geese and misuse of the word 'counsel'.
In a mathematical sense, overfitting often refers to making a model with more parameters than are necessary, resulting in a better fit for a specific data set, but without capturing relevant details necessary to fit other data sets from the class of interest.
In the above example, the poster is unable to differentiate the relevant from irrelevant characteristics. The resulting qualifications are likely only met by the one person that they already know is right for the job (but no longer wants it). | What's a real-world example of "overfitting"?
"Agh! Pat is leaving the company. How are we ever going to find a replacement?"
Job Posting:
Wanted: Electrical Engineer.
42 year old androgynous person with degrees in Electrical Engineering, mathem |