idx | question | answer
---|---|---
52,001 | Nonlinear Statistics? | I'd add that the distinction you sense is less than it seems. "Nonlinear statistics" is not really a coherent or standard label, not least because it is defined negatively rather than positively.
On linear: the big positive is that many nonlinearities are coped with by models that are linear in the parameters. You don't have to jump from what you meet in an introductory course straight to nonlinear least-squares. The latter is sometimes a good servant, but often an awkward master.
On normal: only lousy books and courses assume or even imply that it is a case of "normal or die". Modern statistics is cool about data being exponential, gamma, lognormal, binomial, Poisson, or yet more ornery.
The easiest bridge between introductory and slightly more advanced statistics is provided by the idea of a transformation, that some transformation often makes the nonlinear (more nearly) linear and the non-normal (more nearly) normal.
An excellent bridge between introductory and the above is (some variations of taste and judgement here should be expected from statistical people) through the idea of generalized linear models. I would start with an account such as Dobson and Barnett which conveys well the idea that what you know can be extended.
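To make the transformation/GLM bridge concrete, here is a minimal R sketch (for illustration only; the simulated data, the gamma model, and the coefficient values are invented for the example, not taken from the answer above):
# Simulate a positive, right-skewed response whose mean grows exponentially
# with x, i.e. E[y] = exp(1 + 0.8 * x)
set.seed(1)
x <- runif(200, 0, 3)
y <- rgamma(200, shape = 5, rate = 5 / exp(1 + 0.8 * x))

fit_transform <- lm(log(y) ~ x)                      # linearize via a log transform
fit_glm <- glm(y ~ x, family = Gamma(link = "log"))  # model y on its original scale

coef(fit_transform)
coef(fit_glm)  # both recover a slope near 0.8; intercepts differ since E[log y] != log E[y]
Both fits handle the nonlinearity and non-normality with machinery that is still linear in the parameters.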
52,002 | Nonlinear Statistics? | The best answer to this may depend on your discipline, but I like Bruce Hansen's Econometrics textbook (free, online): http://www.ssc.wisc.edu/~bhansen/econometrics/Econometrics.pdf
See Section 9.1 for nonlinear least squares -- these results are really not very different from the linear case.
You might also be interested in some nonparametric techniques. The same guy also has good lecture notes on nonparametrics. Here is a link: http://www.ssc.wisc.edu/~bhansen/718/718.htm
52,003 | Probability Distribution - which to use Normal or Hypergeometric | The random variables $\{X_i\}$, defined as the amount of money family $i$ spends in a given month, are known to be uniformly distributed, that is, $X_i \sim \text{U}(500, 4500)$.
For a given sample of size 10, you are asked to compute the probability that at least 2 of them will spend more than \$3000. This is given by
$$
\begin{align}
&\sum_{j=2}^{10} {10\choose j}\mathbb{P}[X> 3000]^j \mathbb{P}[X\leq 3000]^{10-j}\\
=&\sum_{j=2}^{10} {10\choose j}\left(1-\mathbb{P}[X\leq 3000]\right)^j \mathbb{P}[X\leq 3000]^{10-j}\\
\end{align}
$$
using the fact that the families' spending amounts are distributed independently of each other.
Further, for uniformly distributed random variables, $X\sim \text{U}[l, u]$, we know that
$$
\mathbb{P}[X\leq x] = \dfrac{x-l}{u-l}
$$
Then we can write
$$
\begin{align}
&\sum_{j=2}^{10} {10\choose j}\left(1-\mathbb{P}[X\leq 3000]\right)^j \mathbb{P}[X\leq 3000]^{10-j} \\
=&\sum_{j=2}^{10} {10\choose j}\left(\frac{1500}{4000}\right)^j\left(\frac{2500}{4000}\right)^{10-j}
\end{align}
$$
This can be simplified to
$$
=1 - \left({10 \choose 1}\left(\frac{1500}{4000}\right)^1\left(\frac{2500}{4000}\right)^9 + \left(\frac{2500}{4000}\right)^{10}\right)
$$
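As a quick numerical check of the expression above (an added illustration, assuming R; the probability values follow from the uniform CDF formula given earlier), the direct sum, the complement form, and the built-in binomial CDF agree:
p <- (4500 - 3000) / (4500 - 500)                 # P(X > 3000) = 1500/4000 = 0.375
sum(dbinom(2:10, size = 10, prob = p))            # direct sum over j = 2, ..., 10
1 - (choose(10, 1) * p * (1 - p)^9 + (1 - p)^10)  # complement form from above
1 - pbinom(1, size = 10, prob = p)                # built-in CDF; all three are about 0.936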
52,004 | Probability Distribution - which to use Normal or Hypergeometric | Neither the normal distribution nor the hypergeometric distribution applies to this question. There is no underlying assumption of normality here: the probability distribution for the expenses of a single family selected at random is given to be uniform between 500 and 4500. Furthermore, the hypergeometric distribution is not applicable because whether a family's expenses exceed 3000 is independent of the other families sampled among the group of 10. The only two distributions you need for this question are uniform and binomial.
52,005 | Probability Distribution - which to use Normal or Hypergeometric | Thanks everyone. I've attached my working with the corrected numbers. Now I understand the rationale behind the workings above. :)
52,006 | Probability Distribution - which to use Normal or Hypergeometric | Although the correct answer is given, I'll give you some hints and a thought process here that may help you understand why the answer turns out the way it does.
The most important thing to arrive at the correct answer is to realize that you should be using the Binomial Distribution. How to see this?
The binomial distribution gives the probability of k successes in n independent yes/no experiments. In your case, the 'independent yes/no experiments' are the drawing of 10 families. That is, from a large population you randomly (i.e. the draws are independent) pick 10 families and look at their monthly expenses. One such draw is an experiment, and you make 10 of them.
Now, using the same naming conventions, we say that a family whose expenses exceed \$3000 is a 'success' (remember, this is just a naming convention to be consistent with the language of the Wikipedia article ;)). The probability of 'success', denoted p in the article, is thus the probability that a uniform random variable on $[500,4500]$ is larger than 3000.
By reading about the binomial distribution you now know how to find the probability of finding exactly k families with expenses over \$3000 in a random sample of 10 families. But the question asks about the probability of at least 2 such families out of 10. So, what to do? Note that the probability of at least 2 such families is the same as the probability of exactly 2 out of 10, plus the probability of exactly 3 out of 10, and so on all the way up to exactly 10 out of 10 such families.
However, you can also note that the complementary event of finding at least 2 such families is finding 0 or 1 such families. That is, if you do not find 2 or more such families in your 10 sampled, you must have found 0 or 1. So you can find the probability you seek by writing $$P(\text{At least 2 out of 10 families spend more than \$3000})=1 - P(\text{0 or 1 families spend more than \$3000})=1-P(\text{No family spends more than \$3000})-P(\text{1 family spends more than \$3000})$$Finally, you see that everything in this last row can be computed using the binomial distribution. I leave the actual computations to you.
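If you want to check your computations afterwards, a small simulation (an added sketch, assuming R; the simulation size is arbitrary) that mirrors the 'success' framing above should land close to the binomial answer:
set.seed(1)
# Each replication: draw 10 families with U(500, 4500) expenses and ask
# whether at least 2 of them spend more than 3000
sims <- replicate(1e5, sum(runif(10, 500, 4500) > 3000) >= 2)
mean(sims)  # approximately 0.936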
52,007 | What tools do Machine Learning experts use in the real world? | MATLAB was primarily developed for optimization and mathematical simulations in engineering problems. But yes, it has limitations when it comes to machine learning, optimization, etc., in terms of customizability.
Over time, most statistical analysis / machine learning has shifted to R and Python, because of an active community presence for development of almost any complex algorithm. You don't have to write code for SVMs or neural networks from scratch, unless you really want to change the algorithm itself, which is also possible, and which is what Google and Facebook do internally.
So if you want to try machine learning for study purposes, WEKA, R and Python will do the job. But if you really want to develop data analytics products based on these algorithms, Python and R are the way to go. R has a steep learning curve, though.
WEKA became popular earlier in industry because most analytics practitioners came from an IT (information technology) background and hence were comfortable with Java, but over time mathematicians and computer scientists have moved to R and Python.
52,008 | What tools do Machine Learning experts use in the real world? | If scale is not an issue then any solution you probably already know is fine. It is more a matter of personal and company choice (cost/legacy issues). So it doesn't really matter if it's going to be R or MATLAB, Python or Java, Weka or RapidMiner, open source tools or proprietary code. However, big players like those you mention have to deal with scale.
If scale is the issue then obviously you can't deploy any fancy algorithm with complexity higher than $O(n)$, like SVMs in the dual or kNN or so many others. Even more, you have to go for algorithms and implementations that are made to work in distributed environments: data are scattered across more than one machine, and only limited communication is allowed between machines. Obvious choices are algorithms based on Stochastic Gradient Descent, like Vowpal Wabbit (created at Yahoo! labs). You also have libraries that run on top of Hadoop (an open-source implementation of the MapReduce framework developed at Google), like Mahout.
Problems and challenges in the big data environment are unlimited. For example, a common assumption is that the model $w$ you try to learn lives on a single machine, which is fine until you start having a few hundred servers reading and updating that same model in a production environment. You can look up papers and video lectures from NIPS and ICML by Yahoo!, Google, Facebook and others where they discuss similar issues they deal with and the solutions they deploy (search for scalability).
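To make the scaling point concrete, here is a toy R sketch (added for illustration; real systems use tools like Vowpal Wabbit rather than a loop like this, and the data, learning rate, and coefficients are invented) of stochastic gradient descent for least squares: each update touches a single observation, which is why it streams well and never needs more than one pass's worth of $O(n)$ work.
set.seed(1)
n <- 1e5
X <- cbind(1, matrix(rnorm(n * 2), ncol = 2))  # intercept plus two predictors
beta_true <- c(1, 2, -3)
y <- X %*% beta_true + rnorm(n)

w <- rep(0, 3)      # model being learned
rate <- 0.01        # constant learning rate, chosen by hand for this toy example
for (i in sample(n)) {
  err <- sum(X[i, ] * w) - y[i]     # residual for one observation
  w <- w - rate * err * X[i, ]      # gradient step using that observation only
}
w  # roughly recovers beta_true = c(1, 2, -3)
Because each step needs only one row, the same update rule works on data streamed from disk or sharded across machines.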
52,009 | What tools do Machine Learning experts use in the real world? | MATLAB is a great tool. However for Machine Learning simulation there is an increasing interest in R. R is emerging as a great platform for Machine Learning, Data Mining, Statistical Modelling (etc.) tasks. R has got a rich range of packages for statistical modelling.
With the emergence of Big Data R has got a positive edge. If you need to perform your computation on large and distributed sets of data then R is a great tool. R has got integration APIs for Hadoop and also for Spark.
Talking about production systems: real-life intelligent software products can be created using R. In such products you could use a mix of technologies. For example, if you need to create an enterprise analytics application, you could program the core logic of the application in Java and use R for the statistical and modelling aspects. The following articles give a nice explanation of how to integrate R with Java:
R Tutorial: How to integrate R with Java using Rserve
R Tutorial: How to integrate R with Java using rJava
So other environments are also great, but R is emerging on par with them.
I hope this will be of some help.
52,010 | Why do they call it "sampling distribution of the sample mean"? | Within a particular setting where the type of distribution is known or implied, "distribution of the sample mean" works just fine. But in general would the "distribution of the sample mean" be its sampling distribution, a bootstrap distribution, a permutation distribution, or perhaps something else?
The existence of different kinds of distribution of a sample statistic requires some linguistic method of disambiguation. Without that, you lose precision and perhaps miscommunicate with your audience.
52,011 | Why do they call it "sampling distribution of the sample mean"? | For a given data set, the sample mean provides a single estimate of the population mean. This estimate is a constant and thus its distribution is rather boring.
In contrast, the sampling distribution of the mean refers to the frequentist approach of considering the distribution of the sample mean across many hypothetical samples drawn from the same population.
So it kind of makes sense to use a 'new' word.
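A small simulation (an added illustration, assuming R; the exponential population and sample size are arbitrary choices) makes the distinction concrete: one sample gives a single, fixed sample mean, while repeating the sampling gives the sampling distribution of the mean.
set.seed(1)
population <- rexp(1e6, rate = 1)                 # a stand-in population with mean 1

mean(sample(population, 30))                      # one sample: a single number

# Many hypothetical samples of size 30: the sampling distribution of the mean
sample_means <- replicate(10000, mean(sample(population, 30)))
hist(sample_means)                                # centred near 1, roughly normal
sd(sample_means)                                  # close to 1 / sqrt(30)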
52,012 | Why do they call it "sampling distribution of the sample mean"? | You can have a sampling distribution of statistics other than the mean, such as the estimated median or the estimated variance.
Sometimes "sampling distribution" might be a loose term referring to the estimated mean and estimated variance of the sample taken together (with the unspoken assumption that the distribution of sample means is approximately normal). | Why do they call it "sampling distribution of the sample mean"? | You can have a sampling distribution of other statistics than the mean, such as the estimated median, or estimated variance.
Sometimes "sampling distribution" might be a loose term referring to the es | Why do they call it "sampling distribution of the sample mean"?
You can have a sampling distribution of other statistics than the mean, such as the estimated median, or estimated variance.
Sometimes "sampling distribution" might be a loose term referring to the estimated mean and estimated variance of the sample taken together (with the unspoken assumption that the distribution of sample means is approximately normal). | Why do they call it "sampling distribution of the sample mean"?
You can have a sampling distribution of other statistics than the mean, such as the estimated median, or estimated variance.
Sometimes "sampling distribution" might be a loose term referring to the es |
52,013 | What is the test statistic in Kolmogorov–Smirnov test? | Note that the Kolmogorov-Smirnov test statistic is very clearly defined in the immediately previous section:
$$D_n=\sup_x|F_n(x)-F(x)|\,.$$
The reason they discuss $\sqrt{n}D_n$ in the next section is that the standard deviation of the distribution of $D_n$ goes down as $1/\sqrt n$, while $\sqrt{n}D_n$ converges in distribution as $n\to\infty$.
Yes, the number of points, $n$, matters to the distribution; for small $n$, tables are given for each sample size, and for large $n$ the asymptotic distribution is given for $\sqrt{n}D_n$ $-$ the very same distribution discussed in the section you quote.
Without some result on asymptotic convergence in distribution, you'd have the problem that you'd have to keep producing tables at larger and larger $n$, but since the distribution of $\sqrt{n}D_n$ pretty rapidly 'stabilizes', only a table with small values of $n$ is required, up to a point where approximating $\sqrt{n}D_n$ by the limiting Kolmogorov distribution is sufficiently good.
Below is a plot of exact 5% and 1% critical values for $D_n$, and the corresponding asymptotic critical values, $K_\alpha/\sqrt n$.
Most tables finish giving the exact critical values for $D_n$ and swap to giving the asymptotic values for $\sqrt n D_n$, $K_\alpha$ (as a single table row) somewhere between $n=20$ and $n=40$, from which the critical values of $D_n$ for any $n$ can readily be obtained.
$\text{Responses to followup questions:}$
1)
How do we obtain the distribution of $D_n$ when $n$ is fixed?
There are a variety of methods for obtaining the distribution of the test statistic for small $n$; for example, recursive methods build the distribution at some given sample size in terms of the distribution for smaller sample sizes.
There's discussion of various methods given here, for example.
2)
If I get the value of $D_\text{max}$ and the sample size is $n$, I have to calculate $Pr(K<=x)$, right?
Your test statistic is your observed sample value of the $D_n$ random variable, which will be some value, $d_n$ (what you're calling $D_\text{max}$, but note the usual convention of upper case for random variables and lower case for observed values). You compare it with the null distribution of $D_n$. Since the rejection rule would be "reject if the distance is 'too big'.", if it is to have level $\alpha$, that means rejecting when $d_n$ is bigger than the $1-\alpha$ quantile of the null distribution.
That is, you either take the p-value approach and compute $P(D_n> d_n)=1-P(D_n\leq d_n)$ and reject when that's $\leq\alpha$ or you take the critical value approach and compute a critical value, $d_\alpha$, which cuts off an upper tail area of $\alpha$ on the null distribution of $D_n$, and reject when $d_n \geq d_\alpha$.
By formula 14.3.9 of Numerical Recipes, we should calculate a value got from the expression in the brackets - should that be the x?
14.3.9 looks like it has a typo (one of many in NR). It is trying to give an approximate formula for the p-value of "observed" (that is, my "$d_n$", your $D_\text{max}$), by adjusting the observed value so you can use the asymptotic distribution for even very small $n$ (in my diagram, that corresponds to changing the $y$-value of the observed test statistic via a function of $n$, equivalent to pushing the circles 'up' to lie very close the dotted lines) but then it (apparently by mistake) puts the random variable (rather than the observed value, as it should) into the RHS of the formula. The actual p-value must be a function of the observed statistic.
3)
We make tests and get a distribution, right?
I don't know what you mean to say there.
Could you please explain your figure in a "test" way?
My figure plots the 5% and 1% critical values of the null distribution of $D_n$ for sample sizes 1 to 40 (the circles) and also the value from the asymptotic approximation $K_\alpha/\sqrt n$ (the lines).
It looks to me like you have some basic issues with understanding hypothesis tests that's getting in the way of understanding what is happening here. I suggest you work on understanding the mechanics of hypothesis tests first.
That means there is no error in 14.4.9 of NR .
(Presumably you mean 14.3.9, since that's what I was discussing.)
Yes there is an error. I think you may have misunderstood where the problem is.
The problem isn't with "$(\sqrt{n}+0.12+0.11/\sqrt{n})$". It's with the meaning of the term they multiply it by. They appear to have used the wrong variable from the LHS in the RHS formula, putting the random variable where its observed value should be.
[When the thing you're reading is confused about that, it's not surprising you have a similar confusion.]
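To tie the pieces above together, here is a short R sketch (added for illustration; it assumes a simulated sample and a standard-normal null, and the 100-term truncation of the Kolmogorov series is my choice) that computes the observed $d_n$, the asymptotic p-value via the distribution of $\sqrt{n}D_n$, and compares with ks.test():
set.seed(42)
x <- rnorm(35)                  # simulated sample, n = 35
n <- length(x)
xs <- sort(x)

# d_n = sup_x |F_n(x) - F(x)|, evaluated at the jump points of the ECDF
d_n <- max(pmax(abs((1:n) / n - pnorm(xs)), abs((0:(n - 1)) / n - pnorm(xs))))

# Asymptotic p-value: P(K > sqrt(n) * d_n) under the Kolmogorov distribution
k <- sqrt(n) * d_n
p_asym <- 2 * sum((-1)^((1:100) + 1) * exp(-2 * (1:100)^2 * k^2))

c(d_n = d_n, p_asymptotic = p_asym)
ks.test(x, "pnorm")             # same D statistic; its p-value may be exact rather than asymptotic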
52,014 | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test? | It is difficult to tell without the data to reproduce things. You didn't list the caret code so I'm not sure what was done there.
I think that the bottom line is that you have 44 samples and 10-fold CV, known to have high variance, is not going to give you repeatable results. I would suggest using several repeats of 10FCV (via trainControl's method = "repeatedcv" option) or go to the bootstrap and accept that your RMSE estimates are going to be a little pessimistic.
HTH,
Max
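A sketch of that suggestion (an added illustration, not Max's actual code; it assumes the caret and glmnet packages, the xMatrix and y objects from the question, and placeholder tuning settings):
library(caret)

set.seed(1)
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10)

# caret tunes alpha and lambda for glmnet over the repeated resamples
fit <- train(x = xMatrix, y = y, method = "glmnet",
             trControl = ctrl, tuneLength = 10)
fit$bestTune      # the (alpha, lambda) pair chosen from the resampled performance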
52,015 | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test? | The reason you're getting different lambda values is that every time you call
cv.glmnet(xMatrix, y, alpha=0.5, nfolds=10), you're essentially creating different cross-validation folds. To retain the same lambda value, you need to make sure you're using the same cross-validation folds every time, so you might want to try initializing a random number seed prior to invoking cv.glmnet(xMatrix, y, alpha=0.5, nfolds=10). Try running this whole chunk of code repeatedly:
set.seed(1)
cvGlmnet <- cv.glmnet(xMatrix, y, alpha=0.5, nfolds=10)
cvGlmnet$lambda.min
You'll see that lambda remains the same. Hence, you'll have reproducible results! MichaelJ in the comments below your query essentially answered your question.
52,016 | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test? | If you want to get repeatable optimal lambda & alpha, you can use leave-one-out CV (which doesn't support AUC though).
cv <- cv.glmnet(x, y, alpha = 1, nfolds = nrow(x))
It's pretty normal for an n-fold CV to give you high variance, since the sample partitions are generated randomly.
cv <- cv.glmnet(x,y,alpha=1,nfolds=nrow(x))
It's pretty normal for an n-fold CV | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test?
If you want to get repeatable optimal lambda & alpha, you can use leave-one-out CV (which doesn't support AUC though).
cv <- cv.glmnet(x,y,alpha=1,nfolds=nrow(x))
It's pretty normal for an n-fold CV to bring you very huge variance, since the partitions of sample are generated randomly. | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test?
If you want to get repeatable optimal lambda & alpha, you can use leave-one-out CV (which doesn't support AUC though).
cv <- cv.glmnet(x,y,alpha=1,nfolds=nrow(x))
It's pretty normal for an n-fold CV |
52,017 | Why does lambda.min value in glmnet tuning cross-validation change, when repeating test? | Your methodology is not great for reproducibility: you set.seed(1) once then run cv.glmnet() 100 times. Each of those calls to cv.glmnet() is itself calling sample() N times. So if the length of your data ever changes, the reproducibility changes.
Better to explicitly set.seed() right before each run. Or else keep the foldids constant across runs (use the utility functions from caret or ).
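One way to hold the folds fixed across runs (an added sketch; cv.glmnet accepts a foldid argument, caret's createFolds is one convenient way to build it, and xMatrix/y are the objects from the question):
library(glmnet)
library(caret)

set.seed(1)
foldid <- createFolds(y, k = 10, list = FALSE)   # one fixed fold label per observation

# Reusing the same foldid vector gives the same lambda.min on every run
cv1 <- cv.glmnet(xMatrix, y, alpha = 0.5, foldid = foldid)
cv2 <- cv.glmnet(xMatrix, y, alpha = 0.5, foldid = foldid)
c(cv1$lambda.min, cv2$lambda.min)                # identical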
52,018 | What is lambda in an elastic net model (penalized regression)? | You're confused; $\alpha$ and $\lambda$ are totally different.
$\alpha$ sets the degree of mixing between ridge regression and lasso: when $\alpha = 0$, the elastic net does the former, and when $\alpha = 1$, it does the latter. Values of $\alpha$ between those extremes will give a result that is a blend of the two.
Meanwhile, $\lambda$ is the shrinkage parameter: when $\lambda = 0$, no shrinkage is performed, and as $\lambda$ increases, the coefficients are shrunk ever more strongly. This happens regardless of the value of $\alpha$.
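In glmnet terms (an added sketch, assuming a predictor matrix x and response y; glmnet's documented penalty has the form $\lambda\left[(1-\alpha)\|\beta\|_2^2/2 + \alpha\|\beta\|_1\right]$, so $\alpha$ picks the mix and $\lambda$ the overall strength):
library(glmnet)

fit_ridge <- glmnet(x, y, alpha = 0)     # pure ridge
fit_lasso <- glmnet(x, y, alpha = 1)     # pure lasso
fit_enet  <- glmnet(x, y, alpha = 0.5)   # a 50/50 blend

# For a fixed alpha, cv.glmnet chooses lambda by cross-validation;
# alpha itself is not tuned here and must be chosen separately.
cv_enet <- cv.glmnet(x, y, alpha = 0.5)
cv_enet$lambda.min
coef(fit_enet, s = cv_enet$lambda.min)   # coefficients at that amount of shrinkage
A common workflow is to run cv.glmnet over a small grid of alpha values and keep the combination with the lowest cross-validated error.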
52,019 | Repeated measures within factors settings for G*Power power calculation | GPower is assuming you have your data set up so that a row is a case (often a person), and a column is a measure.
For example, if we measured Y on three occasions, we'd have Y1, Y2, Y3, and we'd have three measures.
The groups are when you have a between-case predictor - for example gender or experimental group. So when you have a 2x2 repeated measures design, you have four measures.
However, GPower assumes that you want to do 1 test with 3 df, which you don't: you want to do 3 tests, each with 1 df (2 main effects, 1 interaction). I suspect that you should therefore be entering two as the number of measures. (And then the other parameters depend upon which of the effects you want to base your power on). Power analysis for this type of design gets complicated rapidly, and it's not clear how to enter the appropriate parameters into GPower. I prefer one of two other approaches which allow you to enter the data in matrix format.
First, D'Amico, et al, showed how to do this in SPSS, using the (old) manova command: paper here: http://www.ncbi.nlm.nih.gov/pubmed/11816450
Second, I showed how to do this as a structural equation model, in a paper here: http://www.biomedcentral.com/1471-2288/3/27/
Both of those approaches are a bit trickier to start with, but they are a lot more flexible.
A third approach is to ignore the fact that it's repeated measures when estimating power. The higher the correlations between your measures, the more power you have. But you don't know what those will be. If you estimate power as if the measures were independent, you know that your power analysis is conservative. The problem is that if the correlations are high, the power analysis might be very, very conservative.
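The last point, that power grows with the correlation between the repeated measures, can be seen in a small simulation (an added sketch assuming R and the MASS package; the sample size, effect size, and correlations are arbitrary), here for the simplest pre/post case analysed with a paired t-test:
library(MASS)  # for mvrnorm

# Estimated power of a paired t-test (n = 20, true effect = 0.5 SD) as a
# function of the correlation between the two repeated measures
power_for_rho <- function(rho, n = 20, effect = 0.5, nsim = 2000) {
  Sigma <- matrix(c(1, rho, rho, 1), 2, 2)
  mean(replicate(nsim, {
    d <- mvrnorm(n, mu = c(0, effect), Sigma = Sigma)
    t.test(d[, 1], d[, 2], paired = TRUE)$p.value < 0.05
  }))
}

set.seed(1)
sapply(c(0, 0.5, 0.8), power_for_rho)  # power climbs as the correlation grows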
52,020 | Repeated measures within factors settings for G*Power power calculation | I had the same question, so I sent an e-mail to the G*Power team. They informed me that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs with more than one within-subject or between-subject factor. It is possible using the "Generic F test" option, but this is considerably more complicated.
So the design with 2 within-subject factors in the original post is currently unsupported: the most complex design that is currently supported by the "ANOVA: repeated measures" option can have a maximum of one between-subject and one within-subject variable (i.e. the repeated measure). In that case:
"Number of groups" is simply the number of levels in your between-subject factor. So say your design contains a factor "gender", the number of groups would be 2 (for male and female). If there is no between-subjects factor, you would enter 1.
"Number of measurements" is simply the number of levels in your within-subject factor/repeated measure. So if you collected data at 4 different time points for example, the number of measurements would be 4. | Repeated measures within factors settings for G*Power power calculation | I had the same question, so I sent an e-mail to the G*Power team. They informed me that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs wit | Repeated measures within factors settings for G*Power power calculation
I had the same question, so I sent an e-mail to the G*Power team. They informed me that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs with more than one within-subject or between-subject factor. It is possible using the "Generic F test" option, but this is considerably more complicated.
So the design with 2 within-subject factors in the original post is currently unsupported: The most complex design that is currently supported by the "ANOVA: repeated measures" option can have a maximum of one between-subject and one-within subject variable (i.e. the repeated measure). In that case:
"Number of groups" is simply the number of levels in your between-subject factor. So say your design contains a factor "gender", the number of groups would be 2 (for male and female). If there is no between-subjects factor, you would enter 1.
"Number of measurements" is simply the number of levels in your within-subject factor/repeated measure. So if you collected data at 4 different time points for example, the number of measurements would be 4. | Repeated measures within factors settings for G*Power power calculation
I had the same question, so I sent an e-mail to the G*Power team. They informed me that the current version of G*Power (3.1.9.2) cannot conveniently do power analyses for repeated measures designs wit |
52,021 | Repeated measures within factors settings for G*Power power calculation | Jumping in a bit late, but I figured I'd build on @Jeremy's response and add some clarifying examples (as the 2x2 RM design is a bit ambiguous to me).
Assuming the "ANOVA: RM, within factors" option is where we're at, I believe the "number of groups" refers to the number of between-subjects LEVELS (not factors) that you have. Thus if you have two between-subjects factors, one with 2 levels and one with 3, you will have 2x3=6 as your "number of groups". With just one between subjects factor, you simply enter the number of levels in the factor as your groups (for sex, you'd have 2 groups (male/female)). Likewise, entering 1 would indicate that all members are from the same group.
As you have mentioned, "number of measurements" is the number of times that you measure each person. For example, a pre-post would be 2 measurements, and measuring each participant under 3 exercise conditions would be 3 measurements.
A full, simple example would be wishing to look at a pre-post test math score across low, medium, and high socio-economic status (SES). The "number of groups" would be 3 (for the three levels of SES), and the "number of measurements" would be 2, for the two different tests given to each person (pre and post). As we're looking at the sample size to detect within-subject differences, the sample size given this input would be the one needed to detect the difference between pre- and post-test scores.
52,022 | Repeated measures within factors settings for G*Power power calculation | The problem is that you cannot enter "1" in the "number of groups" window.
Therefore, it seems it is not possible to use G*Power for RM ANOVAs with no between-subject factors. | Repeated measures within factors settings for G*Power power calculation | The problem is that you cannot enter "1" in the "number of groups" window.
Therefore, it seems it is not possible to use G*Power for RM ANOVAs with no between-subject factors. | Repeated measures within factors settings for G*Power power calculation
The problem is that you cannot enter "1" in the "number of groups" window.
Therefore, it seems it is not possible to use G*Power for RM ANOVAs with no between-subject factors. | Repeated measures within factors settings for G*Power power calculation
The problem is that you cannot enter "1" in the "number of groups" window.
Therefore, it seems it is not possible to use G*Power for RM ANOVAs with no between-subject factors.
52,023 | Omit 0 lag order in ACF plot | Another possible solution is as follows:
# Create an "acf" object called z
z <- acf(dummy)
# Check class of the object
class(z)
# View attributes of the "acf" object
attributes(z)
# Use "acf" attribute to view the first 13 elements (1 = lag at 0)
z$acf[1:13]
# Get rid of the first element (i.e. lag 0)
z$acf[2:13]
# Plot the autocorrelation function without lag 0
plot(z$acf[2:13],
type="h",
main="Autocorrelation Function",
xlab="Lag",
ylab="ACF",
ylim=c(-0.2,0.2), # this sets the y scale to -0.2 to 0.2
las=1,
xaxt="n")
abline(h=0)
# Add labels to the x-axis
x <- c(1:12)
y <- c(1:12)
axis(1, at=x, labels=y)
So far, this answers the original question. After running the code you should see a plot like the one shown below.
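As a side note before turning to the significance bands: a more compact way to get the same lag-0-free plot is to drop lag 0 from the "acf" object itself and let plot() do the rest. This is only a sketch: it assumes the internal layout of the object (3-dimensional $acf and $lag arrays), which is not a documented interface.
z <- acf(dummy, plot = FALSE)
z$acf <- z$acf[-1, , , drop = FALSE]  # remove the lag-0 autocorrelation (always 1)
z$lag <- z$lag[-1, , , drop = FALSE]  # remove the matching lag value
plot(z)                               # the plot now starts at lag 1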
With regard to adding significance bands to the autocorrelation function, it will first be necessary, in this case, to choose a larger scale on the y axis. Otherwise, the significant bands will be outside the range and we won't be able to see them.
For example:
# Plot the autocorrelation function without lag 0
plot(z$acf[2:13],
type="h",
main="Autocorrelation Function",
xlab="Lag",
ylab="ACF",
ylim=c(-1,1), # this sets the y scale to -1 to 1
las=1,
xaxt="n")
abline(h=0)
# Add labels to the x-axis
x <- c(1:12)
y <- c(1:12)
axis(1, at=x, labels=y)
# Add 5% critical levels
abline(h=c(2/sqrt(17),-2/sqrt(17)),lty=c(2,2))
Since you'd probably prefer to have Bartlett's approximations rather than those 5% critical values, you can do the following:
# Store length of dummy
n <- length(dummy)
# Create a vector to store Bartlett's standard errors
bart.error <- c()
# Use a loop to calculate Bartlett's standard errors
for (k in 1:n) {
ends <- k-1
bart.error[k] <- ((1 + sum((2*z$acf[0:(ends)]^2)))^0.5)*(n^-0.5)
}
# Create upper bound of interval (two standard errors above zero)
upper.bart <- 2*bart.error[1:12]
# Create lower bound of interval (two standard errors below zero)
lower.bart <- 2*-bart.error[1:12]
# Add intervals based on Bartlett's approximations to ACF plot
lines(upper.bart, lty=2, col="red"); lines(lower.bart, lty=2, col="red")
After running the code, you should see something like the plot below. The black dashed lines are the 5% critical values and the red dashed lines is the interval based on Bartlett's standard errors.
I hope this answers all of your questions. | Omit 0 lag order in ACF plot | Another possible solution is as follows:
# Create an "acf" object called z
z <- acf(dummy)
# Check class of the object
class(z)
# View attributes of the "acf" object
attributes(z)
# Use "acf" attribut | Omit 0 lag order in ACF plot
Another possible solution is as follows:
# Create an "acf" object called z
z <- acf(dummy)
# Check class of the object
class(z)
# View attributes of the "acf" object
attributes(z)
# Use "acf" attribute to view the first 13 elements (1 = lag at 0)
z$acf[1:13]
# Get rid of the first element (i.e. lag 0)
z$acf[2:13]
# Plot the autocorrelation function without lag 0
plot(z$acf[2:13],
type="h",
main="Autocorrelation Function",
xlab="Lag",
ylab="ACF",
ylim=c(-0.2,0.2), # this sets the y scale to -0.2 to 0.2
las=1,
xaxt="n")
abline(h=0)
# Add labels to the x-axis
x <- c(1:12)
y <- c(1:12)
axis(1, at=x, labels=y)
So far, this answers the original question. After running the code you should see a plot like the one shown below.
With regard to adding significance bands to the autocorrelation function, it will first be necessary, in this case, to choose a larger scale on the y axis. Otherwise, the significant bands will be outside the range and we won't be able to see them.
For example:
# Plot the autocorrelation function without lag 0
plot(z$acf[2:13],
type="h",
main="Autocorrelation Function",
xlab="Lag",
ylab="ACF",
ylim=c(-1,1), # this sets the y scale to -1 to 1
las=1,
xaxt="n")
abline(h=0)
# Add labels to the x-axis
x <- c(1:12)
y <- c(1:12)
axis(1, at=x, labels=y)
# Add 5% critical levels
abline(h=c(2/sqrt(17),-2/sqrt(17)),lty=c(2,2))
Since you'd probably prefer to have Bartlett's approximations rather than those 5% critical values, you can do the following:
# Store length of dummy
n <- length(dummy)
# Create a vector to store Bartlett's standard errors
bart.error <- c()
# Use a loop to calculate Bartlett's standard errors
for (k in 1:n) {
ends <- k-1
bart.error[k] <- ((1 + sum((2*z$acf[0:(ends)]^2)))^0.5)*(n^-0.5)
}
# Create upper bound of interval (two standard errors above zero)
upper.bart <- 2*bart.error[1:12]
# Create lower bound of interval (two standard errors below zero)
lower.bart <- 2*-bart.error[1:12]
# Add intervals based on Bartlett's approximations to ACF plot
lines(upper.bart, lty=2, col="red"); lines(lower.bart, lty=2, col="red")
After running the code, you should see something like the plot below. The black dashed lines are the 5% critical values and the red dashed lines is the interval based on Bartlett's standard errors.
I hope this answers all of your questions. | Omit 0 lag order in ACF plot
Another possible solution is as follows:
# Create an "acf" object called z
z <- acf(dummy)
# Check class of the object
class(z)
# View attributes of the "acf" object
attributes(z)
# Use "acf" attribut |
52,024 | Omit 0 lag order in ACF plot | Use the Acf function from the forecast package. | Omit 0 lag order in ACF plot | Use the Acf function from the forecast package. | Omit 0 lag order in ACF plot
Use the Acf function from the forecast package. | Omit 0 lag order in ACF plot
Use the Acf function from the forecast package. |
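For example (a small usage sketch; unlike stats::acf(), forecast's Acf() omits the lag-0 spike by default):
library(forecast)
set.seed(1)
x <- rnorm(100)
Acf(x, lag.max = 20)  # same bars as acf(x), but without the lag-0 line at 1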
52,025 | Omit 0 lag order in ACF plot | Use this code:
Suppose, for example:
x = rnorm(100) ## A typical white noise process
plot(acf(x,plot=F)[1:20]) | Omit 0 lag order in ACF plot | Use this code:
suppose;
x = rnorm(100) ## A typical white noise process
plot(acf(x,plot=F)[1:20]) | Omit 0 lag order in ACF plot
Use this code:
suppose;
x = rnorm(100) ## A typical white noise process
plot(acf(x,plot=F)[1:20]) | Omit 0 lag order in ACF plot
Use this code:
suppose;
x = rnorm(100) ## A typical white noise process
plot(acf(x,plot=F)[1:20]) |
52,026 | Omit 0 lag order in ACF plot | Set the xlim and ylim. For example:
acf(x, lag.max = 20, xlim = c(1, 20), ylim = c(-0.2, 0.5)) | Omit 0 lag order in ACF plot | Set the xlim and ylim. For example:
acf(x, lag.max = 20, xlim = c(1, 20), ylim = c(-0.2, 0.5)) | Omit 0 lag order in ACF plot
Set the xlim and ylim. For example:
acf(x, lag.max = 20, xlim = c(1, 20), ylim = c(-0.2, 0.5)) | Omit 0 lag order in ACF plot
Set the xlim and ylim. For example:
acf(x, lag.max = 20, xlim = c(1, 20), ylim = c(-0.2, 0.5))
52,027 | Multiple curves when plotting a random forest [closed] | When there is no test result (ytest was empty for training), plot shows:
for classification, a black solid line for the overall OOB error and one coloured line per class showing that class's error rate (i.e. 1 minus that class's recall).
for regression, a single black solid line for the OOB MSE.
When a test set is present, the documentation (?plot.randomForest) claims additional lines should appear (for the corresponding measures calculated on the test set), but they don't, because of a bug in the randomForest code.
If you want to customize this plot, it is better to just access interesting elements ($err.rate, $test$err.rate, $mse or $test$mse) and combine them into a plot you want to have. | Multiple curves when plotting a random forest [closed] | When there is no test result (ytest was empty for training), plot shows:
for classification, black solid line for overall OOB error and a bunch of colour lines, one for each class' error (i.e. 1-this | Multiple curves when plotting a random forest [closed]
When there is no test result (ytest was empty for training), plot shows:
for classification, black solid line for overall OOB error and a bunch of colour lines, one for each class' error (i.e. 1-this class recall).
for regression, one black solid line for OOB MSE error.
When test is present, documentation (?plot.randomForest) claims additional lines should appear (for respective measures calculated on the test set), but they don't because there is a bug in the randomForest's code.
If you want to customize this plot, it is better to just access interesting elements ($err.rate, $test$err.rate, $mse or $test$mse) and combine them into a plot you want to have. | Multiple curves when plotting a random forest [closed]
When there is no test result (ytest was empty for training), plot shows:
for classification, black solid line for overall OOB error and a bunch of colour lines, one for each class' error (i.e. 1-this |
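To illustrate the last suggestion (accessing $err.rate directly), here is a minimal sketch using the iris data; the column names of err.rate ("OOB" plus one per class) make a legend straightforward:
library(randomForest)
set.seed(1)
rf <- randomForest(Species ~ ., data = iris, ntree = 200)
# err.rate: one row per tree, columns = OOB error plus one error rate per class
matplot(rf$err.rate, type = "l", lty = 1, xlab = "trees", ylab = "error rate")
legend("topright", legend = colnames(rf$err.rate),
       col = 1:ncol(rf$err.rate), lty = 1, cex = 0.8)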
52,028 | Multiple curves when plotting a random forest [closed] | I have come across with the same issue and I found a link where it shows the graph you get when plotting the random forest model: http://statweb.stanford.edu/~jtaylo/courses/stats202/ensemble.html
If you scroll down, there is the plot(model) graph and the graph with four curves: one for the oob error in black and three corresponding to the error rates for each class ( Setosa, Versicolor and Virginica in the example). This kind of fits with my case where I just have two classes, so I get a black curve (for the oob error) and two coloured curves for the two classes I've got.
Now, the example matched red=Setosa, green=Versicolor and blue=Virginica. In my case, I've got a binary class: 0 and 1. To know which colour was which, I executed print(rf.model) and that gave me a confusion matrix with the class.error. There, I could match that my class=0 was the red curve and class=1 was the green curve. That seems a reasonable way to know which curve matches which class (if your curves are not very close together), and then you can use the legend command to improve the plot.
Hope that helps. | Multiple curves when plotting a random forest [closed] | I have come across with the same issue and I found a link where it shows the graph you get when plotting the random forest model: http://statweb.stanford.edu/~jtaylo/courses/stats202/ensemble.html
If | Multiple curves when plotting a random forest [closed]
I have come across with the same issue and I found a link where it shows the graph you get when plotting the random forest model: http://statweb.stanford.edu/~jtaylo/courses/stats202/ensemble.html
If you scroll down, there is the plot(model) graph and the graph with four curves: one for the oob error in black and three corresponding to the error rates for each class ( Setosa, Versicolor and Virginica in the example). This kind of fits with my case where I just have two classes, so I get a black curve (for the oob error) and two coloured curves for the two classes I've got.
Now, the example matched red=Setosa, green=Versicolor and blue=Virginica. In my case, i've got a binary class: 0 and 1. To know which colour was which, I executed print(rf.model) and that gave me a confusion matrix with the class.error. There, I could sort of match that my class=0 was the red and the class=1 was the green colour. That seems a reasonable way to know which curve matches which class (if your curves are not very close together) and then you can use the command legend to improve the plot.
Hope that helps. | Multiple curves when plotting a random forest [closed]
I have come across with the same issue and I found a link where it shows the graph you get when plotting the random forest model: http://statweb.stanford.edu/~jtaylo/courses/stats202/ensemble.html
If |
52,029 | Multiple curves when plotting a random forest [closed] | I know this post is a little bit old, but I just came across the same problem and solved it like this:
rndF1 <- randomForest(train.X, train.Y, test.X, test.Y)
plot(rndF1)
rndF1.legend <- if (is.null(rndF1$test$err.rate)) {
  colnames(rndF1$err.rate)
} else {
  colnames(rndF1$test$err.rate)
}
legend("top", cex =0.5, legend=rndF1.legend, lty=c(1,2,3), col=c(1,2,3), horiz=T)
You can try not providing test sets, to see how the plot and legend change.
Note that my problem had two classes, therefore "lty" and "col" are of length 3, that will need customization. | Multiple curves when plotting a random forest [closed] | I know this post is a little bit old, but I just came across the same problem and solved it like this:
rndF1 <- randomForest(train.X, train.Y, test.X, test.Y)
plot(rndF1)
rndF1.legend <- if (is.null( | Multiple curves when plotting a random forest [closed]
I know this post is a little bit old, but I just came accross the same problem and solved it like this:
rndF1 <- randomForest(train.X, train.Y, test.X, test.Y)
plot(rndF1)
rndF1.legend <- if (is.null(rndF1$test$err.rate)) {
  colnames(rndF1$err.rate)
} else {
  colnames(rndF1$test$err.rate)
}
legend("top", cex =0.5, legend=rndF1.legend, lty=c(1,2,3), col=c(1,2,3), horiz=T)
You can try not providing test sets, to see how the plot and legend change.
Note that my problem had two clases, therefore "lty" and "col" are of length 3, that will need customization. | Multiple curves when plotting a random forest [closed]
I know this post is a little bit old, but I just came accross the same problem and solved it like this:
rndF1 <- randomForest(train.X, train.Y, test.X, test.Y)
plot(rndF1)
rndF1.legend <- if (is.null( |
52,030 | Multiple curves when plotting a random forest [closed] | If the lines are very close, I think it is not necessary to distinguish them. Otherwise, you can print the values in $err.rate and compare them with the lines. Or you can use only several trees in the function randomForest(Species~., iris,importance=TRUE,ntree=24) and then plot it; then it would be easier to tell the lines apart and find the correspondence between color and classes. Then you will know the order of the color in the function plot(randomforestmodel). | Multiple curves when plotting a random forest [closed] | If the lines are very close, I think it is not necessary to distinct them. Otherwise, you can output the value by $err.rate. and compare with the line. Or you can use only several trees in the functio | Multiple curves when plotting a random forest [closed]
If the lines are very close, I think it is not necessary to distinguish them. Otherwise, you can print the values in $err.rate and compare them with the lines. Or you can use only several trees in the function randomForest(Species~., iris,importance=TRUE,ntree=24) and then plot it; then it would be easier to tell the lines apart and find the correspondence between color and classes. Then you will know the order of the color in the function plot(randomforestmodel). | Multiple curves when plotting a random forest [closed]
If the lines are very close, I think it is not necessary to distinct them. Otherwise, you can output the value by $err.rate. and compare with the line. Or you can use only several trees in the functio |
52,031 | Multiple curves when plotting a random forest [closed] | The plot() function can be very useful here. In fact, you can do something like this to change the labels that appear on the plot. The plot of the RandomForest will show error rates, which are very useful for analyzing the performance of the algorithm for different numbers of trees.
I recently used this:
cat(paste0('Here is a plot of the random forest for this grade, and its error rates\n'))
plot(model1, main = paste('Grade ', g))
In this case, the variable 'g' is the grade level of the student, and this is a looping variable, believe it or not. Each time through the loop, g begins at 1 and ends at 12.
Hope that helps! | Multiple curves when plotting a random forest [closed] | The plot() function can be very useful here. In fact, you can do something like this to change the labels that appear on the plot. The plot of the RandomForest will show error rates, and are very use | Multiple curves when plotting a random forest [closed]
The plot() function can be very useful here. In fact, you can do something like this to change the labels that appear on the plot. The plot of the RandomForest will show error rates, and are very useful for analyzing the performance of the algorithm, for the different number of trees.
I recently used this:
cat(paste0('Here is a plot of the random forest for this grade, and its error rates\n'))
plot(model1, main = paste('Grade ', g))
In this case, the variable 'g' is the grade level of the student, and this is a looping variable, believe it or not. Each time through the loop, g begins at 1 and ends at 12.
Hope that helps! | Multiple curves when plotting a random forest [closed]
The plot() function can be very useful here. In fact, you can do something like this to change the labels that appear on the plot. The plot of the RandomForest will show error rates, and are very use |
52,032 | How to determine if a variable is categorical? | Tax rates are not categorical, they are continuous. A tax rate can vary - e.g. the sales tax in New York City is, I believe, 8.825%.
It appears that the data you have only has certain tax rates. But that is a feature of your data, not an underlying characteristic of the variable. Categorical variables CANNOT take values in between other values. For example, "country of birth" is categorical. You were born in some country. It makes no sense to say (e.g.) that the USA is halfway between Norway and Czechoslovakia - it is not even wrong, it's nonsensical.
A separate question is how you should model these data. I think linear regression is a good first attempt, then you should look at plots of the residuals. | How to determine if a variable is categorical? | Tax rates are not categorical, they are continuous. A tax rate can vary - e.g. the sales tax in New York City is, I believe, 8.825%.
It appears that the data you have only has certain tax rates. But t | How to determine if a variable is categorical?
Tax rates are not categorical, they are continuous. A tax rate can vary - e.g. the sales tax in New York City is, I believe, 8.825%.
It appears that the data you have only has certain tax rates. But that is a feature of your data, not an underlying characteristic of the variable. Categorical variables CANNOT take values in between other values. For example, "country of birth" is categorical. You were born in some country. It makes no sense to say (e.g.) that the USA is halfway between Norway and Czechoslovakia - it is not even wrong, it's nonsensical.
A separate question is how you should model these data. I think linear regression is a good first attempt, then you should look at plots of the residuals. | How to determine if a variable is categorical?
Tax rates are not categorical, they are continuous. A tax rate can vary - e.g. the sales tax in New York City is, I believe, 8.825%.
It appears that the data you have only has certain tax rates. But t |
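A minimal sketch of that last suggestion, assuming a data frame dat with columns price and tax_rate (hypothetical names, since the original variables are not shown here):
fit <- lm(price ~ tax_rate, data = dat)  # treat the tax rate as continuous
summary(fit)
plot(fit, which = 1)                     # residuals vs fitted values, to judge the fit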
52,033 | How to determine if a variable is categorical? | It certainly looks as if the variable plotted along the X axis can only take certain discrete values.
However ... a categorical variable is one that takes values in a sample space where neither magnitude nor order have any meaning. Example: a medical study might record the gender of the patient (male/female), which is categorical .. the age (which is numeric) ... and which of several possible OTC cold medications they took -- also categorical.
A categorical variable could have infinite support --- imagine sequences of letters from the Latin alphabet -- of arbitrary length. You have an infinite number of possibilities -- all categorical, because there is no natural way to measure the distance between them, or to rank them (although we could come up with a few).
Contrariwise, a numeric variable could admit only a discrete number of possible outcomes -- such as the spectrum of a particular chemical element. | How to determine if a variable is categorical? | It certainly looks as if the variable plotted along the X axis can only take certain discrete values.
However ... a categorical variable is one that takes values in a sample space where neither magni | How to determine if a variable is categorical?
It certainly looks as if the variable plotted along the X axis can only take certain discrete values.
However ... a categorical variable is one that takes values in a sample space where neither magnitude nor order have any meaning. Example: a medical study might record the gender of the patient (male/female), which is categorical .. the age (which is numeric) ... and which of several possible OTC cold medications they took -- also categorical.
A categorical variable could have infinite support --- imagine sequences of letters from the Latin alphabet -- of arbitrary length. You have an infinite number of possibilities -- all categorical, because there is no natural way to measure the distance between them, or to rank them (although we could come up with a few).
Contrariwise, a numeric variable could admit only a discrete number of possible outcomes -- such as the spectrum of a particular chemical element. | How to determine if a variable is categorical?
It certainly looks as if the variable plotted along the X axis can only take certain discrete values.
However ... a categorical variable is one that takes values in a sample space where neither magni |
52,034 | How robust is ANOVA to violations of normality? | Don't look at it as a binary thing: "either I can trust the results or I can't." Look at it as a spectrum. With all assumptions perfectly satisfied (including the in most cases crucial one of random sampling), statistics such as F- and p-values will allow you to make accurate sample-to-population inferences. The farther one gets from that situation, the more skeptical one should be about such results. You've got a substantial degree of nonnormality; that's one strike against accuracy. Now how about the other assumptions underlying the use of ANOVA? Size it all up the best you can, and document in a footnote or a technical section what you find. You also should look at this page, as @William pointed out.
As to your last question, I don't believe you need to change your strategy vis-a-vis multiple comparisons just because you move from a parametric to a nonparametric test. If you want to describe the rationale for your current approach, I'm sure people will be glad to comment on it. | How robust is ANOVA to violations of normality? | Don't look at it as a binary thing: "either I can trust the results or I can't." Look at it as a spectrum. With all assumptions perfectly satisfied (including the in most cases crucial one of rando | How robust is ANOVA to violations of normality?
Don't look at it as a binary thing: "either I can trust the results or I can't." Look at it as a spectrum. With all assumptions perfectly satisfied (including the in most cases crucial one of random sampling), statistics such as F- and p-values will allow you to make accurate sample-to-population inferences. The farther one gets from that situation, the more skeptical one should be about such results. You've got a substantial degree of nonnormality; that's one strike against accuracy. Now how about the other assumptions underlying the use of ANOVA? Size it all up the best you can, and document in a footnote or a technical section what you find. You also should look at this page, as @William pointed out.
As to your last question, I don't believe you need to change your strategy vis-a-vis multiple comparisons just because you move from a parametric to a nonparametric test. If you want to describe the rationale for your current approach, I'm sure people will be glad to comment on it. | How robust is ANOVA to violations of normality?
Don't look at it as a binary thing: "either I can trust the results or I can't." Look at it as a spectrum. With all assumptions perfectly satisfied (including the in most cases crucial one of rando |
52,035 | How robust is ANOVA to violations of normality? | Let me state a couple of things. First, I think it's best to understand repeated measures ANOVA as actually a multi-level model in disguise, and that may create additional complexities here. I should let one of CV's contributors who are more expert on multi-level models address that issue.
However, in general, it's worth noting that not all assumptions are created equal. People tend to think that the normality assumption is vital, whereas I think of it as the least important. Heterogeneity is a bigger deal. Skew is potentially more damaging than kurtosis, but if it isn't too large, and all groups are skewed in the same direction, it may not be lethal. Basically, whether or not the residuals are normal has to do with whether the p-values are accurate, but the parameter estimates should remain unbiased. On the other hand, heterogeneity of variance has to do with the efficiency of the OLS estimator. | How robust is ANOVA to violations of normality? | Let me state a couple of things. First, I think it's best to understand repeated measures ANOVA as actually a multi-level model in disguise, and that may create additional complexities here. I shoul | How robust is ANOVA to violations of normality?
Let me state a couple of things. First, I think it's best to understand repeated measures ANOVA as actually a multi-level model in disguise, and that may create additional complexities here. I should let one of CV's contributors who are more expert on multi-level models address that issue.
However, in general, it's worth noting that not all assumptions are created equal. People tend to think that the normality assumption is vital, whereas I think of it as the least important. Heterogeneity is a bigger deal. Skew is potentially more damaging than kurtosis, but if it isn't too large, and all groups are skewed in the same direction, it may not be lethal. Basically, whether or not the residuals are normal has to do with whether the p-values are accurate, but the parameter estimates should remain unbiased. On the other hand, heterogeneity of variance has to do with the efficiency of the OLS estimator. | How robust is ANOVA to violations of normality?
Let me state a couple of things. First, I think it's best to understand repeated measures ANOVA as actually a multi-level model in disguise, and that may create additional complexities here. I shoul |
52,036 | How robust is ANOVA to violations of normality? | My understanding is that ANOVA, including repeated measures, is robust to violations of the normality-of-errors assumption. However, there are indications that the errors should be equal in their variation across different factor levels.
Can I trust ANOVA results for a non-normally distributed DV? | How robust is ANOVA to violations of normality? | My understanding is that ANOVA including repeated measures is robust to violations to normality of errors assumptions. However there is indications that the errors should be equal in their variation | How robust is ANOVA to violations of normality?
My understanding is that ANOVA, including repeated measures, is robust to violations of the normality-of-errors assumption. However, there are indications that the errors should be equal in their variation across different factor levels.
Can I trust ANOVA results for a non-normally distributed DV? | How robust is ANOVA to violations of normality?
My understanding is that ANOVA including repeated measures is robust to violations to normality of errors assumptions. However there is indications that the errors should be equal in their variation |
52,037 | How to do a pretty scatter plot in R? [closed] | You can do this in the new version of ggplot2 (0.9).
You can try it out:
library(ggplot2) #make sure the newest is installed
df <- data.frame(v1 = runif(1000), v2 = runif(1000))
bin.plot<-qplot(data=df,
x=v1,
y=v2,
z=v2)
bin.plot+stat_summary_hex(fun=function(z)length(z))
bin.plot+stat_summary2d(fun=function(z)length(z))
These may also be of interest if you want to bin only on one variable
geom_violin
geom_dotplot
You can also start by binning your data and then jitter it.
The release notes of ggplot2 0.9:
http://cloud.github.com/downloads/hadley/ggplot2/guide-col.pdf
For development versions of ggplot2
#library(devtools)
#dev_mode()
#install_github("ggplot2")
#library(ggplot2) | How to do a pretty scatter plot in R? [closed] | You can to do this in the new version of ggplot2 (0.9).
You can try it out:
library(ggplot2) #make sure the newest is installed
df <- data.frame(v1 = runif(1000), v2 = runif(1000))
bin.plot<-qplot(d | How to do a pretty scatter plot in R? [closed]
You can do this in the new version of ggplot2 (0.9).
You can try it out:
library(ggplot2) #make sure the newest is installed
df <- data.frame(v1 = runif(1000), v2 = runif(1000))
bin.plot<-qplot(data=df,
x=v1,
y=v2,
z=v2)
bin.plot+stat_summary_hex(fun=function(z)length(z))
bin.plot+stat_summary2d(fun=function(z)length(z))
These may also be of interest if you want to bin only on one variable
geom_violin
geom_dotplot
You can also start by binning your data and then jitter it.
The release notes of ggplot2 0.9:
http://cloud.github.com/downloads/hadley/ggplot2/guide-col.pdf
For development versions of ggplot2
#library(devtools)
#dev_mode()
#install_github("ggplot2")
#library(ggplot2) | How to do a pretty scatter plot in R? [closed]
You can to do this in the new version of ggplot2 (0.9).
You can try it out:
library(ggplot2) #make sure the newest is installed
df <- data.frame(v1 = runif(1000), v2 = runif(1000))
bin.plot<-qplot(d |
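For what it's worth, in current ggplot2 the same binned-scatterplot idea is usually written with geom_hex() (which needs the hexbin package installed) or geom_bin2d(); a quick sketch:
library(ggplot2)
df <- data.frame(v1 = runif(1000), v2 = runif(1000))
ggplot(df, aes(v1, v2)) + geom_hex(bins = 30)    # hexagonal binning
ggplot(df, aes(v1, v2)) + geom_bin2d(bins = 30)  # rectangular binning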
52,038 | How to do a pretty scatter plot in R? [closed] | You may want to look at these two entries from 'SAS and R':
http://sas-and-r.blogspot.com/2011/07/example-91-scatterplots-with-binning.html
http://sas-and-r.blogspot.com/2011/07/example-92-transparency-and-bivariate.html
They cover the use of binning, transparency and bivariate kernel density estimators for scatter plots of large amounts of data. They might serve as decent starting points.
I'm rather biased against ggplot2, so I won't comment on whether or not you need to use it for prettiness - I find the figures in these entries to be perfectly appealing. | How to do a pretty scatter plot in R? [closed] | You may want to look at these two entries from 'SAS and R':
http://sas-and-r.blogspot.com/2011/07/example-91-scatterplots-with-binning.html
http://sas-and-r.blogspot.com/2011/07/example-92-transparen | How to do a pretty scatter plot in R? [closed]
You may want to look at these two entries from 'SAS and R':
http://sas-and-r.blogspot.com/2011/07/example-91-scatterplots-with-binning.html
http://sas-and-r.blogspot.com/2011/07/example-92-transparency-and-bivariate.html
They cover the use of binning, transparency and bivariate kernel density estimators for scatter plots of large amounts of data. They might serve as decent starting points.
I'm rather biased against ggplot2, so I won't comment on whether or not you need to use it for prettyness - I find the figures in these entries to be perfectly appealing. | How to do a pretty scatter plot in R? [closed]
You may want to look at these two entries from 'SAS and R':
http://sas-and-r.blogspot.com/2011/07/example-91-scatterplots-with-binning.html
http://sas-and-r.blogspot.com/2011/07/example-92-transparen |
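A base-R take on the same idea, if you'd rather avoid extra packages: smoothScatter() plots a smoothed bivariate density estimate instead of the raw points (a sketch):
set.seed(1)
df <- data.frame(v1 = rnorm(1e5), v2 = rnorm(1e5))
smoothScatter(df$v1, df$v2, xlab = "v1", ylab = "v2")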
52,039 | How to do a pretty scatter plot in R? [closed] | It's not really an answer to your question about binning, but one easy solution in ggplot2 for dealing with a large amount of data in scatterplots is to use the alpha parameter to set some transparency
> df <- data.frame(v1 = rnorm(100000), v2 = rnorm(100000))
> ggplot(df, aes(x=v1, y=v2)) + geom_point(alpha = .01) + theme_bw() | How to do a pretty scatter plot in R? [closed] | It's not really an answer to your question about binning one easy solution in ggplot2 to deal with large amount of data in scatterplots is to use the alpha parameter to set some transparency
> df <- d | How to do a pretty scatter plot in R? [closed]
It's not really an answer to your question about binning, but one easy solution in ggplot2 for dealing with a large amount of data in scatterplots is to use the alpha parameter to set some transparency
> df <- data.frame(v1 = rnorm(100000), v2 = rnorm(100000))
> ggplot(df, aes(x=v1, y=v2)) + geom_point(alpha = .01) + theme_bw() | How to do a pretty scatter plot in R? [closed]
It's not really an answer to your question about binning one easy solution in ggplot2 to deal with large amount of data in scatterplots is to use the alpha parameter to set some transparency
> df <- d |
52,040 | Orthogonalized regression reference? | I think you misremember the end of the process. In R, it would go like this:
# generating random x1 x2 x3 in (0,1) (10 values each)
> x1 <- runif(10)
> x2 <- runif(10)
> x3 <- runif(10)
# generating y
> y <- x1 + 2*x2 + 3*x3 + rnorm(10)
# classical regression
> lm(y ~ x1 + x2 + x3)
Call:
lm(formula = y ~ x1 + x2 + x3)
Coefficients:
(Intercept) x1 x2 x3
0.2270 2.0088 0.2746 3.1529
# "orthogonalized" regression
> lm(x1 ~ x2 + x3)$residuals -> z1
> lm(x2 ~ x1 + x3)$residuals -> z2
> lm(x3 ~ x1 + x2)$residuals -> z3
> lm(y ~ z1)
Call:
lm(formula = y ~ z1)
Coefficients:
(Intercept) z1
3.056 2.009
> lm(y ~ z2)
Call:
lm(formula = y ~ z2)
Coefficients:
(Intercept) z2
3.0560 0.2746
> lm(y ~ z3)
Call:
lm(formula = y ~ z3)
Coefficients:
(Intercept) z3
3.056 3.153
See? You get the same estimates $\hat \beta_i$ for $i = 1,2,3$. Note that the intercepts are different; the residuals $z_i$ are centered, so the intercept of e.g. the regression y ~ z1 is just the mean of $y$ (and similarly for $z_2$, $z_3$). Once you get the $\hat \beta_i$ it is not difficult to find the intercept of the classical regression.
Mathematical explanations can be found on pages 54-55 of the last edition of The Elements of Statistical Learning, which is much clearer and more accurate than anything I could write (available online). | Orthogonalized regression reference? | I think you misremember the end of the process. In R, it would go like this:
# generating random x1 x2 x3 in (0,1) (10 values each)
> x1 <- runif(10)
> x2 <- runif(10)
> x3 <- runif(10)
# generating | Orthogonalized regression reference?
I think you misremember the end of the process. In R, it would go like this:
# generating random x1 x2 x3 in (0,1) (10 values each)
> x1 <- runif(10)
> x2 <- runif(10)
> x3 <- runif(10)
# generating y
> y <- x1 + 2*x2 + 3*x3 + rnorm(10)
# classical regression
> lm(y ~ x1 + x2 + x3)
Call:
lm(formula = y ~ x1 + x2 + x3)
Coefficients:
(Intercept) x1 x2 x3
0.2270 2.0088 0.2746 3.1529
# "orthogonalized" regression
> lm(x1 ~ x2 + x3)$residuals -> z1
> lm(x2 ~ x1 + x3)$residuals -> z2
> lm(x3 ~ x1 + x2)$residuals -> z3
> lm(y ~ z1)
Call:
lm(formula = y ~ z1)
Coefficients:
(Intercept) z1
3.056 2.009
> lm(y ~ z2)
Call:
lm(formula = y ~ z2)
Coefficients:
(Intercept) z2
3.0560 0.2746
> lm(y ~ z3)
Call:
lm(formula = y ~ z3)
Coefficients:
(Intercept) z3
3.056 3.153
See? You get the same estimates $\hat \beta_i$ for $i = 1,2,3$. Note that the intercepts are different; the residuals $z_i$ are centered, so the intercept of e.g. the regression y ~ z1 is just the mean of $y$ (and similarly for $z_2$, $z_3$). Once you get the $\hat \beta_i$ it is not difficult to find the intercept of the classical regression.
Mathematical explanations can be found on pages 54-55 of the last edition of The Elements of Statistical Learning, which is much clearer and more accurate than anything I could write (available online). | Orthogonalized regression reference?
I think you misremember the end of the process. In R, it would go like this:
# generating random x1 x2 x3 in (0,1) (10 values each)
> x1 <- runif(10)
> x2 <- runif(10)
> x3 <- runif(10)
# generating |
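A quick numerical check of what is going on in the answer above: because z1 has mean zero and is orthogonal to the other regressors, the slope from y ~ z1 is just sum(z1 * y) / sum(z1^2), which reproduces the x1 coefficient of the full regression (about 2.009 in the run shown).
c(slope_from_z1 = sum(z1 * y) / sum(z1^2),          # simple-regression slope on the purged regressor
  from_full_fit = coef(lm(y ~ x1 + x2 + x3))["x1"])  # x1 coefficient from the full regression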
52,041 | Orthogonalized regression reference? | This is the Frisch Waugh Lovell theorem in action | Orthogonalized regression reference? | This is the Frisch Waugh Lovell theorem in action | Orthogonalized regression reference?
This is the Frisch Waugh Lovell theorem in action | Orthogonalized regression reference?
This is the Frisch Waugh Lovell theorem in action
52,042 | Orthogonalized regression reference? | Ruud's An Introduction to Classical Econometric Theory rides that FWL pony about as far as possible. It's a really interesting geometric take on regression. | Orthogonalized regression reference? | Ruud's An Introduction to Classical Econometric Theory rides that FWL pony about as far as possible. It's a really interesting geometric take on regression. | Orthogonalized regression reference?
Ruud's An Introduction to Classical Econometric Theory rides that FWL pony about as far as possible. It's a really interesting geometric take on regression. | Orthogonalized regression reference?
Ruud's An Introduction to Classical Econometric Theory rides that FWL pony about as far as possible. It's a really interesting geometric take on regression. |
52,043 | Orthogonalized regression reference? | Model can be reparametrized in such a way that two new likelihood equations emerge, each with just one unknown parameter. This will facilitate solving the likelihood equations and also help the general interpretation and use of regression models. (7.2.2 in [hendry2007econometric])
Suppose you want to reparametrize the following model: (note that $X_{3}$ can be any transformation of some previous regressor)
$$
Y \sim X_{1} + X_{2} + X_{3}
$$
$X_{1}$, $X_{2}$ and $X_{3}$ can be orthogonalized at the same time. In the book, the operation is based on a constant vector.
$$
\begin{aligned}
Z_{1} &= \mathrm{residuals}\left(X_{1} \sim 1 \right) \\
Z_{2} &= \mathrm{residuals}\left(X_{2} \sim 1 + X_{1} \right) \\
Z_{3} &= \mathrm{residuals}\left(X_{3} \sim 1 + X_{1} + X_{2}\right)
\end{aligned}
$$
Following the example by @Elvis:
library(magrittr)
## generating random x1 x2 x3 in (0,1) (10 values each)
x1 <- runif(10)
x2 <- runif(10)
x3 <- runif(10)
## generating y
y <- x1 + 2 * x2 + 3 * x3 + rnorm(10)
## classical regression
lm(y ~ x1 + x2 + x3) %>% summary()
## orthogonalize regressors on a unit vector
lm(x1 ~ 1)$residuals -> z1
lm(x2 ~ 1 + x1)$residuals -> z2
lm(x3 ~ 1 + x1 + x2)$residuals -> z3
lm(y ~ z1 + z2 + z3) %>% summary()
You will have:
Call:
lm(formula = y ~ x1 + x2 + x3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.1528 0.7973 -2.700 0.03558 *
x1 2.1005 0.9730 2.159 0.07421 .
x2 0.7895 0.9364 0.843 0.43149
x3 6.8008 1.0055 6.764 0.00051 ***
Residual standard error: 0.7628 on 6 degrees of freedom
Multiple R-squared: 0.9293, Adjusted R-squared: 0.8939
F-statistic: 26.27 on 3 and 6 DF, p-value: 0.0007538
Call:
lm(formula = y ~ z1 + z2 + z3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.18106 0.24121 13.188 1.17e-05 ***
z1 -0.05549 0.72386 -0.077 0.94139
z2 4.41463 0.76784 5.749 0.00121 **
z3 6.80079 1.00551 6.764 0.00051 ***
Residual standard error: 0.7628 on 6 degrees of freedom
Multiple R-squared: 0.9293, Adjusted R-squared: 0.8939
F-statistic: 26.27 on 3 and 6 DF, p-value: 0.0007538
So the intercept in the second model can be interpreted as the expected value for an individual with average values of x1, x2 and x3, and its standard error is reduced by 78.21%. Most of the time, you are very interested in this value.
Also, maximum likelihood estimators become much easier to handle. (5.2.3 in [hendry2007econometric])
Reference
hendry2007econometric Hendry, D. F., & Nielsen, B. (2007). Econometric modeling: a likelihood approach. Princeton University Press. | Orthogonalized regression reference? | Model can be reparametrized in such a way that two new likelihood equations emerge, each with just one unknown parameter. This will facilitate solving the likelihood equations and also help the genera | Orthogonalized regression reference?
Model can be reparametrized in such a way that two new likelihood equations emerge, each with just one unknown parameter. This will facilitate solving the likelihood equations and also help the general interpretation and use of regression models. (7.2.2 in [hendry2007econometric])
Suppose you want to reparametrize the following model: (note that $X_{3}$ can be any transformation of some previous regressor)
$$
Y \sim X_{1} + X_{2} + X_{3}
$$
$X_{1}$, $X_{2}$ and $X_{3}$ can be orthogonalized at the same time. In the book, the operation is based on a constant vector.
$$
\begin{aligned}
Z_{1} &= \mathrm{residuals}\left(X_{1} \sim 1 \right) \\
Z_{2} &= \mathrm{residuals}\left(X_{2} \sim 1 + X_{1} \right) \\
Z_{3} &= \mathrm{residuals}\left(X_{3} \sim 1 + X_{1} + X_{2}\right)
\end{aligned}
$$
Following the example by @Elvis:
library(magrittr)
## generating random x1 x2 x3 in (0,1) (10 values each)
x1 <- runif(10)
x2 <- runif(10)
x3 <- runif(10)
## generating y
y <- x1 + 2 * x2 + 3 * x3 + rnorm(10)
## classical regression
lm(y ~ x1 + x2 + x3) %>% summary()
## orthogonalize regressors on a unit vector
lm(x1 ~ 1)$residuals -> z1
lm(x2 ~ 1 + x1)$residuals -> z2
lm(x3 ~ 1 + x1 + x2)$residuals -> z3
lm(y ~ z1 + z2 + z3) %>% summary()
You will have:
Call:
lm(formula = y ~ x1 + x2 + x3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.1528 0.7973 -2.700 0.03558 *
x1 2.1005 0.9730 2.159 0.07421 .
x2 0.7895 0.9364 0.843 0.43149
x3 6.8008 1.0055 6.764 0.00051 ***
Residual standard error: 0.7628 on 6 degrees of freedom
Multiple R-squared: 0.9293, Adjusted R-squared: 0.8939
F-statistic: 26.27 on 3 and 6 DF, p-value: 0.0007538
Call:
lm(formula = y ~ z1 + z2 + z3)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.18106 0.24121 13.188 1.17e-05 ***
z1 -0.05549 0.72386 -0.077 0.94139
z2 4.41463 0.76784 5.749 0.00121 **
z3 6.80079 1.00551 6.764 0.00051 ***
Residual standard error: 0.7628 on 6 degrees of freedom
Multiple R-squared: 0.9293, Adjusted R-squared: 0.8939
F-statistic: 26.27 on 3 and 6 DF, p-value: 0.0007538
So the intercept in the second model can be interpreted as the expected value for an individual with average values of x1, x2 and x3, and its standard error is reduced by 78.21%. Most of the time, you are very interested in this value.
Also, maximum likelihood estimators become much easier to handle. (5.2.3 in [hendry2007econometric])
Reference
hendry2007econometric Hendry, D. F., & Nielsen, B. (2007). Econometric modeling: a likelihood approach. Princeton University Press. | Orthogonalized regression reference?
Model can be reparametrized in such a way that two new likelihood equations emerge, each with just one unknown parameter. This will facilitate solving the likelihood equations and also help the genera |
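A related shortcut, offered only as a sketch (this is simple centring rather than the book's sequential orthogonalization, so the slopes stay those of the original regression; only the intercept changes):
lm(y ~ scale(x1, scale = FALSE) + scale(x2, scale = FALSE) + scale(x3, scale = FALSE))
# With centred regressors the fit passes through the sample means, so the intercept
# is the expected response for an "average" observation (about 3.18 in the output above).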
52,044 | Using control variates & antithetic method with Monte Carlo | There is no one way to implement either control or antithetic variates, however, a couple of examples may help.
Antithetic variables: Imagine instead of the random number generator you actually used, you generated a $U(0,1)$ variate, call it $u$, and ran it through the inverse CDF of the Weibull(1,5) distribution, thereby generating a Weibull(1,5) variate. For the next random number, use $1-u$ instead of generating a new $u$. For subsequent random numbers, alternate generating a new $u$ and using $1-u$. This helps to "balance" high and low values from your random number stream, thus reducing variability of your final estimates.
Control variates: These are "extra" variables that are correlated with the result, enabling you to do something like run a regression on your results against the control variates to get a more accurate estimate. In your case, for example, you know the true mean of the Weibull dist'n (5), so you could use the $x_i$ as a control variate. You would calculate the improved estimate:
$S^* = S - \frac{\widehat{cov}(x,res)}{\widehat{var}(x)} * (\bar{x} - 5)$
where the covariance and variance terms are estimated from the data. This helps correct the estimate for random number streams that are not, in some relevant way, totally representative of the underlying distribution.
Both methods, esp. control variates, are more general than these two examples might lead you to believe. The wikipedia links are at best rough introductions; plenty of books and papers covering both techniques are out there if you want to go more into depth. | Using control variates & antithetic method with Monte Carlo | There is no one way to implement either control or antithetic variates, however, a couple of examples may help.
Antithetic variables: Imagine instead of the random number generator you actually used | Using control variates & antithetic method with Monte Carlo
There is no one way to implement either control or antithetic variates, however, a couple of examples may help.
Antithetic variables: Imagine instead of the random number generator you actually used, you generated a $U(0,1)$ variate, call it $u$, and ran it through the inverse CDF of the Weibull(1,5) distribution, thereby generating a Weibull(1,5) variate. For the next random number, use $1-u$ instead of generating a new $u$. For subsequent random numbers, alternate generating a new $u$ and using $1-u$. This helps to "balance" high and low values from your random number stream, thus reducing variability of your final estimates.
Control variates: These are "extra" variables that are correlated with the result, enabling you to do something like run a regression on your results against the control variates to get a more accurate estimate. In your case, for example, you know the true mean of the Weibull dist'n (5), so you could use the $x_i$ as a control variate. You would calculate the improved estimate:
$S^* = S - \frac{\widehat{cov}(x,res)}{\widehat{var}(x)} * (\bar{x} - 5)$
where the covariance and variance terms are estimated from the data. This helps correct the estimate for random number streams that are not, in some relevant way, totally representative of the underlying distribution.
Both methods, esp. control variates, are more general than these two examples might lead you to believe. The wikipedia links are at best rough introductions; plenty of books and papers covering both techniques are out there if you want to go more into depth. | Using control variates & antithetic method with Monte Carlo
There is no one way to implement either control or antithetic variates, however, a couple of examples may help.
Antithetic variables: Imagine instead of the random number generator you actually used |
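A small R sketch of the control-variate correction described above, assuming (purely for illustration) a Weibull with shape 1 and scale 5 so that the known mean really is 5, and E[X^(1/3)] as the target:
set.seed(1)
n   <- 1e4
x   <- rweibull(n, shape = 1, scale = 5)
res <- x^(1/3)
S      <- mean(res)                                  # plain Monte Carlo estimate
S_star <- S - cov(x, res) / var(x) * (mean(x) - 5)   # control-variate estimate
c(S, S_star)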
52,045 | Using control variates & antithetic method with Monte Carlo | As correctly explained by jbowman, you have to create a negative correlation between your simulations to implement antithetic variables: with a generic wblrnd function this is not possible. You thus have to get back to the definition of a Weibull $\mathcal{W}(\lambda,k)$, which is a scale transform of an exponential variate raised to the power $1/k$, i.e.
$$
X\sim \mathcal{W}(\lambda,k)
$$
is equivalent to
$$
(X/\lambda)^k \sim \mathcal{E}(1)
$$
This can be reworded in terms of simulation as
$$
X=\lambda(-\log U)^{1/k}\,,\qquad U\sim\mathcal{U}(0,1)\,.
$$
Therefore, you can implement the antithetic method by using a sample of uniforms, $U_1,\ldots,U_n$ and its complent $1-U_1,\ldots,1-U_n$ and compare the variance of the estimator of $\mathbb{E}[X^{1/3}]$ with the corresponding estimator based on $U_1,\ldots,U_{2n}$. To exhibit the improvement, you have to run a Monte Carlo experiment repeating the computation of those variances on many samples of size $n$. (You cannot see the impact of antithetic simulation on a single run.)
The control variate is implemented in your case by choosing a known moment of the Weibull, for instance as suggested by jbowman,
$$
\mathbb{E}[X] = \lambda \Gamma(1+1/k)
$$
and using $X$ as the control variate. This means you compute the average of the simulated $X_i$'s along the average of the $X_i^{1/3}$ and the empirical covariance between the $X_i$'s and the $X_i^{1/3}$ as well as the empirical variance of the $X_i$'s to use jbowman formula. Again, checking the improvement brought by the control variate requires a Monte Carlo experiment with several runs. | Using control variates & antithetic method with Monte Carlo | As correctly explained by jbowman, you have to create a negative correlation between your simulations to implement antithetic variables: with a generic wblrndfunction this is not possible. You thus ha | Using control variates & antithetic method with Monte Carlo
As correctly explained by jbowman, you have to create a negative correlation between your simulations to implement antithetic variables: with a generic wblrnd function this is not possible. You thus have to get back to the definition of a Weibull $\mathcal{W}(\lambda,k)$, which is a scale transform of an exponential variate raised to the power $1/k$, i.e.
$$
X\sim \mathcal{W}(\lambda,k)
$$
is equivalent to
$$
(X/\lambda)^k \sim \mathcal{E}(1)
$$
This can be reworded in terms of simulation as
$$
X=\lambda(-\log U)^{1/k}\,,\qquad U\sim\mathcal{U}(0,1)\,.
$$
Therefore, you can implement the antithetic method by using a sample of uniforms, $U_1,\ldots,U_n$ and its complent $1-U_1,\ldots,1-U_n$ and compare the variance of the estimator of $\mathbb{E}[X^{1/3}]$ with the corresponding estimator based on $U_1,\ldots,U_{2n}$. To exhibit the improvement, you have to run a Monte Carlo experiment repeating the computation of those variances on many samples of size $n$. (You cannot see the impact of antithetic simulation on a single run.)
The control variate is implemented in your case by choosing a known moment of the Weibull, for instance as suggested by jbowman,
$$
\mathbb{E}[X] = \lambda \Gamma(1+1/k)
$$
and using $X$ as the control variate. This means you compute the average of the simulated $X_i$'s along the average of the $X_i^{1/3}$ and the empirical covariance between the $X_i$'s and the $X_i^{1/3}$ as well as the empirical variance of the $X_i$'s to use jbowman formula. Again, checking the improvement brought by the control variate requires a Monte Carlo experiment with several runs. | Using control variates & antithetic method with Monte Carlo
As correctly explained by jbowman, you have to create a negative correlation between your simulations to implement antithetic variables: with a generic wblrndfunction this is not possible. You thus ha |
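And a matching sketch of the antithetic pairing in R (lambda and k are placeholders for the scale and shape you actually use; as noted above, judging the variance reduction requires many repeated runs, not one):
set.seed(1)
lambda <- 5; k <- 1; n <- 5000
u    <- runif(n)
x_u  <- lambda * (-log(u))^(1 / k)      # Weibull draws from U
x_au <- lambda * (-log(1 - u))^(1 / k)  # antithetic draws from 1 - U
mean((x_u^(1/3) + x_au^(1/3)) / 2)      # antithetic estimate of E[X^(1/3)]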
52,046 | Splitting a numeric column for a dataframe | df_split<- strsplit(as.character(df$position), split=":")
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
>
> df
name position pos seq_name pos2
1 HLA 1:1-15 1:1-15 1 1-15
2 HLA 1:2-16 1:2-16 1 2-16
3 HLA 1:3-17 1:3-17 1 3-17 | Splitting a numeric column for a dataframe | df_split<- strsplit(as.character(df$position), split=":")
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
>
> df
name position pos seq_name pos2
1 HLA | Splitting a numeric column for a dataframe
df_split<- strsplit(as.character(df$position), split=":")
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
>
> df
name position pos seq_name pos2
1 HLA 1:1-15 1:1-15 1 1-15
2 HLA 1:2-16 1:2-16 1 2-16
3 HLA 1:3-17 1:3-17 1 3-17 | Splitting a numeric column for a dataframe
df_split<- strsplit(as.character(df$position), split=":")
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
>
> df
name position pos seq_name pos2
1 HLA |
52,047 | Splitting a numeric column for a dataframe | Here is a one-line method using tidyr::separate():
library(tidyr)
df <- separate(df, position, into = c("seq","position"), sep = ":", extra = "merge") | Splitting a numeric column for a dataframe | Here is a one line method using tidyr.separate():
library(tidyr)
df <- separate(df, position, into = c("seq","position"), sep = ":", extra = "merge") | Splitting a numeric column for a dataframe
Here is a one line method using tidyr.separate():
library(tidyr)
df <- separate(df, position, into = c("seq","position"), sep = ":", extra = "merge") | Splitting a numeric column for a dataframe
Here is a one line method using tidyr.separate():
library(tidyr)
df <- separate(df, position, into = c("seq","position"), sep = ":", extra = "merge") |
52,048 | Splitting a numeric column for a dataframe | The "trick" is to use do.call.
> a <- data.frame(x = c("1:1-15", "1:2-16", "1:3-17"))
> a
x
1 1:1-15
2 1:2-16
3 1:3-17
> a$x <- as.character(a$x)
> a.split <- strsplit(a$x, split = ":")
> tmp <-do.call(rbind, a.split)
> data.frame(a, tmp)
x X1 X2
1 1:1-15 1 1-15
2 1:2-16 1 2-16
3 1:3-17 1 3-17 | Splitting a numeric column for a dataframe | The "trick" is to use do.call.
> a <- data.frame(x = c("1:1-15", "1:2-16", "1:3-17"))
> a
x
1 1:1-15
2 1:2-16
3 1:3-17
> a$x <- as.character(a$x)
> a.split <- strsplit(a$x, split = ":")
> tmp < | Splitting a numeric column for a dataframe
The "trick" is to use do.call.
> a <- data.frame(x = c("1:1-15", "1:2-16", "1:3-17"))
> a
x
1 1:1-15
2 1:2-16
3 1:3-17
> a$x <- as.character(a$x)
> a.split <- strsplit(a$x, split = ":")
> tmp <-do.call(rbind, a.split)
> data.frame(a, tmp)
x X1 X2
1 1:1-15 1 1-15
2 1:2-16 1 2-16
3 1:3-17 1 3-17 | Splitting a numeric column for a dataframe
The "trick" is to use do.call.
> a <- data.frame(x = c("1:1-15", "1:2-16", "1:3-17"))
> a
x
1 1:1-15
2 1:2-16
3 1:3-17
> a$x <- as.character(a$x)
> a.split <- strsplit(a$x, split = ":")
> tmp < |
52,049 | Increasing Exam Expected Mark | First a couple of assumptions:
1. All marks are equally likely.
2. If you guess your mark to be 95 and you get 95, your return mark is 100, not 105.
3. Similarly, if your exam mark is 1 and you guess 50 (say), then your return mark is 0, not -4.
4. I'm only considering discrete marks, that is, values 0, ..., 100.
Suppose your guessed mark is $g=50$. Then your expected return mark is:
$$
\frac{\sum_{i=0}^{34} i + \sum_{i=50}^{70} i + \sum_{i=56}^{95} i}{101} = \frac{4875}{101} \approx 48.27
$$
This is for a particular $g$. We need to repeat this for all $g$. Using the R
code at the end, we get the following plot:
Since all marks are equally likely you get a plateau with edge effects. If you really have absolutely no idea of what mark you will get, then a sensible strategy would be to maximise the chance of passing the exam. If the pass mark is 40%, then set your guess mark at 35%. This now means that to pass the exam, you only need to get above 35% but more importantly your strategy for sitting the exam is to answer every question to the best of your ability.
If your guess mark was 30%, and towards the end of the exam you thought that you would score 42%, you are now in the strange position of deciding whether to intentionally make an error (as 42% results in a return mark of 37%).
Note: I think in most real-life situations you would have some idea of how you would get on. For example, do you really think that you have equal probability of getting between 0-10%, 11-20%, ..., 90-100% in your exam?
R code
f =function(s) {
mark = 0
for(i in 0:100){
if(i < (s-10) | i > (s + 10))
mark = mark + max(0, i-5)
else
mark = mark + min(i+10, 100)
}
return(mark/101)
}
s = 0:100
y = sapply(s, f)
plot(s, y) | Increasing Exam Expected Mark | First a couple of assumptions:
1. All marks are equally likely.
1. If you guess your mark to be 95 and you get 95, your return mark is 100 not
105.
1. Similarly, if your exam mark is 1 and you gues | Increasing Exam Expected Mark
First a couple of assumptions:
1. All marks are equally likely.
1. If you guess your mark to be 95 and you get 95, your return mark is 100 not
105.
1. Similarly, if your exam mark is 1 and you guess 50 (say), then your return
mark is 0 not -4.
1. I'm only considering discrete marks, that is, values 0, ..., 100.
Suppose your guessed mark is $g=50$. Then your expected return mark is:
$$
\frac{\sum_{i=0}^{34} i + \sum_{i=50}^{70} i + \sum_{i=56}^{95} i}{101} = 49.15482
$$
This is for a particular $g$. We need to repeat this for all $g$. Using the R
code at the end, we get the following plot:
Since all marks are equally likely you get a plateau with edge effects. If you really have absolutely no idea of what mark you will get, then a sensible strategy would be to maximise the chance of passing the exam. If the pass mark is 40%, then set your guess mark at 35%. This now means that to pass the exam, you only need to get above 35% but more importantly your strategy for sitting the exam is to answer every question to the best of your ability.
If your guess mark was 30%, and towards the end of the exam you thought that you would score 42%, you are now in the strange position of deciding whether to intentionally make an error (as 42% results in a return mark of 37%).
Note: I think in most real-life situations you would have some idea of how you would get on. For example, do you really think that you have equal probability of getting between 0-10%, 11-20%, ..., 90-100% in your exam?
R code
f =function(s) {
mark = 0
for(i in 0:100){
if(i < (s-10) | i > (s + 10))
mark = mark + max(0, i-5)
else
mark = mark + min(i+10, 100)
}
return(mark/101)
}
s = 0:100
y = sapply(s, f)
plot(s, y) | Increasing Exam Expected Mark
First a couple of assumptions:
1. All marks are equally likely.
1. If you guess your mark to be 95 and you get 95, your return mark is 100 not
105.
1. Similarly, if your exam mark is 1 and you gues |
52,050 | Increasing Exam Expected Mark | I'm not sure if this would be a funny game or your professor is mildly sadistic. It would be torturous for students who are right on the edge of passing (which we may expect them to be the worst guessers!) Sorry not an answer but I couldn't help myself. | Increasing Exam Expected Mark | I'm not sure if this would be a funny game or your professor is mildly sadistic. It would be torturous for students who are right on the edge of passing (which we may expect them to be the worst guess | Increasing Exam Expected Mark
I'm not sure if this would be a funny game or your professor is mildly sadistic. It would be torturous for students who are right on the edge of passing (which we may expect them to be the worst guessers!) Sorry not an answer but I couldn't help myself. | Increasing Exam Expected Mark
I'm not sure if this would be a funny game or your professor is mildly sadistic. It would be torturous for students who are right on the edge of passing (which we may expect them to be the worst guess |
52,051 | Increasing Exam Expected Mark | Use the bootstrap! Take lots of practice exams and estimate what your score will be on the real exam. If it does not improve your estimate, it will probably be good preparation! | Increasing Exam Expected Mark | Use the bootstrap! Take lots of practice exams and estimate what your score will be on the real exam. If it does not improve your estimate, it will probably be good preparation! | Increasing Exam Expected Mark
Use the bootstrap! Take lots of practice exams and estimate what your score will be on the real exam. If it does not improve your estimate, it will probably be good preparation! | Increasing Exam Expected Mark
Use the bootstrap! Take lots of practice exams and estimate what your score will be on the real exam. If it does not improve your estimate, it will probably be good preparation! |
52,052 | R-squared result in linear regression and "unexplained variance" | $R^2$ is the squared correlation of the OLS prediction $\hat{Y}$ and the DV $Y$. In a multiple regression with three predictors $X_{1}, X_{2}, X_{3}$:
# generate some data
> N <- 100
> X1 <- rnorm(N, 175, 7) # predictor 1
> X2 <- rnorm(N, 30, 8) # predictor 2
> X3 <- abs(rnorm(N, 60, 30)) # predictor 3
> Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 10) # DV
> fitX123 <- lm(Y ~ X1 + X2 + X3) # regression
> summary(fitX123)$r.squared # R^2
[1] 0.6361916
> Yhat <- fitted(fitX123) # OLS prediction Yhat
> cor(Yhat, Y)^2
[1] 0.6361916
$R^2$ is also equal to the variance of $\hat{Y}$ divided by the variance of $Y$. In that sense, it is the "variance accounted for by the predictors".
> var(Yhat) / var(Y)
[1] 0.6361916
The squared semi-partial correlation of $Y$ with a predictor $X_{1}$ is equal to the increase in $R^2$ when adding $X_{1}$ as a predictor to the regression with all remaining predictors. This may be taken as the unique contribution of $X_{1}$ to the proportion of variance explained by all predictors. Here, the semi-partial correlation is the correlation of $Y$ with the residuals from regression where $X_{1}$ is the predicted variable and $X_{2}$ and $X_{3}$ are the predictors.
# residuals from regression with DV X1 and predictors X2, X3
> X1.X23 <- residuals(lm(X1 ~ X2 + X3))
> (spcorYX1.X23 <- cor(Y, X1.X23)) # semi-partial correlation of Y with X1
[1] 0.3172553
> spcorYX1.X23^2 # squared semi-partial correlation
[1] 0.1006509
> fitX23 <- lm(Y ~ X2 + X3) # regression with DV Y and predictors X2, X3
# increase in R^2 when changing to full regression
> summary(fitX123)$r.squared - summary(fitX23)$r.squared
[1] 0.1006509 | R-squared result in linear regression and "unexplained variance" | $R^2$ is the squared correlation of the OLS prediction $\hat{Y}$ and the DV $Y$. In a multiple regression with three predictors $X_{1}, X_{2}, X_{3}$:
# generate some data
> N <- 100
> X1 <- rnorm(N, | R-squared result in linear regression and "unexplained variance"
$R^2$ is the squared correlation of the OLS prediction $\hat{Y}$ and the DV $Y$. In a multiple regression with three predictors $X_{1}, X_{2}, X_{3}$:
# generate some data
> N <- 100
> X1 <- rnorm(N, 175, 7) # predictor 1
> X2 <- rnorm(N, 30, 8) # predictor 2
> X3 <- abs(rnorm(N, 60, 30)) # predictor 3
> Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 10) # DV
> fitX123 <- lm(Y ~ X1 + X2 + X3) # regression
> summary(fitX123)$r.squared # R^2
[1] 0.6361916
> Yhat <- fitted(fitX123) # OLS prediction Yhat
> cor(Yhat, Y)^2
[1] 0.6361916
$R^2$ is also equal to the variance of $\hat{Y}$ divided by the variance of $Y$. In that sense, it is the "variance accounted for by the predictors".
> var(Yhat) / var(Y)
[1] 0.6361916
The squared semi-partial correlation of $Y$ with a predictor $X_{1}$ is equal to the increase in $R^2$ when adding $X_{1}$ as a predictor to the regression with all remaining predictors. This may be taken as the unique contribution of $X_{1}$ to the proportion of variance explained by all predictors. Here, the semi-partial correlation is the correlation of $Y$ with the residuals from regression where $X_{1}$ is the predicted variable and $X_{2}$ and $X_{3}$ are the predictors.
# residuals from regression with DV X1 and predictors X2, X3
> X1.X23 <- residuals(lm(X1 ~ X2 + X3))
> (spcorYX1.X23 <- cor(Y, X1.X23)) # semi-partial correlation of Y with X1
[1] 0.3172553
> spcorYX1.X23^2 # squared semi-partial correlation
[1] 0.1006509
> fitX23 <- lm(Y ~ X2 + X3) # regression with DV Y and predictors X2, X3
# increase in R^2 when changing to full regression
> summary(fitX123)$r.squared - summary(fitX23)$r.squared
[1] 0.1006509 | R-squared result in linear regression and "unexplained variance"
$R^2$ is the squared correlation of the OLS prediction $\hat{Y}$ and the DV $Y$. In a multiple regression with three predictors $X_{1}, X_{2}, X_{3}$:
# generate some data
> N <- 100
> X1 <- rnorm(N, |
52,053 | R-squared result in linear regression and "unexplained variance" | R^2 is the percent of variance in the DV accounted for by the whole model. That is, your intercept and your IVS combined account for that much of the variance, using the linear regression model.
In your case, you got an R^2 of 0.85, indicating that the intercept plus cdd plus pmax combined account for 85% of the variance in the DV. The other 15% is error, that is, variance that is not accounted for by the model.
You cannot tell, from the information given, how much of this 85% is contributed by each. In order to do this, you would have to run more models:
DV ~ 1 (intercept alone)
DV ~ cdd
DV ~ pmax
Each of these would have an R^2, and you could then tell how much each adds. | R-squared result in linear regression and "unexplained variance" | R^2 is the percent of variance in the DV accounted for by the whole model. That is, your intercept and your IVS combined account for that much of the variance, using the linear regression model.
In y | R-squared result in linear regression and "unexplained variance"
R^2 is the percent of variance in the DV accounted for by the whole model. That is, your intercept and your IVS combined account for that much of the variance, using the linear regression model.
In your case, you got an R^2 of 0.85, indicating that the intercept plus cdd plus pmax combined account for 85% of the variance in the DV. The other 15% is error, that is, variance that is not accounted for by the model.
You cannot tell, from the information given, how much of this 85% is contributed by each. In order to do this, you would have to run more models:
DV ~ 1 (intercept alone)
DV ~ cdd
DV ~ pmax
Each of these would have an R^2, and you could then tell how much each adds. | R-squared result in linear regression and "unexplained variance"
R^2 is the percent of variance in the DV accounted for by the whole model. That is, your intercept and your IVS combined account for that much of the variance, using the linear regression model.
In y |
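Below is a minimal R sketch of the model comparison described in the answer above. The data frame is simulated purely for illustration; only the column names cdd and pmax come from the answer, and the coefficients used to generate DV are made up.
set.seed(1)
d <- data.frame(cdd = rnorm(100), pmax = rnorm(100))
d$DV <- 1 + 0.8*d$cdd - 0.5*d$pmax + rnorm(100)   # hypothetical data-generating process
r2 <- function(fit) summary(fit)$r.squared        # helper to extract R^2 from a fitted lm
r2(lm(DV ~ 1, data = d))                          # intercept alone: R^2 is 0 by definition
r2(lm(DV ~ cdd, data = d))                        # cdd alone
r2(lm(DV ~ pmax, data = d))                       # pmax alone
r2(lm(DV ~ cdd + pmax, data = d))                 # full model
Comparing these R^2 values shows how much each predictor adds over the others, in the spirit of the squared semi-partial correlations discussed in the earlier answer.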
52,054 | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function? | I mistrust all but the lowest-level functions in Excel, and for good reason: many procedures that go beyond simple arithmetic operations have flaws or errors and most of them are poorly documented. This includes all the probability distribution functions.
Numerical flaws are inevitable due to limitations in floating point accuracy. For example, no matter what platform you use, if it uses double precision then don't try to compute the tail probability of a standard normal distribution for z = 50: the value equals $10^{-545}$, which underflows. However, using NORMSDIST you'll get increasingly bad values once |z| exceeds 3 or so, and once |z| exceeds 8 Excel gives up and just returns zero. Here is a tabulation of some of the errors (compared to Mathematica's answers):
$$\eqalign{
Z &\quad 10^6\text{(Excel - True)/True} \cr
-3 &\quad 50 \cr
-4 &\quad 468 \cr
-5 &\quad 1580\cr
-6 &\quad 3582\cr
-7 &\quad 6462\cr
-8 &\quad 70789\cr
-9 &\quad -1000000
}$$
Therefore, if you must use Excel's statistical functions, severely limit the ones you do use; learn their flaws and foibles; work around those problems; and use the ones you are familiar with as building blocks for everything else. In this spirit, I recommend computing lognormal probabilities in terms of NORMSDIST: just apply it to the log of the argument. Specifically, in place of LOGNORMDIST(z, mu, sigma) use NORMSDIST((LN(z) - mu)/sigma). Once you have verified that these expressions do return the same values, you have also established exactly what LOGNORMDIST does: there's little possibility of confusion. In particular, you can see that mu is the mean of the logarithms, not the geometric mean, and that sigma is the SD of the logs, not the geometric SD. | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function? | I mistrust all but the lowest-level functions in Excel, and for good reason: many procedures that go beyond simple arithmetic operations have flaws or errors and most of them are poorly documented. T | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function?
I mistrust all but the lowest-level functions in Excel, and for good reason: many procedures that go beyond simple arithmetic operations have flaws or errors and most of them are poorly documented. This includes all the probability distribution functions.
Numerical flaws are inevitable due to limitations in floating point accuracy. For example, no matter what platform you use, if it uses double precision then don't try to compute the tail probability of a standard normal distribution for z = 50: the value equals $10^{-545}$, which underflows. However, using NORMSDIST you'll get increasingly bad values once |z| exceeds 3 or so, and once |z| exceeds 8 Excel gives up and just returns zero. Here is a tabulation of some of the errors (compared to Mathematica's answers):
$$\eqalign{
Z &\quad 10^6\text{(Excel - True)/True} \cr
-3 &\quad 50 \cr
-4 &\quad 468 \cr
-5 &\quad 1580\cr
-6 &\quad 3582\cr
-7 &\quad 6462\cr
-8 &\quad 70789\cr
-9 &\quad -1000000
}$$
Therefore, if you must use Excel's statistical functions, severely limit the ones you do use; learn their flaws and foibles; work around those problems; and use the ones you are familiar with as building blocks for everything else. In this spirit, I recommend computing lognormal probabilities in terms of NORMSDIST: just apply it to the log of the argument. Specifically, in place of LOGNORMDIST(z, mu, sigma) use NORMSDIST((LN(z) - mu)/sigma). Once you have verified that these expressions do return the same values, you have also established exactly what LOGNORMDIST does: there's little possibility of confusion. In particular, you can see that mu is the mean of the logarithms, not the geometric mean, and that sigma is the SD of the logs, not the geometric SD. | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function?
I mistrust all but the lowest-level functions in Excel, and for good reason: many procedures that go beyond simple arithmetic operations have flaws or errors and most of them are poorly documented. T |
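As a quick cross-check of the recommendation above outside Excel, here is a small R sketch in which plnorm() and pnorm() stand in for LOGNORMDIST and NORMSDIST; the values z = 5, mu = 1, sigma = 1 are chosen only for illustration.
z <- 5; mu <- 1; sigma <- 1
plnorm(z, meanlog = mu, sdlog = sigma)   # lognormal CDF evaluated directly
pnorm((log(z) - mu) / sigma)             # the same probability via the normal CDF of log(z)
The two lines return the same probability, which is exactly the NORMSDIST((LN(z) - mu)/sigma) construction recommended above.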
52,055 | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function? | On Microsoft Office for Mac 2008 LOGNORMDIST(5;1;1) gives 0,728882893 in Excel. On R plnorm(5,1,1) gives 0.7288829. In R you need to supply mean and standard deviation on log scale, so it seems that in Excel you need to do the same. | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function? | On Microsoft Office for Mac 2008 LOGNORMDIST(5;1;1) gives 0,728882893 in Excel. On R plnorm(5,1,1) gives 0.7288829. In R you need to supply mean and standard deviation on log scale, so it seems that i | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function?
On Microsoft Office for Mac 2008 LOGNORMDIST(5;1;1) gives 0,728882893 in Excel. On R plnorm(5,1,1) gives 0.7288829. In R you need to supply mean and standard deviation on log scale, so it seems that in Excel you need to do the same. | How to use the LOGNORMALDIST function to generate a Cumulative Distribution Function?
On Microsoft Office for Mac 2008 LOGNORMDIST(5;1;1) gives 0,728882893 in Excel. On R plnorm(5,1,1) gives 0.7288829. In R you need to supply mean and standard deviation on log scale, so it seems that i |
52,056 | Calculate median without access to raw data | The question can be construed as requesting a nonparametric estimator of the median of a sample in the form f(min, mean, max, sd). In this circumstance, by contemplating extreme (two-point) distributions, we can trivially establish that
$$ 2\ \text{mean} - \text{max} \le \text{median} \le 2\ \text{mean} - \text{min}.$$
There might be an improvement available by considering the constraint imposed by the known SD. To make any more progress, additional assumptions are needed. Typically, some measure of skewness is essential. (In fact, skewness can be estimated from the deviation between the mean and the median relative to the sd, so one should be able to reverse the process.)
One could, in a pinch, use these four statistics to obtain a maximum-entropy solution and use its median for the estimator. Actually, the min and max probably won't be any good, but in a satellite image there are fixed upper and lower bounds (e.g., 0 and 255 for an eight-bit image); these would constrain the maximum-entropy solution nicely.
It's worth remarking that general-purpose image processing software is capable of producing far more information than this, so it could be worthwhile looking at other software solutions. Alternatively, often one can trick the software into supplying additional information. For example, if you could divide each apparent "object" into two pieces you would have statistics for the two halves. That would provide useful information for estimating a median. | Calculate median without access to raw data | The question can be construed as requesting a nonparametric estimator of the median of a sample in the form f(min, mean, max, sd). In this circumstance, by contemplating extreme (two-point) distribut | Calculate median without access to raw data
The question can be construed as requesting a nonparametric estimator of the median of a sample in the form f(min, mean, max, sd). In this circumstance, by contemplating extreme (two-point) distributions, we can trivially establish that
$$ 2\ \text{mean} - \text{max} \le \text{median} \le 2\ \text{mean} - \text{min}.$$
There might be an improvement available by considering the constraint imposed by the known SD. To make any more progress, additional assumptions are needed. Typically, some measure of skewness is essential. (In fact, skewness can be estimated from the deviation between the mean and the median relative to the sd, so one should be able to reverse the process.)
One could, in a pinch, use these four statistics to obtain a maximum-entropy solution and use its median for the estimator. Actually, the min and max probably won't be any good, but in a satellite image there are fixed upper and lower bounds (e.g., 0 and 255 for an eight-bit image); these would constrain the maximum-entropy solution nicely.
It's worth remarking that general-purpose image processing software is capable of producing far more information than this, so it could be worthwhile looking at other software solutions. Alternatively, often one can trick the software into supplying additional information. For example, if you could divide each apparent "object" into two pieces you would have statistics for the two halves. That would provide useful information for estimating a median. | Calculate median without access to raw data
The question can be construed as requesting a nonparametric estimator of the median of a sample in the form f(min, mean, max, sd). In this circumstance, by contemplating extreme (two-point) distribut |
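A quick numerical check of the stated bounds in R, on a simulated right-skewed sample (the exponential data are only for illustration; the inequality should hold for any sample):
set.seed(2)
x <- rexp(1000)                  # a right-skewed sample
c(lower = 2*mean(x) - max(x),    # lower bound for the median
  median = median(x),
  upper = 2*mean(x) - min(x))    # upper bound for the median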
52,057 | Calculate median without access to raw data | If you know underlying distribution of the data, you can.
For example, for normally distributed data, the mean and median are the same (median = mode = mean).
Or for exponential distribution with mean $\lambda^{-1}$ the median is $\lambda^{-1} ln(2)$.
Otherwise, it is impossible to obtain the median without having the raw data or knowing the actual data distribution. | Calculate median without access to raw data | If you know underlying distribution of the data, you can.
For example, for normal distributed data, the mean and median are same (median=mode=mean).
Or for exponential distribution with mean $\lambda | Calculate median without access to raw data
If you know underlying distribution of the data, you can.
For example, for normally distributed data, the mean and median are the same (median = mode = mean).
Or for exponential distribution with mean $\lambda^{-1}$ the median is $\lambda^{-1} ln(2)$.
Otherwise, it is impossible to obtain the median without having the raw data or knowing the actual data distribution. | Calculate median without access to raw data
If you know underlying distribution of the data, you can.
For example, for normal distributed data, the mean and median are same (median=mode=mean).
Or for exponential distribution with mean $\lambda |
52,058 | Difference between Excel's RAND(), RAND()*RAND(), etc | Standardization is good, but it's not the right standardization for this situation. It helps to see that multiplying values of RAND() is the same as adding their logarithms (followed by a subsequent exponentiation). Because the different calls to RAND() are supposed to be independent, those logarithms are still independently distributed. As a simple calculation shows, their common distribution actually has a mean and variance. (In fact, its negative is an exponential distribution.) The Central Limit Theorem applies. It says that the logs, suitably standardized, converge to a normal distribution. We conclude that these products--standardized to have a constant geometric mean and constant geometric variance--are converging to the exponential of a normally distributed variable: that is, a lognormal distribution. | Difference between Excel's RAND(), RAND()*RAND(), etc | Standardization is good, but it's not the right standardization for this situation. It helps to see that multiplying values of RAND() is the same as adding their logarithms (followed by a subsequent | Difference between Excel's RAND(), RAND()*RAND(), etc
Standardization is good, but it's not the right standardization for this situation. It helps to see that multiplying values of RAND() is the same as adding their logarithms (followed by a subsequent exponentiation). Because the different calls to RAND() are supposed to be independent, those logarithms are still independently distributed. As a simple calculation shows, their common distribution actually has a mean and variance. (In fact, its negative is an exponential distribution.) The Central Limit Theorem applies. It says that the logs, suitably standardized, converge to a normal distribution. We conclude that these products--standardized to have a constant geometric mean and constant geometric variance--are converging to the exponential of a normally distributed variable: that is, a lognormal distribution. | Difference between Excel's RAND(), RAND()*RAND(), etc
Standardization is good, but it's not the right standardization for this situation. It helps to see that multiplying values of RAND() is the same as adding their logarithms (followed by a subsequent |
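A small R simulation of this argument, with runif() standing in for RAND() and an arbitrary number of factors: the log of a product of many independent uniforms should look approximately Gaussian, so the product itself is approximately lognormal.
set.seed(3)
n <- 30                                    # number of uniform factors in each product
prods <- replicate(10000, prod(runif(n)))  # products of n independent U(0,1) draws
qqnorm(log(prods)); qqline(log(prods))     # the logs fall close to a straight line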
52,059 | Difference between Excel's RAND(), RAND()*RAND(), etc | "In Excel, the Rand function returns a random number that is greater than or equal to 0 and less than 1. The Rand function returns a new random number each time your spreadsheet recalculates." -http://www.techonthenet.com/excel/formulas/rand.php
Because RAND() is always less than one and greater than zero, multiplying it by itself will make it smaller. As you do that over and over, you will get closer to zero. If you want something that gives you a random number between 0 and a, you can do a*RAND() instead. | Difference between Excel's RAND(), RAND()*RAND(), etc | "In Excel, the Rand function returns a random number that is greater than or equal to 0 and less than 1. The Rand function returns a new random number each time your spreadsheet recalculates." -http:/ | Difference between Excel's RAND(), RAND()*RAND(), etc
"In Excel, the Rand function returns a random number that is greater than or equal to 0 and less than 1. The Rand function returns a new random number each time your spreadsheet recalculates." -http://www.techonthenet.com/excel/formulas/rand.php
Because RAND() is always less than one and greater than zero, multiplying it by itself will make it smaller. As you do that over and over, you will get closer to zero. If you want something that gives you a random number between 0 and a, you can do a*RAND() instead. | Difference between Excel's RAND(), RAND()*RAND(), etc
"In Excel, the Rand function returns a random number that is greater than or equal to 0 and less than 1. The Rand function returns a new random number each time your spreadsheet recalculates." -http:/ |
52,060 | Difference between Excel's RAND(), RAND()*RAND(), etc | I am not sure why your graph has values from -2 to 4 but for what it is worth here is the answer to the text of your question:
Suppose that $U \sim U[0,1]$. Then the cdf of $U$ is given by $F(u) = u$ for $u \in (0,1)$, by $0$ for $u \le 0$, and by $1$ for $u \ge 1$.
When the same draw $U$ is multiplied by itself (rather than multiplying different iid realizations), you are computing the following:
$Y = U^n$ where $n$ is the number of times the draw is multiplied (see the General Case note at the end for the product of independent draws).
Thus, the corresponding cdf is:
$F(y) = P(Y \le y)$
i.e.,
$F(y) = P(U^n \le y)$
i.e.,
$F(y) = P(U \le y^{1/n})$
i.e.,
$F(y) = y^{1/n}$ for $y \in (0,1)$, with $F(y) = 0$ for $y \le 0$ and $F(y) = 1$ for $y \ge 1$.
The distribution of $Y$ converges to a point mass (Dirac delta) at $Y=0$ as $n \rightarrow \infty$. Thus, $E(Y) \rightarrow 0$ as $n \rightarrow \infty$.
The above convergence is also related to first-order stochastic dominance in the following sense:
Suppose that $n_1 > n_2$. Then, it is the case that:
$F(y|n_1) \ge F(y|n_2)$
Intuitively, the above result states that, in visual terms, as $n$ increases the cdf of $Y$ shifts upward, i.e. the distribution shifts towards 0. This happens because the pdf associated with $Y$ concentrates more and more at the lower end of the interval $[0,1]$, and asymptotically all of the probability mass concentrates at 0, which explains the observed behavior.
General Case
@whuber's comment to this answer gives the solution when $Y$ is the product of $n$ independent, different random variables drawn from [0,1]. | Difference between Excel's RAND(), RAND()*RAND(), etc | I am not sure why your graph has values from -2 to 4 but for what it is worth here is the answer to the text of your question:
Suppose that $U \sim U[0,1]$. Then the cdf of $U$ is given by $F(u) = u$ | Difference between Excel's RAND(), RAND()*RAND(), etc
I am not sure why your graph has values from -2 to 4 but for what it is worth here is the answer to the text of your question:
Suppose that $U \sim U[0,1]$. Then the cdf of $U$ is given by $F(u) = u$ for $u \in (0,1)$, by $0$ for $u \le 0$, and by $1$ for $u \ge 1$.
When the same draw $U$ is multiplied by itself (rather than multiplying different iid realizations), you are computing the following:
$Y = U^n$ where $n$ is the number of times the draw is multiplied (see the General Case note at the end for the product of independent draws).
Thus, the corresponding cdf is:
$F(y) = P(Y \le y)$
i.e.,
$F(y) = P(U^n \le y)$
i.e.,
$F(y) = P(U \le y^{1/n})$
i.e.,
$F(y) = y^{1/n}$ for $y \in (0,1)$, with $F(y) = 0$ for $y \le 0$ and $F(y) = 1$ for $y \ge 1$.
The distribution of $Y$ converges to a point mass (Dirac delta) at $Y=0$ as $n \rightarrow \infty$. Thus, $E(Y) \rightarrow 0$ as $n \rightarrow \infty$.
The above convergence is also related to first-order stochastic dominance in the following sense:
Suppose that $n_1 > n_2$. Then, it is the case that:
$F(y|n_1) \ge F(y|n_2)$
Intuitively, the above result states that, in visual terms, as $n$ increases the cdf of $Y$ shifts upward, i.e. the distribution shifts towards 0. This happens because the pdf associated with $Y$ concentrates more and more at the lower end of the interval $[0,1]$, and asymptotically all of the probability mass concentrates at 0, which explains the observed behavior.
General Case
@whuber's comment to this answer gives the solution when $Y$ is the product of $n$ independent, different random variables drawn from [0,1]. | Difference between Excel's RAND(), RAND()*RAND(), etc
I am not sure why your graph has values from -2 to 4 but for what it is worth here is the answer to the text of your question:
Suppose that $U \sim U[0,1]$. Then the cdf of $U$ is given by $F(u) = u$ |
52,061 | Difference between Excel's RAND(), RAND()*RAND(), etc | There is no mysterious reason. If you multiply a bunch of numbers between 0 and 1, the result will necessarily be close to 0. The average result for RAND()*RAND()*RAND()*RAND()*RAND()*RAND() should be something close to (0.5^6), that is, 0.015625.
Be careful using Excel's RAND() function, though. It's not the best random number generator in the world. | Difference between Excel's RAND(), RAND()*RAND(), etc | There is no mysterious reason. If you multiply a bunch of numbers between 0 an 1, the result will forcibly be close to 0. The average result for RAND()*RAND()*RAND()*RAND()*RAND()*RAND() should be som | Difference between Excel's RAND(), RAND()*RAND(), etc
There is no mysterious reason. If you multiply a bunch of numbers between 0 and 1, the result will necessarily be close to 0. The average result for RAND()*RAND()*RAND()*RAND()*RAND()*RAND() should be something close to (0.5^6), that is, 0.015625.
Be careful using Excel's RAND() function, though. It's not the best random number generator in the world. | Difference between Excel's RAND(), RAND()*RAND(), etc
There is no mysterious reason. If you multiply a bunch of numbers between 0 an 1, the result will forcibly be close to 0. The average result for RAND()*RAND()*RAND()*RAND()*RAND()*RAND() should be som |
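A one-line sanity check of that expectation in R, with runif() playing the role of Excel's RAND() (the number of replications is arbitrary):
set.seed(5)
mean(replicate(1e5, prod(runif(6))))   # should come out close to 0.5^6 = 0.015625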
52,062 | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gumbel distribution? | The NIST page on Gumbel distributions shows the method of moments estimators for the parameters of both the maximum and minimum extreme-value distributions. Those are easily calculated and should provide reliable initial estimates.
The parameterization of the density function for the minimum extreme value distribution on that page is:
$$f(x) = \frac{1}{\beta} \exp({\frac{x-\mu}{\beta}})\exp({-\exp({\frac{x-\mu}{\beta}}}))$$
with location parameter $\mu$ and scale parameter $\beta$.
In that parameterization, with sample mean $\bar X$ and sample standard deviation $s$, the method of moments estimators are:
$$\tilde{\beta}=\frac{s\sqrt6}{\pi}$$
and:
$$\tilde{\mu}=\bar X +0.5772 \tilde{\beta} $$
where 0.5772 is an approximation to Euler's constant.
That's used by survreg() in R, which you can see by typing the following at the command prompt:
survival::survreg.distributions$extreme$init | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gum | The NIST page on Gumbel distributions shows the method of moments estimators for the parameters of both the maximum and minimum extreme-value distributions. Those are easily calculated and should prov | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gumbel distribution?
The NIST page on Gumbel distributions shows the method of moments estimators for the parameters of both the maximum and minimum extreme-value distributions. Those are easily calculated and should provide reliable initial estimates.
The parameterization of the density function for the minimum extreme value distribution on that page is:
$$f(x) = \frac{1}{\beta} \exp({\frac{x-\mu}{\beta}})\exp({-\exp({\frac{x-\mu}{\beta}}}))$$
with location parameter $\mu$ and scale parameter $\beta$.
In that parameterization, with sample mean $\bar X$ and sample standard deviation $s$, the method of moments estimators are:
$$\tilde{\beta}=\frac{s\sqrt6}{\pi}$$
and:
$$\tilde{\mu}=\bar X +0.5772 \tilde{\beta} $$
where 0.5772 is an approximation to Euler's constant.
That's used by survreg() in R, which you can see by typing the following at the command prompt:
survival::survreg.distributions$extreme$init | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gum
The NIST page on Gumbel distributions shows the method of moments estimators for the parameters of both the maximum and minimum extreme-value distributions. Those are easily calculated and should prov |
52,063 | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gumbel distribution? | Putting EdM's answer into code, which seems to work well and is very concise:
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
scale_est <- (sd(deathTime)*sqrt(6))/pi
loc_est <- mean(deathTime) + 0.5772157*scale_est
fitGum <- fitdist(deathTime, "gumbel",start=list(a = loc_est, b = scale_est))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
Alternative: in referencing Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors? per whuber's comment, which says to paraphrase: "Automatically finding good starting values for a nonlinear model is an art. (It's relatively easy for one-off datasets when you can just plot the data and make some good guesses visually.) One approach is to linearize the model and use least squares estimates." I came up with the following which appears to work in this case.
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
# Define the linearized model
linearized_model <- function(time, a, b) {
log_time <- log(time)
log_time - a / b
}
# Define the objective function for least squares estimation
objective_function <- function(params) {
a <- params[1]
b <- params[2]
predicted_values <- linearized_model(deathTime, a, b)
residuals <- predicted_values - log(deathTime)
sum(residuals^2)
}
# Least squares estimation to obtain starting parameters
starting_params <- optim(c(1, 1), objective_function)$par
fitGum <- fitdist(deathTime, "gumbel",start=list(a = starting_params[1], b = starting_params[2]))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gum | Putting EdM's answer into code, which seems to work well and is very concise:
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2 | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gumbel distribution?
Putting EdM's answer into code, which seems to work well and is very concise:
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
scale_est <- (sd(deathTime)*sqrt(6))/pi
loc_est <- mean(deathTime) + 0.5772157*scale_est
fitGum <- fitdist(deathTime, "gumbel",start=list(a = loc_est, b = scale_est))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel
Alternative: in referencing Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors? per whuber's comment, which says to paraphrase: "Automatically finding good starting values for a nonlinear model is an art. (It's relatively easy for one-off datasets when you can just plot the data and make some good guesses visually.) One approach is to linearize the model and use least squares estimates." I came up with the following which appears to work in this case.
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2]
# Define the linearized model
linearized_model <- function(time, a, b) {
log_time <- log(time)
log_time - a / b
}
# Define the objective function for least squares estimation
objective_function <- function(params) {
a <- params[1]
b <- params[2]
predicted_values <- linearized_model(deathTime, a, b)
residuals <- predicted_values - log(deathTime)
sum(residuals^2)
}
# Least squares estimation to obtain starting parameters
starting_params <- optim(c(1, 1), objective_function)$par
fitGum <- fitdist(deathTime, "gumbel",start=list(a = starting_params[1], b = starting_params[2]))
survGum <- 1-evd::pgumbel(time, fitGum$estimate[1], fitGum$estimate[2])
plot(time,survGum,type="n",xlab="Time",ylab="Survival Probability", main="Lung Survival")
lines(survGum, type = "l", col = "red", lwd = 3) # plot Gumbel | Is there a clean way to derive the start parameters for running the `fitdist()` function for the Gum
Putting EdM's answer into code, which seems to work well and is very concise:
library(evd)
library(fitdistrplus)
library(survival)
time <- seq(0, 1000, by = 1)
deathTime <- lung$time[lung$status == 2 |
52,064 | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The result that we obtain from linear regression is a function of random variables, so the parameters are random variables. You can calculate variance for any random variable. The variance tells us how uncertain the estimates are (in absolutely noiseless data, or with an overfitting model, the variances would be zeros). | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The result that we obtain from linear regression is a function of random variables, so the parameters are random variables. You can calculate variance for any random variable. The variance tells us ho | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The result that we obtain from linear regression is a function of random variables, so the parameters are random variables. You can calculate variance for any random variable. The variance tells us how uncertain the estimates are (in absolutely noiseless data, or with an overfitting model, the variances would be zeros). | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The result that we obtain from linear regression is a function of random variables, so the parameters are random variables. You can calculate variance for any random variable. The variance tells us ho |
52,065 | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The data you are feeding into your OLS model are random draws from some underlying population. You could either be drawing from the joint distribution of the predictors and the outcome, or from the distribution of the outcome conditional on the predictors (so you consider the predictors fixed - this is the assumption in OLS).
In either case, your regression coefficient estimates will be random variables themselves, because they are functions of the random variable $y$. The statistical properties of your parameter estimates, like the mean and the variance, will depend on the properties of the outcomes, which is why the variance of parameter estimates depends on both the model (via the hat matrix) and the variance of the outcome. | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The data you are feeding into your OLS model are random draws from some underlying population. You could either be drawing from the joint distribution of the predictors and the outcome, or from the di | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The data you are feeding into your OLS model are random draws from some underlying population. You could either be drawing from the joint distribution of the predictors and the outcome, or from the distribution of the outcome conditional on the predictors (so you consider the predictors fixed - this is the assumption in OLS).
In either case, your regression coefficient estimates will be random variables themselves, because they are functions of the random variable $y$. The statistical properties of your parameter estimates, like the mean and the variance, will depend on the properties of the outcomes, which is why the variance of parameter estimates depends on both the model (via the hat matrix) and the variance of the outcome. | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The data you are feeding into your OLS model are random draws from some underlying population. You could either be drawing from the joint distribution of the predictors and the outcome, or from the di |
52,066 | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The other answers are correct, but I think it might be helpful to simulate what is happening.
library(ggplot2)
set.seed(2023)
# Define sample size
#
N <- 100
# Define number of times to repeat the simulation
#
R <- 1000
# Fix values of x
#
x <- seq(0, 1, 1/(N - 1))
# Define conditional expected values of y as E[y|x] = 1 + 2x
#
Ey <- 1 + 2*x
B_01 <- B_02 <- B_03 <- B_11 <- B_12 <- B_13 <- rep(NA, R)
for (i in 1:R){
# Simulate iid Gaussian error terms
#
e1 <- rnorm(N, 0, 1)
e2 <- rnorm(N, 0, 2)
e3 <- rnorm(N, 0, 3)
# Define observed values of y as the sum of the expected value and the error
#
y1 <- Ey + e1
y2 <- Ey + e2
y3 <- Ey + e3
# Fit regressions and extract the estimated regression coefficients
#
L1 <- lm(y1 ~ x)
L2 <- lm(y2 ~ x)
L3 <- lm(y3 ~ x)
#
B_01[i] <- summary(L1)$coefficients[1, 1]
B_02[i] <- summary(L2)$coefficients[1, 1]
B_03[i] <- summary(L3)$coefficients[1, 1]
#
B_11[i] <- summary(L1)$coefficients[2, 1]
B_12[i] <- summary(L2)$coefficients[2, 1]
B_13[i] <- summary(L3)$coefficients[2, 1]
}
# Make a data frame of the coefficients
#
d_01 <- data.frame(
Estimate = B_01,
Coefficient = "Intercept",
Variance = "1"
)
d_02 <- data.frame(
Estimate = B_02,
Coefficient = "Intercept",
Variance = "2"
)
d_03 <- data.frame(
Estimate = B_03,
Coefficient = "Intercept",
Variance = "3"
)
d_11 <- data.frame(
Estimate = B_11,
Coefficient = "Slope",
Variance = "1"
)
d_12 <- data.frame(
Estimate = B_12,
Coefficient = "Slope",
Variance = "2"
)
d_13 <- data.frame(
Estimate = B_13,
Coefficient = "Slope",
Variance = "3"
)
d <- rbind(d_01, d_02, d_03, d_11, d_12, d_13)
# Plot
#
ggplot(d, aes(x = Estimate, fill = Variance)) +
geom_density(alpha = 0.25) +
facet_grid(~Coefficient) +
theme(legend.position="bottom")
As the variance of the error term gets larger, the estimated slope $\hat\beta_1$ and estimated intercept $\hat\beta_0$ bounce around more.
In terms of the math, below is a common way to write the OLS estimates, which may show why the coefficient estimates vary.
$$
\hat\beta_1 = \dfrac{
\overset{N}{\underset{i = 1}{\sum}}\left[
(x_i - \bar x)(y_i - \bar y)
\right]
}{
\overset{N}{\underset{i = 1}{\sum}}\left[
(x_i - \bar x)^2
\right]
}\\
\hat\beta_0 = \bar y - \hat\beta_1\bar x
$$
Since $y$ is a random variable (due to the randomness of the error term), each of these estimates is random and depends on the exact observed values of $y$. As these observed values of $y$ change, so do the estimated coefficients. If those observed values of $y$ change a lot, such as with a large error variance, then the estimated coefficients change a lot which is the exact behavior in the above simulation.
(It could also be argued that $x$ has randomness, such as in an observational study. However, this is not necessary for the discussion of why the coefficient estimates have variances, and it adds confusion without any clear benefit, so I am setting aside $x$ randomness.) | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance? | The other answers are correct, but I think it might be helpful to simulate what is happening.
library(ggplot2)
set.seed(2023)
# Define sample size
#
N <- 100
# Define number of times to repeat the s | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The other answers are correct, but I think it might be helpful to simulate what is happening.
library(ggplot2)
set.seed(2023)
# Define sample size
#
N <- 100
# Define number of times to repeat the simulation
#
R <- 1000
# Fix values of x
#
x <- seq(0, 1, 1/(N - 1))
# Define conditional expected values of y as E[y|x] = 1 + 2x
#
Ey <- 1 + 2*x
B_01 <- B_02 <- B_03 <- B_11 <- B_12 <- B_13 <- rep(NA, R)
for (i in 1:R){
# Simulate iid Gaussian error terms
#
e1 <- rnorm(N, 0, 1)
e2 <- rnorm(N, 0, 2)
e3 <- rnorm(N, 0, 3)
# Define observed values of y as the sum of the expected value and the error
#
y1 <- Ey + e1
y2 <- Ey + e2
y3 <- Ey + e3
# Fit regressions and extract the estimated regression coefficients
#
L1 <- lm(y1 ~ x)
L2 <- lm(y2 ~ x)
L3 <- lm(y3 ~ x)
#
B_01[i] <- summary(L1)$coefficients[1, 1]
B_02[i] <- summary(L2)$coefficients[1, 1]
B_03[i] <- summary(L3)$coefficients[1, 1]
#
B_11[i] <- summary(L1)$coefficients[2, 1]
B_12[i] <- summary(L2)$coefficients[2, 1]
B_13[i] <- summary(L3)$coefficients[2, 1]
}
# Make a data frame of the coefficients
#
d_01 <- data.frame(
Estimate = B_01,
Coefficient = "Intercept",
Variance = "1"
)
d_02 <- data.frame(
Estimate = B_02,
Coefficient = "Intercept",
Variance = "2"
)
d_03 <- data.frame(
Estimate = B_03,
Coefficient = "Intercept",
Variance = "3"
)
d_11 <- data.frame(
Estimate = B_11,
Coefficient = "Slope",
Variance = "1"
)
d_12 <- data.frame(
Estimate = B_12,
Coefficient = "Slope",
Variance = "2"
)
d_13 <- data.frame(
Estimate = B_13,
Coefficient = "Slope",
Variance = "3"
)
d <- rbind(d_01, d_02, d_03, d_11, d_12, d_13)
# Plot
#
ggplot(d, aes(x = Estimate, fill = Variance)) +
geom_density(alpha = 0.25) +
facet_grid(~Coefficient) +
theme(legend.position="bottom")
As the variance of the error term gets larger, the estimated slope $\hat\beta_1$ and estimated intercept $\hat\beta_0$ bounce around more.
In terms of the math, below is a common way to write the OLS estimates, which may show why the coefficient estimates vary.
$$
\hat\beta_1 = \dfrac{
\overset{N}{\underset{i = 1}{\sum}}\left[
(x_i - \bar x)(y_i - \bar y)
\right]
}{
\overset{N}{\underset{i = 1}{\sum}}\left[
(x_i - \bar x)^2
\right]
}\\
\hat\beta_0 = \bar y - \hat\beta_1\bar x
$$
Since $y$ is a random variable (due to the randomness of the error term), each of these estimates is random and depends on the exact observed values of $y$. As these observed values of $y$ change, so do the estimated coefficients. If those observed values of $y$ change a lot, such as with a large error variance, then the estimated coefficients change a lot which is the exact behavior in the above simulation.
(It could also be argued that $x$ has randomness, such as in an observational study. However, this is not necessary for the discussion of why the coefficient estimates have variances, and it adds confusion without any clear benefit, so I am setting aside $x$ randomness.) | What does it mean for $\hat\beta_1$ and $\hat\beta_0$ to have a variance?
The other answers are correct, but I think it might be helpful to simulate what is happening.
library(ggplot2)
set.seed(2023)
# Define sample size
#
N <- 100
# Define number of times to repeat the s |
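To connect the simulation above with the closed-form result, one can compare the empirical variance of simulated slopes against the standard formula $\mathrm{Var}(\hat\beta_1) = \sigma^2 / \sum_i (x_i - \bar x)^2$ for simple regression with a fixed design. A smaller, self-contained sketch (the sample size, error SD, and coefficients are chosen only for illustration):
set.seed(11)
N <- 100; R <- 1000; sigma <- 2
x <- seq(0, 1, length.out = N)        # fixed design, as in the simulation above
slopes <- replicate(R, {
  y <- 1 + 2*x + rnorm(N, 0, sigma)   # same kind of data-generating process
  coef(lm(y ~ x))[2]                  # estimated slope for this replication
})
var(slopes)                           # empirical variance of the slope estimates
sigma^2 / sum((x - mean(x))^2)        # closed-form variance; the two should be close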
52,067 | Is a data size in a binomial distribution random variable? | One can well think of situations in which $N$ is random, even with a frequentist interpretation (there may be a sequence of experiments with different $N$ that can be interpreted as varying randomly). For example, a service center may be interested in what percentage of calls they receive is about a certain product. They may collect data from one or several hours over the day. Number of calls in an hour $N$ can be modelled by a Poisson distribution, and the percentage of interest can then be estimated conditionally, but note that the distribution of $N$ may have impact on the sampling distribution of the estimator and, in advance, on data collection planning.
The use of the term "parameter" doesn't necessarily imply that a quantity is not random (quite obviously not in Bayesian analysis, but neither necessarily in a frequentist approach, where in some situations parameters can be randomly drawn, see above).
Of course, in the most standard frequentist analyses, parameters are in fact fixed and nonrandom, and many frequentists then would think that writing down distributions using "|" as if they were conditional on something random is misleading and wrong (unless of course you either do a Bayesian analysis or there is a frequentist mechanism behind the parameter). Personally I think to some extent this is a matter of taste. I see why people think it's wrong, however in many places it can smoothly be used without leading to any trouble. If you want, it offers a hand to the Bayesian who may want to model the parameters as random even if the frequentist thinks they are not.
Even if you do a Bayesian analysis, in fact you could treat the $N$ as fixed if there is no uncertainty about it (I expect this to happen in the majority of situations, but not in all of them). In other situations (see above) there may be uncertainty about it, and then obviously the $p(N)$ has to be defined expressing this uncertainty appropriately. For example it could be a Poisson in the situation 1 above, potentially with a hyperprior on top for the $\lambda$ parameter of the Poisson. In any case there is no fixed choice for this, it depends on the situation and how uncertainty there plays out. | Is a data size in a binomial distribution random variable? | One can well think of situations in which $N$ is random, even with a frequentist interpretation (there may be a sequence of experiments with different $N$ that can be interpreted as varying randomly). | Is a data size in a binomial distribution random variable?
One can well think of situations in which $N$ is random, even with a frequentist interpretation (there may be a sequence of experiments with different $N$ that can be interpreted as varying randomly). For example, a service center may be interested in what percentage of calls they receive is about a certain product. They may collect data from one or several hours over the day. Number of calls in an hour $N$ can be modelled by a Poisson distribution, and the percentage of interest can then be estimated conditionally, but note that the distribution of $N$ may have impact on the sampling distribution of the estimator and, in advance, on data collection planning.
The use of the term "parameter" doesn't necessarily imply that a quantity is not random (quite obviously not in Bayesian analysis, but neither necessarily in a frequentist approach, where in some situations parameters can be randomly drawn, see above).
Of course, in the most standard frequentist analyses, parameters are in fact fixed and nonrandom, and many frequentists then would think that writing down distributions using "|" as if they were conditional on something random is misleading and wrong (unless of course you either do a Bayesian analysis or there is a frequentist mechanism behind the parameter). Personally I think to some extent this is a matter of taste. I see why people think it's wrong, however in many places it can smoothly be used without leading to any trouble. If you want, it offers a hand to the Bayesian who may want to model the parameters as random even if the frequentist thinks they are not.
Even if you do a Bayesian analysis, in fact you could treat the $N$ as fixed if there is no uncertainty about it (I expect this to happen in the majority of situations, but not in all of them). In other situations (see above) there may be uncertainty about it, and then obviously the $p(N)$ has to be defined expressing this uncertainty appropriately. For example it could be a Poisson in the situation 1 above, potentially with a hyperprior on top for the $\lambda$ parameter of the Poisson. In any case there is no fixed choice for this, it depends on the situation and how uncertainty there plays out. | Is a data size in a binomial distribution random variable?
One can well think of situations in which $N$ is random, even with a frequentist interpretation (there may be a sequence of experiments with different $N$ that can be interpreted as varying randomly). |
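A small R sketch of the service-center situation described above; the rate lambda and the proportion p are made-up illustration values, not anything stated in the answer.
set.seed(6)
lambda <- 40; p <- 0.25             # hypothetical hourly call rate and proportion of interest
N <- rpois(1, lambda)               # the number of calls in the hour is itself random
x <- rbinom(1, size = N, prob = p)  # calls about the product, given the realised N
x / N                               # conditional estimate of p for that hour (assumes N > 0)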
52,068 | Is a data size in a binomial distribution random variable? | Binomial distribution has two parameters: sample size and probability of success. The parameters can either be fixed or be random variables, can be known or need to be estimated. If they are to be estimated, then depending on your preferred approach they can be treated as random variables (Bayesian school of thought) or not (frequentist or likelihoodist). So both cases are possible and you need to consult the book about what exactly the author says.
That said, using conditional “$| $” (vs “$; $”) notation often, but not necessarily, suggests that the author treats them as random variables. If they talk about a posterior, then this is clearly the case, since there are no non-Bayesian posteriors. | Is a data size in a binomial distribution random variable? | Binomial distribution has two parameters: sample size and probability of success. The parameters can either be fixed or be random variables, can be known or need to be estimated. If they are to be est | Is a data size in a binomial distribution random variable?
Binomial distribution has two parameters: sample size and probability of success. The parameters can either be fixed or be random variables, can be known or need to be estimated. If they are to be estimated, then depending on your preferred approach they can be treated as random variables (Bayesian school of thought) or not (frequentist or likelihoodist). So both cases are possible and you need to consult the book about what exactly the author says.
That said, using conditional “$| $” (vs “$; $”) notation often, but not necessarily, suggests that the author treats them as random variables. If they talk about a posterior, then this is clearly the case, since there are no non-Bayesian posteriors. | Is a data size in a binomial distribution random variable?
Binomial distribution has two parameters: sample size and probability of success. The parameters can either be fixed or be random variables, can be known or need to be estimated. If they are to be est |
52,069 | What does the I operator stand for in the context of time series modeling? | It is the identity operator, $IX_t=X_t$, and is typically used in ARIMA-type formulas where you also have the backshift operator $B$ (the related differencing operator is often written $\nabla = I - B$), or polynomials in $B$. Essentially, it's a way to make notation more compact.
For instance, if you have a time series $(X_t)$, then
$$(I-B)X_t = IX_t-BX_t = X_t-X_{t-1},$$
and
$$(I-B^2)X_t = IX_t-B(BX_t) = X_t-X_{t-2}.$$ | What does the I operator stand for in the context of time series modeling? | It is the identity operator, $IX_t=X_t$, and is typically used in ARIMA type formulas where you also have the backshift operator $B$ (sometimes people use $\nabla$ for the backshift), or polynomials i | What does the I operator stand for in the context of time series modeling?
It is the identity operator, $IX_t=X_t$, and is typically used in ARIMA-type formulas where you also have the backshift operator $B$ (the related differencing operator is often written $\nabla = I - B$), or polynomials in $B$. Essentially, it's a way to make notation more compact.
For instance, if you have a time series $(X_t)$, then
$$(I-B)X_t = IX_t-BX_t = X_t-X_{t-1},$$
and
$$(I-B^2)X_t = IX_t-B(BX_t) = X_t-X_{t-2}.$$ | What does the I operator stand for in the context of time series modeling?
It is the identity operator, $IX_t=X_t$, and is typically used in ARIMA type formulas where you also have the backshift operator $B$ (sometimes people use $\nabla$ for the backshift), or polynomials i |
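In R the same manipulations can be checked with diff(); the series below is arbitrary and only for illustration.
x <- c(5, 7, 4, 9, 11)
diff(x, lag = 1)   # (I - B) X_t   = X_t - X_{t-1}
diff(x, lag = 2)   # (I - B^2) X_t = X_t - X_{t-2}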
52,070 | What does the I operator stand for in the context of time series modeling? | The main answer by Stephan is correct, but it is odd to use the identity operator (with the symbol $I$) in a scalar context instead of just using the number one. It would be simpler here if they just said that:
$$(1-B)X_t = X_t-X_{t-1}.$$ | What does the I operator stand for in the context of time series modeling? | The main answer by Stephan is correct, but it is odd to use the identity operator (with the symbol $I$) in a scalar context instead of just using the number one. It would be simpler here if they just | What does the I operator stand for in the context of time series modeling?
The main answer by Stephan is correct, but it is odd to use the identity operator (with the symbol $I$) in a scalar context instead of just using the number one. It would be simpler here if they just said that:
$$(1-B)X_t = X_t-X_{t-1}.$$ | What does the I operator stand for in the context of time series modeling?
The main answer by Stephan is correct, but it is odd to use the identity operator (with the symbol $I$) in a scalar context instead of just using the number one. It would be simpler here if they just |
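To make the operator algebra in the two answers above concrete, here is a tiny R illustration (the series values are made up purely for the example); diff() applies exactly the $(I-B)$ and $(I-B^2)$ operations:
x <- c(5, 7, 10, 14, 19, 25)   # a toy series X_t
diff(x, lag = 1)   # (I - B)X_t   = X_t - X_{t-1}:  2 3 4 5 6
diff(x, lag = 2)   # (I - B^2)X_t = X_t - X_{t-2}:  5 7 9 11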
52,071 | Why is error of OLS not zero? | The major issue in your proof is $ (X^TX)^{-1}X^T = X^T(XX^T)^{-1} $
One easy way to see that this identity does not hold is to pick a random matrix and compute both sides: you will find they are different.
(I tried X = np.array([[1,2,3],[4,5,6]]).)
The reason your proof goes wrong is that when you perform the SVD, the matrix $\Sigma$ has dimension $ n \times d$, which is not square if $ d<n $. This means that the first line should be $ (X^TX)^{-1}X^T = (V \Sigma^T U^T...) $ instead of $ (V \Sigma U^T...)$.
Also, the quantity $ V \Sigma^{-1} U^T$ does not exist at all if $d<n$, since a non-square matrix has no inverse. This leads to the false result.
The only case in which your proof holds is when the inverse exists, which is the case $ n=d$, and then we would indeed expect a perfect OLS fit. | Why is error of OLS not zero? | The major issue in your proof is $ (X^TX)^{-1}X^T = X^T(XX^T)^{-1} $
One obvious proof of the statement does not hold is by choosing a random matrix and then compute value on both sides. You would fin | Why is error of OLS not zero?
The major issue in your proof is $ (X^TX)^{-1}X^T = X^T(XX^T)^{-1} $
One easy way to see that this identity does not hold is to pick a random matrix and compute both sides: you will find they are different.
(I tried X = np.array([[1,2,3],[4,5,6]]).)
The reason your proof goes wrong is that when you perform the SVD, the matrix $\Sigma$ has dimension $ n \times d$, which is not square if $ d<n $. This means that the first line should be $ (X^TX)^{-1}X^T = (V \Sigma^T U^T...) $ instead of $ (V \Sigma U^T...)$.
Also, the quantity $ V \Sigma^{-1} U^T$ does not exist at all if $d<n$, since a non-square matrix has no inverse. This leads to the false result.
The only case in which your proof holds is when the inverse exists, which is the case $ n=d$, and then we would indeed expect a perfect OLS fit. | Why is error of OLS not zero?
The major issue in your proof is $ (X^TX)^{-1}X^T = X^T(XX^T)^{-1} $
One obvious proof of the statement does not hold is by choosing a random matrix and then compute value on both sides. You would fin |
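As a side note, a small R sketch makes the same point numerically (random data, purely illustrative): if the disputed step were valid, the hat matrix $X(X^TX)^{-1}X^T$ would equal the $n \times n$ identity and the residuals would vanish, but for a tall $X$ it is only a rank-$d$ projection:
set.seed(1)
n <- 5; d <- 2
X <- matrix(rnorm(n * d), n, d)         # tall design matrix, d < n
y <- rnorm(n)
H <- X %*% solve(t(X) %*% X) %*% t(X)   # hat matrix
round(H, 3)      # not the identity matrix
sum(diag(H))     # trace = d = 2, not n = 5
res <- y - H %*% y
sum(res^2)       # strictly positive: the OLS error is not zero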
52,072 | Why is error of OLS not zero? | How do you go from 3b to 3c?
You seem to be using the following $$(V\Sigma\Sigma V^T)^{-1}V\Sigma = V\Sigma^{-1}$$ But, that is only allowed when $\Sigma$ is square (ie when $n=p$). In that case $$(V\Sigma\Sigma V^T)^{-1} = (V^T)^{-1}\Sigma^{-1}\Sigma^{-1} V^{-1}$$
For $p=n$ it is correct that your cost function is zero. | Why is error of OLS not zero? | How do you go from 3b to 3c?
You seem to be using the following $$(V\Sigma\Sigma V^T)^{-1}V\Sigma = V\Sigma^{-1}$$ But, that is only allowed when $\Sigma$ is square (ie when $n=p$). In that case $$(V\ | Why is error of OLS not zero?
How do you go from 3b to 3c?
You seem to be using the following $$(V\Sigma\Sigma V^T)^{-1}V\Sigma = V\Sigma^{-1}$$ But, that is only allowed when $\Sigma$ is square (ie when $n=p$). In that case $$(V\Sigma\Sigma V^T)^{-1} = (V^T)^{-1}\Sigma^{-1}\Sigma^{-1} V^{-1}$$
For $p=n$ it is correct that your cost function is zero. | Why is error of OLS not zero?
How do you go from 3b to 3c?
You seem to be using the following $$(V\Sigma\Sigma V^T)^{-1}V\Sigma = V\Sigma^{-1}$$ But, that is only allowed when $\Sigma$ is square (ie when $n=p$). In that case $$(V\ |
52,073 | Is the probability of a continuous variable obtained via integrating over an interval of the probability density curve *cumulative* probability? | For easier reading, I have combined three extensive Comments (now deleted) into
an Answer:
You don't have the true PDF $f(x)$ from density in R. From the code, we know $X$ is standard normal, so the exact value of $p=P(0<X<1)$ could be found in R as $0.3413447.$
diff(pnorm(c(0,1)))
[1] 0.3413447
However, I suppose you want to get $p$ from your $n=200$ observations a.
The most direct way to do that is to find the proportion of values of a in (0,1):
set.seed(42)
a = rnorm(200) # generate random data
mean((a > 0) & (a < 1))
[1] 0.33
Alternatively, if you know data are normal, then you could estimate $μ,σ$ from data and use R's normal pdf function to get $0.3429.$
mu = mean(a); sd = sd(a)
diff(pnorm(c(0,1), mu, sd))
[1] 0.3428732
I wouldn't expect a density estimator to do very much better.
The output of density in R is a sequence of 512 x-values and 512 y-values that can be used to plot the estimated PDF (enclosing unit area).
density(a)
Call:
density.default(x = a)
Data: a (200 obs.); Bandwidth 'bw' = 0.2895
x y
Min. :-3.8616 Min. :0.0000781
1st Qu.:-2.0036 1st Qu.:0.0124148
Median :-0.1456 Median :0.0685148
Mean :-0.1456 Mean :0.1344194
3rd Qu.: 1.7124 3rd Qu.:0.2425435
Max. : 3.5704 Max. :0.4123541
hist(a, prob=T, col="skyblue2")
lines(density(a), col="brown", lwd=2)
rug(a)
The figure below shows a histogram of a along with the density estimator.
Tick marks along the horizontal axis show locations of
the $n=200$ observations. [Sometimes density estimators are informally called 'smoothed histograms', but they are based on individual data points without reference to the binning of any histogram. The density estimator used here is the default estimator from density in R; variations are available via parameters not used here.]
You might try to use this output to estimate $p,$ as follows, to get $p \approx 0.337867.$
xx = density(a)$x; yy = density(a)$y
sum(yy[xx > 0 & xx < 1])/sum(yy)
[1] 0.337867
This method does have the advantage of not needing to know the population family of distributions (e.g., normal).
Addendum, showing results for a much larger sample: $n=10\,000.$
set.seed(2022)
# counting points
A = rnorm(10000)
mean((A > 0) & (A < 1))
[1] 0.3396
# assuming normality
mu = mean(A); sd = sd(A)
diff(pnorm(c(0,1), mu, sd))
[1] 0.3407714
# density estimation
xx = density(A)$x; yy = density(A)$y
sum(yy[xx > 0 & xx < 1])/sum(yy)
[1] 0.3373269 | Is the probability of a continuous variable obtained via integrating over an interval of the probabi | For easier reading, I have combined three extensive Comments (now deleted) into
an Answer:
You don't have the true PDF $f(x)$ from density in R. From the code, we know $X$ is standard normal, so the e | Is the probability of a continuous variable obtained via integrating over an interval of the probability density curve *cumulative* probability?
For easier reading, I have combined three extensive Comments (now deleted) into
an Answer:
You don't have the true PDF $f(x)$ from density in R. From the code, we know $X$ is standard normal, so the exact value of $p=P(0<X<1)$ could be found in R as $0.3413447.$
diff(pnorm(c(0,1)))
[1] 0.3413447
However, I suppose you want to get $p$ from your $n=200$ observations a.
The most direct way to do that is to find the proportion of values of a in (0,1):
set.seed(42)
a = rnorm(200) # generate random data
mean((a > 0) & (a < 1))
[1] 0.33
Alternatively, if you know data are normal, then you could estimate $μ,σ$ from data and use R's normal pdf function to get $0.3429.$
mu = mean(a); sd = sd(a)
diff(pnorm(c(0,1), mu, sd))
[1] 0.3428732
I wouldn't expect a density estimator to do very much better.
The output of density in R is a sequence of 512 x-values and 512 y-values that can be used to plot the estimated PDF (enclosing unit area).
density(a)
Call:
density.default(x = a)
Data: a (200 obs.); Bandwidth 'bw' = 0.2895
x y
Min. :-3.8616 Min. :0.0000781
1st Qu.:-2.0036 1st Qu.:0.0124148
Median :-0.1456 Median :0.0685148
Mean :-0.1456 Mean :0.1344194
3rd Qu.: 1.7124 3rd Qu.:0.2425435
Max. : 3.5704 Max. :0.4123541
hist(a, prob=T, col="skyblue2")
lines(density(a), col="brown", lwd=2)
rug(a)
The figure below shows a histogram of a along with the density estimator.
Tick marks along the horizontal axis show locations of
the $n=200$ observations. [Sometimes density estimators are informally called 'smoothed histograms', but they are based on individual data points without reference to the binning of any histogram. The density estimator used here is the default estimator from density in R; variations are available via parameters not used here.]
You might try to use this output to estimate $p,$ as follows, to get $p \approx 0.337867.$
xx = density(a)$x; yy = density(a)$y
sum(yy[xx > 0 & xx < 1])/sum(yy)
[1] 0.337867
This method does have the advantage of not needing to know the population family of distributions (e.g., normal).
Addendum, showing results for a much larger sample: $n=10\,000.$
set.seed(2022)
# counting points
A = rnorm(10000)
mean((A > 0) & (A < 1))
[1] 0.3396
# assuming normality
mu = mean(A); sd = sd(A)
diff(pnorm(c(0,1), mu, sd))
[1] 0.3407714
# density estimation
xx = density(A)$x; yy = density(A)$y
sum(yy[xx > 0 & xx < 1])/sum(yy)
[1] 0.3373269 | Is the probability of a continuous variable obtained via integrating over an interval of the probabi
For easier reading, I have combined three extensive Comments (now deleted) into
an Answer:
You don't have the true PDF $f(x)$ from density in R. From the code, we know $X$ is standard normal, so the e |
52,074 | Is the probability of a continuous variable obtained via integrating over an interval of the probability density curve *cumulative* probability? | The cumulative distribution function (cdf, $F(x)$) and the probability density function (pdf, $f(x)$) are related by the equation:
$F(x) = P(X \leq x) = \int_{-\infty}^{x} f(t)\,dt$
By the fundamental theorem of calculus, we have that $\frac{\partial}{\partial x}F(x) = f(x)$
(We can replace $-\infty$ with the lower limit of the distribution if that distribution doesn't have support extending to $-\infty$.)
For continuous distributions, we can show that $P(X = a) = 0$ for all points $a$. So we can be a little flexible with our uses of $<$ vs $\leq$.
The probability that $X$ takes a value in a range is defined as
$P(a < X < b) = \int_{a}^{b}f(x)dx$
Which is also equal to
$P(a < X \leq b) = F(b) - F(a)$
So when we take an integral $\int_{a}^{b}f(x)dx$, we are not getting a cumulative probability, except in the case that $a = -\infty$ (or the lower limit of the distribution). | Is the probability of a continuous variable obtained via integrating over an interval of the probabi | The cumulative density function (cdf, $F(x)$) and the probability density function (pdf, $f(x)$) are related by the equation:
$F(x) = P(X \leq x) = \int_{-\infty}^{x} f(x)dx$
By the fundamental theore | Is the probability of a continuous variable obtained via integrating over an interval of the probability density curve *cumulative* probability?
The cumulative distribution function (cdf, $F(x)$) and the probability density function (pdf, $f(x)$) are related by the equation:
$F(x) = P(X \leq x) = \int_{-\infty}^{x} f(t)\,dt$
By the fundamental theorem of calculus, we have that $\frac{\partial}{\partial x}F(x) = f(x)$
(We can replace $-\infty$ with the lower limit of the distribution if that distribution doesn't have support extending to $-\infty$.)
For continuous distributions, we can show that $P(X = a) = 0$ for all points $a$. So we can be a little flexible with our uses of $<$ vs $\leq$.
The probability that $X$ takes a value in a range is defined as
$P(a < X < b) = \int_{a}^{b}f(x)dx$
Which is also equal to
$P(a < X \leq b) = F(b) - F(a)$
So when we take an integral $\int_{a}^{b}f(x)dx$, we are not getting a cumulative probability, except in the case that $a = -\infty$ (or the lower limit of the distribution). | Is the probability of a continuous variable obtained via integrating over an interval of the probabi
The cumulative density function (cdf, $F(x)$) and the probability density function (pdf, $f(x)$) are related by the equation:
$F(x) = P(X \leq x) = \int_{-\infty}^{x} f(x)dx$
By the fundamental theore |
52,075 | Which statistical tests can I conduct to analyse the trend of series data? | The plot itself is perhaps the best way to present the tendency.
Consider supplementing it with a robust visual indication of trend, such as a lightly colored line or curve. Building on psychometric principles (lightly and with some diffidence), I would favor an exponential curve determined by, say, the median values of the first third of the questions and the median values of the last third of the questions.
An equivalent description is to fit a straight line on a log-linear plot, as shown here.
This visualization has been engineered to support the apparent objectives of the question:
A title tells the reader what you want them to know.
The connecting line segments are visually suppressed because they are not the message.
The fitted line is made most prominent visually because it is the basic statistical summary -- it is the message.
Points that are significantly beyond the values of the fitted line (with a Bonferroni adjustment for 20 comparisons) are highlighted by making them brighter and coloring them prominently. (This assumes the vertical error bars are two-sided confidence intervals for a confidence level near 95%.)
The line is summarized by a single statistical measure of trend, displayed in the subtitle at the bottom: it represents an average 6.2% decrease in working time for each successive question.
This line passes through the median of the first five answer times (horizontally located at the median of the corresponding question numbers 0,1,2,3,4) and the median of the last five answer times (horizontally located at the median of the corresponding question numbers 16, 17, 18, 19, 20). This technique of using medians of the data at either extreme is advocated by John Tukey in his book EDA (Addison-Wesley 1977).
Some judgment is needed. Tukey often used the first third and last third of the data when making such exploratory fits. When I do that here, the left part of the line barely changes (it should not, since the data are consistent in that part of the plot) while the right part changes appreciably, reflecting both the greater variation in times and the greater standard errors there:
This time, however, (a) there are more badly fit points and (b) they consistently fall below the line. This suggests this fit does not have a sufficiently negative slope. Thus, we can have confidence that the initial exploratory estimate of $-6\%$ (or so) is one of the best possible descriptions of the trend. | Which statistical tests can I conduct to analyse the trend of series data? | The plot itself is perhaps the best way to present the tendency.
Consider supplementing it with a robust visual indication of trend, such as a lightly colored line or curve. Building on psychometri | Which statistical tests can I conduct to analyse the trend of series data?
The plot itself is perhaps the best way to present the tendency.
Consider supplementing it with a robust visual indication of trend, such as a lightly colored line or curve. Building on psychometric principles (lightly and with some diffidence), I would favor an exponential curve determined by, say, the median values of the first third of the questions and the median values of the last third of the questions.
An equivalent description is to fit a straight line on a log-linear plot, as shown here.
This visualization has been engineered to support the apparent objectives of the question:
A title tells the reader what you want them to know.
The connecting line segments are visually suppressed because they are not the message.
The fitted line is made most prominent visually because it is the basic statistical summary -- it is the message.
Points that are significantly beyond the values of the fitted line (with a Bonferroni adjustment for 20 comparisons) are highlighted by making them brighter and coloring them prominently. (This assumes the vertical error bars are two-sided confidence intervals for a confidence level near 95%.)
The line is summarized by a single statistical measure of trend, displayed in the subtitle at the bottom: it represents an average 6.2% decrease in working time for each successive question.
This line passes through the median of the first five answer times (horizontally located at the median of the corresponding question numbers 0,1,2,3,4) and the median of the last five answer times (horizontally located at the median of the corresponding question numbers 16, 17, 18, 19, 20). This technique of using medians of the data at either extreme is advocated by John Tukey in his book EDA (Addison-Wesley 1977).
Some judgment is needed. Tukey often used the first third and last third of the data when making such exploratory fits. When I do that here, the left part of the line barely changes (it should not, since the data are consistent in that part of the plot) while the right part changes appreciably, reflecting both the greater variation in times and the greater standard errors there:
This time, however, (a) there are more badly fit points and (b) they consistently fall below the line. This suggests this fit does not have a sufficiently negative slope. Thus, we can have confidence that the initial exploratory estimate of $-6\%$ (or so) is one of the best possible descriptions of the trend. | Which statistical tests can I conduct to analyse the trend of series data?
The plot itself is perhaps the best way to present the tendency.
Consider supplementing it with a robust visual indication of trend, such as a lightly colored line or curve. Building on psychometri |
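A rough R sketch of the median-based fit described above (the average times are the values digitized from the plot in a later answer in this thread, so the slope is only illustrative):
x <- 1:20
y <- c(40.5, 30.9, 28, 23.5, 26.6, 27.7, 25.8, 23.5, 20.8, 23.1,
       12.8, 10.1, 12.6, 23.3, 17.6, 9.5, 17.3, 17.2, 9.7, 11.4)
# resistant slope on the log scale: medians of the first five and last five points
lo <- 1:5; hi <- 16:20
slope <- (median(log(y[hi])) - median(log(y[lo]))) / (median(x[hi]) - median(x[lo]))
100 * (exp(slope) - 1)   # roughly a -6% change in time per question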
52,076 | Which statistical tests can I conduct to analyse the trend of series data? | While your question variables are categorical, they could also be treated as ordinal, since they are done in sequence, so there is a natural ordering of the questions. In that case something like Spearman's rho correlation coefficient would be... ok. | Which statistical tests can I conduct to analyse the trend of series data? | While your question variables are categorical, they could also be treated as ordinal, since they are done in sequence, so there is a natural ordering of the questions. In that case something like Spea | Which statistical tests can I conduct to analyse the trend of series data?
While your question variables are categorical, they could also be treated as ordinal, since they are done in sequence, so there is a natural ordering of the questions. In that case something like Spearman's rho correlation coefficient would be... ok. | Which statistical tests can I conduct to analyse the trend of series data?
While your question variables are categorical, they could also be treated as ordinal, since they are done in sequence, so there is a natural ordering of the questions. In that case something like Spea |
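As a minimal sketch (reusing the x and y vectors from the code just above; ties in y make the p-value approximate):
cor.test(x, y, method = "spearman", exact = FALSE)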
52,077 | Which statistical tests can I conduct to analyse the trend of series data? | Caveat The OP presents an interesting experiment that produced (up to) 200x20 = 4000 measurements. It's best to analyze the data at the student level, not the 20 averages per question, using for example spline regression as the averages don't follow a simple trend and the variances don't look constant either. That being said, the actual question is how to summarize the trend in the averages and what that trend is.
As @whuber illustrates, a plot is a great way to present 20 numbers (at least in two dimensions). If the numbers follow a pattern, this pattern can be highlighted with a judicious use of color and superimposing a line or a curve to emphasize the apparent relationship between the x and y variables.
These are powerful tools because the human visual system is very strong at seeing patterns. Sometimes however the additional graphical elements may suggest a relationship that's not strongly supported by the data.
I, for example, see a negative correlation between question order and time to answer only for questions 1 to 10. For questions 11 to 20, I see a leveling-off of average time to answer and an increase in variance (except for question 18).
Qualitatively, the data is consistent with both the "constantly negative" relationship and the "first-half negative, second-half level" relationship between question and time.
To demonstrate I plot the data twice, superimposing the two relationship patterns in each plot. I also add some commentary in the title, deliberately provoking, to stress the point I want to make with the functional relationship, in front of my imaginary audience. [For completeness, I include my "data" below.]
Neither alternative shown below is convincing (though the constant trend is less unconvincing?). It's better to fit a model that doesn't make strong assumptions about the relationship between question order and time to the full dataset of observations without averaging first. Perhaps spline regression, as recommended in Regression Modeling Strategies, is a place to start. See for example here.
And the alternative:
My fake data. I took the average response time from the original plot by eye. I assume time is measured in seconds because the total time is about 400 units and 400 minutes of math is a lot.
math <- tribble(
~question, ~time, ~sd,
1, 40, 4,
2, 31, 3,
3, 27, 3,
4, 24, 2,
5, 27, 4,
6, 28, 4,
7, 26, 2,
8, 24, 3,
9, 21, 4,
10, 24, 5,
11, 13, 3,
12, 10, 2,
13, 13, 3,
14, 24, 7,
15, 17, 8,
16, 9, 1,
17, 16, 6,
18, 16, 0.5,
19, 10, 5,
20, 11, 6
) | Which statistical tests can I conduct to analyse the trend of series data? | Caveat The OP presents an interesting experiment that produced (up to) 200x20 = 4000 measurements. It's best to analyze the data at the student level, not the 20 averages per question, using for examp | Which statistical tests can I conduct to analyse the trend of series data?
Caveat The OP presents an interesting experiment that produced (up to) 200x20 = 4000 measurements. It's best to analyze the data at the student level, not the 20 averages per question, using for example spline regression as the averages don't follow a simple trend and the variances don't look constant either. That being said, the actual question is how to summarize the trend in the averages and what that trend is.
As @whuber illustrates, a plot is a great way to present 20 numbers (at least in two dimensions). If the numbers follow a pattern, this pattern can be highlighted with a judicious use of color and superimposing a line or a curve to emphasize the apparent relationship between the x and y variables.
These are powerful tools because the human visual system is very strong at seeing patterns. Sometimes however the additional graphical elements may suggest a relationship that's not strongly supported by the data.
I, for example, see a negative correlation between question order and time to answer only for questions 1 to 10. For questions 11 to 20, I see a leveling-off of average time to answer and an increase in variance (except for question 18).
Qualitatively, the data is consistent with both the "constantly negative" relationship and the "first-half negative, second-half level" relationship between question and time.
To demonstrate I plot the data twice, superimposing the two relationship patterns in each plot. I also add some commentary in the title, deliberately provoking, to stress the point I want to make with the functional relationship, in front of my imaginary audience. [For completeness, I include my "data" below.]
Neither alternative shown below is convincing (though the constant trend is less unconvincing?). It's better to fit a model that doesn't make strong assumptions about the relationship between question order and time to the full dataset of observations without averaging first. Perhaps spline regression, as recommended in Regression Modeling Strategies, is a place to start. See for example here.
And the alternative:
My fake data. I took the average response time from the original plot by eye. I assume time is measured in seconds because the total time is about 400 units and 400 minutes of math is a lot.
math <- tribble(
~question, ~time, ~sd,
1, 40, 4,
2, 31, 3,
3, 27, 3,
4, 24, 2,
5, 27, 4,
6, 28, 4,
7, 26, 2,
8, 24, 3,
9, 21, 4,
10, 24, 5,
11, 13, 3,
12, 10, 2,
13, 13, 3,
14, 24, 7,
15, 17, 8,
16, 9, 1,
17, 16, 6,
18, 16, 0.5,
19, 10, 5,
20, 11, 6
) | Which statistical tests can I conduct to analyse the trend of series data?
Caveat The OP presents an interesting experiment that produced (up to) 200x20 = 4000 measurements. It's best to analyze the data at the student level, not the 20 averages per question, using for examp |
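As a minimal sketch of the spline idea, here is a natural-spline fit to the question-level averages in the math data above (the real recommendation is to fit this to the full student-level data, which we do not have; the choice of 3 degrees of freedom is arbitrary):
library(splines)   # assumes the tidyverse call that created `math` above has been run
fit <- lm(time ~ ns(question, df = 3), data = math)
summary(fit)
plot(math$question, math$time, xlab = "Question", ylab = "Time")
lines(math$question, fitted(fit), lwd = 2)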
52,078 | Which statistical tests can I conduct to analyse the trend of series data? | I've written a second answer inspired by a newer question which asks how to fit a nonlinear model for tree growth: nls() singular gradient matrix at initial parameter estimates error.
On the surface the study of how trees grow doesn't have much in common with the study of how students learn. However, if we flip a growth curve we get a learning curve. Here is Figure 1 in (Karlsson, 2000).
K. Karlsson. Height growth patterns of Scots pine and Norway spruce in the coastal areas of western Finland. Forest Ecology and Management, 135(1):205–216, 2000.
The figure shows a variety of growth curves, some with more curvature than others. This is interesting because, as discussed in the thread about fitting the nonlinear growth model, the data might trend linearly (without any curvature). So we might prefer a model that allows for curvature if there is evidence for it in the data.
So in my second answer I fit four models:
[blue] y ~ x (linear)
[green] log(y) ~ x (log linear, suggested by @whuber)
[yellow] segmented at the mid-point, suggested by me after eyeballing the data
$$
\begin{aligned}
y =
\begin{cases}
\beta_0 + \beta_1 x & x \leq 10\\
\beta^*_0 & x > 10
\end{cases}
\end{aligned}
$$
[red] nonlinear learning curve = inverted growth curve
$$
\begin{aligned}
y =
\beta_0 - \beta_1\left\{1 - \exp(-\beta_2x)\right\}^{\beta_3}
\end{aligned}
$$
where $x$ is the question id and $y$ is the time to answer the question (in seconds). R code to reproduce the analysis is attached at the end.
For what it's worth, the segmented model has the smallest residual sum of squares (RSS). As @whuber points out in a comment, it's also the most likely model to be overfitted to the data. In fact, this model would only make sense if the experimental design supports it. For example, the students got a break half-way through the exam. Or questions 11 to 20 are designed to have the same difficulty.
Another observation is that the non-linear model suggests that response times decrease rapidly during the first third of the test and then begin to level off more quickly than implied by the log linear model.
sum(resid(m_linear)^2)
#> [1] 391.5296
sum(m_log_linear$.resid^2)
#> [1] 341.9794
sum(resid(m_nonlinear)^2)
#> [1] 314.8828
sum(m_segmented$.resid^2)
#> [1] 285.9755
R code to fit the four models, reproduce the figure and compute the residual sums of squares.
# I've extracted the data with WebPlotDigitizer.
# https://automeris.io/WebPlotDigitizer/
x <- seq(20)
y <- c(40.5, 30.9, 28, 23.5, 26.6, 27.7, 25.8, 23.5, 20.8, 23.1, 12.8, 10.1, 12.6, 23.3, 17.6, 9.5, 17.3, 17.2, 9.7, 11.4)
library("tidyverse")
fit_nonlinear <- function(x, y) {
# See this answer by @whuber for an explanation how to fit
# a nonlinear model with least squares.
# https://stats.stackexchange.com/a/599097/237901
# The model.
f <- function(x, beta) {
b0 <- beta["b0"]
b1 <- beta["b1"]
b2 <- beta["b2"]
b3 <- beta["b3"]
exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3))
}
# Make a guess for an initial fit.
soln0 <- c(b0 = log(40), b1 = log(40), b2 = sqrt(0.1), b3 = 1)
# Polish this fit.
# The `control` object shows how to specify some of the most useful aspects
# of the search.
nls(
y ~ exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3)),
start = soln0,
control = list(minFactor = 2^(-16), maxiter = 1e4, warnOnly = TRUE)
)
}
fit_log_linear <- function(x, y) {
tibble(x, y) %>%
mutate(
.fitted = exp(fitted(lm(log(y) ~ x))),
.resid = y - .fitted
)
}
fit_segments <- function(x, y) {
m4 <- tibble(x, y)
m4.le10 <- lm(y ~ x, data = subset(m4, x <= 10))
m4.gt10 <- lm(y ~ 1, data = subset(m4, x > 10))
m4 %>%
mutate(
.fitted = if_else(x <= 10,
predict(m4.le10, newdata = .),
predict(m4.gt10, newdata = .)
),
.resid = y - .fitted
)
}
m_linear <- lm(y ~ x)
m_log_linear <- fit_log_linear(x, y)
m_nonlinear <- fit_nonlinear(x, y)
m_segmented <- fit_segments(x, y)
plot(x, y, xlab = "Question", ylab = "Time")
legend(
15, 40,
legend=c("linear", "log linear", "nonlinear", "segmented"),
col=c("#2297E6", "#61D04F", "#DF536B", "#F5C710"),
lty=1
)
abline(
m_linear,
lwd = 2, col = "#2297E6"
)
lines(
x, m_log_linear$.fitted,
lwd = 2, col = "#61D04F"
)
lines(
x, fitted(m_nonlinear),
lwd = 2, col = "#DF536B"
)
lines(
x[x <= 10], m_segmented$.fitted[x <= 10],
lwd = 2, col = "#F5C710"
)
lines(
x[x > 10], m_segmented$.fitted[x > 10],
lwd = 2, col = "#F5C710"
)
Created on 2022-12-26 with reprex v2.0.2 | Which statistical tests can I conduct to analyse the trend of series data? | I've written a second answer inspired by a newer question which asks how fit a nonlinear model for tree growth: nls() singular gradient matrix at initial parameter estimates error.
On the surface the | Which statistical tests can I conduct to analyse the trend of series data?
I've written a second answer inspired by a newer question which asks how to fit a nonlinear model for tree growth: nls() singular gradient matrix at initial parameter estimates error.
On the surface the study of how trees grow doesn't have much in common with the study of how students learn. However, if we flip a growth curve we get a learning curve. Here is Figure 1 in (Karlsson, 2000).
K. Karlsson. Height growth patterns of Scots pine and Norway spruce in the coastal areas of western Finland. Forest Ecology and Management, 135(1):205–216, 2000.
The figure shows a variety of growth curves, some with more curvature than others. This is interesting because, as discussed in the thread about fitting the nonlinear growth model, the data might trend linearly (without any curvature). So we might prefer a model that allows for curvature if there is evidence for it in the data.
So in my second answer I fit four models:
[blue] y ~ x (linear)
[green] log(y) ~ x (log linear, suggested by @whuber)
[yellow] segmented at the mid-point, suggested by me after eyeballing the data
$$
\begin{aligned}
y =
\begin{cases}
\beta_0 + \beta_1 x & x \leq 10\\
\beta^*_0 & x > 10
\end{cases}
\end{aligned}
$$
[red] nonlinear learning curve = inverted growth curve
$$
\begin{aligned}
y =
\beta_0 - \beta_1\left\{1 - \exp(-\beta_2x)\right\}^{\beta_3}
\end{aligned}
$$
where $x$ is the question id and $y$ is the time to answer the question (in seconds). R code to reproduce the analysis is attached at the end.
For what it's worth, the segmented model has the smallest residual sum of squares (RSS). As @whuber points out in a comment, it's also the most likely model to be overfitted to the data. In fact, this model would only make sense if the experimental design supports it. For example, the students got a break half-way through the exam. Or questions 11 to 20 are designed to have the same difficulty.
Another observation is that the non-linear model suggests that response times decrease rapidly during the first third of the test and then begin to level off more quickly than implied by the log linear model.
sum(resid(m_linear)^2)
#> [1] 391.5296
sum(m_log_linear$.resid^2)
#> [1] 341.9794
sum(resid(m_nonlinear)^2)
#> [1] 314.8828
sum(m_segmented$.resid^2)
#> [1] 285.9755
R code to fit the four models, reproduce the figure and compute the residual sums of squares.
# I've extracted the data with WebPlotDigitizer.
# https://automeris.io/WebPlotDigitizer/
x <- seq(20)
y <- c(40.5, 30.9, 28, 23.5, 26.6, 27.7, 25.8, 23.5, 20.8, 23.1, 12.8, 10.1, 12.6, 23.3, 17.6, 9.5, 17.3, 17.2, 9.7, 11.4)
library("tidyverse")
fit_nonlinear <- function(x, y) {
# See this answer by @whuber for an explanation how to fit
# a nonlinear model with least squares.
# https://stats.stackexchange.com/a/599097/237901
# The model.
f <- function(x, beta) {
b0 <- beta["b0"]
b1 <- beta["b1"]
b2 <- beta["b2"]
b3 <- beta["b3"]
exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3))
}
# Make a guess for an initial fit.
soln0 <- c(b0 = log(40), b1 = log(40), b2 = sqrt(0.1), b3 = 1)
# Polish this fit.
# The `control` object shows how to specify some of the most useful aspects
# of the search.
nls(
y ~ exp(b0) - exp(b1) * (1 - exp(-b2^2 * x))^(1 + sin(b3)),
start = soln0,
control = list(minFactor = 2^(-16), maxiter = 1e4, warnOnly = TRUE)
)
}
fit_log_linear <- function(x, y) {
tibble(x, y) %>%
mutate(
.fitted = exp(fitted(lm(log(y) ~ x))),
.resid = y - .fitted
)
}
fit_segments <- function(x, y) {
m4 <- tibble(x, y)
m4.le10 <- lm(y ~ x, data = subset(m4, x <= 10))
m4.gt10 <- lm(y ~ 1, data = subset(m4, x > 10))
m4 %>%
mutate(
.fitted = if_else(x <= 10,
predict(m4.le10, newdata = .),
predict(m4.gt10, newdata = .)
),
.resid = y - .fitted
)
}
m_linear <- lm(y ~ x)
m_log_linear <- fit_log_linear(x, y)
m_nonlinear <- fit_nonlinear(x, y)
m_segmented <- fit_segments(x, y)
plot(x, y, xlab = "Question", ylab = "Time")
legend(
15, 40,
legend=c("linear", "log linear", "nonlinear", "segmented"),
col=c("#2297E6", "#61D04F", "#DF536B", "#F5C710"),
lty=1
)
abline(
m_linear,
lwd = 2, col = "#2297E6"
)
lines(
x, m_log_linear$.fitted,
lwd = 2, col = "#61D04F"
)
lines(
x, fitted(m_nonlinear),
lwd = 2, col = "#DF536B"
)
lines(
x[x <= 10], m_segmented$.fitted[x <= 10],
lwd = 2, col = "#F5C710"
)
lines(
x[x > 10], m_segmented$.fitted[x > 10],
lwd = 2, col = "#F5C710"
)
Created on 2022-12-26 with reprex v2.0.2 | Which statistical tests can I conduct to analyse the trend of series data?
I've written a second answer inspired by a newer question which asks how fit a nonlinear model for tree growth: nls() singular gradient matrix at initial parameter estimates error.
On the surface the |
52,079 | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth? | Let's do the calculations to see what the answers are.
By changing the units of measurement of $x$ to the origin $x_0$ we may assume $x_0=0$ (to simplify the work and the notation) and--therefore--the middle of the curve is at $x=0.$ Thus
$$\frac{\mathrm{d}}{\mathrm{d}x} \log(f(x)) = \frac{\mathrm{d}}{\mathrm{d}x}\left(\log(L) - \log(1 + e^{-kx})\right) = \frac{ke^{-kx}}{1 + e^{-kx}}.$$
In the middle at $x=0$ this simplifies to $k/2.$
However, if we take "exponential phase of the growth" to refer to the region where $x\ll x_0,$ its slope has a limiting value
$$\lim_{x\to-\infty} \frac{ke^{-kx}}{1 + e^{-kx}} = \lim_{x\to-\infty} \frac{k}{1 + e^{kx}} = k.$$
The derivative of $f$ itself is obtained by multiplying the logarithmic derivative by $f,$ giving
$$\frac{\mathrm{d}}{\mathrm{d}x} f(x) = \frac{ke^{-kx}}{1 + e^{-kx}}\frac{L}{1 + e^{-kx}}.$$
In the middle at $x=0$ this simplifies to $kL/4.$
This leads to three useful rules, illustrated below:
The limiting slope at the left of the graph of $\log f$ is $k.$
The middle slope of the graph of $\log f$ is half the rate, $k/2.$
When $L=4,$ the middle slope of the graph of $f$ equals the rate $k.$
The special role of $L=4$ is revealed by studying the folded log and its inverse.
The left hand curve suggests the limiting log slope is achieved closely for $x \ll -3/k.$ The right hand curve indicates $kL/4$ is a good approximation of the slope for $|x| \lt 1/k.$ These observations provide guidance for estimating $k$ from data, especially when only a rough "eyeball" estimate might be needed. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr | Let's do the calculations to see what the answers are.
By changing the units of measurement of $x$ to the origin $x_0$ we may assume $x_0=0$ (to simplify the work and the notation) and--therefore--the | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth?
Let's do the calculations to see what the answers are.
By changing the units of measurement of $x$ to the origin $x_0$ we may assume $x_0=0$ (to simplify the work and the notation) and--therefore--the middle of the curve is at $x=0.$ Thus
$$\frac{\mathrm{d}}{\mathrm{d}x} \log(f(x)) = \frac{\mathrm{d}}{\mathrm{d}x}\left(\log(L) - \log(1 + e^{-kx})\right) = \frac{ke^{-kx}}{1 + e^{-kx}}.$$
In the middle at $x=0$ this simplifies to $k/2.$
However, if we take "exponential phase of the growth" to refer to the region where $x\ll x_0,$ its slope has a limiting value
$$\lim_{x\to-\infty} \frac{ke^{-kx}}{1 + e^{-kx}} = \lim_{x\to-\infty} \frac{k}{1 + e^{kx}} = k.$$
The derivative of $f$ itself is obtained by multiplying the logarithmic derivative by $f,$ giving
$$\frac{\mathrm{d}}{\mathrm{d}x} f(x) = \frac{ke^{-kx}}{1 + e^{-kx}}\frac{L}{1 + e^{-kx}}.$$
In the middle at $x=0$ this simplifies to $kL/4.$
This leads to three useful rules, illustrated below:
The limiting slope at the left of the graph of $\log f$ is $k.$
The middle slope of the graph of $\log f$ is half the rate, $k/2.$
When $L=4,$ the middle slope of the graph of $f$ equals the rate $k.$
The special role of $L=4$ is revealed by studying the folded log and its inverse.
The left hand curve suggests the limiting log slope is achieved closely for $x \ll -3/k.$ The right hand curve indicates $kL/4$ is a good approximation of the slope for $|x| \lt 1/k.$ These observations provide guidance for estimating $k$ from data, especially when only a rough "eyeball" estimate might be needed. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr
Let's do the calculations to see what the answers are.
By changing the units of measurement of $x$ to the origin $x_0$ we may assume $x_0=0$ (to simplify the work and the notation) and--therefore--the |
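A quick numerical confirmation of the three rules, in R (the values of $k$ and $L$ are arbitrary choices for the check):
k <- 0.8; L <- 4
f <- function(x) L / (1 + exp(-k * x))                         # x0 = 0 as above
dlogf <- function(x, h = 1e-6) (log(f(x + h)) - log(f(x - h))) / (2 * h)
df    <- function(x, h = 1e-6) (f(x + h) - f(x - h)) / (2 * h)
c(dlogf(0),   k / 2)      # middle slope of log f equals k/2
c(dlogf(-20), k)          # limiting slope at the far left is approximately k
c(df(0),      k * L / 4)  # middle slope of f equals kL/4 (= k here since L = 4)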
52,080 | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth? | Here are some exponential curves:
$e^x$ in cyan
$e^{2x}$ in pink
$1-e^{-x}$ in green
$1-e^{-2x}$ in orange
In a sense on the left the pink curve has twice the exponential growth rate of the cyan curve, and symmetrically on the right the orange curve has twice the exponential growth rate of the green curve though in a negative sense. When $x=0$ this translates into one slope being twice the other ($1$ and $2$ at that point), though not at other points. If you translated the curves horizontally, for example looking at $e^{x-5}$ and $e^{2(x-3)}$, then they would cross at a different point in the curves, here $x=1$, and at that point one slope would be twice the other ($e^{-4}$ and $2e^{-4}$ at that point).
Now add two logistic sigmoid curves
$\dfrac{1}{1+e^{-x}}$ in blue
$\dfrac{1}{1+e^{-2x}}$ in red
Note the blue curve is close to the cyan curve and the red curve is close to the pink curve for large negative $x$, while the blue curve is close to the green curve and the red curve is close to the orange curve for large positive $x$, so in those parts of the chart sharing their growth rates. Again when $x=0$ this translates into one slope being twice the other $\big(\frac14$ and $\frac12$ at that point$\big)$, though not at other points. Again, if you translated the curves horizontally, for example looking at $\frac{1}{1+e^{-(x-5)}}$ and $\frac{1}{1+e^{-2(x-3)}}$, then these would cross when $x=1$, and at that point one slope would be twice the other $\Big(\frac{e^{-4}}{(1+e^{-4})^2}$ and $\frac{2e^{-4}}{(1+e^{-4})^2}$ at that point$\Big)$.
So it could be reasonable to suggest the red curve in some sense has twice the logistic growth rate of the blue curve. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr | Here are some exponential curves:
$e^x$ in cyan
$e^{2x}$ in pink
$1-e^{-x}$ in green
$1-e^{-2x}$ in orange
In a sense on the left the pink curve has twice the exponential growth rate of the cyan cu | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth?
Here are some exponential curves:
$e^x$ in cyan
$e^{2x}$ in pink
$1-e^{-x}$ in green
$1-e^{-2x}$ in orange
In a sense on the left the pink curve has twice the exponential growth rate of the cyan curve, and symmetrically on the right the orange curve has twice the exponential growth rate of the green curve though in a negative sense. When $x=0$ this translates into one slope being twice the other ($1$ and $2$ at that point), though not at other points. If you translated the curves horizontally, for example looking at $e^{x-5}$ and $e^{2(x-3)}$, then they would cross at a different point in the curves, here $x=1$, and at that point one slope would be twice the other ($e^{-4}$ and $2e^{-4}$ at that point).
Now add two logistic sigmoid curves
$\dfrac{1}{1+e^{-x}}$ in blue
$\dfrac{1}{1+e^{-2x}}$ in red
Note the blue curve is close to the cyan curve and the red curve is close to the pink curve for large negative $x$, while the blue curve is close to the green curve and the red curve is close to the orange curve for large positive $x$, so in those parts of the chart sharing their growth rates. Again when $x=0$ this translates into one slope being twice the other $\big(\frac14$ and $\frac12$ at that point$\big)$, though not at other points. Again, if you translated the curves horizontally, for example looking at $\frac{1}{1+e^{-(x-5)}}$ and $\frac{1}{1+e^{-2(x-3)}}$, then these would cross when $x=1$, and at that point one slope would be twice the other $\Big(\frac{e^{-4}}{(1+e^{-4})^2}$ and $\frac{2e^{-4}}{(1+e^{-4})^2}$ at that point$\Big)$.
So it could be reasonable to suggest the red curve in some sense has twice the logistic growth rate of the blue curve. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr
Here are some exponential curves:
$e^x$ in cyan
$e^{2x}$ in pink
$1-e^{-x}$ in green
$1-e^{-2x}$ in orange
In a sense on the left the pink curve has twice the exponential growth rate of the cyan cu |
52,081 | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth? | If L = 1 and one transforms to the logit (log odds) scale $p(x)$ then the slope is exactly k. The first equality below holds by the definition of log odds and the second by algebraic simplification.
$p(x) = log(\frac{f(x)}{1-f(x)}) = k(x - x_0)$
Now
$dp(x)/dx = k$
This is the usual interpretation of k used in logistic regression.
Added
In general, define $p(x) = log(\frac{f(x)}{L-f(x)})$ in which case L cancels, $p(x) = k(x-x_0)$ as above and the above derivative still holds. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr | If L = 1 and one transforms to the logit (log odds) scale $p(x)$ then the slope is exactly k. The right hand side of the first equals is by definition of log odds and the second is by algebraic simpl | How does logistic growth rate coincide with the slope of the line in the exponential phase of the growth?
If L = 1 and one transforms to the logit (log odds) scale $p(x)$ then the slope is exactly k. The first equality below holds by the definition of log odds and the second by algebraic simplification.
$p(x) = log(\frac{f(x)}{1-f(x)}) = k(x - x_0)$
Now
$dp(x)/dx = k$
This is the usual interpretation of k used in logistic regression.
Added
In general, define $p(x) = log(\frac{f(x)}{L-f(x)})$ in which case L cancels, $p(x) = k(x-x_0)$ as above and the above derivative still holds. | How does logistic growth rate coincide with the slope of the line in the exponential phase of the gr
If L = 1 and one transforms to the logit (log odds) scale $p(x)$ then the slope is exactly k. The right hand side of the first equals is by definition of log odds and the second is by algebraic simpl |
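A small numerical check of this in R ($k$, $x_0$ and $L$ are arbitrary values chosen only for illustration):
k <- 0.7; x0 <- 2; L <- 5
x <- seq(-5, 9, by = 0.1)
fx <- L / (1 + exp(-k * (x - x0)))
p <- log(fx / (L - fx))   # the logit-type transform with L in place of 1
coef(lm(p ~ x))           # intercept ~ -k*x0 = -1.4, slope ~ k = 0.7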
52,082 | Is classification only a machine learning problem? | You are exactly right. Machine/statistical learning is one approach to classification, but not the only one. Simple rules created by humans are probably more common in computer programs than ones created by ML. | Is classification only a machine learning problem? | You are exactly right. Machine/statistical learning is one approach to classification, but not the only one. Simple rules created by humans are probably more common in computer programs than ones crea | Is classification only a machine learning problem?
You are exactly right. Machine/statistical learning is one approach to classification, but not the only one. Simple rules created by humans are probably more common in computer programs than ones created by ML. | Is classification only a machine learning problem?
You are exactly right. Machine/statistical learning is one approach to classification, but not the only one. Simple rules created by humans are probably more common in computer programs than ones crea |
52,083 | Is classification only a machine learning problem? | Actually, classification methodology has been around in classical probability and statistics for the better part of a century, well before "machine learning" was a deal. See, e.g., the classic multivariate analysis text by Johnson and Wichern, Applied Multivariate Statistical Analysis, 6th Edition, Section IV. CLASSIFICATION AND GROUPING TECHNIQUES (subsection "Classification with Several Populations"), for the optimal (nonparametric, even!) approach to this problem. Machine learning algorithms are simply attempts to approximate this historically well-known (at least by statisticians) optimal solution. | Is classification only a machine learning problem? | Actually, classification methodology has been around in classical probability and statistics for the better part of a century, well before "machine learning" was a deal. See, e.g., the classic multiv | Is classification only a machine learning problem?
Actually, classification methodology has been around in classical probability and statistics for the better part of a century, well before "machine learning" was a deal. See, e.g., the classic multivariate analysis text by Johnson and Wichern, Applied Multivariate Statistical Analysis, 6th Edition, Section IV. CLASSIFICATION AND GROUPING TECHNIQUES (subsection "Classification with Several Populations"), for the optimal (nonparametric, even!) approach to this problem. Machine learning algorithms are simply attempts to approximate this historically well-known (at least by statisticians) optimal solution. | Is classification only a machine learning problem?
Actually, classification methodology has been around in classical probability and statistics for the better part of a century, well before "machine learning" was a deal. See, e.g., the classic multiv |
52,084 | Expectation of conditional uniform variates [duplicate] | No, the event $X_1>X_2$ provides some information. Consider a more general case where you have independent $X_1,..,X_n$ and the event $\bigcap_{i=2}^n X_1>X_i$: surely you have the right to suspect that $X_1$ is more likely to be close to $1$ than to $0$.
For your question, there are various ways to calculate it: normalise the joint distribution in the region where $X_1>X_2$ and take the expectation. The answer will be $2/3$ (or, more straightforwardly, the $x_1$ coordinate of the center of mass of the triangular region). | Expectation of conditional uniform variates [duplicate] | No, the event $X_1>X_2$ provides some information. Consider a more general case where you have independent $X_1,..,X_n$ and the event $\bigcap_{i=2}^n X_1>X_i$, surely you have the right to suspect th | Expectation of conditional uniform variates [duplicate]
No, the event $X_1>X_2$ provides some information. Consider a more general case where you have independent $X_1,..,X_n$ and the event $\bigcap_{i=2}^n X_1>X_i$: surely you have the right to suspect that $X_1$ is more likely to be close to $1$ than to $0$.
For your question, there are various ways to calculate it: normalise the joint distribution in the region where $X_1>X_2$ and take the expectation. The answer will be $2/3$ (or, more straightforwardly, the $x_1$ coordinate of the center of mass of the triangular region). | Expectation of conditional uniform variates [duplicate]
No, the event $X_1>X_2$ provides some information. Consider a more general case where you have independent $X_1,..,X_n$ and the event $\bigcap_{i=2}^n X_1>X_i$, surely you have the right to suspect th |
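Writing the normalisation out for this particular case: since $P(X_1>X_2)=1/2$,
$$E(X_1 \mid X_1>X_2) = \frac{\int_0^1 \int_0^{x_1} x_1 \, dx_2 \, dx_1}{P(X_1>X_2)} = \frac{\int_0^1 x_1^2 \, dx_1}{1/2} = \frac{1/3}{1/2} = \frac{2}{3}.$$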
52,085 | Expectation of conditional uniform variates [duplicate] | Comment, illustrating @gunes's (+1) argument via simulation in R.
set.seed(2021)
X1 = runif(10^6); X2 = runif(10^6)
mean(X1[X1>X2])
[1] 0.6668622 # aprx 2/3
In the figure below, $E(X_1|X_1 >X_2)= 2/3$ is
the average horizontal value of the blue points.
#smaller samples for clearer figure
x1 = X1[1:30000]; x2 = X2[1:30000]
plot(x1, x2, pch=".")
points(x1[x1 > x2], x2[x1 > x2], pch=".", col="blue")
hist(X1[X1 > X2], prob=T, col="skyblue2")
curve(dbeta(x,2,1), add=T, col="orange", lwd=2, n = 10001) | Expectation of conditional uniform variates [duplicate] | Comment. illustrating @gunes (+1) argument via simulation in R.
set.seed(2021)
X1 = runif(10^6); X2 = runif(10^6)
mean(X1[X1>X2])
[1] 0.6668622 # aprx 2/3
In the figure below, $E(X_1|X_1 >X_2)= 2/3 | Expectation of conditional uniform variates [duplicate]
Comment, illustrating @gunes's (+1) argument via simulation in R.
set.seed(2021)
X1 = runif(10^6); X2 = runif(10^6)
mean(X1[X1>X2])
[1] 0.6668622 # aprx 2/3
In the figure below, $E(X_1|X_1 >X_2)= 2/3$ is
the average horizontal value of the blue points.
#smaller samples for clearer figure
x1 = X1[1:30000]; x2 = X2[1:30000]
plot(x1, x2, pch=".")
points(x1[x1 > x2], x2[x1 > x2], pch=".", col="blue")
hist(X1[X1 > X2], prob=T, col="skyblue2")
curve(dbeta(x,2,1), add=T, col="orange", lwd=2, n = 10001) | Expectation of conditional uniform variates [duplicate]
Comment. illustrating @gunes (+1) argument via simulation in R.
set.seed(2021)
X1 = runif(10^6); X2 = runif(10^6)
mean(X1[X1>X2])
[1] 0.6668622 # aprx 2/3
In the figure below, $E(X_1|X_1 >X_2)= 2/3 |
52,086 | One class classifier vs binary classifier | Suppose you are trying to perform two-class classification on faulty and non-faulty machinery data, where each example in the dataset is represented using the feature vector $\mathbf{x} = [x_1 \ x_2]^T$. This could be done as shown below:
Here, the faulty machinery data is represented by the orange area, and the non-faulty machinery data represented by the blue area. Suppose that the machines in question are rarely ever faulty, such that there are so many different examples of non-faulty machinery, but very few examples of faulty machinery. Given that you have trained on the data shown above, it is possible that you observe an example of non-faulty machinery that you mis-classify as faulty:
Again, the reason that this happens is because the set of all possible feature vectors representing non-faulty machinery is just too big. It is not possible to capture all of them and train on them. You could argue that you just need to collect more data and train on that, but what you would end up doing is this:
This, by the way, can be done using a neural network, for example, and the act of drawing these lines is the basic idea behind discriminative modelling.
However, why bother collecting so much data and creating a very complex model that can draw all of these lines, when you can just try to draw the shape of the faulty data like this?
In the figure above, the orange area is the faulty machinery data that you collected, while the rest of the feature space is assumed to represent non-faulty machinery. This is the basic idea behind generative modelling, where, instead of trying to draw a line that splits the feature space, you instead try to estimate the distribution of the faulty data to learn what it looks like. Then, given a new test vector $\mathbf{x}$, all you need to do is to measure the distance between the center of the distribution of the faulty data and this new test vector $\mathbf{x}$. If this distance is greater than a specific threshold, then the test vector is classified as non-faulty. Otherwise, it is classified as faulty. This is also one way of performing anomaly detection. | One class classifier vs binary classifier | Suppose you are trying to perform two-class classification on faulty and non-faulty machinery data, where each example in the dataset is represented using the feature vector $\mathbf{x} = [x_1 \ x_2]^ | One class classifier vs binary classifier
Suppose you are trying to perform two-class classification on faulty and non-faulty machinery data, where each example in the dataset is represented using the feature vector $\mathbf{x} = [x_1 \ x_2]^T$. This could be done as shown below:
Here, the faulty machinery data is represented by the orange area, and the non-faulty machinery data represented by the blue area. Suppose that the machines in question are rarely ever faulty, such that there are so many different examples of non-faulty machinery, but very few examples of faulty machinery. Given that you have trained on the data shown above, it is possible that you observe an example of non-faulty machinery that you mis-classify as faulty:
Again, the reason that this happens is because the set of all possible feature vectors representing non-faulty machinery is just too big. It is not possible to capture all of them and train on them. You could argue that you just need to collect more data and train on that, but what you would end up doing is this:
This, by the way, can be done using a neural network, for example, and the act of drawing these lines is the basic idea behind discriminative modelling.
However, why bother collecting so much data and creating a very complex model that can draw all of these lines, when you can just try to draw the shape of the faulty data like this?
In the figure above, the orange area is the faulty machinery data that you collected, while the rest of the feature space is assumed to represent non-faulty machinery. This is the basic idea behind generative modelling, where, instead of trying to draw a line that splits the feature space, you instead try to estimate the distribution of the faulty data to learn what it looks like. Then, given a new test vector $\mathbf{x}$, all you need to do is to measure the distance between the center of the distribution of the faulty data and this new test vector $\mathbf{x}$. If this distance is greater than a specific threshold, then the test vector is classified as non-faulty. Otherwise, it is classified as faulty. This is also one way of performing anomaly detection. | One class classifier vs binary classifier
Suppose you are trying to perform two-class classification on faulty and non-faulty machinery data, where each example in the dataset is represented using the feature vector $\mathbf{x} = [x_1 \ x_2]^ |
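A minimal R sketch of that distance-and-threshold idea, fitting a Gaussian to some hypothetical faulty-class training data and flagging new points by Mahalanobis distance (the simulated data, the Gaussian assumption and the chi-squared cutoff are all illustrative choices, not part of the answer above):
set.seed(42)
faulty <- matrix(rnorm(200, mean = 3), ncol = 2)   # hypothetical faulty-class data
mu  <- colMeans(faulty)
Sig <- cov(faulty)
classify <- function(x_new, level = 0.99) {
  d2 <- mahalanobis(x_new, center = mu, cov = Sig)  # squared distance to the class centre
  ifelse(d2 > qchisq(level, df = 2), "non-faulty", "faulty")
}
classify(rbind(c(3.1, 2.9),   # near the faulty cloud -> "faulty"
               c(0.0, 0.0)))  # far away -> "non-faulty"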
52,087 | One class classifier vs binary classifier | You could, but why would you? There is much more to gain by training a model on both classes, so that your model can learn from both classes and better distinguish between them. Imagine trying to learn to distinguish between cats and dogs, would it help you to see only images of cats, and then try to guess which image is a dog, without ever having seen a dog?
Using a one-class classifier would make sense only if you do not have any examples from the other class at this particular point in time. Or if the "other" class could be composed of many unknown classes, which cannot be easily grouped into one official class. For example, when doing anomaly detection, you may have only a few very weird cases, which could arise from many very varied causes.
You could, but why would you? There is much more to gain by training a model on both classes, so that your model can learn from both classes and better distinguish between them. Imagine trying to learn to distinguish between cats and dogs, would it help you to see only images of cats, and then try to guess which image is a dog, without ever having seen a dog?
Using a one-class classifier would make sense only if you do not have any examples from the other class at this particular point in time. Or if the "other" class could be composed of many unknown classes, which cannot be easily grouped into one official class. For example, when doing anomaly detection, you may have only a few very weird cases, which could arise from many very varied causes.
You could, but why would you? There is much more to gain by training a model on both classes, so that your model can learn from both classes and better distinguish between them. Imagine trying to lear |
52,088 | Probability Mass Function making the Truncated Normal Discrete | The other answer here uses the normal density values at the exact points. Another similar method would be to take the normal probabilities across intervals centred on those points. In the latter case, taking the support to be $X = 1,...,m$ you get:
$$p_X(x) = \frac{\Phi \Big( \frac{x - \mu + 1/2}{\sigma} \Big) - \Phi \Big( \frac{x - \mu - 1/2}{\sigma} \Big)}{\Phi \Big( \frac{m - \mu + 1/2}{\sigma} \Big) - \Phi \Big( \frac{1/2 - \mu}{\sigma} \Big)}
\quad \quad \quad
\text{for } x=1,...,m,$$
where $\Phi$ is the CDF of the standard normal distribution. You can then adjust to an arbitrary arithmetic progression by taking the appropriate linear transformation. This has the first three properties you stipulated in your question.
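As a quick numerical check of this construction, here is an R sketch with illustrative values of $\mu$, $\sigma$ and $m$ (not taken from the question):
mu <- 4.2; sigma <- 1.7; m <- 10
x <- 1:m
num <- pnorm((x - mu + 0.5)/sigma) - pnorm((x - mu - 0.5)/sigma)
den <- pnorm((m - mu + 0.5)/sigma) - pnorm((0.5 - mu)/sigma)
p <- num/den
sum(p)          # equals 1 by construction
sum(x * p)      # mean of the discretised distribution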
52,089 | Probability Mass Function making the Truncated Normal Discrete | When you use the values of the normal density, the first four conditions are automatically satisfied:
$$P(X = x\vert \mu,\sigma,a,b,c) = \frac{e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sum_{y \in \Omega} e^{-\frac{(y-\mu)^2}{2\sigma^2}}}$$
where $\Omega = \lbrace a,a+c,a+2c,\dots, b-c,b \rbrace$ is the set of all values in the support.
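A small R sketch of this pmf; the values of $a$, $b$, $c$ (called step below to avoid masking R's c()), $\mu$ and $\sigma$ are only illustrative:
a <- 0; b <- 10; step <- 1; mu <- 4.2; sigma <- 1.7
support <- seq(a, b, by = step)
w <- exp(-(support - mu)^2 / (2 * sigma^2))   # unnormalised normal density values
p <- w / sum(w)                               # the pmf above
c(total = sum(p), mean = sum(support * p))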
Maximum entropy
The fifth condition, maximum entropy, is also fulfilled since the maximum entropy distribution with constraints $\sum_{\forall x} x p(x) = \mu$ and $\sum_{\forall x} x^2 p(x) = \text{var}$ must be of the form $$P(X=x) = ce^{\lambda_1 x + \lambda_2 x^2}$$ which is like the distribution above (e.g. see that with $\lambda_1 = \mu/\sigma^2$ and $\lambda_2 = -1/(2\sigma^2)$ you get the above).
We can also prove it explicitly by considering the Kullback-Leibler divergence or Gibbs inequality with another distribution $f(x)$, and our distribution $g(x)$
$$\begin{array}{rcl}
- \sum_{\forall x} f(x)\log f(x) &\leq& - \sum_{\forall x} f(x)\log g(x) \\
&=& - \sum_{\forall x} f(x)\log \left(ce^{\lambda_1 x + \lambda_2 x^2} \right)\\
&=& - \sum_{\forall x} f(x)\left(\log c + \lambda_1 x + \lambda_2 x^2 \right)\\
&=& - \sum_{\forall x} g(x)\left(\log c + \lambda_1 x + \lambda_2 x^2 \right) \\
&=& - \sum_{\forall x} g(x)\log g(x)
\end{array}$$
The step where we switch from $f(x)$ to $g(x)$ (the second-to-last equality) holds because of the constraints
$$\begin{array}{rclcl}
\sum_{\forall x} f(x) &=& \sum_{\forall x} g(x) &=& 1 \\
\sum_{\forall x} xf(x) &=& \sum_{\forall x} xg(x) &=& \mu \\
\sum_{\forall x} x^2f(x) &=& \sum_{\forall x} x^2g(x) &=& \text{var}
\end{array}$$
with these constraints we can rewrite
$$\sum_{\forall x} f(x)\left(\log c + \lambda_1 x + \lambda_2 x^2 \right) = \sum_{\forall x} g(x)\left(\log c + \lambda_1 x + \lambda_2 x^2 \right)$$
So the entropy of $f(x)$ can be no larger than the entropy of $g(x)$, which shows that $g(x)$ is the maximum entropy distribution under these constraints.
52,090 | Probability Mass Function making the Truncated Normal Discrete | The most natural way would be as follows:
Let $X$ have support $0,1,\dots,T$, where $T \in \{2,3,\dots\}$ (i.e., $T$ can be taken as the limit to $\infty$, since the infinite sum can be bounded by the integral over the kernel of a Gaussian pdf, but it will be intractable to work with in practice).
$$f_X(X=x;\mu, \sigma) = \frac{\exp\left(-\frac{(\mu -x)^2}{2\sigma^2}\right)}{\sum_{y=0}^T\exp\left(-\frac{(\mu -y)^2}{2\sigma^2}\right)}$$
This is a special case of the softmax function $\left(\frac{\exp(z_i)}{\sum_{i=1}^N\exp(z_i)}\right)$, which is commonly used to map the reals to the interval $(0,1)$ with the property that the elements sum to 1. There are many choice models which take this form, such as the multinomial logit, with $T < \infty$.
I agree with @Xi'an, however, that defining a pmf this way is unlikely to be justified. It is hard to imagine a principled construction of what this would represent and, as Xi'an stated, the properties of a Gaussian that make it nice to work with are not inherited by this new model.
52,091 | The best way to plot high amount of discrete data with 2 variables in R [duplicate] | One potential option is to add a tiny bit of random noise to each observation. In that way fewer points will overlap.
You can either add it directly and use R's base plotting capabilities, or look into the jitter layer (geom_jitter) that comes with the ggplot2 package, which adds the noise automatically.
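For instance, a minimal sketch, assuming two integer-valued vectors a and b holding the plotted variables:
plot(jitter(a, amount = 0.2), jitter(b, amount = 0.2),
     pch = 16, col = rgb(0, 0, 0, 0.05), xlab = "A", ylab = "B")   # base R with added noise
library(ggplot2)
ggplot(data.frame(a, b), aes(a, b)) +
  geom_jitter(width = 0.2, height = 0.2, alpha = 0.05)             # ggplot2 equivalent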
52,092 | The best way to plot high amount of discrete data with 2 variables in R [duplicate] | Mosaic plots are a good way of doing this.
https://cran.r-project.org/web/packages/ggmosaic/vignettes/ggmosaic.html
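In base R, mosaicplot() on a cross-tabulation gives a similar display without extra packages; a and b below are assumed to be the two discrete variables:
mosaicplot(table(a, b), main = "A vs B", color = TRUE)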
52,093 | The best way to plot high amount of discrete data with 2 variables in R [duplicate] | The ggplot2 library should handle something like this. There are examples of the specific code out on the internet. I’ll just address the idea, since this is CV.SE, not SO.
I would represent the points in a data frame with three columns. One column would have the x-coordinate, one column would have the y-coordinate, and one column would have the count of how many instances of that x-y pair there are. Then you can use a color to denote the prevalence of a point, which ggplot2 can do.
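A minimal ggplot2 sketch of that idea, assuming discrete vectors a and b (Freq is the count column that table() produces):
library(ggplot2)
counts <- as.data.frame(table(x = a, y = b))          # columns x, y, Freq
ggplot(counts, aes(x, y, fill = Freq)) + geom_tile()  # colour encodes the count
# or let ggplot2 count for you and map the count to point size instead:
ggplot(data.frame(a, b), aes(a, b)) + geom_count()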
52,094 | The best way to plot high amount of discrete data with 2 variables in R [duplicate] | Similar to what Dave proposes, but in base R: visualize table counts using grayscale, with darker grays for cells with higher counts.
set.seed(1)
nn <- 1e6
aa <- sample(1:10,nn,prob=(1:10)^2-5*(1:10)+20,replace=TRUE)
bb <- sample(1:10,nn,prob=20-(1:10),replace=TRUE)
data_table <- table(aa,bb)
grayscale <- function ( cnt ) paste0("grey",100-3*round(cnt/1000,0))
# this relies on the fact that counts are between 3000 and 30000
# adapt as needed
plot(c(0,12),c(0,11),type="n",las=1,xlab="A",ylab="B")
for ( ii in rownames(data_table) ) {
for ( jj in colnames(data_table) ) {
rect(as.numeric(ii)-.5,as.numeric(jj)-.5,as.numeric(ii)+.5,as.numeric(jj)+.5,
border=NA,col=grayscale(data_table[ii,jj]))
# optionally, add counts
# text(as.numeric(ii),as.numeric(jj),data_table[ii,jj],
# col=if(data_table[ii,jj]>quantile(data_table,0.7)) "white" else "black")
}
}
counts_for_legend <- round(seq(min(data_table),max(data_table),length.out=5),0)
legend("right",pch=22,pt.bg=grayscale(counts_for_legend),legend=counts_for_legend,pt.cex=1.5)
Of course, this could be prettified a lot, especially the legend - the question is whether you want to do this by hand (if you want to create this plot only a single time), or programmatically (if this needs to be created often, with different datasets).
Alternatively, if you want a little more color in your life, you could change the grayscale() function above to one that outputs a black body radiation color:
blackBodyRadiationColors <- function(x, max_value=1) {
# x should be between 0 (black) and 1 (white)
# if large x come out too bright, constrain the bright end of the palette
# by setting max_value lower than 1
foo <- colorRamp(c(rgb(0,0,0),rgb(1,0,0),rgb(1,1,0),rgb(1,1,1)))(x*max_value)/255
apply(foo,1,function(bar)rgb(bar[1],bar[2],bar[3]))
}
plot(c(0,12),c(0,11),type="n",las=1,xlab="A",ylab="B")
for ( ii in rownames(data_table) ) {
for ( jj in colnames(data_table) ) {
rect(as.numeric(ii)-.5,as.numeric(jj)-.5,as.numeric(ii)+.5,as.numeric(jj)+.5,
border=NA,col=blackBodyRadiationColors(1-data_table[ii,jj]/max(data_table)))
# optionally, add counts
# text(as.numeric(ii),as.numeric(jj),data_table[ii,jj],
# col=if(data_table[ii,jj]>quantile(data_table,0.7)) "white" else "black")
}
}
counts_for_legend <- round(seq(min(data_table),max(data_table),length.out=5),0)
legend("right",pch=22,pt.bg=blackBodyRadiationColors(1-counts_for_legend/max(data_table)),
legend=counts_for_legend,pt.cex=1.5)
52,095 | Why is the correlation between independent variables/regressor and residuals zero for OLS? | In any model with an intercept, the residuals are uncorrelated with the predictors $X$ by construction; this is true whether or not the linear model is a good fit and it has nothing to do with assumptions.
It's important here to distinguish between the residuals and the unobserved things often called the errors.
The covariance between residuals $R$ and $X$ is
$$\frac{1}{n}\sum RX-\frac{1}{n}(\sum R)\frac{1}{n}(\sum X)$$
If the model includes an intercept, $\sum R=0$, so the covariance is just $\frac{1}{n}\sum RX$. But the Normal equations to estimate $\hat\beta$ are
$X^T(Y-\hat Y)=0$, i.e., $\frac{1}{n}\sum XR=0$.
So the residuals and $X$ are exactly uncorrelated.
When there is actually a model
$$Y = X\beta+e$$
the assumption that the errors $e$ are uncorrelated with $X$ is necessary to make $\hat\beta$ unbiased for $\beta$ (and we assume the errors have mean zero to make the intercept identifiable). So $E[X^Te]=0$ is an assumption, not a theorem.
The residuals typically are not uncorrelated with $Y$. Neither are the errors.
52,096 | Why is the correlation between independent variables/regressor and residuals zero for OLS? | Consider the model
$$Y_i = 3 + 4x_i + e_i,$$
where $e_i \stackrel{iid}{\sim} \mathsf{Norm}(0, \sigma=1).$
A version of this is simulated in R as follows:
set.seed(625)
x = runif(20, 1, 23)
y = 3 + 4*x + rnorm(20, 0, 1)
Of course, one anticipates a linear association between $x_i$ and $Y_i,$
otherwise there is not much point trying to fit a regression line to the
data.
cor(x,y)
[1] 0.9991042
Let's do the regression procedure.
reg.out = lm(y ~ x)
reg.out
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
3.649 3.985
So the true intercept $\beta_0= 3$ from the simulation has been
estimated as $\hat \beta_0 = 3.649$ and the true slope
$\beta_1 =4$ has been estimated as $\hat \beta_1 = 3.985.$
A summary of results shows rejection of null hypotheses
$\beta_0 = 0$ and $\beta_1 = 0.$
summary(reg.out)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-1.42617 -0.61995 -0.04733 0.41389 2.63963
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.64936 0.52268 6.982 1.61e-06 ***
x 3.98474 0.03978 100.167 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9747 on 18 degrees of freedom
Multiple R-squared: 0.9982, Adjusted R-squared: 0.9981
F-statistic: 1.003e+04 on 1 and 18 DF, p-value: < 2.2e-16
Here is a scatterplot of the data along with a plot of the regression
line through the data.
plot(x,y, pch=20)
abline(reg.out, col="blue")
With $\hat Y_i = \hat\beta_0 + \hat\beta_1 x_i,$
the residuals are $r_i = Y_i - \hat Y_i.$
They are vertical distances between the $Y_i$ and
the regression line at each $x_i.$
We can retrieve their values as follows:
r = reg.out$resi
summary(r)
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1.42617 -0.61995 -0.04733 0.00000 0.41389 2.63963
The regression procedure ensures that $\bar r = 0,$ which
is why their Mean was not shown in the previous summary.
Also, generally speaking, one expects that the residuals will
not be correlated with either $x_i$ or $Y_i.$ If the linear model
is correct, then the regression
line expresses the linear trend, so the $r_i$ should not show
association with either $Y_i$ or $x_i$:
cor(r,x); cor(r,y)
[1] -2.554525e-16
[1] 0.04231753
Because the errors are normally distributed, it is fair to
do a formal test to see if the null hypothesis $\rho_{rY} = 0$
is rejected. It is not.
cor.test(r,y)
Pearson's product-moment correlation
data: r and y
t = 0.1797, df = 18, p-value = 0.8594
alternative hypothesis:
true correlation is not equal to 0
95 percent confidence interval:
-0.4078406 0.4759259
sample estimates:
cor
0.04231753
Maybe this demonstration helps you to see why you should not
expect to see the correlations you mention in your question.
If you are still puzzled, maybe you can clarify your doubts
by making reference to the regression procedure above.
52,097 | Defining continuous random variables via uncountable sets | The problem with both characterizations is that they ignore the underlying probabilities.
Recall that a random variable $X$ is a function that assigns real numbers to elements of the sample space. If a considerable part of the domain of $X$ has no probability, then the range of $X$ may have virtually any property whatsoever but that won't tell you a thing about the distribution of $X.$
Here are the mathematical details.
By definition, a random variable $X$ has a distribution function defined by $$F_X(x)=\Pr(X\le x)$$ for all numbers $x.$ $X$ is continuous if and only if $F_X$ is a continuous function everywhere.
As a counterexample to both (a) and (b), let $\Omega=[0,1]$ be the sample space of all real numbers between $0$ and $1$ inclusive with its usual Borel sigma-algebra. $\Omega$ is uncountable. Let $\mathbb P$ be the normalized counting measure on $\{0,1\}.$ This means the value of $\mathbb P$ on any event $\mathcal E\subset \Omega$ is the sum of two values: $0$ if $0\notin \mathcal E$ or $1/2$ if $0\in\mathcal E;$ plus $0$ if $1\notin \mathcal E$ or $1/2$ if $1\in\mathcal E.$ This is a standard way to model the flip of a fair coin, for instance.
Define a random variable by $$X:\Omega\to\mathbb{R},\quad X(\omega)=\omega.$$ By one standard definition, the range of $X$ is the smallest interval $[a,b]\subset\mathbb R$ for which $\mathbb{P}(X\in[a,b])=1.$ Clearly $0\in[a,b],$ $1\in[a,b],$ and $\mathbb{P}([0,1])=1,$ whence the range of $X$ is $[0,1].$
(Notice how this models the intuition in the introductory paragraphs: although $X$ takes on uncountably many possible values, the only values that have any nonzero probability are limited to just the finite set $\{0,1\}.$)
Although the range of $X$ is the uncountable set $[0,1],$ the distribution function $F_X$ is piecewise constant, jumping from $0$ to $1/2$ at $x=0$ and from $1/2$ to $1$ at $x=1.$ (This is the Bernoulli$(1/2)$ CDF.) $F_X$ is obviously not continuous at either point, even though (a) the range of $X$ is uncountable and (b) the sample space $\Omega$ is uncountable.
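A quick R illustration of this counterexample's distribution function: simulate the fair coin and plot its empirical CDF, which shows the two jumps (a sketch, not part of the original answer):
set.seed(1)
x <- sample(c(0, 1), 1000, replace = TRUE)   # only 0 and 1 carry probability
plot(ecdf(x), main = "Step CDF with jumps at 0 and 1")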
52,098 | Defining continuous random variables via uncountable sets | Well, even if the range (or support set) of the random variable $X$ is uncountable, $X$ does not necessarily have a density. The answer by @Sebastian mentions measure, and specifically counting measure. But counting measure on an uncountable set isn't very useful; for instance, it is not $\sigma$-finite, so it is not very useful in probability.
There is an interesting counterexample: the Cantor distribution has support on an uncountable set (the Cantor middle-third set) but does not have a density, so it is not absolutely continuous. Neither is it discrete; it is singular. See How to sample from Cantor distribution?, Is probability theory the study of non-negative functions that integrate/sum to one? and search ...
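For a concrete feel, here is a small R sketch that draws approximately from the Cantor distribution by building base-3 expansions with digits 0 and 2 only; truncating at K digits is the approximation:
set.seed(1)
n <- 1e4; K <- 30
digits <- matrix(sample(c(0, 2), n * K, replace = TRUE), n, K)
x <- as.vector(digits %*% (1 / 3^(1:K)))   # points (approximately) in the Cantor set
hist(x, breaks = 200, main = "Samples from the Cantor distribution")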
Such singular distributions are not common in statistics (except as counterexamples), but are ubiquitous in other areas. See singular distributions applications and instances. A case in point is dynamics, with the famous Smale's horseshoe, where distributions supported on dynamical Cantor sets abound.
52,099 | Defining continuous random variables via uncountable sets | Consider the example where your sample space is $\Omega = \mathbb{R}$. This is uncountably infinite. However, whether the RV is continuous depends on the measure used. If you used $\mu = \#$ (i.e. the counting measure) you could still easily define a density with respect to $\mu$ that would induce a discrete distribution.
In general, whether a distribution is discrete or continuous depends on the distribution function. It can of course also be a mixture of both (or, to make things even weirder, be 'singular'; see e.g. the Cantor distribution).
52,100 | Does Length Scale of the Kernel in Gaussian Process directly Relates to Correlation Length? | The Gaussian RBF kernel, also known as the squared exponential or exponentiated quadratic kernel, is
$$
k(x, y) = \exp\left( - \frac{\lVert x - y \rVert^2}{2 \ell^2} \right)
,$$
where $\ell$ is often called the lengthscale. Remember that for $f \sim \mathcal{GP}(0, k)$, the correlation between $f(x)$ and $f(y)$ is exactly $k(x, y)$. So with a Gaussian RBF kernel, any two points have positive correlation – but it goes to zero pretty quickly as you get farther and farther away.
When $x$ and $y$ are $\ell$ apart, the correlation is $\exp(- \frac{\ell^2}{2 \ell^2}) = \exp(-\frac12) \approx 0.61$ – a decent amount of correlation.
$2 \ell$ apart means correlation $\exp(-\frac{2^2}{2}) \approx 0.14$ – only slightly dependent.
$3 \ell$ apart means correlation $\exp(-\frac{3^2}2) \approx 0.01$ – barely dependent.
$4 \ell$ apart means correlation $\exp(-\frac{4^2}{2}) \approx 0.0003$ – essentially independent for almost all practical purposes.
Other kernels will have different behavior here and different meaning for the lengthscale. For instance, the exponential or Laplace kernel $\exp(-\lVert x - y \rVert / \ell)$ has smaller correlation $0.37$ at one lengthscale, but has much heavier tails: it still has correlation $0.02$ at 4 lengthscales, 55 times as much as the Gaussian kernel does.
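These numbers are easy to reproduce; a minimal R sketch, with distances measured in units of the lengthscale $\ell$:
d <- 0:4
rbf <- exp(-d^2 / 2)   # squared exponential (Gaussian RBF) correlations
lap <- exp(-d)         # exponential (Laplace) kernel correlations
round(rbind(rbf, lap), 4)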