idx | question | answer
---|---|---
52,201 | Test to compare user interfaces | The main thing is study design and perhaps less statistics.
Hypothesis
Your data will probably be used to try to answer the following questions:
Is interface A easier to understand than interface B?
Does experience with a previous interface improve the second run?
Does it matter if you tried system A first and then went on to system B?
As I understand it you want to answer the first question but are worried that questions 2 and 3 will blur the data. I don't think there is any good way to show that 3 is not a part of 1, so I would suggest that you avoid a cross-over design for your study since you lack knowledge of that part. In a real-world setting the users will also only have one interface to relate to.
Outcome
Your outcome variables are
Time to complete a task or a group of tasks
The experience that the user has with the system
It's always good practice to define which of the outcome variables is your primary one, and that's the one you should use for all your power calculations. If you have several tasks to compare I would suggest you create some compound variable (preferably one that makes intuitive sense).
When measuring the subjective experience you should look for validated questionnaires. It is always good to use an already existing questionnaire; in medicine we very often use scores like EQ-5D that have been previously validated and that most are familiar with. There are probably similar instruments that can be used in your case. Scores have one nice feature, namely that you can calculate an average, but this is also a disadvantage because an improvement of 10 points means nothing if you can't relate to the score. With EQ-5D we often compare surgery results to the general population and see how close we get with our interventions.
Design
I would do a randomized trial without the cross-over. The important thing with randomization is... it has to be random! Yep, there are plenty of people peeping in the envelopes ruining their own experiments so please make sure you have a good randomization procedure:
Use computer randomization or opaque envelopes
Use blocked randomization so that you aim for equal group sizes (each block having 50% group A and 50% group B)
Use random block sizes; if you have sizes of 2, 4 & 6 you'll have a hard time knowing which interface will be next (see the sketch after this list)
Only use stratification for 1 or maybe 2 variables, for instance gender, computer experience
Randomize late, preferably when the subject is sitting by the computer
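As an illustration only (not part of the original answer), here is a minimal R sketch of blocked randomization with random block sizes, assuming two arms A and B and a hypothetical target of 20 subjects:
set.seed(42)
make_allocation <- function(n_target) {
  alloc <- character(0)
  while (length(alloc) < n_target) {
    block_size <- sample(c(2, 4, 6), 1)                # random block size
    block <- sample(rep(c("A", "B"), block_size / 2))  # balanced, shuffled block
    alloc <- c(alloc, block)
  }
  alloc[seq_len(n_target)]  # truncating the last block can leave a small imbalance
}
make_allocation(20)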
If word gets around about the systems you might want to have some check for previous knowledge of them. If you enroll your classmates you might have told them vital parts of your system by accident, ruining the experiment. For obvious reasons you can't blind your subjects, but you should do everything in your power to keep them unaware of what they're about to experience.
Confounders
If your randomization works you don't need to worry about the confounders. Even though the theory says you don't need to, most researchers note down possible confounders for their subjects in case something doesn't work out with the randomization. If you have very small groups, less than 20-30 subjects, you might want to do the same.
Typical confounders in your case are probably:
Previous computer experience
Age
Gender
Educational level
Power calculation
Calculating power (estimating the number of subjects needed) is easy unless you want to go into details. I did my first calculations with Russ Lenth's power calculator, which you can find here. You could also use R's "pwr" package; you can find some help on that here.
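For example, a minimal sketch with the pwr package; the effect size (Cohen's d = 0.5) is an assumed value for illustration, not something taken from this answer:
library(pwr)
# sample size per group for a two-sample t-test at a significance level of 0.05 and 80% power
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
           type = "two.sample", alternative = "two.sided")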
A power calculation is a very rough estimate and you should always add 10-20% for drop-outs. In medicine we use a significance level of 0.05 and aim for a power of 80% (0.8) by tradition, but choosing an interface is perhaps not as critical as choosing drug A for a cancer patient, so you can perhaps allow a significance level of 0.1.
You have to guess a lot when trying to calculate power, which is one of the reasons that you should look at the numbers as guidance more than the truth. I also think that if you need more than 60-80 subjects per group then the difference you're looking for is probably very small; that is perhaps OK if you're looking for a better heart medication, but less interesting if you're designing interfaces.
Statistics
If you have a well designed study this part is the least of your worries. I would say that most of the flaws are in study design, and statistical measures usually differ in the decimals while the conclusions usually stay the same.
Tests for randomized trials:
T-test for continuous outcome
Cox regression for time dependent survival outcome
Kaplan-Meier for visualizing survival
Chi-square test for categorical outcome
For testing confounders:
Regression analysis is an extremely powerful method that really allows you to do almost anything.
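A minimal R sketch of the tests listed above, using made-up data and hypothetical variable names (group, rating, time, completed), purely for illustration:
library(survival)
set.seed(1)
d <- data.frame(group     = rep(c("A", "B"), each = 20),
                rating    = c(rnorm(20, 7), rnorm(20, 6.5)),
                time      = rexp(40, rate = 1/60),
                completed = rbinom(40, 1, 0.9))
t.test(rating ~ group, data = d)                        # continuous outcome
coxph(Surv(time, completed) ~ group, data = d)          # time-to-completion
plot(survfit(Surv(time, completed) ~ group, data = d))  # Kaplan-Meier curves
chisq.test(table(d$group, d$completed))                 # categorical outcome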
For learning the basics behind statistics I use Khan Academy that I also recommend to all my students.
52,202 | Test to compare user interfaces | My take on this, which is admittedly biased by techniques I'm familiar with.
Study design: In your case, I think you can get away with a simple randomized design. I hesitate to have each subject try both GUI setups for two reasons: it complicates analysis, and it's possible that greater familiarity with the "problem set" or whatever will influence your results for the second GUI.
Power: Sadly, this is going to entirely depend on your question. If you're really inclined to calculate power beforehand, you need to do it for each type of analysis you're going to be conducting. Each test has its own quirks, although power calculation via simulation is probably the easiest method that can apply to any test you can come up with. Be aware, though, that power calculations have tons of assumptions built into them. When in doubt, add more subjects.
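A minimal sketch of power by simulation in R; the data-generating assumptions (log-normal completion times, interface B 20% faster, 30 subjects per group) are invented for illustration:
set.seed(1)
power_sim <- function(n = 30, speedup = 0.8, n_sim = 2000, alpha = 0.05) {
  rejections <- replicate(n_sim, {
    a <- rlnorm(n, meanlog = log(120), sdlog = 0.4)            # interface A times
    b <- rlnorm(n, meanlog = log(120 * speedup), sdlog = 0.4)  # interface B times
    t.test(log(a), log(b))$p.value < alpha
  })
  mean(rejections)  # estimated power: share of simulated studies that reject
}
power_sim()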
Analysis: Two ideas off the top of my head:
If you're not interested in any covariates - and indeed, randomizing should get rid of the differences between your study groups if you do it right - then for continuous, normally distributed measures, you should be able to do a t-test. For more categorical measures, like things rated on scales, you're looking at contingency table analysis.
UI design seems like a perfect application of survival analysis. Unless one of the GUIs is profoundly bad, your users should be able to complete your test on either system. So the question is how long it takes them to complete your test. With two groups and a random design, comparing time-to-completion with something like a Kaplan-Meier curve gives you a cool picture, a good estimate of how much faster one UI is than the other, and is pretty straightforward.
And as @whuber said, it's really nice to see people pondering this ahead of time.
52,203 | Test to compare user interfaces | Which test you have to use depends on the variables you intend to measure. A Likert type questionnaire needs a different test than time measurements. So first you need to know what the criteria are for the experimental interface to be better. Then you have to find out how to measure these and what measure constitutes a better interface. Only then can you determine the test to use.
As for power: you could also use an adaptive design. This is a kind of design where you don't have a pre-set sample size but instead define parameters for dynamically increasing the sample size. E.g. you start with 10 participants and see if you get significant results. If your p is below alpha, you stop: everything is fine. If your p is above alpha but below a certain threshold you continue to add participants (depending on your randomization scheme - e.g. batches of 4) until your p leaves the corridor between the upper threshold and the lower one (alpha). You stop if your time or your funding runs out. ;-) But don't take my word on this here. This is what I heard; I've never used it myself.
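Purely to illustrate the idea sketched above (with the same caveat that a real group-sequential design would adjust alpha for the repeated looks, e.g. via the gsDesign package), a naive R sketch with invented data:
set.seed(1)
alpha <- 0.05; upper <- 0.30; batch <- 4; n_max <- 60
a <- rnorm(10, mean = 0); b <- rnorm(10, mean = 0.5)  # hypothetical first 10 subjects per group
repeat {
  p <- t.test(a, b)$p.value
  if (p < alpha || p > upper || length(a) >= n_max) break  # p left the corridor, or cap reached
  a <- c(a, rnorm(batch, mean = 0))                        # add a batch to each group
  b <- c(b, rnorm(batch, mean = 0.5))
}
c(n_per_group = length(a), p_value = p)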
The problem with user interface studies is that it is often hard to get significant results with small sample sizes. You could interpret this in two ways: a) my study is bad (too little data) or b) the difference between the interfaces is not large enough to show a consistent improvement.
You could also think about which is more important to you: a low type-I error or a low type-II error. A low type-I error would be great if there is some risk or cost attached to using your experimental interface and you really want to be sure that you have an effect there in order to justify the cost. Cost could be everything from higher cognitive load to major trade unions going on strike because they hate the new interface. A low type-II error (i.e. high power) would be great if there is some cost of NOT using the experimental interface and you want to justify that. (In text books this is often explained under "consumer risk" and "producer risk".)
52,204 | What methods can be used to specify priors from data? | If you have all this data, I think the best answer is to actually fit a single large model, using Hierarchical Modeling rather than do it in two steps (generating a prior then fitting a model). This is basically the answer I gave to this question. I explain this a little bit more there.
In a hierarchical model you model each of the parameters you are interested in (for example, the location and scale parameters of the thumb lengths for a species) as drawn from a common prior distribution. The hyperparameters of the hierarchical model parameterize the common prior distribution of the parameters for the species, and you estimate the hyperparameters at the same time as the parameters you're interested in. The hyperparameters of course need their own prior distribution, but these can be relatively diffuse.
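As an illustration only (the model structure and variable names here are hypothetical, not taken from the original answer), such a hierarchy could be written as a JAGS/BUGS model along these lines:
# each species j gets its own mean mu[j]; the mu[j] share a common prior whose
# hyperparameters (mu0, prec.between) are estimated together with everything else
hier_model <- "
model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu[species[i]], prec.within)
  }
  for (j in 1:J) {
    mu[j] ~ dnorm(mu0, prec.between)
  }
  mu0          ~ dnorm(0, 1.0E-4)      # diffuse hyperpriors
  prec.within  ~ dgamma(0.001, 0.001)
  prec.between ~ dgamma(0.001, 0.001)
}"
# fit with rjags, e.g. jags.model(textConnection(hier_model), data = ...)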
52,205 | What methods can be used to specify priors from data? | A useful way to incorporate data into a prior distribution is the principle of maximum entropy. You basically provide constraints that the prior distribution is to satisfy (e.g. mean, variance, etc.) and then choose the distribution which is most "spread out" that satisfies these constraints.
The distribution generally has the form $p(x) \propto \exp(...)$
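To spell out the standard form (a well-known result, added here for reference rather than taken from the original answer): with constraints of the form $E[f_k(X)] = c_k$, the maximum entropy distribution is $p(x) \propto \exp\left(\sum_k \lambda_k f_k(x)\right)$, where the $\lambda_k$ are Lagrange multipliers chosen so that the constraints hold; for example, fixing the mean and variance on the whole real line yields a normal distribution.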
Edwin Jaynes was the originator of this principle, so searching for his work is a good place to start.
See the wiki page (http://en.wikipedia.org/wiki/Principle_of_maximum_entropy) and links therein for a more detailed description.
52,206 | What methods can be used to specify priors from data? | I propose the following solution to 2), and would appreciate feedback:
Data include mean, $Y$, sample size $n$, and standard error $\sigma$; calculate precision ($\tau=\frac{1}{\sigma\sqrt{n}}$) because it is required for logN parameterization by BUGS
data $Y\sim \text{N}(\beta_0,\tau)$
precision $\tau\sim\text{Gamma}(\frac{n}{2},\frac{n}{2\tau})$
diffuse priors
use $N(\mu=\beta_0, \sigma=\frac{1}{\sqrt{\tau}})$ prior
Here is the code:
library(rjags)
data <- data.frame(Y  = c(1.6, 2.5, 1.8, 1.8, 1.7, 2.5),
                   n  = c(4, 4, 4, 3, 4, 3),
                   se = c(0.2, 0.41, 0.24, 0.27, 0.2, 0.14))
# convert se to precision and drop the se column
data <- transform(data, obs.prec = 1/se)
data$se <- NULL
# write the BUGS model to 'model.bug'
# (sink() does not capture un-evaluated code, so the model is kept as a
#  character string and written out with writeLines() instead)
model.string <- "
model {
  for (k in 1:length(Y)) {
    Y[k]        ~ dnorm(beta.o, tau.y[k])
    tau.y[k]   <- prec.y * n[k]
    u1[k]      <- n[k] / 2
    u2[k]      <- n[k] / (2 * prec.y)
    obs.prec[k] ~ dgamma(u1[k], u2[k])
  }
  beta.o ~ dnorm(3, 0.0001)
  prec.y ~ dgamma(0.001, 0.001)
  sd.y  <- 1 / sqrt(prec.y)
}
"
writeLines(model.string, con = 'model.bug')
model <- jags.model(file = "model.bug",
                    data = data,
                    n.adapt = 500,
                    n.chains = 4)
mcmc.object <- coda.samples(model = model,
                            variable.names = c('beta.o', 'sd.y'),
                            n.iter = 10000,
                            thin = 50)
summary(mcmc.object)
Update
I have revised this approach to compute a posterior predictive distribution. It required some modifications, mostly computing a posterior predictive distribution for an unobserved sample.
Details here:
David S. LeBauer, Dan Wang, Katherine T. Richter, Carl C. Davidson, and Michael C. Dietze 2013. Facilitating feedbacks between field measurements and ecosystem models. Ecological Monographs 83:133–154. http://dx.doi.org/10.1890/12-0137.1 pdf
Examples of this and simpler approaches here: https://github.com/dlebauer/pecan-priors/blob/master/priors_demo.Rmd
52,207 | If an ANOVA indicates no main effect and no interaction, should the lack of interaction be stated? | Well, it depends on whether the interaction was your main hypothesis or not. If this is the case, then you are encouraged to report the negative result; otherwise you can simply refit your model (without the B and A:B terms) to get a better estimate of A.
Now, the part of your conclusion that you emphasized doesn't sound correct to me. You can only show that an observed difference of means is different from 0 (or any other fixed value, according to your alternative hypothesis); you cannot "accept" the null. If your test is non-significant, it simply means that you cannot reject $H_0$. Non-significant results can also reflect lack of power (Type II error).
Also, rather than simply reporting crude p-values, it would be better (and it actually follows the APA recommendations) to also report some kind of effect size or difference of means, together with your inferential results.
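For reference, a minimal R sketch (with made-up data and hypothetical factor names) of the kind of model such a report would be based on, including a crude effect-size calculation:
set.seed(1)
d <- expand.grid(A = factor(1:3), B = factor(1:4), rep = 1:5)
d$Y <- rnorm(nrow(d), mean = as.numeric(d$A))  # invented response
fit <- aov(Y ~ A * B, data = d)
summary(fit)                                   # F tests for A, B and A:B
TukeyHSD(fit, which = "A")                     # pairwise comparisons among the A levels
ss <- summary(fit)[[1]][["Sum Sq"]]
ss / sum(ss)                                   # rough eta-squared for each term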
Here is an example of reporting results from a factorial ANOVA (it has to be reworked to fit your specific experimental design since your factors have a lot of levels):
A two-way analysis of variance yielded a main effect for the A factor, $F(\nu_1,\nu_2) = 0.00$, $p < .05$, such that the average "value" was significantly higher in the $a_1$ condition (Mean=0.00, SD=0.00) compared to $a_2$ (Mean=0.00, SD=0.00) and $a_3$ (Mean=0.00, SD=0.00, Tukey HSD, all p<0.05). The main effect of B was non-significant ($F(\nu_1,\nu_2) = 0.00$, $p = 0.00$), and no significant interaction effect was found ($F(\nu_1,\nu_2) = 0.00$, $p = 0.00$), indicating that the effect of A on Y was not significantly different across the B levels.
52,208 | If an ANOVA indicates no main effect and no interaction, should the lack of interaction be stated? | Please let me know if you have replicates in your experiment. As you mention using Tukey HSD, I am guessing you don't have any replicates. If your experiment analyzes the test of additivity in a two-way factorial Analysis of Variance (ANOVA) with one observation per cell, then please read ahead; otherwise ignore my solution.
Assuming I am thinking in the right direction, it may be possible that B affects A, but that it doesn't affect Y in the form of a multiplicative term like A*B. In general, B can affect Y even if the interaction A * B is not significant.
The reason for this is the assumption of special form of relation between A and B (ie A*B) that affects Y in Tukey HSD Test. Please see the wikipedia article about Tukey HSD for further details.
I am sorry if this was not the test that you were conducting.
But even if the interaction is not statistically significant, it doesn't imply that B doesn't affect Y. Even if this is a randomized experiment and you are allowed to relate significance and causation, not being able to reject the null hypothesis $H_o$ (i.e. p-value $\geq \alpha$) doesn't mean that you accept $H_o$.
Please accept my apologies, if I completely misunderstood you.
Update after David's details
A few common themes that should be addressed first:
If you expect interaction in your main effects, here A and B, then testing for the significance of the main effects doesn't make sense. (cf. an elegant paper by Dr. Venables, first link)
I think doing sub-sampling doesn't help you (when I say help, I mean in the context of the number of degrees of freedom of the F distribution which is used to test whether the treatment effect is 0). By sub-sampling, I mean replication at the Block by Treatment level. In an RCB, the degrees of freedom wasted in replication can only be used to test if the block effect is present or not. Using blocks in experimental design is a "trick" to control for the extra variation which is not the main interest. In your case the variance due to trays (block) is not of importance, so I don't know if it is useful to waste extra resources (i.e. money) on testing whether the variance due to trays is significant or not.
I would be afraid to come to the conclusion that there was no block effect. The block effect in your case is random, and the test uses some "strong assumptions" to determine the sampling distribution (which may or may not be true depending on those assumptions). If your experiment started as an RCB, please retain the structure of the data. Pooling of samples across blocks may confound your conclusions. You are again committing the error of accepting $H_o$ if you say there is no block effect (I am assuming you conclude no block effect by testing hypotheses, right?).
Point 4 further stresses my point 3 of choosing not to subsample, as essentially subsampling lets you test something (the tray effect) which is not of primary interest to you. Even if this were of interest, you would be investing your money in replication to test something whose sampling distribution is not appropriately determined.
In the comment you say 12 treatments, but you only mention A and B as your treatment variables. Did you mean 12 levels for treatments A and B respectively? If yes, then you have only 2 treatments with 12 levels. If no, then you need to change the model.
For further details about subsampling in RCB, refer to chapter 3 of George Casella's design book.
Sincere apologies again, if I misunderstood you.
52,209 | R command for stcox in Stata | In package survival, it's coxph. John Fox has a nice introduction to using coxph in R:
Cox Proportional-Hazards Regression for Survival Data
52,210 | R command for stcox in Stata | In case you're looking for a quick code translation.
Assuming your Stata and R variables are the same...
stset time, failure(fail)
stcox var1 var2
in Stata translates to
library(survival)
coxph(Surv(time, fail) ~ var1 + var2)
in R assuming your dataframe is attached.
Note: if you're comparing results in R and Stata, R uses the Efron method to handle ties by default while Stata uses the Breslow method, which is less accurate but slightly quicker to compute.
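If you need the two programs to agree exactly on tied event times, coxph() has a ties argument; a minimal sketch, reusing the same hypothetical variables as in the translation above:
library(survival)
# switch R to Stata's default tie-handling method
coxph(Surv(time, fail) ~ var1 + var2, ties = "breslow")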
52,211 | Which Deep Learning Method to use for the classification of precious stones (diamonds, sapphire, ruby etc) based on digital photo images and data set? | Convolutional neural networks (CNNs) and/or vision transformers (both neural networks) with transfer learning would have been my first thought, too. However, there are situations where that might not work so well. Note that in the work you cite, it seems like they had "2042 training images and 284 unseen (test) images divided into 68 categories of gemstones". Just 2042 training images for 68 categories is at best 30 images per category (if all categories are the same size). Such a low-data setting, with images possibly very different from what pre-training saw (usually ImageNet, the photos of which are probably not much like images of precious stones taken in a professional setting, I'd guess - pre-training on ImageNet is often of limited value for e.g. X-rays or satellite images, too), could be one of those situations. In such a setting, it could also be the case that good feature engineering can create really good features for traditional tabular data algorithms (like random forest, XGBoost and the like). The training approach taken for the neural networks and the image resizing also sound like they may not have been optimal. E.g.:
The only data augmentation was "random horizontal or vertical flip", when lots of augmentations would likely be useful starting with rotations.
Not much was said about how hyperparameters were set. A good validation approach (e.g. cross-validation) helps with that and also helps with combining different algorithms.
Nowadays, one would probably use cosine-decay for transfer learning, possibly with differential learning rates across layers instead of the stepwise learning rate decay they used.
Ideally, one uses images as large as possible, and several architectures don't enforce a fixed size.
Just a center crop for the test set is likely inferior to test-time-augmentation.
ResNet50 (and esp. ResNet18) are relatively old pre-trained models. If you look at popular repositories like timm you can see a lot more recent architectures. With popular deep learning Python libraries like the easy to use fastai one you can directly use models from that repository (see also their book and free course).
I'd usually think that with a lot of data, neural networks should become better and better vs. feature engineering + RF. Of course, if the two are still competitive with a lot of data, then combining them might be an option (and cross-validation would help you figure out how to do this, the keyword here would be "model stacking").
Kaggle competitions are usually quite a good source on what is current good practice and there's the occasional panel interview where Kaggle GMs tell you what they think is a good way to do things for e.g. vision tasks.
Another thing to always keep in mind is whether the data is good / matches what you intend to do (e.g. photos taken by professionals in a well lit shop vs. via smartphone, or even differences in cameras used by different sources), whether there are data leaks (e.g. the correct classification is given by the text next to the gem, or more expensive gems being displayed in nicer settings) and whether there are ways to trick the model (people would of course potentially have a strong incentive, if they could somehow get away with selling glass-beads as expensive gems, so make sure you have a representative sample of fake stones in all sorts of colors and shapes including those used for precious stones, too).
52,212 | Why does a 1D convolution increase the size of the output, while a 2D convolution tends to decrease (such as in a CNN?) | From the documentation of np.convolve: "mode{‘full’, ‘valid’, ‘same’}, optional By default, mode is ‘full’. This returns the convolution at each point of overlap, with an output shape of (N+M-1,). At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen."
By contrast, whatever neural network library you're using does not have this behavior as the default.
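To make the size difference concrete, here is a small illustrative sketch using R's convolve() (only output lengths are compared; R orients the second argument differently from numpy, but the shapes behave the same way):
x <- c(1, 2, 3, 4, 5)                    # signal of length 5
k <- c(1, 1, 1)                          # kernel of length 3
length(convolve(x, k, type = "open"))    # 5 + 3 - 1 = 7, like numpy's default "full"
length(convolve(x, k, type = "filter"))  # 5 - 3 + 1 = 3, like "valid" in a CNN layer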
52,213 | Why does a 1D convolution increase the size of the output, while a 2D convolution tends to decrease (such as in a CNN?) | The short answer is: this is how convolution works. As you can see e.g. in the examples at https://en.wikipedia.org/wiki/Convolution, the support of the convolution is larger than that of the convolved functions.
As Sycorax mentions, sometimes the tails are omitted for practical reasons. Note that in some applications (signal processing comes to my mind) omitting the tails might be an error.
52,214 | Validity of regression diagnostics for deterministic computer experiments | This question reminds me a bit of the question Could a mismatch between loss functions used for fitting vs. tuning parameter selection be justified?, where I answered that it can be useful to use a different loss function than the loss function which needs to be optimized. The reason is that statistical fluctuations might be best minimized by using a loss function that relates to the statistical distribution of the errors. For example: when we are estimating the location parameter of a Laplace distribution, then finding an estimator that minimizes the sum of absolute residuals will be better at optimizing the expected square error than an estimator that minimizes the sum of squared residuals.
In your example there is also in some way statistical variation. You have a deterministic function (the population), but you only take a sample $x_i$ where you evaluate $f(x_i)$, so you have a sub-selection of the population*, and that resembles the randomness of sampling.
I do not see any reason why stepwise regression or variable selection based on such diagnostics should work in the deterministic and ill-specified case.
The error distribution of this 'randomness' is not very clear, and to optimize AIC (based on an arbitrary likelihood) or F statistic will only make sense as a heuristic diagnostic. Yet they do work and that is because they introduce some way to penalize the number of terms. You can fit a high order polynomial and reach an $R^2 = 1$ but that is possibly introducing a larger prediction error.
Instead, to reduce this error, it might be better to use, for instance, leave-one-out cross validation, which relates to the mean squared error (Proof of LOOCV formula) and is asymptotically similar to optimizing a likelihood (Equivalence of AIC and LOOCV under mismatched loss functions).
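As a small illustration of this suggestion (the function, sample size and candidate degrees are all invented for the example), one could pick a polynomial degree for a deterministic response by leave-one-out cross validation, using the closed-form shortcut for linear models:
set.seed(1)
x <- sort(runif(30, 0, 2 * pi))
y <- sin(x)                                # deterministic "computer experiment" output
loocv <- sapply(1:8, function(p) {
  fit <- lm(y ~ poly(x, p))
  mean((residuals(fit) / (1 - hatvalues(fit)))^2)  # exact LOOCV error for a linear fit
})
which.min(loocv)                           # degree with the smallest LOOCV error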
*(If you knew the entire function $f$ then you could just optimize the entire function straight away, and if you have no limits on the number of terms then you get something like a perfectly fitting Taylor series or Fourier series).
52,215 | Validity of regression diagnostics for deterministic computer experiments | Statistical methods applied to deterministic phenomena: Statistical methods like regression analysis can be quite useful in situations where one is dealing with deterministic values, particularly in cases where the deterministic behaviour is complex enough that it is not helpful to express this behaviour in deterministic terms. In such cases, statistical methods can be employed, where we model deterministic data using a statistical model that includes an "error term" with a particular distribution. As with other applications, you can still use diagnostic tests to check if the (deterministic) residuals from the model appear to follow the assumed distribution, etc. Once it is determined that a statistical model is applicable to a set of data (deterministic or not) and diagnostic tests confirm plausibility of the model assumptions, the entire suite of tests and methods applicable to that model are okay to implement. In a regression context, this includes methods like F-tests, AIC computations, etc.
In regard to this issue, it is worth noting that simulations used in statistical practice are generated by pseudo-random number generators (PRNGs) that are deterministic in nature. (So long as you set the seed for these generators in a deterministic manner, the resulting series of numbers is deterministic.) Indeed, the entire field of constructing valid PRNGs and the entire field of simulation analysis using their values involves the application of statistical models and tests to deterministic phenomena.
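A trivial R check of that determinism (not part of the original answer):
set.seed(123); a <- runif(5)    # "random" draws from a PRNG
set.seed(123); b <- runif(5)    # same seed, same stream
identical(a, b)                  # TRUE: the simulated randomness is a deterministic function of the seed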
An applied example: As an example of a regression analysis of this kind, you can have a look at O'Neill (2020) (pp. 6-9, 11-12). That paper examines the classical occupancy distribution and computes the accuracy of the normal approximation to that distribution over a large range of parameter values. Part of the analysis involves showing that the approximation tends to get more accurate as the parameters get large, and this is done by computing some summary quantities relating to the approximation error, and showing how they change as the parameter values increase. To this end, the paper includes a regression analysis looking at how these summary quantities for the approximation error reduce as the parameter values become large, using a model that has extremely high goodness-of-fit. It turns out that these quantities follow very closely (but not exactly) a simple deterministic function and so the regression analysis is able to show that this simple relationship has a high goodness-of-fit. What is notable in this analysis is that the data used for the regression is purely deterministic --- it consists of computed RMSE values comparing two known distributions over a set of known parameter values. There is no randomness or real-world data in that analysis.
Philosophical considerations: When conducting this kind of analysis it is worth noting that it is possible to take an underlying philosophical position with respect to probability that does not assume the existence of (aleatory) randomness and so does not contradict determinism. Epistemic interpretations of probability generally consider probability distributions to describe our own uncertainty in values, even if those values are fixed. Statisticians who take an underlying epistemic interpretation of probability (which would include most Bayesians at a minimum) usually have no in-principle aversion to applying statistical methods to deterministic phenomena.
52,216 | Validity of regression diagnostics for deterministic computer experiments | It sounds like your deterministic experiments are effectively evaluating an unknown function $f$ at many different input values $x$. If evaluating $f$ at new $x$ values is cheap, then you can easily get arbitrarily large samples; so hypothesis tests (which are partly a measure of sample size) aren't going to be a very useful way to choose a parsimonious $\hat f$. Similar arguments can be made for AIC.
Instead, I'd suggest you think of a measure of "How good does my approximation need to be to be useful for my purposes?" What are the units of $f$? Choose the smallest value of a maximum or average discrepancy between $f$ and $\hat f$ that would be acceptable for your needs, on the range of $x$ values that you care about. Then, evaluate $f$ at many $x$ values, so that you have more than enough data to get low variance in the coefficient estimates when fitting fairly complex $\hat f$ models. Choose the simplest of these $\hat f$ models that meets your criteria for a "good enough" approximation.
On the other hand, if evaluating $f$ for new $x$ values is expensive or slow and your sample size is essentially fixed, then the F test or AIC might help you choose the most-complex $\hat f$ you can afford to fit with this sample size. The fact that $f$ is deterministic isn't a problem (though it might affect your choice of which test to carry out; see below).
It often helps to think of significance tests (like the F test) as a measure of sample size. I will assume you're using the F test that compares two nested regression models. Then the F test is basically asking: "Is my sample large enough to trust the larger / more complex model? Or is there too much variability relative to this sample size, and it would be safer to use the smaller / less complex model at this sample size?"
Say the true unknown $f$ is pretty close to a quadratic function (but not quite exactly quadratic). And say you first try to compare a linear $\hat {f_1}$ vs a quadratic $\hat {f_2}$ fit to the pairs $(x, f(x))$. If you've evaluated $f$ at very few values of $x$, the F test might fail to reject $H_0$, telling you that you don't have enough data and your estimated $\hat {f_2}$ is too noisy. But if you've evaluated $f$ at enough different $x$ values, the F test might reject $H_0$, telling you that you've got enough data to safely trust your estimated $\hat {f_2}$ as a better approximation than $\hat {f_1}$.
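A toy R version of that comparison (the nearly-quadratic function, grid and sample size are invented for illustration):
f <- function(x) x^2 + 0.05 * sin(5 * x)    # nearly-but-not-quite quadratic, chosen arbitrarily
x <- seq(0, 2, length.out = 15)             # a few evaluations of the deterministic f
y <- f(x)
fit_lin  <- lm(y ~ x)                        # linear f1-hat
fit_quad <- lm(y ~ poly(x, 2))               # quadratic f2-hat
anova(fit_lin, fit_quad)                     # F test for the added quadratic term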
But this issue of "Is my sample big enough?" is distinct from "Is $\hat f$ a good enough approximation to $f$?" The latter question requires subject matter expertise. Again, imagine that $f$ is nearly-but-not-quite quadratic. And let's say that the true differences between $f$ and its best quadratic approximation are negligible for your purposes; a cubic $\hat{f_3}$ could technically be an even better fit, but it would make almost no practical difference in whatever you're using $\hat f$ for. Even so, if you have a large enough sample size, the F test comparing quadratic vs cubic will reject $H_0$ and say you've got enough data to trust the cubic approximation $\hat{f_3}$.
My key point is that "negligible for your purposes" is something that you must decide and the F test cannot answer for you. I don't know the context of your computer experiments, but here's a physics example. Say you're trying to predict $f$ = boiling point of water from $x$ = atmospheric pressure. The true relationship is not linear, but if your purpose is to suggest approximate baking times in a recipe book, a linear $\hat f$ might be plenty good enough. On the other hand if your purpose is to design high-performance meteorological equipment, a linear $\hat f$ may be nowhere near good enough and you'll need a more complex model (and enough data to fit it).
Finally, if you do use hypothesis tests in this scenario, and your deterministic $f$ is smooth and your $x$ values are very close together... then the independent-residuals assumption isn't going to be appropriate.
Think of time-series data. Imagine an extreme example: $y$ is changing once a day (and is independent from day to day), but I record measurements every hour. Then I don't really have $n$ independent measurements -- my "effective sample size" is more like $n/24$. If I fit a regression model to $y$ vs $time$ and don't account for this autocorrelation, my F tests will wrongly think that I do have $n$ independent measurements, and my p-values will be way smaller than they really ought to be for the effective sample size.
In your case it might not be quite as extreme. But I'd still look into autocorrelation models to account for the fact that similar $x$ values might have extremely similar $f$ values.
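One way to allow for that in R is generalized least squares with an AR(1) error structure; this is only a sketch with placeholder names, where d is assumed to hold y = f(x) evaluated on an ordered grid:
library(nlme)
fit_ar1 <- gls(y ~ poly(x, 2), data = d, correlation = corAR1(form = ~ 1))
summary(fit_ar1)    # test statistics now account for serially correlated residuals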
52,217 | Validity of regression diagnostics for deterministic computer experiments | In other words: Can stepwise regression or variable selection using AIC (or similar statistics based on the likelihood or distributional assumptions of the regression model) be justified also in the case of deterministic computer experiments?
Here's an argument for likelihood-based statistics. Let $Y = f(X)$, where $f$ is a deterministic function. Assume you observe a finite number of realizations s.t. $\mathbf{y} = f(\mathbf{X})$. If you knew both $f$ and $\mathbf{X}$, then you would know $\mathbf{y}$ with probability one, or, loosely speaking, $P(\mathbf{y} = f(\mathbf{X}) | \mathbf{X}, f) = 1$. You don't know $f$, so you place a Gaussian process prior $f | \mathbf{X}\sim\mathcal{GP}(\cdot, \cdot)$. You integrate out $f$ to obtain the output marginal likelihood, which is multivariate normal: $\mathbf{y} | \mathbf{X}\sim\mathcal{MVN}(\cdot, \cdot)$. Even if the computer code output is deterministic and there's no "innate" variability in it, the output marginal likelihood is multivariate normal due to your uncertainty about $f$. So, statistics based on this marginal likelihood seem sensible to me.
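A bare-bones numerical version of that argument in base R (squared-exponential kernel with arbitrary hyperparameters; it only shows that a noise-free deterministic output still has a well-defined Gaussian marginal likelihood under a GP prior):
x <- seq(0, 1, length.out = 10)
y <- sin(2 * pi * x)                                                  # deterministic simulator output
K <- exp(-outer(x, x, "-")^2 / (2 * 0.2^2)) + diag(1e-8, length(x))   # RBF kernel plus jitter
L <- chol(K)                                                          # K = t(L) %*% L
alpha <- backsolve(L, forwardsolve(t(L), y))                          # alpha = K^{-1} y
-0.5 * sum(y * alpha) - sum(log(diag(L))) - 0.5 * length(x) * log(2 * pi)   # log marginal likelihood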
More generally, here are a few things that might help you decide how comfortable you feel about using such diagnostic techniques.
Your computer model is deterministic, i.e., $Y = f(X)$. Since the computer model does not have a random error component, there's no aleatoric uncertainty. However, you don't know $f$, so there's epistemic uncertainty. See Aleatoric and epistemic.
Even though the underlying function $f$ might be deterministic, the output value is unknown a priori (before running the simulator). This is sometimes called code uncertainty and is enough to justify the use of a stochastic process.
Surrogates are most frequently set up in a Bayesian framework, and so the probabilistic statements are about your uncertainty about the function that you are approximating $f$ and not really about the inherent random noise in your computer code. This applies even when Empirical Bayes is used, which in many contexts can get quite close to simply "find MLE estimates but interpret like a Bayesian model".
Bastos and O'Hagan (2009, sec. 3.1) discuss the challenges associated with diagnostics for linear models fitted to a deterministic function. In particular, the training-set predictions will be perfect if you use an interpolator, hence the need to use cross-validation or a new data set. Out-of-sample validation introduces some uncertainty. However, they also note that there might be scoring functions for deterministic computer experiments such as the Mahalanobis distance. See 10.1198/TECH.2009.08019
An adjacent note.
A linear model might not be commonly used for a surrogate. However, you can cast a linear model as a Gaussian process with linear kernel $\mathbf{XX}^\top$ (Rasmussen and Williams, 2006, sec. 4.2.2).
References
Bastos, L. S., & O’Hagan, A. (2009). Diagnostics for Gaussian process emulators. Technometrics, 51(4), 425–438. https://doi.org/10/bw62bq
Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. MIT Press.
52,218 | $\sin(x)$ is a counterexample to the universal approximation theorem | A ReLU network is ultimately a piecewise-linear continuous function. Each neuron in the first hidden layer is just a shifted and scaled ReLU. Taking a linear combination of those produces a piecewise linear function, with (at most) as many hingepoints as there are neurons. Applying ReLU to that can create new hingepoints whenever the function crosses 0, but this is at most once per linear segment, so you end up with at most twice as many hingepoints as neurons.
Having established that, the end behavior is linear. If the slope is nonzero, the function goes to infinity, and the supremum of errors is also infinite; if the slope is zero, the supremum of errors is at least 1.
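A tiny numerical illustration of that in R (arbitrary, made-up weights): a one-hidden-layer ReLU network is just a linear combination of shifted and scaled ReLUs, so the output is piecewise linear with linear tails.
relu <- function(z) pmax(z, 0)
x <- seq(-10, 10, length.out = 2001)
hidden <- cbind(relu(2 * x - 1), relu(-x + 3), relu(0.5 * x + 2))   # 3 hidden units, made-up weights
yhat <- drop(hidden %*% c(1.5, -2, 0.7)) + 0.3                      # linear output layer: at most 3 hinges
max(abs(sin(x) - yhat))   # sup error vs sin(x) is already large here and grows as the range widens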
... a shallow neural network
Actually, this isn't an important assumption. Adding layers can increase the number of hinge points multiplicatively, but at the end you're still stuck with a finite number of hinges and hence a bounded good-estimation range.
52,219 | $\sin(x)$ is a counterexample to the universal approximation theorem | The classical (Cybenko) universal approximation theorem has a condition about the function being approximated on a compact space.
On the real line, the Heine-Borel theorem says that compact sets are the closed and bounded sets.
Therefore, the Cybenko universal approximation theorem does not apply to a function over the whole real line. If you approximate $\sin(x)$ over a compact space, such as a closed interval, then the theorem holds.
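For example, in R one can check the compact-interval case with a small single-hidden-layer network (one possible choice of package and settings, not part of the original answer; the achieved error depends on the random initialization):
library(nnet)                                        # single hidden layer, sigmoid units
x <- seq(-pi, pi, length.out = 400)                  # a compact interval
d <- data.frame(x = x, y = sin(x))
fit <- nnet(y ~ x, data = d, size = 10, linout = TRUE, maxit = 1000, trace = FALSE)
max(abs(predict(fit, d) - d$y))                      # typically a small sup error on [-pi, pi]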
52,220 | Dependent Variable takes on the values 0, 1, 2, 3 - What is the right (logistic) regression model to use? | If you stick really close to the data generating process, these are repeated binary decisions: each participant makes three decisions of choosing the local product or not (each time a 1/0 outcome).
It's not count data in the usual sense, because the counts are capped at 3 rather than free to take any value. Arguably, it could be treated as truncated (at 3) count data or as ordinal data, but that's not exactly how the data arose. That does not mean that you could not create a useful model by considering it that way; in particular, an ordinal model may indeed be useful. However, an ordinal model ignores that participants are asked to make very similar choices - possibly with some known differences between questions - each time.
If one goes down the repeated binary route, then it matters what we wish to assume and what detailed data we have on each decision. E.g. when asked about the products, the product might sometimes have been vegetables, exotic fruit, or clothes. People might tend to prefer buying vegetables locally, think that exotic fruits are so much better from further away, and be more mixed about clothes. Or perhaps you think people might change their answer based on something else (e.g. whether it is the first question they get asked, the participant's age, a 0-100 favorability rating for globalization, etc.). In that case, modeling each binary choice (with explanatory variables like type of choice, participant age etc. as fixed effects) and accounting for decisions being made by the same participant (e.g. a participant random effect) would be an option. Or perhaps the choices were really all the same, in which case one can reduce this to a binomial outcome (i.e. the number of yes answers out of 3, with the same probability applying to all of them). Thus, I'm arguing for the use of (random effects) logistic regression, which is well supported in R e.g. by the lme4 or brms packages.
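A sketch of that model with lme4 (the data frame and variable names are placeholders, with one row per decision):
library(lme4)
m <- glmer(chose_local ~ product_type + age + (1 | participant),
           data = decisions, family = binomial)
summary(m)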
52,221 | Dependent Variable takes on the values 0, 1, 2, 3 - What is the right (logistic) regression model to use? | Although ordinal regression is a generally useful choice for ordered outcomes, in this particular case a binary regression could also be considered.
In each of the 3 trials per individual there is a binary choice: local versus non-local. In R you can model this situation with binary regression, coding the outcomes as a two-column integer matrix, cbind(successes, failures), with a row for each individual. That would also allow for individuals who didn't finish all 3 trials. See the Details of the R help page for family. I suspect that Stata allows for similar coding.
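In R that looks like the following (predictor names are hypothetical; each row of d is one individual with n_local local choices out of 3):
m <- glm(cbind(n_local, 3 - n_local) ~ age + attitude, family = binomial, data = d)
summary(m)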
My impression is that would provide the same result as ordinal logistic regression here, but I haven't thought that through carefully. A potential advantage of using binary regression is that it's (perhaps unfortunately) more likely to be familiar to your audience than ordinal regression and thus easier to explain.
A binary regression could also be used to evaluate changes in outcome probabilities as a function of trial number, although such changes might be unlikely in this study. You could separate out each trial with a 0/1 outcome and an annotation of the individual and trial number, treat the individuals as random effects in a mixed model, and include trial number as a covariate.
52,222 | Dependent Variable takes on the values 0, 1, 2, 3 - What is the right (logistic) regression model to use? | Yes, ordinal logistic regression (also referred to as the proportional odds model) is your best choice. If your outcome were to have 7+ ordered categories, linear regression may also be used, though because you have few ordered categories (i.e., 4) I would use ordinal logistic regression as suggested in the comments by @mkt. The article cited below (Bauer & Sterba, 2011) is a good introduction to ordinal regression.
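For example, with the MASS package in R (data frame and variable names are hypothetical):
library(MASS)
d$n_local <- factor(d$n_local, levels = 0:3, ordered = TRUE)   # outcome with 4 ordered levels
m <- polr(n_local ~ age + attitude, data = d, Hess = TRUE)      # proportional odds model
summary(m)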
References
Bauer, D. J., & Sterba, S. K. (2011). Fitting multilevel models with ordinal outcomes: performance of alternative specifications and methods of estimation. Psychological methods, 16(4), 373.
52,223 | Dependent Variable takes on the values 0, 1, 2, 3 - What is the right (logistic) regression model to use? | Beware of adding difficult-to-compare indicators
Treating your variable as an ordinal assumes both that each decision has equal weight (local apple is equivalent to local grape even if the latter is 4 times the price and the former has 2 times the reduction in environmental impact by purchasing locally) and yet that the difference between 3 local choices and 2 is not comparable to the difference between 2 local choices and 1. This is a strange combination.
Use metrics and weight your data
I would suggest determining variables that match what you really want to study - some function of (differences in) price or environmental impact, for instance - and making use of those variables. This will give the decisions made by your participants the appropriate weight and scale appropriately to future tasks. If you have very few items and they are offered to all participants then modelling them separately as suggested by Björn will make the most of your data.
Alternative approach which may be more informative to your research question - pivot to decision-level rather than participant-level modelling
I guess your research question is about whether people choose to purchase greener versions of products. You're looking at this from a 'person' perspective - would you be better served by analysing from a 'decision' perspective?
I suggest a more informative model here might be price of local item, environmental impact of local item, price of remote item, environmental impact of remote item (other item characteristics as you present to the participant) and participant characteristics predicting whether the choice is made to take a local item or not. That gives you a binary variable for which you could use logistic regression.
The resulting model has much higher generalisability to other items, which you could test through offering other people other shopping tasks.
52,224 | What is an efficient algorithm for finding the minimum of a parabola-shaped function? [closed] | The parabola going through $(a,f(a))$, $(b,f(b))$, $(c,f(c))$ has a minimum at
$$G(a,b,c):=\frac{1}{2}\left( \frac{f(a)(b^2-c^2)+f(b)(c^2-a^2)+f(c)(a^2-b^2)}{f(a)(b-c)+f(b)(c-a)+f(c)(a-b)}\right)$$
So if you start with three reasonable guesses $x_0$, $x_1$ and $x_2$, you can iterate with $x_{n+1}=G(x_{n-2}, x_{n-1}, x_n)$ until you get whatever convergence you desire, and then take $f$ of the limit at the end.
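A direct R implementation of that iteration (starting points, tolerance and the test function are arbitrary):
G <- function(a, b, c, f) {
  0.5 * (f(a) * (b^2 - c^2) + f(b) * (c^2 - a^2) + f(c) * (a^2 - b^2)) /
        (f(a) * (b - c) + f(b) * (c - a) + f(c) * (a - b))
}
parabolic_min <- function(f, x0, x1, x2, tol = 1e-10, maxit = 100) {
  for (i in seq_len(maxit)) {
    x3 <- G(x0, x1, x2, f)
    if (abs(x3 - x2) < tol) break
    x0 <- x1; x1 <- x2; x2 <- x3
  }
  c(xmin = x3, fmin = f(x3))
}
parabolic_min(function(x) (x - 2)^2 + 1, 0, 1, 3)   # toy check: minimum at x = 2, f = 1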
52,225 | What is an efficient algorithm for finding the minimum of a parabola-shaped function? [closed] | Ternary search is a simple algorithm to find the minimum (or maximum) of a unimodal function without using any derivative information. It proceeds by starting with some interval, and then recursively discards one-third of the interval until some tolerance is reached.
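A straightforward R implementation (interval, tolerance and test function chosen arbitrarily):
ternary_min <- function(f, lo, hi, tol = 1e-8) {
  while (hi - lo > tol) {
    m1 <- lo + (hi - lo) / 3
    m2 <- hi - (hi - lo) / 3
    if (f(m1) < f(m2)) hi <- m2 else lo <- m1   # drop the third that cannot contain the minimum
  }
  (lo + hi) / 2
}
ternary_min(function(x) (x - 2)^2 + 1, -10, 10)   # toy check: about 2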
52,226 | How to test if two events with unknown probabilities are different or not | It turns out the counts of the other outcomes don't matter: a suitable chi-squared test works just fine here.
Let the chance of outcome $A$ be $p.$ Under your null hypothesis, the chance of $B$ is the same and (therefore) the chance of seeing something other than $A$ and $B$ is $1-2p.$ Because your rolls are independent, probabilities multiply, implying the chance of what you observed is proportional to
$$L(p;10,44,500) = p^{10}p^{44}(1-2p)^{500-10-44} = p^{54}(1-2p)^{446}.$$
Upon taking logarithms and differentiating, it's easy to establish that this likelihood $L$ is uniquely maximized at the value $$p = \frac{1}{2} \frac{10 + 44}{500} = \frac{27}{500}.$$ The figure shows a plot of $L(p);$ the vertical axis is on a logarithmic scale.
Clearly, the expected counts under the null hypothesis are $27$ for both $A$ and $B$ and $500-2\times 27 = 446$ for all the others. Because the expected count for all the others is equal to the actual count, it contributes nothing to the chi-squared statistic:
$$\chi^2 = \frac{(10-27)^2}{27} + \frac{(44-27)^2}{27} + \frac{(446-446)^2}{446} = 2\frac{17^2}{27} \approx 21.4.$$
(This is identical to the value you would obtain if you ignored all the non-A, non-B outcomes and just worked with the counts of $10$ and $44,$ exactly as if you were flipping a potentially unfair coin and the outcomes A and B corresponded to its two sides: for a fair coin, you would guess that both counts should be around $(10+44)/2=27$ and thereby obtain the same value of chi-squared.)
A large value of chi-squared indicates a large deviation from what would be predicted by the null hypothesis. The chance of observing a statistic with a deviation at least this great is given by a chi-squared distribution. The particular one to use has one "degree of freedom" because one unknown value, $p,$ was involved in the likelihood $L.$ The "p value" given by this distribution is less than four in a million, which is so small you can safely conclude the null hypothesis is incorrect. In a technical paper you might write something like "the difference is significant ($\chi^2(1) = 21.4,$ $p = 4\times 10^{-6}$)."
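That computation is easy to check in R:
observed <- c(10, 44, 446)
expected <- c(27, 27, 446)                         # under the null, with p estimated as 27/500
chisq <- sum((observed - expected)^2 / expected)   # = 2 * 17^2 / 27, about 21.4
pchisq(chisq, df = 1, lower.tail = FALSE)          # about 3.7e-06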
Moreover, the evidence points to event $A$ having a smaller probability than $B.$
Finally, now that we know the two probabilities differ, we may estimate them (using maximum likelihood, as above, or otherwise) from the data. The maximum likelihood estimates are $10/500$ and $44/500,$ respectively.
Reference
In a post at https://stats.stackexchange.com/a/17148/919 I give the background and necessary conditions for applying a chi-squared test. You can verify all those conditions are satisfied here.
If the reasoning behind a null hypothesis and a p-value is unfamiliar, see my post at https://stats.stackexchange.com/a/130772/919 for an accessible account of this theory.
52,227 | How to test if two events with unknown probabilities are different or not | A Bayesian approach to the problem works out neatly. The reference posterior (see equation 6) of $\phi\equiv{p_A/p_B}$ is
$\phi^{ref}\sim{}B'(\phi;x_A+1/2,x_B+1/2)$,
where $B'$ is the beta prime distribution and $x_A$ and $x_B$ are the number of observations of $A$ and $B$ from the independent trials. Notice that the posterior distribution of $\phi$ is parameterized only by the counts of observations of $A$ and $B$, so the results of the analysis are the same whether the counts came from 500 rolls or 2000 rolls.
The beta prime CDF is the regularized incomplete beta function,
$I_{\frac{\phi}{\phi+1}}(x_A+1/2,x_B+1/2)$
The posterior probability that $\phi>1$ (i.e., $p_A>p_B$) is
$1-I_{1/2}(10.5,44.5)\approx{7.97\times10^{-7}}$
In R:
pbeta(0.5, 10.5, 44.5, lower.tail = FALSE)
#> [1] 7.972998e-07
As Bernardo and Ramón point out, the analysis works out similarly with the parameter $\theta_{A|AB}\equiv\frac{\phi}{\phi+1}$, which is the probability that the die shows $A$ given that the result of the roll is either $A$ or $B$. The reference/Jeffreys posterior of the probability of a Bernoulli trial is given by the beta distribution:
$Be(\theta_{A|AB};x_A+1/2,x_B+1/2)$
$\phi^{ref}\sim{}B'(\phi;x_A+1/2,x_B+1/2)$,
where $B'$ is the beta prime distr | How to test if two events with unknown probabilities are different or not
A Bayesian approach to the problem works out neatly. The reference posterior (see equation 6) of $\phi\equiv{p_A/p_B}$ is
$\phi^{ref}\sim{}B'(\phi;x_A+1/2,x_B+1/2)$,
where $B'$ is the beta prime distribution and $x_A$ and $x_B$ are the number of observations of $A$ and $B$ from the independent trials. Notice that the posterior distribution of $\phi$ is parameterized only by the counts of observations of $A$ and $B$, so the results of the analysis are the same whether the counts came from 500 rolls or 2000 rolls.
The beta prime CDF is the regularized incomplete beta function,
$I_{\frac{\phi}{\phi+1}}(x_A+1/2,x_B+1/2)$
The posterior likelihood that $\phi>1$ (i.e., $p_A>p_B$) is
$1-I_{1/2}(10.5,44.5)\approx{7.97\times10^{-7}}$
In R:
pbeta(0.5, 10.5, 44.5, lower.tail = FALSE)
#> [1] 7.972998e-07
As Bernardo and Ramón point out, the analysis works out similarly with the parameter $\theta_{A|AB}\equiv\frac{\phi}{\phi+1}$, which is the probability that the die shows $A$ given that the result of the roll is either $A$ or $B$. The reference/Jeffreys posterior of the probability of a Bernoulli trial is given by the beta distribution:
$Be(\theta_{A|AB};x_A+1/2,x_B+1/2)$ | How to test if two events with unknown probabilities are different or not
A Bayesian approach to the problem works out neatly. The reference posterior (see equation 6) of $\phi\equiv{p_A/p_B}$ is
$\phi^{ref}\sim{}B'(\phi;x_A+1/2,x_B+1/2)$,
where $B'$ is the beta prime distr |
52,228 | "Dumb" log-loss for a binary classifier | You are correct: if your "dumb" classifier knows the frequency of successes in the test set, it in fact works as an oracle and is not that dumb. You're leaking data from the test set. It is easy to imagine an extreme case with a big discrepancy between train and test set where such a "dumb" classifier would in fact outperform the model trained using only the train set.
What you should do is to base your "dumb" classifier on the distribution of the train dataset. In fact, for binary data predicting the mean, or probability of success, is the best single-value prediction you can make assuming squared error or log-loss, so it is a pretty nice benchmark of the simplest but not completely useless model. | "Dumb" log-loss for a binary classifier | You are correct: if your "dumb" classifier knows the frequency of successes in the test set, it in fact works as an oracle and is not that dumb. You're leaking data from the test set. It is easy | "Dumb" log-loss for a binary classifier
You are correct: if your "dumb" classifier knows the frequency of successes in the test set, it in fact works as an oracle and is not that dumb. You're leaking data from the test set. It is easy to imagine an extreme case with a big discrepancy between the train and test sets where such a "dumb" classifier would in fact outperform the model trained using only the train set.
What you should do is to base your "dumb" classifier on the distribution of the train dataset. In fact, for binary data predicting the mean, or probability of success, is the best single-value prediction you can make assuming squared error or log-loss, so it is a pretty nice benchmark of the simplest but not completely useless model. | "Dumb" log-loss for a binary classifier
You are correct: if your "dumb" classifier knows the frequency of successes in the test set, it in fact works as an oracle and is not that dumb. You're leaking data from the test set. It is easy
52,229 | "Dumb" log-loss for a binary classifier | I like my explanation of $R^2$ here and how it relates to a naïve model. You would be looking for the McFadden’s $R^2$ that I mention, as that compares the log loss of your model to that of one that naively predicts the prior probability every time, much as linear regression $R^2$ naively guesses the pooled/marginal mean $\bar y$ every time in the “denominator”.
$$R^2_{McFadden}=
1-
\dfrac{
\sum \big(
y_i\log(\hat p_i)+
(1-y_i)\log(1-\hat p_i)
\big)
}{
\sum\big(
y_i\log(p_{train})+
(1-y_i)\log(1-p_{train})
\big)
}
$$
The denominator is (proportional to) the log loss of the biased coin that you described.
Positive values of this quantity represent an improvement over the “dumb” classifier. We take the $p_{train}$ in the denominator to use our knowledge of the training set without cheating by looking at any of the out-of-sample data.
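For concreteness, the comparison could be computed along these lines (a sketch; the variable names are placeholders, and $p_{train}$ is the success frequency in the training labels):
mcfadden_r2 <- function(y_test, p_hat, p_train) {
  ll_model <- sum(y_test * log(p_hat) + (1 - y_test) * log(1 - p_hat))
  ll_dumb  <- sum(y_test * log(p_train) + (1 - y_test) * log(1 - p_train))
  1 - ll_model / ll_dumb   # positive when the model beats the train-frequency "coin"
}
# usage with a fitted glm, say: mcfadden_r2(y_test, predict(fit, test, type = "response"), mean(y_train))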
EDIT
If you see me write about $R^2$ on here, you'll see me use that word "naïve". I see it this way:
You have to predict the probability that a photo is of a dog or a cat. The obvious move would be to look at the photo and decide how likely it is to be of a dog or of a cat. I, however, will not show you the picture, but I will tell you that there are as many dog photos as cat photos (or ten times as many dogs as cats, or whatever the ratio is).
Knowing nothing about the photo, the sensible, even if naïve, guess is that class ratio (the "prior" probability of being a dog or being a cat). If there are as many dog pictures as cat pictures, guess that there's a $50/50$ chance of either. If there are nine dog photos for every one cat photo, guess that there is a $90\%$ chance of being a dog and a $10\%$ chance of being a cat. | "Dumb" log-loss for a binary classifier | I like my explanation of $R^2$ here and how it relates to a naïve model. You would be looking for the McFadden’s $R^2$ that I mention, as that compares the log loss of your model to that of one that n | "Dumb" log-loss for a binary classifier
I like my explanation of $R^2$ here and how it relates to a naïve model. You would be looking for the McFadden’s $R^2$ that I mention, as that compares the log loss of your model to that of one that naively predicts the prior probability every time, much as linear regression $R^2$ naively guesses the pooled/marginal mean $\bar y$ every time in the “denominator”.
$$R^2_{McFadden}=
1-
\dfrac{
\sum \big(
y_i\log(\hat p_i)+
(1-y_i)\log(1-\hat p_i)
\big)
}{
\sum\big(
y_i\log(p_{train})+
(1-y_i)\log(1-p_{train})
\big)
}
$$
The denominator is (proportional to) the log loss of the biased coin that you described.
Positive values of this quantity represent an improvement over the “dumb” classifier. We take the $p_{train}$ in the denominator to use our knowledge of the training set without cheating by looking at any of the out-of-sample data.
EDIT
If you see me write about $R^2$ on here, you'll see me use that word "naïve". I see it this way:
You have to predict the probability that a photo is of a dog or a cat. The obvious move would be to look at the photo and decide how likely it is to be of a dog or of a cat. I, however, will not show you the picture, but I will tell you that there are as many dog photos as cat photos (or ten times as many dogs as cats, or whatever the ratio is).
Knowing nothing about the photo, the sensible, even if naïve, guess is that class ratio (the "prior" probability of being a dog or being a cat). If there are as many dog pictures as cat pictures, guess that there's a $50/50$ chance of either. If there are nine dog photos for every one cat photo, guess that there is a $90\%$ chance of being a dog and a $10\%$ chance of being a cat. | "Dumb" log-loss for a binary classifier
I like my explanation of $R^2$ here and how it relates to a naïve model. You would be looking for the McFadden’s $R^2$ that I mention, as that compares the log loss of your model to that of one that n |
52,230 | Peak of Poisson distribution | The mode of the Poisson distribution occurs at the value $\text{Mode}(\lambda) = \lceil \lambda \rceil - 1$, whereas your proposed approximation is:
$$\widehat{\text{Mode}}(\lambda) \equiv \lambda - \tfrac{1}{2}.$$
Letting $u(\lambda) \equiv \lceil \lambda \rceil - \lambda$ denote the "upper remainder" of the number $\lambda$ we can write the difference between the true mode and your approximation as:
$$\begin{align}
\text{DIFF} \equiv
\text{Mode}(\lambda) - \widehat{\text{Mode}}(\lambda)
&= (\lceil \lambda \rceil - 1) - (\lambda-\tfrac{1}{2}) \\[6pt]
&= (\lceil \lambda \rceil - \lambda) -\tfrac{1}{2} \\[6pt]
&= u(\lambda) - \tfrac{1}{2}. \\[6pt]
\end{align}$$
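A quick numerical illustration in R (the $\lambda$ values are arbitrary):
lambda <- c(0.7, 3.2, 7.5, 12.9)
exact  <- ceiling(lambda) - 1          # true mode
approx <- lambda - 1/2                 # proposed approximation
rbind(lambda, exact, approx, DIFF = exact - approx)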
It is simple to establish that $-\tfrac{1}{2} \leqslant \text{DIFF} < \tfrac{1}{2}$ so your approximation is near to the true mode. As $\lambda \rightarrow \infty$ the relative error in the approximation converges to zero. | Peak of Poisson distribution | The mode of the Poisson distribution occurs at the value $\text{Mode}(\lambda) = \lceil \lambda \rceil - 1$, whereas your proposed approximation is:
$$\widehat{\text{Mode}}(\lambda) \equiv \lambda - \ | Peak of Poisson distribution
The mode of the Poisson distribution occurs at the value $\text{Mode}(\lambda) = \lceil \lambda \rceil - 1$, whereas your proposed approximation is:
$$\widehat{\text{Mode}}(\lambda) \equiv \lambda - \tfrac{1}{2}.$$
Letting $u(\lambda) \equiv \lceil \lambda \rceil - \lambda$ denote the "upper remainder" of the number $\lambda$ we can write the difference between the true mode and your approximation as:
$$\begin{align}
\text{DIFF} \equiv
\text{Mode}(\lambda) - \widehat{\text{Mode}}(\lambda)
&= (\lceil \lambda \rceil - 1) - (\lambda-\tfrac{1}{2}) \\[6pt]
&= (\lceil \lambda \rceil - \lambda) -\tfrac{1}{2} \\[6pt]
&= u(\lambda) - \tfrac{1}{2}. \\[6pt]
\end{align}$$
It is simple to establish that $-\tfrac{1}{2} \leqslant \text{DIFF} < \tfrac{1}{2}$ so your approximation is near to the true mode. As $\lambda \rightarrow \infty$ the relative error in the approximation converges to zero. | Peak of Poisson distribution
The mode of the Poisson distribution occurs at the value $\text{Mode}(\lambda) = \lceil \lambda \rceil - 1$, whereas your proposed approximation is:
$$\widehat{\text{Mode}}(\lambda) \equiv \lambda - \ |
52,231 | Peak of Poisson distribution | The analysis at https://stats.stackexchange.com/a/211612/919 shows the mode of any Poisson$(\lambda)$ distribution is near $\lambda$ itself. Although that question concerns only integral $\lambda,$ its results answer the present question, too.
Let $p_\lambda(k)$ be the Poisson$(\lambda)$ probability for $k\in\{0,1,2,\ldots,\}$ (all other probabilities are zero, of course). These probabilities are proportional to the ratios
$$p_\lambda(k) \ \propto\ \frac{\lambda^k}{k!}.$$
As pointed out in the foregoing link, this means two successive Poisson probabilities are related by
$$p_\lambda(k+1) = \frac{\lambda}{k+1} p_\lambda(k).$$
Consequently, as $k$ progresses from $0$ through all integers less than $\lambda-1,$ the probability increases from $p_\lambda(k)$ to $p_\lambda(k+1);$ and then once $k$ exceeds $\lambda-1,$ the probability decreases. Therefore
All Poisson probability functions $p_\lambda$ rise to a peak and then fall again. When $\lambda$ is not an integer, the peak (a mode) occurs at the greatest integer less than $\lambda,$ written $\lfloor \lambda\rfloor.$ When $\lambda$ is an integer, the peak occurs at the two neighboring values $\lambda - 1$ and $\lambda.$
To illustrate, here is a plot of the modes based on a brute-force search.
The search was conducted by this R function. It evaluates the Poisson probabilities for $k$ between two suitable extreme quantiles, finds the indexes where the largest probability occurs, and returns an array of the values of $k$ corresponding to those indexes.
mode <- function(lambda) {
q <- min(1/2, ppois(floor(lambda), lambda)) # This is *a* probability
k <- do.call(seq, as.list(qpois(c(q, 1-q), lambda))) # Search limits
p <- dpois(k, lambda)
k[which(p == max(p))]
} | Peak of Poisson distribution | The analysis at https://stats.stackexchange.com/a/211612/919 shows the mode of any Poisson$(\lambda)$ distribution is near $\lambda$ itself. Although that question concerns only integral $\lambda,$ i | Peak of Poisson distribution
The analysis at https://stats.stackexchange.com/a/211612/919 shows the mode of any Poisson$(\lambda)$ distribution is near $\lambda$ itself. Although that question concerns only integral $\lambda,$ its results answer the present question, too.
Let $p_\lambda(k)$ be the Poisson$(\lambda)$ probability for $k\in\{0,1,2,\ldots,\}$ (all other probabilities are zero, of course). These probabilities are proportional to the ratios
$$p_\lambda(k) \ \propto\ \frac{\lambda^k}{k!}.$$
As pointed out in the foregoing link, this means two successive Poisson probabilities are related by
$$p_\lambda(k+1) = \frac{\lambda}{k+1} p_\lambda(k).$$
Consequently, as $k$ progresses from $0$ through all integers less than $\lambda-1,$ the probability increases from $p_\lambda(k)$ to $p_\lambda(k+1);$ and then once $k$ exceeds $\lambda-1,$ the probability decreases. Therefore
All Poisson probability functions $p_\lambda$ rise to a peak and then fall again. When $\lambda$ is not an integer, the peak (a mode) occurs at the greatest integer less than $\lambda,$ written $\lfloor \lambda\rfloor.$ When $\lambda$ is an integer, the peak occurs at the two neighboring values $\lambda - 1$ and $\lambda.$
To illustrate, here is a plot of the modes based on a brute-force search.
The search was conducted by this R function. It evaluates the Poisson probabilities for $k$ between two suitable extreme quantiles, finds the indexes where the largest probability occurs, and returns an array of the values of $k$ corresponding to those indexes.
mode <- function(lambda) {
q <- min(1/2, ppois(floor(lambda), lambda)) # This is *a* probability
k <- do.call(seq, as.list(qpois(c(q, 1-q), lambda))) # Search limits
p <- dpois(k, lambda)
k[which(p == max(p))]
} | Peak of Poisson distribution
The analysis at https://stats.stackexchange.com/a/211612/919 shows the mode of any Poisson$(\lambda)$ distribution is near $\lambda$ itself. Although that question concerns only integral $\lambda,$ i |
52,232 | $N \sim \text{Po}(\lambda)$ and $X_1,X_2,....,X_N$ are iid and independent of $N$, what is distribution of $Z_N = \max \{X_i\}_{i=1}^{N}$ | I assume that you take $Z$ to be the supremum of the set $\{X_1,X_2,\dots,X_N\}$ since this is also reasonably defined (as negative infinity) when the set is empty (in the event that $N=0$).
Conditional on $N$, $Z$ has cdf
\begin{align}
F_{Z|N}(z)&=P(\sup\{X_i\}_{i=1}^N\le z|N)
\\&=P(X_1\le z \cap \dots \cap X_N\le z|N)
\\&=F_X(z)^N.
\end{align}
This holds also for $N=0$, in which case
$F_{Z|N=0}(z)=P(Z\le z|N=0)=1$, that is, the supremum of the empty set ($-\infty$) is smaller than any $z$ with probability 1.
Using the law of total probability,
\begin{align}
F_Z(z)&=\sum_{n=0}^\infty P(N=n)F_X(z)^n
\\&=E(F_X(z)^N)
\\&=G_N(F_X(z))
\end{align}
where $G_N$ is the probability generating function of $N$ given by
$$
G_N(s)=e^{-\lambda(1-s)}
$$
when $N\sim\operatorname{Poisson}(\lambda)$.
Hence,
\begin{align}
F_Z(z)&=e^{-\lambda(1-F_X(z))}.
\end{align}
Note that $F_Z(z) \rightarrow e^{-\lambda}$ as $z\rightarrow -\infty$ in agreement with the "point mass" at $-\infty$.
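Before specialising to the Pareto case, here is a quick Monte Carlo check of the general formula (a sketch, with $X \sim \text{Exponential}(1)$ and $\lambda = 3$ chosen arbitrarily):
set.seed(42)
lambda <- 3; z <- 1.5
sim_Z <- replicate(1e5, {
  n <- rpois(1, lambda)
  if (n == 0) -Inf else max(rexp(n))   # supremum of the empty set taken as -Inf
})
mean(sim_Z <= z)                        # empirical CDF at z
exp(-lambda * (1 - pexp(z)))            # F_Z(z) from the formula above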
For the standard generalized Pareto case
$$
F_X(x) = \begin{cases}
1-(1+ \xi x)^{-1/\xi}, & x\ge 0 \\
0, & x<0
\end{cases}
$$
the cdf of $Z$ thus becomes
$$
F_Z(z) = \begin{cases}
e^{-\lambda(1+ \xi z)^{-1/\xi}}, & z\ge 0 \\
e^{-\lambda}, & z<0.
\end{cases}
$$ | $N \sim \text{Po}(\lambda)$ and $X_1,X_2,....,X_N$ are iid and independent of $N$, what is distribut | I assume that you take $Z$ to be the supremum of the set $\{X_1,X_2,\dots,X_N\}$ since this is also reasonably defined (as negative infinity) when the set is empty (in the event that $N=0$).
Condition | $N \sim \text{Po}(\lambda)$ and $X_1,X_2,....,X_N$ are iid and independent of $N$, what is distribution of $Z_N = \max \{X_i\}_{i=1}^{N}$
I assume that you take $Z$ to be the supremum of the set $\{X_1,X_2,\dots,X_N\}$ since this is also reasonably defined (as negative infinity) when the set is empty (in the event that $N=0$).
Conditional on $N$, $Z$ has cdf
\begin{align}
F_{Z|N}(z)&=P(\sup\{X_i\}_{i=1}^N\le z|N)
\\&=P(X_1\le z \cap \dots \cap X_N\le z|N)
\\&=F_X(z)^N.
\end{align}
This holds also for $N=0$, in which case
$F_{Z|N=0}(z)=P(Z\le z|N=0)=1$, that is, the supremum of the empty set ($-\infty$) is smaller than any $z$ with probability 1.
Using the law of total probability,
\begin{align}
F_Z(z)&=\sum_{n=0}^\infty P(N=n)F_X(z)^n
\\&=E(F_X(z)^N)
\\&=G_N(F_X(z))
\end{align}
where $G_N$ is the probability generating function of $N$ given by
$$
G_N(s)=e^{-\lambda(1-s)}
$$
when $N\sim\operatorname{Poisson}(\lambda)$.
Hence,
\begin{align}
F_Z(z)&=e^{-\lambda(1-F_X(z))}.
\end{align}
Note that $F_Z(z) \rightarrow e^{-\lambda}$ as $z\rightarrow -\infty$ in agreement with the "point mass" at $-\infty$.
For the standard generalized Pareto case
$$
F_X(x) = \begin{cases}
1-(1+ \xi x)^{-1/\xi}, & x\ge 0 \\
0, & x<0
\end{cases}
$$
the cdf of $Z$ thus becomes
$$
F_Z(z) = \begin{cases}
e^{-\lambda(1+ \xi z)^{-1/\xi}}, & z\ge 0 \\
e^{-\lambda}, & z<0.
\end{cases}
$$ | $N \sim \text{Po}(\lambda)$ and $X_1,X_2,....,X_N$ are iid and independent of $N$, what is distribut
I assume that you take $Z$ to be the supremum of the set $\{X_1,X_2,\dots,X_N\}$ since this is also reasonably defined (as negative infinity) when the set is empty (in the event that $N=0$).
Condition |
52,233 | How well do power calculations actually work in reality? | You don't know when $H_0$ is false, so you can't compute the correct empirical rejection rate for power from looking at the results of tests. You shouldn't include the cases where the null was true in that calculation. (If you could say when $H_0$ was false, you wouldn't need tests in the first place.)
Further, even if you could tell when $H_0$ was true or false, it would still be wrong. Unless you worked with a whole power curve across effect sizes, desired power was specified at some given population effect size. The sample equivalent is the proportion of correct rejections at that population effect size, so the denominator would be limited to the number of comparable cases where the effect was actually that size (which of course you don't know).
That said, power calculations are based on your requirements in relation to the population -- what's the minimum power you require at some meaningful population effect size -- e.g. it might be an effect that is clinically important or educationally relevant (e.g. the power you want if the effect was a 5% higher pass rate on some skills test, say), and so forth -- an amount you would be interested in having good power at.
What constitutes an effect size of interest does not typically come from data. It is not a statistical consideration, but a subject-matter one. To this end it's usually better (where at all possible) to frame such considerations in terms of raw effects, not standardized effects, since it's on the scale of raw effects that such considerations would typically apply. It is also not a matter of a few moments' thought (for all that this often seems to be all it's accorded, from what I have seen); it also requires solid domain knowledge to understand what such an effect size might be.
There's a tendency I've seen to base power calculation inputs on the estimated effects from some previous study. There's often no obvious reason why a noisy estimate of a population effect size should be an effect size at which you want some given power -- this is conflating two distinct tasks, with distinct considerations.
However, if one were to use estimates of quantities like (say) an estimate of a population mean and standard deviation, it would be quite wrong to treat those as if they were the population values. This practice will lead to typically lower power than the usual (population-based) calculations suggest, and sometimes much less. One appropriate way to investigate this (particularly for more complicated models) would be via simulation. | How well do power calculations actually work in reality? | You don't know when $H_0$ is false, so you can't compute the correct empirical rejection rate for power from looking at the results of tests. You shouldn't include the cases where the null was true in | How well do power calculations actually work in reality?
You don't know when $H_0$ is false, so you can't compute the correct empirical rejection rate for power from looking at the results of tests. You shouldn't include the cases where the null was true in that calculation. (If you could say when $H_0$ was false, you wouldn't need tests in the first place.)
Further, even if you could tell when $H_0$ was true or false, it would still be wrong. Unless you worked with a whole power curve across effect sizes, desired power was specified at some given population effect size. The sample equivalent is the proportion of correct rejections at that population effect size, so the denominator would be limited to the number of comparable cases where the effect was actually that size (which of course you don't know).
That said, power calculations are based on your requirements in relation to the population -- what's the minimum power you require at some meaningful population effect size -- e.g. it might be an effect that is clinically important or educationally relevant (e.g. the power you want if the effect was a 5% higher pass rate on some skills test, say), and so forth -- an amount you would be interested in having good power at.
What constitutes an effect size of interest does not typically come from data. It is not a statistical consideration, but a subject-matter one. To this end it's usually better (where at all possible) to frame such considerations in terms of raw effects, not standardized effects, since it's on the scale of raw effects that such considerations would typically apply. It is also not a matter of a few moments' thought (for all that this often seems to be all it's accorded, from what I have seen); it also requires solid domain knowledge to understand what such an effect size might be.
There's a tendency I've seen to base power calculation inputs on the estimated effects from some previous study. There's often no obvious reason why a noisy estimate of a population effect size should be an effect size at which you want some given power -- this is conflating two distinct tasks, with distinct considerations.
However, if one were to use estimates of quantities like (say) an estimate of a population mean and standard deviation, it would be quite wrong to treat those as if they were the population values. This practice will lead to typically lower power than the usual (population-based) calculations suggest, and sometimes much less. One appropriate way to investigate this (particularly for more complicated models) would be via simulation. | How well do power calculations actually work in reality?
You don't know when $H_0$ is false, so you can't compute the correct empirical rejection rate for power from looking at the results of tests. You shouldn't include the cases where the null was true in |
52,234 | How well do power calculations actually work in reality? | You can view a typical power calculation not as guess work, but as an estimate of the unknown fixed true power. This means you can also perform inference on power by constructing a confidence interval for it using parameter estimates and standard error estimates from historical studies. In the example below the point estimate of power is above 90% but the inference cannot rule out that the true power might be considerably lower.
Example: A phase 2 and 3 development plan is being created for an asset to treat an immuno-inflammation disorder. Phase 3 is planned as a non-inferiority study using a difference in proportions on a binary responder index. The non-inferiority margin is set by the regulatory agency at –0.12, as is the one-sided significance level of 0.025. Phase 2 is a dose finding study on a continuous endpoint. This study also collects data on the responder index and includes a control arm to estimate the difference in proportions planned for phase 3. A stricter non-inferiority margin of –0.05 is considered in phase 2, but since the sample size in phase 2 is typically smaller than in phase 3, a larger one-sided significance level of 0.20 is tolerated. Based on a literature review the estimated response proportion for the comparator is 0.43 with N=1200.
The power curve shows the long-run probability of succeeding in phase 3 as a function of the unknown true difference in proportions based on N=365 subjects per arm when testing Ho: Difference in Proportions ≤ –0.12 at the one-sided 0.025 significance level using a likelihood ratio test. This long-run probability forms the level of confidence in the next experimental outcome. If one is satisfied with the inference on phase 3 power given minimal success in phase 2, one would be satisfied for any other successful phase 2 result. The confidence curve above depicts one-sided p-values and confidence intervals of all levels by inverting a likelihood ratio test, showing what minimal success would look like at the end of phase 2. This is based on N=90 subjects per arm, a 0.43 response rate estimate in the control arm, and an estimated difference in proportions of 0.01 (minimum detectable effect) testing Ho: Difference in Proportions ≤ –0.05. This produces a one-sided p-value of 0.20. A nearly identical confidence curve can be produced by inverting a Wald test using an identity link function. The p-value depicts the ex-post sampling probability of the observed phase 2 result or something more extreme if the hypothesis for the difference in proportions is true. This long-run probability represents the plausibility of the hypothesis given the data.
The figure above shows that minimal success in phase 2 produces inference around high values of phase 3 power, but still assumes some risk. While the maximum likelihood estimate of phase 3 power is 95.9%, one can claim with only 80% confidence that the power of the phase 3 study is no less than 50% given minimal success in phase 2 (p-value = 0.2 testing Ho: Phase 3 Power ≤ 0.50). The phase 2 null hypothesis Ho: Difference in Proportions ≤ –0.05 was chosen as the value at which phase 3 power is 50%.
In my view ensuring phase 3 power is no worse than a coin toss conditional on passing phase 2 is a good rule of thumb. If stronger inference on phase 3 power is desired given minimal success in phase 2, one could simply increase the phase 3 sample size. This will steepen the phase 3 power curve relative to the phase 3 null hypothesis by lowering the phase 3 minimum detectable effect. Alternatively, one could adjust the phase 2 significance level and null hypothesis, and select the phase 2 sample size based on an acceptable phase 2 minimum detectable effect. Once the phase 2 study results are available, two-sided confidence limits for phase 3 power can be provided along side the maximum likelihood point estimate. These point and interval estimates can even be plotted as a function of the phase 3 sample size.
Here is a paper that discusses performing inference on power and compares this to Bayesian probability of success. Here is a related LinkedIn article. | How well do power calculations actually work in reality? | You can view a typical power calculation not as guess work, but as an estimate of the unknown fixed true power. This means you can also perform inference on power by constructing a confidence interva | How well do power calculations actually work in reality?
You can view a typical power calculation not as guess work, but as an estimate of the unknown fixed true power. This means you can also perform inference on power by constructing a confidence interval for it using parameter estimates and standard error estimates from historical studies. In the example below the point estimate of power is above 90% but the inference cannot rule out that the true power might be considerably lower.
Example: A phase 2 and 3 development plan is being created for an asset to treat an immuno-inflammation disorder. Phase 3 is planned as a non-inferiority study using a difference in proportions on a binary responder index. The non-inferiority margin is set by the regulatory agency at –0.12, as is the one-sided significance level of 0.025. Phase 2 is a dose finding study on a continuous endpoint. This study also collects data on the responder index and includes a control arm to estimate the difference in proportions planned for phase 3. A stricter non-inferiority margin of –0.05 is considered in phase 2, but since the sample size in phase 2 is typically smaller than in phase 3, a larger one-sided significance level of 0.20 is tolerated. Based on a literature review the estimated response proportion for the comparator is 0.43 with N=1200.
The power curve shows the long-run probability of succeeding in phase 3 as a function of the unknown true difference in proportions based on N=365 subjects per arm when testing Ho: Difference in Proportions ≤ –0.12 at the one-sided 0.025 significance level using a likelihood ratio test. This long-run probability forms the level of confidence in the next experimental outcome. If one is satisfied with the inference on phase 3 power given minimal success in phase 2, one would be satisfied for any other successful phase 2 result. The confidence curve above depicts one-sided p-values and confidence intervals of all levels by inverting a likelihood ratio test, showing what minimal success would look like at the end of phase 2. This is based on N=90 subjects per arm, a 0.43 response rate estimate in the control arm, and an estimated difference in proportions of 0.01 (minimum detectable effect) testing Ho: Difference in Proportions ≤ –0.05. This produces a one-sided p-value of 0.20. A nearly identical confidence curve can be produced by inverting a Wald test using an identity link function. The p-value depicts the ex-post sampling probability of the observed phase 2 result or something more extreme if the hypothesis for the difference in proportions is true. This long-run probability represents the plausibility of the hypothesis given the data.
The figure above shows that minimal success in phase 2 produces inference around high values of phase 3 power, but still assumes some risk. While the maximum likelihood estimate of phase 3 power is 95.9%, one can claim with only 80% confidence that the power of the phase 3 study is no less than 50% given minimal success in phase 2 (p-value = 0.2 testing Ho: Phase 3 Power ≤ 0.50). The phase 2 null hypothesis Ho: Difference in Proportions ≤ –0.05 was chosen as the value at which phase 3 power is 50%.
In my view ensuring phase 3 power is no worse than a coin toss conditional on passing phase 2 is a good rule of thumb. If stronger inference on phase 3 power is desired given minimal success in phase 2, one could simply increase the phase 3 sample size. This will steepen the phase 3 power curve relative to the phase 3 null hypothesis by lowering the phase 3 minimum detectable effect. Alternatively, one could adjust the phase 2 significance level and null hypothesis, and select the phase 2 sample size based on an acceptable phase 2 minimum detectable effect. Once the phase 2 study results are available, two-sided confidence limits for phase 3 power can be provided along side the maximum likelihood point estimate. These point and interval estimates can even be plotted as a function of the phase 3 sample size.
Here is a paper that discusses performing inference on power and compares this to Bayesian probability of success. Here is a related LinkedIn article. | How well do power calculations actually work in reality?
You can view a typical power calculation not as guess work, but as an estimate of the unknown fixed true power. This means you can also perform inference on power by constructing a confidence interva |
52,235 | How well do power calculations actually work in reality? | To answer this question, it is useful to first separate the power function itself from sample-size calculations that try to achieve a stipulated level of power against an alternative hypothesis. The power function arises whenever we have a proposed method of hypothesis testing and it measures the probability of rejecting the null hypothesis in the test, conditional on the true value of the parameter under analysis. The power function is a function of the parameter of interest in the test, the significance level for the test, and the sample size. However, it does not depend on the data values --- just the sample size.
Now, when we undertake sample-size calculations on the basis of a power requirement, we need to stipulate three things: (1) the significance level for the test; (2) the minimum level of power we want to achieve; and (3) the (alternative) parameter value against which we wish to achieve this level of power. Once we stipulate these three things, there is some minimum sample size $n$ that will achieve the required level of power against the stipulated value of the parameter. There is no guesswork of what the data will look like here, because the content of the data is not required for this computation.
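For instance, in R (the numbers below are purely illustrative assumptions):
# alpha = 0.05, target power 0.80, alternative: a mean difference of 0.5 SD
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8)
# returns n of about 64 per group -- no data values were needed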
As to the idea of undertaking empirical tests to do a post hoc check of the power, the only thing that is really worth checking is whether or not the underlying model assumptions for the test are accurate or not. If the underlying model assumptions are accurate then the power calculation is accurate --- it merely reflects a mathematical property of the test under that model. If an empirical assessment of the power curve were to examine the rejection rate in studies, it would need to know the true parameter values in those studies to know where on the power curve it is supposed to be comparing the empirical rejection rates to. | How well do power calculations actually work in reality? | To answer this question, it is useful to first separate the power function itself from sample-size calculations that try to achieve a stipulated level of power against an alternative hypothesis. The | How well do power calculations actually work in reality?
To answer this question, it is useful to first separate the power function itself from sample-size calculations that try to achieve a stipulated level of power against an alternative hypothesis. The power function arises whenever we have a proposed method of hypothesis testing and it measures the probability of rejecting the null hypothesis in the test, conditional on the true value of the parameter under analysis. The power function is a function of the parameter of interest in the test, the significance level for the test, and the sample size. However, it does not depend on the data values --- just the sample size.
Now, when we undertake sample-size calculations on the basis of a power requirement, we need to stipulate three things: (1) the significance level for the test; (2) the minimum level of power we want to achieve; and (3) the (alternative) parameter value against which we wish to achieve this level of power. Once we stipulate these three things, there is some minimum sample size $n$ that will achieve the required level of power against the stipulated value of the parameter. There is no guesswork of what the data will look like here, because the content of the data is not required for this computation.
As to the idea of undertaking empirical tests to do a post hoc check of the power, the only thing that is really worth checking is whether or not the underlying model assumptions for the test are accurate or not. If the underlying model assumptions are accurate then the power calculation is accurate --- it merely reflects a mathematical property of the test under that model. If an empirical assessment of the power curve were to examine the rejection rate in studies, it would need to know the true parameter values in those studies to know where on the power curve it is supposed to be comparing the empirical rejection rates to. | How well do power calculations actually work in reality?
To answer this question, it is useful to first separate the power function itself from sample-size calculations that try to achieve a stipulated level of power against an alternative hypothesis. The |
52,236 | What are the skewness and kurtosis of the sample mean? | $$\begin{align}
\boxed{
\quad \quad \ \ \ \mathbb{E}(\bar{X}_n) = \mu
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(\bar{X}_n) = \frac{\sigma^2}{n}, \\[12pt]
\quad \mathbb{Skew}(\bar{X}_n) = \frac{\gamma}{\sqrt{n}}
\quad \quad \quad \quad \quad
\mathbb{Kurt}(\bar{X}_n) = 3 + \frac{\kappa - 3}{n}. \quad \\}
\end{align}$$
The mean, variance, skewness and kurtosis of the sample mean are shown in the box above. These formulae are valid for any case where the underlying values are IID with finite kurtosis. It is simple to confirm that $\mathbb{Skew}(\bar{X}_n) \rightarrow 0$ and $\mathbb{Kurt}(\bar{X}_n) \rightarrow 3$ as $n \rightarrow \infty$, which means that the sample mean is asymptotically unskewed and mesokurtic. This is also implied by the classical central limit theorem, which ensures that the standardised sample mean converges in distribution to the normal distribution.
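These formulae are easy to check by simulation; for example (a sketch using Exponential(1) draws, for which $\gamma = 2$ and $\kappa = 9$):
set.seed(1)
n <- 10; reps <- 2e5
xbar <- replicate(reps, mean(rexp(n)))
z <- (xbar - mean(xbar)) / sd(xbar)
mean(z^3)   # sample skewness of the means; should be near 2/sqrt(10) = 0.632
mean(z^4)   # sample kurtosis of the means; should be near 3 + 6/10 = 3.6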
Proof via cumulants: The simplest way to prove these results is via moment/cumulant generating functions. Suppose we let $m_X$ and $K_X$ denote the moment generating function and cumulant generating function for the underlying IID random variables in the sequence. It is simple to show that $m_{\bar{X}_n}(t) = m_X(t/n)^n$ so the cumulant generating function for the sample mean has the form:
$$K_{\bar{X}_n}(t) = n K_X(t/n).$$
Consequently, the cumulants of the sample mean are related to the cumulants of the underlying random variables by:
$$\begin{align}
\bar{\kappa}_r
\equiv \frac{d^r}{dt^r} K_{\bar{X}_n} (t) \Bigg|_{t=0}
&= n \frac{d^r }{dt^r} K_X(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(0) \\[6pt]
&= \frac{\kappa_r}{n^{r-1}}. \\[6pt]
\end{align}$$
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{V}(\bar{X}_n)
&= \bar{\kappa}_2 \\[12pt]
&= \frac{\kappa_2}{n} \\[6pt]
&= \frac{\sigma^2}{n}, \\[6pt]
\mathbb{Skew}(\bar{X}_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\kappa_3 / n^2}{(\kappa_2/n)^{3/2}} \\[6pt]
&= \frac{1}{\sqrt{n}} \cdot \frac{\kappa_3}{\kappa_2^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[6pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\kappa_4/n^3 + 3 (\kappa_2/n)^2}{(\kappa_2/n)^2} \\[6pt]
&= \frac{\kappa_4/n + 3 \kappa_2^2}{\kappa_2^2} \\[6pt]
&= \frac{(\kappa \sigma^4 - 3\sigma^4)/n + 3 \sigma^4}{\sigma^4} \\[6pt]
&= 3 + \frac{\kappa - 3}{n}. \\[6pt]
\end{align}$$
This method gives the results of interest, and it can also be generalised to give corresponding results for higher-order moments. As can be seen, the cumulant results are particularly simple, but the corresponding relationships for the higher-order standardised moments get messy once you get to high order.
Proof via expansion to raw moments: An alternative method of deriving these results is to expand the relevant central moments for the sample mean and simplify down to raw moments of the underlying random variables. Let $Y_i \equiv X_i - \mu$ and note that these random variables have mean zero, but have the same higher-order moments as $X_i$. The relevant higher-order central moments for the sample mean are:$^\dagger$
$$\begin{align}
\mathbb{E}((\bar{X}_n - \mu)^3)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^3 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^3 \Bigg) \\[6pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^3 + 3 \sum_{i \neq j} Y_i^2 Y_j + \sum_{i \neq j \neq k} Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \sum_{i} \mathbb{E}(Y_i^3) \\[12pt]
&= \frac{1}{n^3} \cdot \sum_{i} \gamma \sigma^3 \\[12pt]
&= \frac{1}{n^3} \cdot n \gamma \sigma^3 \\[12pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3, \\[12pt]
\mathbb{E}((\bar{X}_n - \mu)^4)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^4 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^4 \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^4 + 4 \sum_{i \neq j} Y_i^3 Y_j + 3 \sum_{i \neq j} Y_i^2 Y_j^2 \\[12pt]
&\quad \quad \quad \quad \quad \quad + 6 \sum_{i \neq j \neq k} Y_i^2 Y_j Y_k + \sum_{i \neq j \neq k \neq l} Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} \mathbb{E}(Y_i^4) + 3 \sum_{i \neq j} \mathbb{E}(Y_i^2) \mathbb{E}(Y_j^2) \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} (\kappa \sigma^4) + 3 \sum_{i \neq j} \sigma^4 \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ n \kappa \sigma^4 + 3n(n-1) \sigma^4 \Bigg] \\[6pt]
&= \frac{(\kappa + 3(n-1)) \sigma^4}{n^3} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4. \\[6pt]
\end{align}$$
Consequently, the skewness and kurtosis of the sample mean are given respectively by:
$$\begin{align}
\mathbb{Skew}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^3)}{\mathbb{V}(\bar{X}_n)^{3/2}} \\[6pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3 \bigg/ \frac{\sigma^3}{n^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[12pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^4)}{\mathbb{V}(\bar{X}_n)^2} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4 \bigg/\frac{\sigma^4}{n^2}
\quad \quad \quad \quad \quad \quad \quad \\[6pt]
&= \frac{3n + (\kappa - 3)}{n} \\[12pt]
&= 3 + \frac{\kappa - 3}{n}. \\[12pt]
\end{align}$$
$^\dagger$ We have used a slight abuse of notation in the range of the summations; when we refer to e.g., $i \neq j \neq k$ we use this as shorthand for the set of indices where all the indices are distinct. That is, we do not read these inequalities in their strict meaning, but read them as if the inequality were intended to be transitive. | What are the skewness and kurtosis of the sample mean? | $$\begin{align}
\boxed{
\quad \quad \ \ \ \mathbb{E}(\bar{X}_n) = \mu
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(\bar{X}_n) = \frac{\sigma^2}{n}, \\[12pt]
\quad \mathbb{Skew}(\bar{X}_n) | What are the skewness and kurtosis of the sample mean?
$$\begin{align}
\boxed{
\quad \quad \ \ \ \mathbb{E}(\bar{X}_n) = \mu
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(\bar{X}_n) = \frac{\sigma^2}{n}, \\[12pt]
\quad \mathbb{Skew}(\bar{X}_n) = \frac{\gamma}{\sqrt{n}}
\quad \quad \quad \quad \quad
\mathbb{Kurt}(\bar{X}_n) = 3 + \frac{\kappa - 3}{n}. \quad \\}
\end{align}$$
The mean, variance, skewness and kurtosis of the sample mean are shown in the box above. These formulae are valid for any case where the underlying values are IID with finite kurtosis. It is simple to confirm that $\mathbb{Skew}(\bar{X}_n) \rightarrow 0$ and $\mathbb{Kurt}(\bar{X}_n) \rightarrow 3$ as $n \rightarrow \infty$, which means that the sample mean is asymptotically unskewed and mesokurtic. This is also implied by the classical central limit theorem, which ensures that the standardised sample mean converges in distribution to the normal distribution.
Proof via cumulants: The simplest way to prove these results is via moment/cumulant generating functions. Suppose we let $m_X$ and $K_X$ denote the moment generating function and cumulant generating function for the underlying IID random variables in the sequence. It is simple to show that $m_{\bar{X}_n}(t) = m_X(t/n)^n$ so the cumulant generating function for the sample mean has the form:
$$K_{\bar{X}_n}(t) = n K_X(t/n).$$
Consequently, the cumulants of the sample mean are related to the cumulants of the underlying random variables by:
$$\begin{align}
\bar{\kappa}_r
\equiv \frac{d^r}{dt^r} K_{\bar{X}_n} (t) \Bigg|_{t=0}
&= n \frac{d^r }{dt^r} K_X(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(t/n) \Bigg|_{t=0} \\[6pt]
&= \frac{1}{n^{r-1}} K_X^{(r)}(0) \\[6pt]
&= \frac{\kappa_r}{n^{r-1}}. \\[6pt]
\end{align}$$
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{V}(\bar{X}_n)
&= \bar{\kappa}_2 \\[12pt]
&= \frac{\kappa_2}{n} \\[6pt]
&= \frac{\sigma^2}{n}, \\[6pt]
\mathbb{Skew}(\bar{X}_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\kappa_3 / n^2}{(\kappa_2/n)^{3/2}} \\[6pt]
&= \frac{1}{\sqrt{n}} \cdot \frac{\kappa_3}{\kappa_2^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[6pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\kappa_4/n^3 + 3 (\kappa_2/n)^2}{(\kappa_2/n)^2} \\[6pt]
&= \frac{\kappa_4/n + 3 \kappa_2^2}{\kappa_2^2} \\[6pt]
&= \frac{(\kappa \sigma^4 - 3\sigma^4)/n + 3 \sigma^4}{\sigma^4} \\[6pt]
&= 3 + \frac{\kappa - 3}{n}. \\[6pt]
\end{align}$$
This method gives the results of interest, and it can also be generalised to give corresponding results for higher-order moments. As can be seen, the cumulant results are particularly simple, but the corresponding relationships for the higher-order standardised moments get messy once you get to high order.
Proof via expansion to raw moments: An alternative method of deriving these results is to expand the relevant central moments for the sample mean and simplify down to raw moments of the underlying random variables. Let $Y_i \equiv X_i - \mu$ and note that these random variables have mean zero, but have the same higher-order moments as $X_i$. The relevant higher-order central moments for the sample mean are:$^\dagger$
$$\begin{align}
\mathbb{E}((\bar{X}_n - \mu)^3)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^3 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^3 \Bigg) \\[6pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^3 + 3 \sum_{i \neq j} Y_i^2 Y_j + \sum_{i \neq j \neq k} Y_i Y_j Y_k \Bigg) \\[8pt]
&= \frac{1}{n^3} \cdot \sum_{i} \mathbb{E}(Y_i^3) \\[12pt]
&= \frac{1}{n^3} \cdot \sum_{i} \gamma \sigma^3 \\[12pt]
&= \frac{1}{n^3} \cdot n \gamma \sigma^3 \\[12pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3, \\[12pt]
\mathbb{E}((\bar{X}_n - \mu)^4)
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n (X_i - \mu) \bigg)^4 \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \bigg( \frac{1}{n} \sum_{i=1}^n Y_i \bigg)^4 \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \mathbb{E} \Bigg( \sum_{i} Y_i^4 + 4 \sum_{i \neq j} Y_i^3 Y_j + 3 \sum_{i \neq j} Y_i^2 Y_j^2 \\[12pt]
&\quad \quad \quad \quad \quad \quad + 6 \sum_{i \neq j \neq k} Y_i^2 Y_j Y_k + \sum_{i \neq j \neq k \neq l} Y_i Y_j Y_k Y_l \Bigg) \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} \mathbb{E}(Y_i^4) + 3 \sum_{i \neq j} \mathbb{E}(Y_i^2) \mathbb{E}(Y_j^2) \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ \sum_{i} (\kappa \sigma^4) + 3 \sum_{i \neq j} \sigma^4 \Bigg] \\[6pt]
&= \frac{1}{n^4} \cdot \Bigg[ n \kappa \sigma^4 + 3n(n-1) \sigma^4 \Bigg] \\[6pt]
&= \frac{(\kappa + 3(n-1)) \sigma^4}{n^3} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4. \\[6pt]
\end{align}$$
Consequently, the skewness and kurtosis of the sample mean are given respectively by:
$$\begin{align}
\mathbb{Skew}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^3)}{\mathbb{V}(\bar{X}_n)^{3/2}} \\[6pt]
&= \frac{\gamma}{n^2} \cdot \sigma^3 \bigg/ \frac{\sigma^3}{n^{3/2}} \\[6pt]
&= \frac{\gamma}{\sqrt{n}}, \\[12pt]
\mathbb{Kurt}(\bar{X}_n)
&= \frac{\mathbb{E}((\bar{X}_n - \mu)^4)}{\mathbb{V}(\bar{X}_n)^2} \\[6pt]
&= \frac{3n + (\kappa - 3)}{n^3} \cdot \sigma^4 \bigg/\frac{\sigma^4}{n^2}
\quad \quad \quad \quad \quad \quad \quad \\[6pt]
&= \frac{3n + (\kappa - 3)}{n} \\[12pt]
&= 3 + \frac{\kappa - 3}{n}. \\[12pt]
\end{align}$$
$^\dagger$ We have used a slight abuse of notation in the range of the summations; when we refer to e.g., $i \neq j \neq k$ we use this as shorthand for the set of indices where all the indices are distinct. That is, we do not read these inequalities in their strict meaning, but read them as if the inequality were intended to be transitive. | What are the skewness and kurtosis of the sample mean?
$$\begin{align}
\boxed{
\quad \quad \ \ \ \mathbb{E}(\bar{X}_n) = \mu
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(\bar{X}_n) = \frac{\sigma^2}{n}, \\[12pt]
\quad \mathbb{Skew}(\bar{X}_n) |
52,237 | What are the skewness and kurtosis of the sample mean? | Solution
The calculation amounts to removing the linear term in the cumulant generating function (log characteristic function) of the distribution and simply replacing its argument $t$ by $t/(\sigma\sqrt{n})$, as needed to standardize the sum, afterwards multiplying everything by $n$ to account for the $n$ iid variables comprising the sum.
Details
Let $\phi$ be the characteristic function of that common distribution with mean $\mu,$ standard deviation $\sigma,$ skewness $\gamma,$ and finite kurtosis $\kappa.$ Then because
$$\log \phi(t) = i \mu t - \frac{1}{2} (t\sigma)^2 - \frac{i\gamma}{6} (t\sigma)^3 + \frac{\kappa - 3}{24}(t\sigma)^4 + o(t^4),$$
the log characteristic function of the standardized sample mean $Z = \sqrt{n}(\bar X - \mu)/\sigma$ is
$$\begin{aligned}
\log \phi_Z(t) &= n \left(\log \phi\left(\frac{t}{\sigma\sqrt{n}}\right) - i\mu\frac{t}{\sigma\sqrt{n}}\right)\\
&= -\frac{1}{2}t^2 - n\frac{i\gamma}{6} \left(\frac{t}{\sqrt{n}}\right)^3 + n\frac{\kappa - 3}{24}\left(\frac{t}{\sqrt{n}}\right)^4 + o(t^4).\\
\end{aligned}$$
Comparing like powers of $t$ shows
$$\gamma_Z=\gamma/\sqrt{n},\ \kappa_Z - 3 = (\kappa-3)/n.$$
The first equation is standard--it's usually taken as the definition of skewness and kurtosis--while the second is a direct (and simple) consequence of the independence of the $X_i$ in the sample together with the rules for how $\phi$ transforms under recentering and rescaling. The rest of this post provides all details for any readers who might be unacquainted with this use of characteristic functions.
Background
Characteristic functions
The characteristic function of a random variable $X$ is defined to be
$$\phi_X: \mathbb{R}\to \mathbb{C},\ \phi_X(t) = E\left[e^{itX}\right].$$
It always exists because $E\left[\big|e^{itX}\big|\right] \le E[1]=1$ demonstrates absolute convergence of the integrand $e^{itX}.$
Characteristic functions are useful for understanding (positive integral) moments $\mu^\prime(k) = E[X^k]$ because, when $E[X^{k}]$ exists and is finite, an application of Taylor's Theorem to the exponential shows
$$\begin{aligned}
\phi_X(t) &= E\left[1 + itX + (itX)^2/2! + \cdots + (itX)^k/k! + o(t^{k})\right]\\
&= 1 + \frac{i\mu_X(1)^\prime}{1}t^1 - \frac{\mu_X(2)^\prime}{2}t^2 - \frac{i\mu_X(3)^\prime}{6}t^3 + \cdots + \frac{i^k \mu_X(k)^\prime}{k!}t^k + o(t^{k}).
\end{aligned}$$
Thus, $\phi$ has a Maclaurin expansion $\phi_X(t) = a_0 + a_1/1\, t^1 + a_2/2!\, t^2 + \cdots + a_k/k!\, t^k$ (partial power series at $0$) and the moments of $X$ can be read directly from the coefficients $a_j:$ $\mu_X(j)^\prime = a_j(-i)^j.$
Samples
A sample is defined to be a collection of $n$ independent random variables $X_i,$ $i=1,2,\ldots,n$ having a common distribution, whence all the $\phi_{X_i}$ are the same function $\phi.$ The sample mean is (also by definition)
$$\bar X = \left(X_1 + X_2 + \cdots + X_n\right)/n = X_1/n + X_2/n + \cdots + X_n/n.$$
Therefore, because the exponential of a sum of numbers is the product of their exponentials and expectations of products of independent random variables are the products of their expectations,
$$\phi_{\bar X}(t) = E\left[e^{it\bar X}\right] = E\left[e^{itX_1/n}\right]\, E\left[e^{itX_2/n}\right]\cdots E\left[e^{itX_n/n}\right] = \phi\left(\frac{t}{n}\right)^n.$$
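This relation is easy to verify numerically (a sketch with $X \sim$ Exponential(1), whose characteristic function is $1/(1-it)$, and arbitrary choices of $n$ and $t$):
set.seed(7)
n <- 5; tt <- 0.8
xbar <- replicate(2e5, mean(rexp(n)))
mean(exp(1i * tt * xbar))        # Monte Carlo estimate of phi_xbar(tt)
(1 / (1 - 1i * tt / n))^n        # phi(tt/n)^n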
Change of location
The central moments of $X,$ where $\mu = \mu_X^\prime(1)$ exists and is finite, are defined as
$$\mu_X(k) = E\left[(X-\mu)^k\right] = \mu^\prime_{X-\mu}(k).$$
Consequently they may be found from the characteristic function of $X-\mu,$ which can be related to that of $X$ via
$$\phi_{X-\mu}(t) = E\left[e^{it(X-\mu)}\right] = E\left[e^{-it\mu}\,e^{itX}\right] = e^{-it\mu}\phi_X(t).$$
Change of scale
The relationship between the characteristic functions of $X$ and $X/\sigma,$ for any positive number $\sigma,$ is obtained directly from the definitions as
$$\phi_{X/\sigma}(t) = E\left[e^{itX/\sigma}\right] = \phi_X\left(\frac{t}{\sigma}\right).$$
Simplification with logarithms
The power relation between $\phi_{\bar X}$ and $\phi$ suggests working with the logarithms of these functions, because the power then becomes a simple factor of $n$,
$$\psi_{\bar X}(t) = \log \phi_{\bar X}(t) = \log\left[\phi\left(\frac{t}{n}\right)^n\right] = n \log \phi\left(\frac{t}{n}\right) = n\psi\left(\frac{t}{n}\right).$$
Since, for $|s|\lt 1$ the Taylor series for $\log(1+s) = s -s^2/2 + s^3/3 - \cdots$ converges absolutely, we easily obtain the series
$$\psi_{X-\mu}(t) = \log\left(1 + \left[ - \frac{\mu_X(2)}{2}t^2 - \frac{i\mu_X(3)}{6}t^3 + \frac{\mu_X(4)}{24}t^4 + o(t^4)\right]\right)$$
by setting $s = -\frac{\mu_X(2)}{2}t^2 + \cdots + o(t^4)$ and computing
$$\begin{aligned}s^2/2&= \frac{1}{2}(-\mu_X(2)/2)^2 t^4 + o(t^4);\\
s^k/k! &= o(t^4)\end{aligned}$$
for all $k \gt 2.$
This gives
$$\psi_{X-\mu}(t) = s - s^2/2 + o(t^4) = - \frac{\mu_X(2)}{2}t^2 - i\frac{\mu_X(3)}{6}t^3 + \frac{\mu_X(4) - 3\mu_X(2)^2}{24}t^4 + o(t^4).$$
Upon scaling this by $1/\sigma = \mu_X(2)^{-1/2}$ it simplifies to
$$\psi_{(X-\mu)/\sigma}(t) = \log \phi_{(X-\mu)/\sigma}(t) =-\frac{1}{2}t^2 - \frac{i\gamma_X}{6}t^3 + \frac{\kappa_X - 3}{24}t^4 + o(t^4)$$
where $\gamma_X$ is, by definition, the skewness of $X$ and $\kappa_X$ is its kurtosis. | What are the skewness and kurtosis of the sample mean? | Solution
The calculation amounts to removing the linear term in the cumulant generating function (log characteristic function) of the distribution and simply replacing its argument $t$ by the factor $ | What are the skewness and kurtosis of the sample mean?
Solution
The calculation amounts to removing the linear term in the cumulant generating function (log characteristic function) of the distribution and simply replacing its argument $t$ by $t/(\sigma\sqrt{n})$, as needed to standardize the sum, afterwards multiplying everything by $n$ to account for the $n$ iid variables comprising the sum.
Details
Let $\phi$ be the characteristic function of that common distribution with mean $\mu,$ standard deviation $\sigma,$ skewness $\gamma,$ and finite kurtosis $\kappa.$ Then because
$$\log \phi(t) = i \mu t - \frac{1}{2} (t\sigma)^2 - \frac{i\gamma}{6} (t\sigma)^3 + \frac{\kappa - 3}{24}(t\sigma)^4 + o(t^4),$$
the log characteristic function of the standardized sample mean $Z = \sqrt{n}(\bar X - \mu)/\sigma$ is
$$\begin{aligned}
\log \phi_Z(t) &= n \left(\log \phi\left(\frac{t}{\sigma\sqrt{n}}\right) - i\mu\frac{t}{\sigma\sqrt{n}}\right)\\
&= -\frac{1}{2}t^2 - n\frac{i\gamma}{6} \left(\frac{t}{\sqrt{n}}\right)^3 + n\frac{\kappa - 3}{24}\left(\frac{t}{\sqrt{n}}\right)^4 + o(t^4).\\
\end{aligned}$$
Comparing like powers of $t$ shows
$$\gamma_Z=\gamma/\sqrt{n},\ \kappa_Z - 3 = (\kappa-3)/n.$$
The first equation is standard--it's usually taken as the definition of skewness and kurtosis--while the second is a direct (and simple) consequence of the independence of the $X_i$ in the sample together with the rules for how $\phi$ transforms under recentering and rescaling. The rest of this post provides all details for any readers who might be unacquainted with this use of characteristic functions.
Background
Characteristic functions
The characteristic function of a random variable $X$ is defined to be
$$\phi_X: \mathbb{R}\to \mathbb{C},\ \phi_X(t) = E\left[e^{itX}\right].$$
It always exists because $E\left[\big|e^{itX}\big|\right] \le E[1]=1$ demonstrates absolute convergence of the integrand $e^{itX}.$
Characteristic functions are useful for understanding (positive integral) moments $\mu^\prime(k) = E[X^k]$ because, when $E[X^{k}]$ exists and is finite, an application of Taylor's Theorem to the exponential shows
$$\begin{aligned}
\phi_X(t) &= E\left[1 + itX + (itX)^2/2! + \cdots + (itX)^k/k! + o(t^{k})\right]\\
&= 1 + \frac{i\mu_X(1)^\prime}{1}t^1 - \frac{\mu_X(2)^\prime}{2}t^2 - \frac{i\mu_X(3)^\prime}{6}t^3 + \cdots + \frac{i^k \mu_X(k)^\prime}{k!}t^k + o(t^{k}).
\end{aligned}$$
Thus, $\phi$ has a Maclaurin expansion $\phi_X(t) = a_0 + a_1/1\, t^1 + a_2/2!\, t^2 + \cdots + a_k/k!\, t^k$ (partial power series at $0$) and the moments of $X$ can be read directly from the coefficients $a_j:$ $\mu_X(j)^\prime = a_j(-i)^j.$
Samples
A sample is defined to be a collection of $n$ independent random variables $X_i,$ $i=1,2,\ldots,n$ having a common distribution, whence all the $\phi_{X_i}$ are the same function $\phi.$ The sample mean is (also by definition)
$$\bar X = \left(X_1 + X_2 + \cdots + X_n\right)/n = X_1/n + X_2/n + \cdots + X_n/n.$$
Therefore, because the exponential of a sum of numbers is the product of their exponentials and expectations of products of independent random variables are the products of their expectations,
$$\phi_{\bar X}(t) = E\left[e^{it\bar X}\right] = E\left[e^{itX_1/n}\right]\, E\left[e^{itX_2/n}\right]\cdots E\left[e^{itX_n/n}\right] = \phi\left(\frac{t}{n}\right)^n.$$
Change of location
The central moments of $X,$ where $\mu = \mu_X^\prime(1)$ exists and is finite, are defined as
$$\mu_X(k) = E\left[(X-\mu)^k\right] = \mu^\prime_{X-\mu}(k).$$
Consequently they may be found from the characteristic function of $X-\mu,$ which can be related to that of $X$ via
$$\phi_{X-\mu}(t) = E\left[e^{it(X-\mu)}\right] = E\left[e^{-it\mu}\,e^{itX}\right] = e^{-it\mu}\phi_X(t).$$
Change of scale
The relationship between the characteristic functions of $X$ and $X/\sigma,$ for any positive number $\sigma,$ is obtained directly from the definitions as
$$\phi_{X/\sigma}(t) = E\left[e^{itX/\sigma}\right] = \phi_X\left(\frac{t}{\sigma}\right).$$
Simplification with logarithms
The power relation between $\phi_{\bar X}$ and $\phi$ suggests working with the logarithms of these functions, because the power then becomes a simple factor of $n$,
$$\psi_{\bar X}(t) = \log \phi_{\bar X}(t) = \log\left[\phi\left(\frac{t}{n}\right)^n\right] = n \log \phi\left(\frac{t}{n}\right) = n\psi\left(\frac{t}{n}\right).$$
Since, for $|s|\lt 1$ the Taylor series for $\log(1+s) = s -s^2/2 + s^3/3 - \cdots$ converges absolutely, we easily obtain the series
$$\psi_{X-\mu}(t) = \log\left(1 + \left[ - \frac{\mu_X(2)}{2}t^2 - \frac{i\mu_X(3)}{6}t^3 + \frac{\mu_X(4)}{24}t^4 + o(t^4)\right]\right)$$
by setting $s = -\frac{\mu_X(2)}{2}t^2 + \cdots + o(t^4)$ and computing
$$\begin{aligned}s^2/2&= \frac{1}{2}(-\mu_X(2)/2)^2 t^4 + o(t^4);\\
s^k/k! &= o(t^4)\end{aligned}$$
for all $k \gt 2.$
This gives
$$\psi_{X-\mu}(t) = s - s^2/2 + o(t^4) = - \frac{\mu_X(2)}{2}t^2 - i\frac{\mu_X(3)}{6}t^3 + \frac{\mu_X(4) - 3\mu_X(2)^2}{24}t^4 + o(t^4).$$
Upon rescaling $X-\mu$ by $1/\sigma = \mu_X(2)^{-1/2}$ (that is, replacing $t$ by $t/\sigma$), this simplifies to
$$\psi_{(X-\mu)/\sigma}(t) = \log \phi_{(X-\mu)/\sigma}(t) =-\frac{1}{2}t^2 - \frac{i\gamma_X}{6}t^3 + \frac{\kappa_X - 3}{24}t^4 + o(t^4)$$
where $\gamma_X$ is, by definition, the skewness of $X$ and $\kappa_X$ is its kurtosis.
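As a quick numerical check of the two relations at the top (not part of the original derivation), here is a small Monte Carlo simulation in R using an Exponential(1) parent distribution, for which $\gamma = 2$ and $\kappa - 3 = 6$:
# Monte Carlo check of gamma_Z = gamma/sqrt(n) and (kappa_Z - 3) = (kappa - 3)/n,
# using an Exponential(1) parent: gamma = 2, excess kurtosis = 6.
set.seed(1)
n    <- 10       # sample size
reps <- 2e5      # number of simulated sample means
skew   <- function(x) { m <- mean(x); mean((x - m)^3) / mean((x - m)^2)^1.5 }
exkurt <- function(x) { m <- mean(x); mean((x - m)^4) / mean((x - m)^2)^2 - 3 }
xbar <- colMeans(matrix(rexp(n * reps), nrow = n))   # reps sample means of size n
c(simulated = skew(xbar),   theory = 2 / sqrt(n))    # both about 0.63
c(simulated = exkurt(xbar), theory = 6 / n)          # both about 0.6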
52,238 | What are the skewness and kurtosis of the sample mean? | What are the first four central moments of the sample mean?
Computer algebra systems are particularly adept at this sort of algebra munching. Here I am using the mathStatica package for Mathematica where $s_r = \sum_{i=1}^n X_i^r$. The first, say, 7 central moments of the sample mean $\frac{s_1}{n}$ are:
where:
$\mu_r$ denotes the $r^\text{th}$ central moment of the population.
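For reference (these are standard results, not the package's verbatim output), the leading central moments of $\bar{X} = s_1/n$ in terms of the population central moments are
$$\mu_2(\bar X)=\frac{\mu_2}{n},\qquad \mu_3(\bar X)=\frac{\mu_3}{n^2},\qquad \mu_4(\bar X)=\frac{\mu_4+3(n-1)\mu_2^2}{n^3},$$
from which the skewness and excess kurtosis of $\bar X$ follow as $\mu_3(\bar X)/\mu_2(\bar X)^{3/2}=\gamma/\sqrt{n}$ and $\mu_4(\bar X)/\mu_2(\bar X)^{2}-3=(\kappa-3)/n$.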
The calculation takes just .1 of a second for all of them to be calculated live.
52,239 | Facebook prophet gives a very high MAPE, how can I improve it? | You have suspiciously regular massive spikes sometime about halfway through the third quarter of every year. You don't tell us where your data come from, but if they are US, this is presumably a Black Friday effect. Have you told your model about this very specific predictor?
In general, How to know that your machine learning problem is hopeless? may be helpful. You will need to understand your data and include any relevant predictors you have. Hoping for 5% MAPE may simply be unrealistic.
Also, note that your model will try to separate noise from signal and predict only the signal, with the result that your predictions will vary less than observations. Here is a recent thread on this.
Finally, you may want to take a look at What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? I don't know which objective function Prophet optimizes for, but I assume it is not MAPE. Thus, if your bonus depends on having a low MAPE, you may get closer by post-processing the Prophet forecasts. (I have never seen a business problem that would benefit more from MAPE-optimal forecasts rather than, e.g., MSE- or quantile loss-optimal forecasts. And as you see at that thread, optimal forecasts can depend quite heavily on the evaluation measure.)
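To make the last point concrete, here is a small hypothetical simulation (not from the original answer) assuming a lognormal predictive distribution; it shows that the MAPE-minimizing point forecast sits well below the MSE-minimizing one, which is why MAPE-driven post-processing typically shrinks forecasts downward:
set.seed(2)
y <- rlnorm(1e5, meanlog = 0, sdlog = 1)          # hypothetical future observations
f <- seq(0.2, 2.5, by = 0.01)                     # candidate point forecasts
mape <- sapply(f, function(fc) mean(abs(y - fc) / y))
mse  <- sapply(f, function(fc) mean((y - fc)^2))
c(mape_optimal = f[which.min(mape)],              # near exp(-1), about 0.37
  mse_optimal  = f[which.min(mse)])               # near exp(1/2), about 1.65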
52,240 | Does offset always have to be on log scale with NB GLMM? | Normally, an offset is used when we are modelling some sort of rate data (e.g. deaths per 100,000, crashes per 100,000 etc).
This is naturally modelled as some sort of ratio, so we have data in the form of $E(y_i)/n_i$.
In GLM, we model the expectation through some sort of link function, so
$$ g(E(y_i)/n_i) = \mathbf{x}^T\beta$$
With the logarithmic link function, we have
$$ \log(E(y_i)) = \mathbf{x}^T\beta + \log(n_i) $$
from application of log rules. So to answer your question, the offset is not always a log. It depends on the link function you use.
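A minimal R sketch of this (with made-up rate data, not from the question): with a log link, supplying the exposure as offset(log(n_i)) makes the fitted coefficients describe the rate $E(y_i)/n_i$:
set.seed(3)
n_i <- sample(50:500, 200, replace = TRUE)               # exposures
x   <- rnorm(200)
y   <- rpois(200, lambda = n_i * exp(-2 + 0.5 * x))      # true rate = exp(-2 + 0.5 x)
fit <- glm(y ~ x + offset(log(n_i)), family = poisson)
coef(fit)                                                # close to (-2, 0.5)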
52,241 | Does offset always have to be on log scale with NB GLMM? | This question is related to the choice of link function for your generalized linear model. McCullagh and Nelder say (page 31):
The link function relates the linear predictor $\eta$ to the expected value $\mu$ of a datum [outcome value] $y$.
The link function is what makes this a generalized linear model. Hidden in your call to glmer.nb() is a default choice of a log link function. That is, you are (perhaps without knowing it) modeling the log of the expected value of feeding with the linear predictor. Equivalently, the expected value of feeding is found by exponentiating the linear predictor.
In the way you've written your model, the fixed-effect part* of the linear predictor would be: $\beta_0$ + $\beta_1$ inf_cat + total_inf_cat. Here, $\beta_0$ is the intercept, $\beta_1$ is the regression coefficient for inf_cat, and the offset restricts the coefficient of total_inf_cat to be exactly 1. So the way you have written the model, each 1 unit increase of total_inf_cat would give you an $e$-fold increase of feeding.
Does that make sense in terms of your understanding of the subject matter? Probably not, if you think that total_inf_cat is the total available duration and that the amount of feeding should be directly proportional to total_inf_cat, other things being equal. Then the log link should be accompanied by an offset of log(total_inf_cat), to maintain that direct proportionality.
There are other link-function choices for negative binomial models, with a square-root and an identity link also available for glmer.nb(). As Demetri Pananos says in another answer, if you do choose a different link function you would have to choose a different offset to keep proportionality between feeding and total_inf_cat. For example, your model with the offset of total_inf_cat would make sense if you specified the identity link in your call to glmer.nb(). This page and its links discuss the choices. With count data, the log link typically makes the most sense.
Finally, negative binomial models are most useful with count data that have more variance than would be expected from a Poisson model, where the variance necessarily equals the mean. If feeding is a continuous variable (amount of time spent feeding) instead, you might be better off with a different type of model. But with a generalized linear model of any type, the same principle of choosing an offset to give the desired behavior combined with the link function holds.
*I assume that female represents a set of IDs of the mothers. Then the (1|female) random-effect part of the model allows for different individuals to have different intercept values.
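The same log-offset logic carries over to negative binomial models. Below is a hedged sketch with simulated overdispersed counts, using MASS::glm.nb rather than the glmer.nb model from the question; the variables total and grp are hypothetical stand-ins for total_inf_cat and inf_cat:
library(MASS)
set.seed(4)
total <- runif(300, 10, 100)                    # hypothetical total durations (exposure)
grp   <- rbinom(300, 1, 0.5)                    # stand-in binary predictor
mu    <- total * exp(0.3 + 0.7 * grp)           # rate multiplied by exposure
y     <- rnbinom(300, mu = mu, size = 2)        # overdispersed counts
fit   <- glm.nb(y ~ grp + offset(log(total)))
coef(fit)                                       # intercept near 0.3, grp near 0.7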
52,242 | Standard deviation for a half-normal distribution | The half normal distribution is usually parametrised so that the $\sigma$ from the corresponding normal distribution is its scale parameter. However, this is not the standard deviation of it, which is $\sigma\sqrt{1-\frac{2}{\pi}}$, see https://en.wikipedia.org/wiki/Half-normal_distribution. It should be intuitive that the sd is smaller than $\sigma$: there is more variance if observations can deviate on both sides of zero; folding one half onto the other (so that a region that used to hold observations is emptied and those observations are moved to where the other observations already are) lowers the variation.
Regarding your question: If you want a proper random sample from the normal distribution, this is not correct, because although the normal distribution is symmetric, samples from it are not perfectly symmetric. In particular, the probability is zero that if you have $x$ in a normal sample you also have $-x$ in it. The number of
observations left and right of 0 can be the same, but more often than not, due to random variation, it won't be the same.
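A one-line check of that formula in R (with $\sigma = 1$):
set.seed(7)
x <- abs(rnorm(1e6))                               # half-normal sample with sigma = 1
c(empirical = sd(x), theoretical = sqrt(1 - 2/pi)) # both about 0.603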
52,243 | Standard deviation for a half-normal distribution | If you randomly assign a sign (+ or -) to each member of the sample generated from a half-normal distribution, the result will be indistinguishable from a sample generated from the corresponding unfolded normal distribution.
Your method wouldn't work - for example, if your sample size is 2, it would be very surprising to draw the sample 4, -4 from a normal distribution with mean zero and variance 1, since it would involve two separate 4-sigma events; but it would be much less surprising to generate it using your method from a single sample from the corresponding half-normal distribution, as that would only involve a single 4-sigma event.
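A small R illustration of the first claim, assuming a standard normal parent: attaching random signs to half-normal draws gives back an ordinary normal sample:
set.seed(8)
h <- abs(rnorm(1e4))                                  # half-normal draws
y <- h * sample(c(-1, 1), length(h), replace = TRUE)  # attach random signs
c(mean = mean(y), sd = sd(y))                         # about 0 and 1
shapiro.test(y[1:5000])                               # typically no evidence against normality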
52,244 | Standard deviation for a half-normal distribution | With data $x_1, x_2, \ldots, x_n,$ you propose synthesizing a dataset of $2n$ values $|x_1|, -|x_1|, |x_2|, -|x_2|, \ldots, |x_n|, -|x_n|.$ Because each absolute value $|x_i|$ balances its negative $-|x_i|,$ the mean is zero. The usual standard deviation estimator therefore reduces to the square root of
$$\frac{1}{2n-1}\left((|x_1|-0)^2 + (-|x_1|-0)^2 + \cdots + (-|x_n|-0)^2\right) = \frac{1}{n-1/2}\sum_{i=1}^n x_i^2.$$
As we will see, this needs a slight modification to work well.
One way to derive an answer uses the method of maximum likelihood. One definition of the half-normal distribution with scale parameter $\sigma$ (the standard deviation of the corresponding full normal) is that the probability density of any value $x\ge 0$ is proportional to $\exp(-(x/\sigma)^2/2) / \sigma.$ (Notice how extremely close that is to the definition of a Normal distribution: the only difference is the restriction $x\ge 0.$)
Thus, given a dataset of (absolute) values $\mathbf{x}=x_1, x_2, \ldots, x_n$ drawn independently from such a distribution, the log likelihood takes the form
$$\Lambda(\sigma, \mathbf{x}) = C(n) -\sum_{i=1}^n \left(\log\sigma + \frac{1}{2}\left(\frac{x_i}{\sigma}\right)^2\right)$$
which attains its maximum either as $\sigma\to0,$ as $\sigma\to\infty,$ or where the derivative of $\Lambda$ is zero; that is, at the solutions to
$$0 =\frac{\mathbf{d}}{\mathbf{d}\sigma}\Lambda(\sigma,\mathbf x) = -\frac{n}{\sigma} + \frac{1}{\sigma^3}\sum_{i=1}^n x_i^2.$$
You can check that (unless all the $x_i$ are zero) the maximum is not attained in the limits at $0$ or $\infty.$ (When all the $x_i$ are zero, the unique maximum occurs as $\hat\sigma\to0.$) The resulting estimate is
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n x_i^2.$$
The maximum likelihood estimate of $\hat\sigma$ will be the square root of this quantity.
Now suppose, as you propose, we were to replace the data $(x_i)$ with a dataset twice this size by introducing the negatives of the $x_i.$ This has the following obvious effects:
The count doubles from $n$ to $2n.$
The mean of the new data is $0.$
Therefore the variance of the new data is
$$s^2 = \frac{1}{2n}\left(\sum_{i=1}^n (x_i-0)^2 + \sum_{i=1}^n (-x_i-0)^2\right) = \frac{1}{2n}\sum_{i=1}^n 2x_i^2 = \hat\sigma^2$$
provided we compute it using a denominator of $2n$ rather than a denominator of $2n-1$ as I sketched at the outset. With this small modification, your procedure is identical to the maximum likelihood estimate.
It is worthwhile to inquire what properties this estimator has. The property usually invoked to justify using $n-1$ in the denominator is that $\hat \sigma^2$ be an unbiased estimator of the variance. Let us therefore compute that expectation:
$$E[\hat\sigma^2] = E\left[\frac{1}{n}\sum_{i=1}^n x_i^2\right] = \frac{1}{n}\sum_{i=1}^n E\left[x_i^2\right] = \frac{1}{n}\sum_{i=1}^n \sigma^2 = \sigma^2.$$
$\hat \sigma^2$ is an unbiased estimator of $\sigma^2,$ the variance of the corresponding full normal distribution.
This reinforces the growing sense that your estimator is a good one. Moreover, you may now exploit all the additional properties of maximum likelihood estimation to develop confidence intervals for $\sigma,$ etc. Be careful, though, not to make the mistake of using $2n$ for the dataset size! You have only $n$ data values and the uncertainties in your estimates and predictions need to reflect that number rather than being based on the larger $2n.$
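A quick simulation check of this estimator (a sketch, with scale parameter $\sigma = 2$):
set.seed(9)
sigma <- 2
x <- abs(rnorm(1e5, sd = sigma))   # half-normal data with scale sigma
sqrt(mean(x^2))                    # sigma_hat, close to 2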
52,245 | What does "even if the evidence remains correlational" mean? | Both the cited finding that, "People who trust the media more are more knowledgeable about politics and the news" and the cited finding that, "The more people trust science, the more scientifically literate they are", are results found in observational data. It is well established that it is difficult to infer causality from observational data. As a result, these are referred to as correlational. That is, it has not been established that trusting the media causes people to become more knowledgeable of these topics; likewise we don't know that trust in science causes people to become more scientifically literate. For example, it could be that being more educated leads to both greater knowledge and trust in the media, and it could just as easily be that being more scientifically literate causes people to trust science more.
The following claim ("people who trust more should get better at figuring out whom to trust") does not logically follow from the prior claims. As a result, the causal status of the prior claims is not that important in supporting its truth. Instead, it seems to be an appeal to intuition.
The "evidence" is the two findings that had just been cited. | What does "even if the evidence remains correlational" mean? | Both the cited finding that, "People who trust the media more are more knowledgeable about politics and the news" and the cited finding that, "The more people trust science, the more scientifically li | What does "even if the evidence remains correlational" mean?
52,246 | What does "even if the evidence remains correlational" mean? | It means, that changing one of those variables may, as well as may not, lead to changing the other. We can not apriori say which is how much likely.
Imagine, that you can force somebody to increase the knowledge about science. If evidence is correlational it may or may not result in changing the trust. If there existed causal evidence, that knowledge causes trust, it is highly likely, that interventionally changing (for example by motivating to learn) knowledge of a person will have the effect of changing the trust.
Also the other way. Imagine, that you can change somebody's trust. If evidence is correlational, changing somebody's trust to science, may, or may not, result in gaining knowledge. Causal evidence make such effect much more likely.
The reasons for such distinctions are many.
One of them is that correlation do not distinguish direction. There is possibility, that only knowledge causes trust, but not the other way. Also, there is possibility, that first people gain trust, then knowledge. Also, there is possibility, that the relation is bicausal: when someone gains some knowledge, gains some trust, but trust then motivates to gain even more knowledge, which leads to even higher rust.
The other is, that maybe such relation do not exist directly, but it is caused by third variable. For example studying in college. If person goes to college such person both gain knowledge, and is persuaded, that science make sense, because meets a lot of people, who trust science, and make a living of it. Unless we make special analysis, which may take such things into consideration, we do not know for sure if causal relation exists at all.
The third example is that maybe people, which are analysed, and such correlational evidence is derived are selected in a way, that "makes" correlation without any relation between them. For example if we make such analysis on the group of people, who are successful scientist, assuming, that success requires both knowledge and trust in what they are doing, such correlation may appear, but changing one of the variables will not result in any reaction from the other. | What does "even if the evidence remains correlational" mean? | It means, that changing one of those variables may, as well as may not, lead to changing the other. We can not apriori say which is how much likely.
52,247 | What does "even if the evidence remains correlational" mean? | Evidence being "correlational" (more often called "observational") means that this was passive observational evidence of a statistical association between things, without any controlled experimentation. For example, consider the claim that "[t]he more people trust science, the more scientifically literate they are." Presumably the authors are referring to some passive observational evidence that trust in science was correlated with scientific literacy amongst the observational group. Presumably this evidence did not involve any experimental intervention to manipulate one of these variables and then observe the later effect (or lack of effect) on the other.
An alternative way to acquire evidence about this matter would be to conduct a randomised controlled trial (RCT) where researchers intervene in some way to affect one of the variables under study and then observe whether there is any change in the other variable over a period of time. (Usually the intervention is only for one group in the study, and another group is left as a "control" group.) For example, researchers might conduct some intervention that they think will exogenously increase the "trust in science" of the treatment group (without improving their scientific literacy) and then measure the later "scientific literacy" for both groups to see if their intervention has had any effect.
Evidence being "correlational" (more often called "observational") means that this was passive observational evidence of a statistical association between things, without any controlled experimentation. For example, consider the claim that "[t]he more people trust science, the more scientifically literate they are." Presumably the authors are referring to some passive observational evidence that trust in science was correlated with scientific literacy amongst the observational group. Presumably this evidence did not involve any experimental intervention to manipulate one of these variables and then observe the later effect (or lack of effect) on the other.
An alternative way to acquire evidence about this matter would be to conduct a randomised controlled trial (RCT) where researchers intervene in some way to affect one of the variables under study and then observe whether there is any change in the other variable over a period of time. (Usually the intervention is only for one group in the study, and another group is left as a "control" group.) For example, researchers might conduct some intervention that they think will exogenously increase the "trust in science" of the treatment group (without improving their scientific literacy) and then measure the later "scientific literacy" for both groups to see if their intervention has had any effect. | What does "even if the evidence remains correlational" mean?
Evidence being "correlational" (more often called "observational") means that this was passive observational evidence of a statistical association between things, without any controlled experimentatio |
52,248 | What does "even if the evidence remains correlational" mean? | Trust in Science -> Scientific Literacy is observationally equivalent with Scientific Literacy -> Trust in Science. Similarly, Education -> Trust in Science & Education -> Scientific Literacy is also observationally equivalent with Trust in Science -> Scientific Literacy. So when they say "even if this evidence remains correlational" they're saying they can't rule out these alternative explanations which could induce the same correlation without there being a causal relationship where Trust in Science -> Scientific Literacy. The next phrase is then a claim that despite being unable to rule out reverse causality or omitted variable bias based on the correlation, nevertheless it makes sense that there would be a causal relationship where Trust in Science -> Scientific Literacy. | What does "even if the evidence remains correlational" mean? | Trust in Science -> Scientific Literacy is observationally equivalent with Scientific Literacy -> Trust in Science. Similarly, Education -> Trust in Science & Education -> Scientific Literacy is also | What does "even if the evidence remains correlational" mean?
52,249 | What does "even if the evidence remains correlational" mean? | What does the bolded phrase mean?
I agree with you that it means the evidence isn't causal. In other words, no causal inference was tested so they only note the correlation in the survey data.
Does it mean that even if the evidence isn't causal, the next phrase still makes sense?
Yes. Even if the relationship isn't causal, only an experimental correlation, we are still going to attribute causation to the correlation.
And what is the "evidence" they are referring to?
I think it is the evidence from the previous paragraph:
Yamagishi and his colleagues demonstrated the learning advantages of being trusting. Their experiments were similar to trust games, but the participants could interact with each other before making the decision to transfer money (or not) to the other. The most trusting participants were better at figuring out who would be trustworthy, or to whom they should transfer money.
52,250 | Is it possible to use generated non-normal errors with a linear regression model | Not sure what the question is here. First of all, yes, you can simulate data using any data generating process. However, if what you want is to compare the scenario to data simulated from a normal distribution, you just need to make sure that in both cases the theoretical means and variances are the same.
For example, if:
$$y=X\beta +e$$
$$e\sim N(0,1)$$
then you can compare it to:
$$ e \sim U \left( -\frac{\sqrt{12}}{2},\frac{\sqrt{12}}{2} \right) $$
because in both cases the mean is 0 and variance 1.
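A quick check in R that the two error distributions really are matched in mean and variance (note that $\sqrt{12}/2 = \sqrt{3}$):
set.seed(5)
e_norm <- rnorm(1e5)
e_unif <- runif(1e5, min = -sqrt(3), max = sqrt(3))
c(var_normal = var(e_norm), var_uniform = var(e_unif))   # both close to 1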
52,251 | Is it possible to use generated non-normal errors with a linear regression model | is it ok to generate data from latter distributions to build Linear Regression Models?
You can simulate whatever you want. What matters more perhaps is the type of conclusions you want to draw from the results: I can imagine people using linear regression when the true error distribution is $t$-distributed, and it is valuable to know how linear regression performs vs other techniques. But is anyone using ordinary linear regression for exponential, or Weibull distributed processes?
Since this has the r tag, perhaps what you are really asking for is how to draw samples from non-normal distributions in R.
This is actually really simple and there are a lot of built-in choices, including each choice you mentioned:
rnorm(n, mu, sd) for n random draws from a normal distribution with mean mu and variance sd^2;
rt(n, df) for n random draws from a $t$-distribution with df degrees of freedom. You could multiply these draws by some value to increase their variance if you wanted to compare them to the normally distributed ones;
rlnorm(n, meanlog, sdlog) for n random draws from a log-normal distribution with mean and standard deviation on the log scale given by meanlog and sdlog, respectively;
rbeta(n, shape1, shape2) to draw n samples from a beta distribution with $\alpha$ given by shape1 and $\beta$ by shape2;
rweibull(n, shape, scale) for n samples from a Weibull distribution with $k$ given by shape and $\lambda$ by scale;
rexp(n, rate) for n samples from an exponential distribution with $\lambda$ given by rate.
There are many other distributions available in base R, as well as from packages you can install. You could even write your own random number generator based on runif to draw samples from an arbitrary probability distribution. For example, summing two uniform distributions with the same $a$ and $b$ will give you a triangular distribution.
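As a tiny check of the triangular-distribution remark above:
set.seed(10)
tri <- runif(1e5) + runif(1e5)   # sum of two independent Uniform(0, 1) draws
hist(tri, breaks = 50, main = "Sum of two Uniform(0,1) variables")   # triangular on (0, 2)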
You may then also want to set a seed before making a call to these functions. This forces the pseudo-random number generator to start from a fixed point, so any time you run your script, you will end up with the same 'random' numbers. This can be done by placing set.seed(xxx) at the top of your script, where xxx is some arbitrarily chosen integer. This makes your results reproducible.
52,252 | Is it possible to use generated non-normal errors with a linear regression model | I think you'll want to revisit how regression simulation is supposed to work. The idea is to code the Data Generating Process
(DGP) and as a result you will know what is true in the population. Then you build a sample using this DGP and run a regression on this sample (can be linear regression or Huber or any other type). You'll want to see if you recover the population parameters and how your conclusion changes as you change the DGP as desired. This way you can answer any questions you might have (such as investigating robustness to outliers). To simulate the model $Y=\beta_0+\beta_1*X+\epsilon$, you do the following:
Independently simulate the error term $\epsilon$ from any assumed distribution (can be any distribution you please), and the independent variable from any assumed distribution (can be any distribution you please). You can build in a dependency of $\epsilon$ and $X$ but keep in mind that (if X is a random variable) your OLS model necessarily assumes that $E(\epsilon_i|X_i)=0$, so this condition must be satisfied by whatever dependency relationship you decide to model. With X non-random, there can be no dependency between $\epsilon$ and $X$, by definition.
Assume values for $\beta_0$ and $\beta_1$. It looks like you want to assume that $\beta_0=0$ and $\beta_1=0$. That's fine but then X does not even enter the picture here. Your model ignores X and is literally $Y_i$=$\epsilon_i$. There is loss of generality once you assume $\beta_1=0$. You can assume $\beta_1=4$ without loss of generality, but not $\beta_1=0$.
Generate $Y$ values using the DGP. You do not assume another distribution for $Y$. The distribution of $Y_i$ in your case is identical to the distribution of $\epsilon_i$. Whatever you assumed for $\epsilon_i$ holds for $Y_i$ because $Y_i$=$\epsilon_i$.
This clarification of how regression simulation works should answer all your questions.
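A minimal R sketch of this recipe (with made-up parameter values and uniform, non-normal errors):
set.seed(6)
beta0 <- 1; beta1 <- 4
x   <- rnorm(500)
eps <- runif(500, min = -sqrt(3), max = sqrt(3))   # mean 0, variance 1, not normal
y   <- beta0 + beta1 * x + eps
coef(lm(y ~ x))                                    # close to (1, 4)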
52,253 | Is it possible to use generated non-normal errors with a linear regression model | It is certainly possible to generate random variables from distributions other than the normal distribution in R. You can find a large list of probability distributions here. If you are interested in conducting simulation analysis looking at the effects of outliers, a natural thing to do would be to use a T-distribution for the error term, which allows you to vary the degrees-of-freedom parameter which affects the "fatness of the tails" of the distribution. This can be done by generating random variables using the rt function.
Here is an example where I simulate linear regression data using a normally distributed explanatory variable and error terms that are T-distributed with two degrees-of-freedom (giving infinite variance). This is an error distribution with "fat tails" so you get some large error terms in the data.
#Set regression parameters
n <- 200
beta0 <- 24.2
beta1 <- 0.8
#Generate simulated linear regression data
#Error terms are T-distributed with two degrees-of-freedom
set.seed(81301856)
ERR <- rt(n, df = 2)
X <- rnorm(n, mean = 40, sd = 8)
Y <- beta0 + beta1*X + ERR
#Show scatterplot of the data
plot(X, Y, ylim = c(0, 100), main = 'Scatterplot of simulated regression data')
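A possible follow-up (not part of the original answer): fit the simulated data by ordinary least squares and by a robust Huber M-estimator to see how the heavy-tailed errors affect each. This reuses X and Y from the code above and assumes the MASS package is available:
library(MASS)
fit_ols    <- lm(Y ~ X)
fit_robust <- rlm(Y ~ X)    # Huber M-estimation by default
rbind(ols = coef(fit_ols), robust = coef(fit_robust))   # typically both near (24.2, 0.8)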
52,254 | Confused about rejection region and P-value | I think this will be best understood with an example. Let us solve a hypothesis test for the mean height of people in a country. We have the information about the heights of a sample of people in that country. First, we define our null and alternative hypotheses:
$H_0: \mu \geq a$
$H_1: \mu < a$
And (let me change your notation) we have our test statistic $Z$. Now, we know two things about this test statistic:
We know the formula for this statistic: $Z=\dfrac{\bar{x}-\mu}{\sigma/\sqrt{n}}$
We know the distribution it follows. For simplicity in this explanation, let me assume it follows a normal distribution. We would have that $Z\sim N(0,1)$.
Now the important part: We do not know the true population value of $\mu$ (this means, we do not know the true mean height of all the people in the country, if we wanted to know that we would need to know the height of all the citizens in that country). But we can say: hey, let's assume that the value for $\mu$ is the value stated in $H_0$ (a.k.a. let's assume that $\mu=a$)
And we pose the key question: How likely is it for $\mu$ to take the value $a$ given the information that we know from the sample data?
Now that we are assuming that $\mu=a$, we can obtain the value of our test statistic under the null hypothesis (this is, assuming that $\mu=a$).
There can be two possible results:
If $a$ is not a likely value for $\mu$ to have then the statistic $\hat{Z}$ value will not fit well in the distribution $Z$ follows and we will reject $H_0$
If $a$ is a likely value for $\mu$ to have, then the statistic $\hat{Z}$ value will fit well in the distribution $Z$ follows and we will fail to reject $H_0$:
And finally here comes into play the rejection region and the p-value.
We will consider that the tail of the distribution (in this case, the left tail, as stated by $H_1$) contains values that are not likely for $\hat{Z}$, so if the observed value is close enough to the tail, we reject $H_0$. How close to the tail? That is stated by the significance level $\alpha$. The rejection region is:
$$RR=\{Z {\ }s.t.{\ } Z < -Z_{\alpha}\}$$
If we take, for example, $\alpha=0.05$ then the rejection region is $$RR=\{Z {\ }s.t.{\ } Z < -Z_{0.05}\}= \{Z {\ }s.t.{\ } Z < -1.645\}$$
And the p-value is simply the probability of obtaining a value at least as extreme as the one from our sample, or in other words, if the sample value of our statistic is $\hat{Z}$ then the p-value is $$p-value=P(Z<\hat{Z})$$
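As a small numerical illustration (not from the original answer; the observed value $\hat{Z}=-2.1$ is made up), the rejection-region check and the p-value check can both be done in R and lead to the same decision:
alpha <- 0.05
z_crit <- qnorm(alpha)   # -1.645, so the rejection region is {Z < -1.645}
z_hat <- -2.1            # hypothetical observed value of the test statistic
p_val <- pnorm(z_hat)    # P(Z < z_hat) under the null hypothesis
z_hat < z_crit           # TRUE: the observation falls in the rejection region
p_val < alpha            # TRUE: the p-value comparison gives the same decision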
In one image, in red the rejection region, and in green the p-value.
Remark: These plots have been made assuming that we are doing a left-sided test. Considering a right-sided or two-sided test would yield similar but not identical images. | Confused about rejection region and P-value | I think this will be best understood with an example. Let us solve a hypothesis test for the mean height of people in a country. We have the information about the heights of a sample of people in that
I think this will be best understood with an example. Let us solve a hypothesis test for the mean height of people in a country. We have the information about the heights of a sample of people in that country. First, we define our null and alternative hypotheses:
$H_0: \mu \geq a$
$H_1: \mu < a$
And (let me change your notation) we have our test statistic $Z$. Now, we know two things about this test statistic:
We know the formula for this statistic: $Z=\dfrac{\bar{x}-\mu}{\sigma/\sqrt{n}}$
We know the distribution it follows. For simplicity in this explanation, let me assume it follows a normal distribution. We would have that $Z\sim N(0,1)$.
Now the important part: We do not know the true population value of $\mu$ (this means, we do not know the true mean height of all the people in the country, if we wanted to know that we would need to know the height of all the citizens in that country). But we can say: hey, let's assume that the value for $\mu$ is the value stated in $H_0$ (a.k.a. let's assume that $\mu=a$)
And we pose the key question: How likely is it for $\mu$ to take the value $a$ given the information that we know from the sample data?
Now that we are assuming that $\mu=a$, we can obtain the value of our test statistic under the null hypothesis (this is, assuming that $\mu=a$).
There can be two possible results:
If $a$ is not a likely value for $\mu$ to have then the statistic $\hat{Z}$ value will not fit well in the distribution $Z$ follows and we will reject $H_0$
If $a$ is a likely value for $\mu$ to have, then the statistic $\hat{Z}$ value will fit well in the distribution $Z$ follows and we will fail to reject $H_0$:
And finally here comes into play the rejection region and the p-value.
We will consider that the tail of the distribution (in this case, the left tail, as stated by $H_1$) contains values that are not likely for $\hat{Z}$, so if the observed value is close enough to the tail, we reject $H_0$. How close to the tail? That is stated by the significance level $\alpha$. The rejection region is:
$$RR=\{Z {\ }s.t.{\ } Z < -Z_{\alpha}\}$$
If we take, for example, $\alpha=0.05$ then the rejection region is $$RR=\{Z {\ }s.t.{\ } Z < -Z_{0.05}\}= \{Z {\ }s.t.{\ } Z < -1.645\}$$
And the p-value is simply the probability of obtaining a value at least as extreme as the one from our sample, or in other words, if the sample value of our statistic is $\hat{Z}$ then the p-value is $$p-value=P(Z<\hat{Z})$$
In one image, in red the rejection region, and in green the p-value.
Remark: These plots have been made assuming that we are doing a left-sided test. Considering a right-sided or two-sided test would yield similar but not identical images. | Confused about rejection region and P-value
I think this will be best understood with an example. Let us solve a hypothesis test for the mean height of people in a country. We have the information about the heights of a sample of people in that |
52,255 | Confused about rejection region and P-value | The rejection region is fixed beforehand. If the null hypothesis is true then some $\alpha \%$ of the observations will be in the region.
The p-value is not the same as this $\alpha \%$.
The p-value is computed for each separate observation, and can be different for two observations that both fall inside the rejection region.
The p-value indicates how extreme* a value is. And expresses this in terms of a probability. This expression in terms of a probability could be seen as the quantile of the outcome when the potential outcomes are ranked in decreasing order of extremity. The more extreme the observation, the lower the quantile.
In short: The rejection region can be seen as the region of observations for which the associated quantile or p-value is lower than some value.
See also: https://stats.stackexchange.com/questions/tagged/critical-value
* What is and what is not considered extreme is not well defined here and might be considered arbitrary, but depending on the situation there might be good reasons to choose a particular definition. For example, think about one-sided and two-sided tests in which case different sorts of extremities are chosen.
Because of the variations in choice for 'extremeness', it might be that you encounter a situation where some observation is inside the rejection region but has a p-value that is larger than the significance level $\alpha$. This is the case when the two use a different definition. But typically the p-value and rejection region should relate to the same definition of 'extremeness'. | Confused about rejection region and P-value | The rejection region is fixed beforehand. If the null hypothesis is true then some $\alpha \%$ of the observations will be in the region.
The p-value is not the same as this $\alpha \%$.
The p-value i | Confused about rejection region and P-value
The rejection region is fixed beforehand. If the null hypothesis is true then some $\alpha \%$ of the observations will be in the region.
The p-value is not the same as this $\alpha \%$.
The p-value is computed for each separate observation, and can be different for two observations that both fall inside the rejection region.
The p-value indicates how extreme* a value is. And expresses this in terms of a probability. This expression in terms of a probability could be seen as the quantile of the outcome when the potential outcomes are ranked in decreasing order of extremity. The more extreme the observation, the lower the quantile.
In short: The rejection region can be seen as the region of observations for which the associated quantile or p-value is lower than some value.
See also: https://stats.stackexchange.com/questions/tagged/critical-value
* What is and what is not considered extreme is not well defined here and might be considered arbitrary, but depending on the situation there might be good reasons to choose a particular definition. For example, think about one-sided and two-sided tests in which case different sorts of extremities are chosen.
Because of the variations in choice for 'extremeness', it might be that you encounter a situation where some observation is inside the rejection region but has a p-value that is larger than the significance level $\alpha$. This is the case when the two use a different definition. But typically the p-value and rejection region should relate to the same definition of 'extremeness'. | Confused about rejection region and P-value
The rejection region is fixed beforehand. If the null hypothesis is true then some $\alpha \%$ of the observations will be in the region.
The p-value is not the same as this $\alpha \%$.
The p-value i |
52,256 | Where does the binary logistic regression model equation come from? | I wouldn’t say it was “derived”, but rather designed. In generalized linear models
$$C(Y|X)=E(Y|X)=X\beta$$
$C$ is a link function. For linear regression its inverse, $C^{-1}$, is an identity function; for logistic regression it’s the logit function. $Y$ is assumed to follow a Bernoulli distribution parametrized by probability of success $p$, which is also its mean. Since probability is bounded between zero and one while the linear predictor $X\beta$ is not, we need a transformation that links the two ranges: the logit function is one such transformation, probit is another, and there are some other possible choices.
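As a small illustration (an added sketch with simulated data; the coefficients 0.5 and 1.2 are made up), both the logit and the probit transformation are available through the link argument of R's glm:
set.seed(1)
x <- rnorm(200)
p <- 1/(1 + exp(-(0.5 + 1.2*x)))       # success probabilities generated through a logit link
y <- rbinom(200, size = 1, prob = p)   # Bernoulli outcomes
fit_logit <- glm(y ~ x, family = binomial(link = "logit"))
fit_probit <- glm(y ~ x, family = binomial(link = "probit"))
coef(fit_logit)     # roughly recovers 0.5 and 1.2 on the logit scale
coef(fit_probit)    # different scale, because a different transformation is used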
I don’t have the book to hand, but would say it should be
$$
E[Y|X] = C^{-1}(X\beta)
$$
and
$$
C(E[Y|X]) = X\beta
$$ | Where does the binary logistic regression model equation come from? | I wouldn’t say it was “derived”, but rather designed. In generalized linear models
$$C(Y|X)=E(Y|X)=X\beta$$
$C$ is a link function. For linear regression its inverse, $C^{-1}$, is an identity function | Where does the binary logistic regression model equation come from?
I wouldn’t say it was “derived”, but rather designed. In generalized linear models
$$C(Y|X)=E(Y|X)=X\beta$$
$C$ is a link function. For linear regression its inverse, $C^{-1}$, is an identity function; for logistic regression it’s the logit function. $Y$ is assumed to follow a Bernoulli distribution parametrized by probability of success $p$, which is also its mean. Since probability is bounded between zero and one while the linear predictor $X\beta$ is not, we need a transformation that links the two ranges: the logit function is one such transformation, probit is another, and there are some other possible choices.
I don’t have the book to hand, but would say it should be
$$
E[Y|X] = C^{-1}(X\beta)
$$
and
$$
C(E[Y|X]) = X\beta
$$ | Where does the binary logistic regression model equation come from?
I wouldn’t say it was “derived”, but rather designed. In generalized linear models
$$C(Y|X)=E(Y|X)=X\beta$$
$C$ is a link function. For linear regression its inverse, $C^{-1}$, is an identity function |
52,257 | Where does the binary logistic regression model equation come from? | 1 Convenient transformation
The logistic function is often used as a mapping from $(-\infty,\infty)$ to $(0,1)$ (as others mention).
However the logistic function as link function also relates to being the canonical link function, or sometimes it relates to a particular mechanism/model. See the two points below.
2 Canonical link function
In short: the logit of the mean, $\log \left( \frac{p}{1-p} \right) $, is the natural parameter of the Bernoulli distribution. The logistic function is the inverse.
You derive it as follows:
The logit/logistic function relates to the Bernoulli/binary when you express the pdf as an exponential family in canonical form, ie when you use as parameter $\theta$ the natural parameter such that $\eta(\theta) = \theta$:
$$f(y\vert \theta) = h(y)e^{\eta(\theta) t(y) - A(\theta)} = h(y)e^{\theta t(y)- A(\theta)}$$
In the case of the binomial distribution the natural parameter is not the probability $p$ (or $\mu$ which equals $p$), which we typically use, but $\eta = \log \left( \frac{p}{1-p} \right)$
$$f(y\vert p) = e^{\log \left(\frac{p}{1-p}\right)y + \log(1-p)}$$
Then the linear function $X\beta$ is used to model this natural parameter:
$$\eta = \log \left( \frac{p}{1-p} \right) = X\beta$$
If we rewrite it such that $p$ is a function of $X\beta$, then you get
$$p = (1+e^{-X\beta})^{-1}$$
So the logistic function $p=(1+e^{-X\beta})^{-1}$ is the inverse of the logit function $X\beta =\log \left( \frac{p}{1-p} \right)$. The latter pops up in the equation above when we write the model with the natural parameter.
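A quick numerical check of this inverse relationship (an added sketch; the value $p = 0.37$ is arbitrary), using R's built-in binomial family object:
p <- 0.37
eta <- log(p/(1 - p))     # the natural parameter (logit of the mean)
1/(1 + exp(-eta))         # the logistic function recovers p = 0.37
binomial()$link           # "logit": the canonical link used by R's binomial family
binomial()$linkinv(eta)   # the family's inverse link gives the same 0.37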
3 Growth model or other differential equation relationship
The above, canonical link function, is an afterthought, and the history of the logistic function is older than when it was recognized as canonical link function. The use of a canonical link function can have advantages but there is no reason that the natural parameter needs to be some linear function.
An alternative reason for the use of the link function can be when it actually makes sense as a deterministic model. For instance, in growth models the logistic function can arise.
When the growth equals
$$f'= f(1-f)$$
Then the solution is the logistic function. You can see the above as exponential growth when $1-f\approx 1$ that becomes limited when $f$ approaches $1$. | Where does the binary logistic regression model equation come from? | 1 Convenient transformation
The logistic function is often used as a mapping from $(-\infty,\infty)$ to $(0,1)$ (as others mention).
However the logistic function as link function also relates to bein | Where does the binary logistic regression model equation come from?
1 Convenient transformation
The logistic function is often used as a mapping from $(-\infty,\infty)$ to $(0,1)$ (as others mention).
However the logistic function as link function also relates to being the canonical link function, or sometimes it relates to a particular mechanism/model. See the two points below.
2 Canonical link function
In short: the logit of the mean, $\log \left( \frac{p}{1-p} \right) $, is the natural parameter of the Bernoulli distribution. The logistic function is the inverse.
You derive it as follows:
The logit/logistic function relates to the Bernoulli/binary when you express the pdf as an exponential family in canonical form, ie when you use as parameter $\theta$ the natural parameter such that $\eta(\theta) = \theta$:
$$f(y\vert \theta) = h(y)e^{\eta(\theta) t(y) - A(\theta)} = h(y)e^{\theta t(y)- A(\theta)}$$
In the case of the binomial distribution the natural parameter is not the probability $p$ (or $\mu$ which equals $p$), which we typically use, but $\eta = \log \left( \frac{p}{1-p} \right)$
$$f(y\vert p) = e^{\log \left(\frac{p}{1-p}\right)y + \log(1-p)}$$
Then the linear function $X\beta$ is used to model this natural parameter:
$$\eta = \log \left( \frac{p}{1-p} \right) = X\beta$$
If we rewrite it such that $p$ is a function of $X\beta$, then you get
$$p = (1+e^{-X\beta})^{-1}$$
So the logistic function $p=(1+e^{-X\beta})^{-1}$ is the inverse of the logit function $X\beta =\log \left( \frac{p}{1-p} \right)$. The latter pops up in the equation above when we write the model with the natural parameter.
3 Growth model or other differential equation relationship
The above, canonical link function, is an afterthought, and the history of the logistic function is older than when it was recognized as canonical link function. The use of a canonical link function can have advantages but there is no reason that the natural parameter needs to be some linear function.
An alternative reason for the use of the link function can be when it actually makes sense as a deterministic model. For instance, in growth models the logistic function can arise.
When the growth equals
$$f'= f(1-f)$$
Then the solution is the logistic function. You can see the above as exponential growth when $1-f\approx 1$ that becomes limited when $f$ approaches $1$. | Where does the binary logistic regression model equation come from?
1 Convenient transformation
The logistic function is often used as a mapping from $(-\infty,\infty)$ to $(0,1)$ (as others mention).
However the logistic function as link function also relates to bein |
52,258 | Where does the binary logistic regression model equation come from? | To me, this paper from John Mount was instructive. He derives the logistic regression formula using two approaches, one them using the maximum entropy principle. | Where does the binary logistic regression model equation come from? | To me, this paper from John Mount was instructive. He derives the logistic regression formula using two approaches, one them using the maximum entropy principle. | Where does the binary logistic regression model equation come from?
To me, this paper from John Mount was instructive. He derives the logistic regression formula using two approaches, one them using the maximum entropy principle. | Where does the binary logistic regression model equation come from?
To me, this paper from John Mount was instructive. He derives the logistic regression formula using two approaches, one them using the maximum entropy principle. |
52,259 | Where does the binary logistic regression model equation come from? | You obtain the sigmoid function by making the assumption that a linear combination of your inputs gives you the log-odds of the two classes. That is the log of the ratio of the probabilities of class $1$ to class $0$,
$$
X \beta = \log\left(\frac{p_1}{p_0}\right) = \log\left(\frac{p_1}{1-p_1}\right).
$$
This is an assumption made from the outset, similar to the linear regression assumption that the expected output is directly a linear combination of the inputs. The reason the log-odds is a common choice for the linear quantity is that its range is $(-\infty,\infty)$. You can see that the limit of the above function as $p_1 \rightarrow 0$ is $-\infty$, and as $p_1 \rightarrow 1$ it approaches $+\infty$. A linear combination of arbitrary inputs is an unbounded continuous number, so the target that it's modelling must also represent an unbounded continuous number.
It's easy to show that the inverse of the above expression is
$$
p_1 = \frac{1}{1 + \exp(-X \beta)}.
$$ | Where does the binary logistic regression model equation come from? | You obtain the sigmoid function by making the assumption that a linear combination of your inputs gives you the log-odds of the two classes. That is the log of the ratio of the probabilities of class | Where does the binary logistic regression model equation come from?
You obtain the sigmoid function by making the assumption that a linear combination of your inputs gives you the log-odds of the two classes. That is the log of the ratio of the probabilities of class $1$ to class $0$,
$$
X \beta = \log\left(\frac{p_1}{p_0}\right) = \log\left(\frac{p_1}{1-p_1}\right).
$$
This is an assumption made from the outset, similar to the linear regression assumption that the expected output is directly a linear combination of the inputs. The reason the log-odds is a common choice for the linear quantity is that its range is $(-\infty,\infty)$. You can see that the limit of the above function as $p_1 \rightarrow 0$ is $-\infty$, and as $p_1 \rightarrow 1$ it approaches $+\infty$. A linear combination of arbitrary inputs is an unbounded continuous number, so the target that it's modelling must also represent an unbounded continuous number.
It's easy to show that the inverse of the above expression is
$$
p_1 = \frac{1}{1 + \exp(-X \beta)}.
$$ | Where does the binary logistic regression model equation come from?
You obtain the sigmoid function by making the assumption that a linear combination of your inputs gives you the log-odds of the two classes. That is the log of the ratio of the probabilities of class |
52,260 | Where does the binary logistic regression model equation come from? | Contrary to some of the answers in this thread, I would like to give a derivation of the formula which I like.
Suppose we have a random variable that can take either of two classes $C_1$ or $C_2$. We are interested in finding the probability of $C_k$ conditioned on some observation $x$, i.e., we want to estimate $p(C_k\vert x)$. To model this, consider the following:
Using Baye's Theorem we have that
$$
\begin{aligned}
p(C_1\vert x) &= \frac{p(C_1)p(x\vert C_1)}{p(C_1)p(x\vert C_1) + p(C_2)p(x\vert C_2)}\\
&= \frac{p(C_1)p(x\vert C_1)}{p(C_1)p(x\vert C_1) + p(C_2)p(x\vert C_2)} \frac{(p(C_1)p(x\vert C_1))^{-1}}{(p(C_1)p(x\vert C_1))^{-1}}\\
&= \frac{1}{1 + \frac{p(C_2)p(x\vert C_2)}{p(C_1)p(x\vert C_1)}}\\
&= \frac{1}{1 + \exp\left(\log\left(\frac{p(C_2)p(x\vert C_2)}{p(C_1)p(x\vert C_1)}\right)\right)}\\
&= \frac{1}{1 + \exp\left(-\log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)\right)}\\
\end{aligned}
$$
Denoting $z(x)=\log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$, we arrive at the formula:
$$
p(C_1\vert x) = \frac{1}{1 + \exp(-z(x))}
$$
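As a concrete illustration (an added sketch with made-up priors and Gaussian class-conditional densities), the posterior computed this way is a logistic curve in $x$, and with equal-variance Gaussians the log-odds $z(x)$ is exactly linear in $x$:
prior1 <- 0.4; prior2 <- 0.6
mu1 <- 2; mu2 <- 0; sigma <- 1
x <- seq(-4, 6, by = 0.1)
z <- log((prior1*dnorm(x, mu1, sigma))/(prior2*dnorm(x, mu2, sigma)))  # log posterior odds z(x)
post1 <- 1/(1 + exp(-z))                                               # p(C1 | x)
coef(lm(z ~ x))                      # z(x) is linear in x for this generative model
plot(x, post1, type = "l", main = "p(C1|x) from Bayes' theorem is a logistic curve")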
In a logistic regression, we are assuming the existence of a vector $\boldsymbol\beta\in\mathbb{R}^M$ of weights such that $\boldsymbol\beta^T\phi(x) = \log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$, for some function $\phi:\mathbb{R}\to\mathbb{R}^M$ known as the basis function. That is, assuming that the latter is true, then the conditional probability $p(C_1\vert x)$ is given by
$$
p(C_1\vert x) = \frac{1}{1 + \exp(-\boldsymbol\beta^T\phi(x))}
$$
On a personal note, I believe that it is a bold claim to state that $\boldsymbol\beta^T\phi(x) = \log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$. I do not see how it is trivial to argue that this has to be the case. In fact, modeling the factor $z(x)$ directly, disregarding the underlying distributions, is what is known as a discriminative model. If we want to model explicitly the terms for $z(x)$ we would have a generative model. | Where does the binary logistic regression model equation come from? | Contrary to some of the answers in this thread, I would like to give a derivation of the formula which I like.
Suppose we have a random variable that can take either of two classes $C_1$ or $C_2$. We | Where does the binary logistic regression model equation come from?
Contrary to some of the answers in this thread, I would like to give a derivation of the formula which I like.
Suppose we have a random variable that can take either of two classes $C_1$ or $C_2$. We are interested in finding the probability of $C_k$ conditioned on some observation $x$, i.e., we want to estimate $p(C_k\vert x)$. To model this, consider the following:
Using Bayes' Theorem we have that
$$
\begin{aligned}
p(C_1\vert x) &= \frac{p(C_1)p(x\vert C_1)}{p(C_1)p(x\vert C_1) + p(C_2)p(x\vert C_2)}\\
&= \frac{p(C_1)p(x\vert C_1)}{p(C_1)p(x\vert C_1) + p(C_2)p(x\vert C_2)} \frac{(p(C_1)p(x\vert C_1))^{-1}}{(p(C_1)p(x\vert C_1))^{-1}}\\
&= \frac{1}{1 + \frac{p(C_2)p(x\vert C_2)}{p(C_1)p(x\vert C_1)}}\\
&= \frac{1}{1 + \exp\left(\log\left(\frac{p(C_2)p(x\vert C_2)}{p(C_1)p(x\vert C_1)}\right)\right)}\\
&= \frac{1}{1 + \exp\left(-\log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)\right)}\\
\end{aligned}
$$
Denoting $z(x)=\log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$, we arrive at the formula:
$$
p(C_1\vert x) = \frac{1}{1 + \exp(-z(x))}
$$
In a logistic regression, we are assuming the existence of a vector $\boldsymbol\beta\in\mathbb{R}^M$ of weights such that $\boldsymbol\beta^T\phi(x) = \log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$, for some function $\phi:\mathbb{R}\to\mathbb{R}^M$ known as the basis function. That is, assuming that the latter is true, then the conditional probability $p(C_1\vert x)$ is given by
$$
p(C_1\vert x) = \frac{1}{1 + \exp(-\boldsymbol\beta^T\phi(x))}
$$
On a personal note, I believe that it is a bold claim to state that $\boldsymbol\beta^T\phi(x) = \log\left(\frac{p(C_1)p(x\vert C_1)}{p(C_2)p(x\vert C_2)}\right)$. I do not see how it is trivial to argue that this has to be the case. In fact, modeling the factor $z(x)$ directly, disregarding the underlying distributions, is what is known as a discriminative model. If we want to model explicitly the terms for $z(x)$ we would have a generative model. | Where does the binary logistic regression model equation come from?
Contrary to some of the answers in this thread, I would like to give a derivation of the formula which I like.
Suppose we have a random variable that can take either of two classes $C_1$ or $C_2$. We |
52,261 | May using more features decrease the accuracy of a classifier? | This is an instructive encounter with Hughes phenomenon. Naïvely, one would think that the more information one has the better one can model a system and make predictions. However, this prejudice ignores the so-called curse of dimensionality.
Suppose for convenience that each feature (or variable) can only take on a finite number of values. In order to capture the characteristics of the data accurately, one first needs a large enough collection of data points that, so to speak, fills the feature space, i.e. one needs enough samples with each combination of values. Now, in practice, when you are presented with the data, you have a limited number of observations. If you have too many features, the feature space will have so many subregions with very few observations or none at all that your classifier will lose predictive power given that it did not get to learn the behaviour of the data in too many significant subregions.
This problem typically does not occur when you have a small number of variables because very few possible combinations are sharing the limited number of observations.
Whilst raising the number of features was helpful for the predictive power of the classifier initially, it became a liability once it meant that adding features would prevent the classifier from learning the behaviour of the data in too many sizeable regions. The classifier runs the risk of overfitting the data in the data-heavy observed regions.
The reasoning naturally extends to the case with continuous features.
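A small simulation sketch of this effect (not from the original answer; all numbers are made up): with a fixed, modest training-set size, only two informative features, and an increasing number of irrelevant ones, the out-of-sample accuracy of a simple classifier tends to drop:
set.seed(42)
n_train <- 50; n_test <- 500
accuracy <- sapply(c(2, 10, 40), function(p) {
  X <- matrix(rnorm((n_train + n_test)*p), ncol = p)
  y <- rbinom(n_train + n_test, 1, plogis(X[, 1] - X[, 2]))   # only the first two features matter
  dat <- data.frame(y = y, X)
  fit <- glm(y ~ ., data = dat[1:n_train, ], family = binomial)
  pred <- predict(fit, dat[-(1:n_train), ], type = "response") > 0.5
  mean(pred == (dat$y[-(1:n_train)] == 1))
})
accuracy   # held-out accuracy typically decreases as irrelevant features are added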
This article might help: https://towardsdatascience.com/the-curse-of-dimensionality-50dc6e49aa1e. | May using more features decrease the accuracy of a classifier? | This is an instructive encounter with Hughes phenomenon. Naïvely, one would think that the more information one has the better one can model a system and make predictions. However, this prejudice igno | May using more features decrease the accuracy of a classifier?
This is an instructive encounter with Hughes phenomenon. Naïvely, one would think that the more information one has the better one can model a system and make predictions. However, this prejudice ignores the so-called curse of dimensionality.
Suppose for convenience that each feature (or variable) can only take on a finite number of values. In order to capture the characteristics of the data accurately, one first needs a large enough collection of data points that, so to speak, fills the feature space, i.e. one needs enough samples with each combination of values. Now, in practice, when you are presented with the data, you have a limited number of observations. If you have too many features, the feature space will have so many subregions with very few observations or none at all that your classifier will lose predictive power given that it did not get to learn the behaviour of the data in too many significant subregions.
This problem typically does not occur when you have a small number of variables because very few possible combinations are sharing the limited number of observations.
Whilst raising the number of features was helpful for the predictive power of the classifier initially, it became a liability once it meant that adding features would prevent the classifier from learning the behaviour of the data in too many sizeable regions. The classifier runs the risk of overfitting the data in the data-heavy observed regions.
The reasoning naturally extends to the case with continuous features.
This article might help: https://towardsdatascience.com/the-curse-of-dimensionality-50dc6e49aa1e. | May using more features decrease the accuracy of a classifier?
This is an instructive encounter with Hughes phenomenon. Naïvely, one would think that the more information one has the better one can model a system and make predictions. However, this prejudice igno |
52,262 | May using more features decrease the accuracy of a classifier? | Adding too many predictors will lead to overfitting. Always. Take a look at our overfitting tag.
Don't just throw predictors into your model. (Cross-validation and regularization help somewhat, but they will not prevent all overfitting.)
Also: Why is accuracy not the best measure for assessing classification models? | May using more features decrease the accuracy of a classifier? | Adding too many predictors will lead to overfitting. Always. Take a look at our overfitting tag.
Don't just throw predictors into your model. (Cross-validation and regularization help somewhat, but th | May using more features decrease the accuracy of a classifier?
Adding too many predictors will lead to overfitting. Always. Take a look at our overfitting tag.
Don't just throw predictors into your model. (Cross-validation and regularization help somewhat, but they will not prevent all overfitting.)
Also: Why is accuracy not the best measure for assessing classification models? | May using more features decrease the accuracy of a classifier?
Adding too many predictors will lead to overfitting. Always. Take a look at our overfitting tag.
Don't just throw predictors into your model. (Cross-validation and regularization help somewhat, but th |
52,263 | May using more features decrease the accuracy of a classifier? | This has been answered but here are a few more tips. When creating a model, you must always be conscious of the bias/variance trade off curves of the model. If a model has many features, it will predict with high variance, causing less accurate results. Too few features, and the model will have high bias, causing the model to predict near the same value too often, which can also reduce accuracy. It's important to use proper feature engineering in order to find the optimum tradeoff between bias and variance. | May using more features decrease the accuracy of a classifier? | This has been answered but here are a few more tips. When creating a model, you must always be conscious of the bias/variance trade off curves of the model. If a model has many features, it will predi | May using more features decrease the accuracy of a classifier?
This has been answered but here are a few more tips. When creating a model, you must always be conscious of the bias/variance trade off curves of the model. If a model has many features, it will predict with high variance, causing less accurate results. Too few features, and the model will have high bias, causing the model to predict near the same value too often, which can also reduce accuracy. It's important to use proper feature engineering in order to find the optimum tradeoff between bias and variance. | May using more features decrease the accuracy of a classifier?
This has been answered but here are a few more tips. When creating a model, you must always be conscious of the bias/variance trade off curves of the model. If a model has many features, it will predi |
52,264 | What does it mean L1 loss is not differentiable? | $L_1$ loss uses the absolute value of the difference between the predicted and the actual value to measure the loss (or the error) made by the model. Saying that the absolute value (or modulus) function, i.e. $f(x) = |x|$, is not differentiable is a way of saying that its derivative is not defined on its whole domain. For the modulus function the derivative at $x=0$ is undefined, i.e. we have:
$$
\frac{d|x|}{dx} = \begin{cases}
-1, & x < 0 \\
1, & x > 0
\end{cases}
$$ | What does it mean L1 loss is not differentiable? | $L_1$ loss uses the absolute value of the difference between the predicted and the actual value to measure the loss (or the error) made by the model. The absolute value (or the modulus function), i.e. | What does it mean L1 loss is not differentiable?
$L_1$ loss uses the absolute value of the difference between the predicted and the actual value to measure the loss (or the error) made by the model. Saying that the absolute value (or modulus) function, i.e. $f(x) = |x|$, is not differentiable is a way of saying that its derivative is not defined on its whole domain. For the modulus function the derivative at $x=0$ is undefined, i.e. we have:
$$
\frac{d|x|}{dx} = \begin{cases}
-1, & x < 0 \\
1, & x > 0
\end{cases}
$$ | What does it mean L1 loss is not differentiable?
$L_1$ loss uses the absolute value of the difference between the predicted and the actual value to measure the loss (or the error) made by the model. The absolute value (or the modulus function), i.e. |
52,265 | What does it mean L1 loss is not differentiable? | I understand that derivative not exist at x=0, but what practical problems can arise from this fact?
$$ L = |x*a - y|; $$
$$ \frac{\partial L}{\partial a} = \dfrac{x\left(xa-y\right)}{\left|xa-y\right|} $$
When the loss equals zero for any sample you train your model with, the gradient calculation will need to divide the expression above by zero, which will cause an error.
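A tiny R illustration of that division by zero (not from the original answer; the numbers are made up):
x <- 2; y <- 6; a <- 3              # chosen so that x*a - y = 0, i.e. the loss is exactly zero
x*(x*a - y)/abs(x*a - y)            # the gradient expression above evaluates to 0/0, i.e. NaN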
This is usually mitigated by, for example, adding a small value to denominator when $L$ is zero. | What does it mean L1 loss is not differentiable? | I understand that derivative not exist at x=0, but what practical problems can arise from this fact?
$$ L = |x*a - y|; $$
$$ \frac{\partial L}{\partial a} = \dfrac{x\left(xa-y\right)}{\left|xa-y\righ | What does it mean L1 loss is not differentiable?
I understand that derivative not exist at x=0, but what practical problems can arise from this fact?
$$ L = |x*a - y|; $$
$$ \frac{\partial L}{\partial a} = \dfrac{x\left(xa-y\right)}{\left|xa-y\right|} $$
When the loss equals zero for any sample you train your model with, the gradient calculation will need to divide the expression above by zero, which will cause an error.
This is usually mitigated by, for example, adding a small value to denominator when $L$ is zero. | What does it mean L1 loss is not differentiable?
I understand that derivative not exist at x=0, but what practical problems can arise from this fact?
$$ L = |x*a - y|; $$
$$ \frac{\partial L}{\partial a} = \dfrac{x\left(xa-y\right)}{\left|xa-y\righ |
52,266 | What does it mean L1 loss is not differentiable? | +1 to both Tomasz and Alexey posts.
I would add that a good surrogate for the $L_1$ loss is the Pseudo-Huber loss function: $ L_{\delta }(x) =$ $\delta ^{2}\left({\sqrt {1+(x/\delta )^{2}}}-1\right)$ with $\delta = 1$. It allows us to approximate the $L_1$ rather faithfully away from $x=1$ and within $[-1,1]$ behaves like a quadratic. It has first (and second) derivatives everywhere. | What does it mean L1 loss is not differentiable? | +1 to both Tomasz and Alexey posts.
I would add that a good surrogate for the $L_1$ loss is the Pseudo-Huber loss function: $ L_{\delta }(x) =$ $\delta ^{2}\left({\sqrt {1+(x/\delta )^{2}}}-1\right)$ | What does it mean L1 loss is not differentiable?
+1 to both Tomasz and Alexey posts.
I would add that a good surrogate for the $L_1$ loss is the Pseudo-Huber loss function: $ L_{\delta }(x) =$ $\delta ^{2}\left({\sqrt {1+(x/\delta )^{2}}}-1\right)$ with $\delta = 1$. It allows us to approximate the $L_1$ rather faithfully away from $x=1$ and within $[-1,1]$ behaves like a quadratic. It has first (and second) derivatives everywhere. | What does it mean L1 loss is not differentiable?
+1 to both Tomasz and Alexey posts.
I would add that a good surrogate for the $L_1$ loss is the Pseudo-Huber loss function: $ L_{\delta }(x) =$ $\delta ^{2}\left({\sqrt {1+(x/\delta )^{2}}}-1\right)$ |
52,267 | regression with multiple independent variables vs multiple regressions with one independent variable | Note: at first I understood your question as 'making multiple regressions with one variable'; this gives rise to part 1, in which I explain the effect of an interaction term. In the figure for part 1, the left image relates to doing six different simple regressions (a different one for each single age class, resulting in six lines with different slopes).
But in hindsight it seems like your question relates more to 'two simple regressions versus one multiple regression'. While the interaction effect might play a role there as well (because a single simple regression does not allow you to include the interaction term while multiple regression does), the effects that are more commonly associated with it (the correlation between the regressors) are described in parts 2 and 3.
1 Difference due to interaction term
Below is a sketch of a hypothetical relationship for GPA as function of age and IQ. Added to this are the fitted lines for the two different situations.
Right image: If you add together the effects of two single simple linear regressions (with one independent variable each) then you can see this as obtaining a relationship for 1) the slope of GPA as function of IQ and 2) the slope of GPA as function of age. Together this relates to the curves of the one relation shifting up or down as function of the other independent parameter.
Left image: However, when you do a regression with the two independent variables at once then the model may also take into account a variation of the slope as a function of both age and IQ (when an interaction term is included).
For instance in the hypothetical case below the increase of GPA as function of increase in IQ is not the same for each age and the effect of IQ is stronger at lower age than at higher age.
2 Difference due to correlation
What if IQ and age are slightly correlated in practice?
The above explains the difference based on the consideration of the additional interaction term.
When IQ and age are correlated then the single regressions with IQ and age will partly measure effects of each other and this will be counted twice when you add the effects together.
You can consider single regression as perpendicular projection on the regressor vectors, but multiple regression will project on the span of vectors and use skew coordinates. See https://stats.stackexchange.com/a/124892/164061
The difference between multiple regression and single linear regressions can be seen as adding the additional transformation $(X^TX)^{-1}$.
Single linear regression
$$\hat \alpha = X^T Y$$
which is just the correlation (when scaled by the variance of each column in $X$) between the outcome $Y$ and the regressors $X$
Multiple linear regression
$$\hat \beta = (X^TX)^{-1} X^T Y$$
which includes a term $(X^TX)^{-1}$ which can be seen as a transformation of coordinates to undo the effect of counting the overlap of the effects multiple times.
See more here: https://stats.stackexchange.com/a/364566/164061 where the image below is explained
With single linear regression you use the effects $\alpha$ (based on perpendicular projections) while you should be using the effects $\beta$ (which incorporate the fact that the two effects of IQ and age might overlap).
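A small numeric illustration of this double counting (not from the original answer; the data are artificial and the variable names are chosen so they do not clash with the simulation code further down):
set.seed(2)
age2 <- rnorm(500)
IQ2 <- 0.6*age2 + rnorm(500)                    # regressors made positively correlated
GPA2 <- 0.5*IQ2 + 0.5*age2 + rnorm(500)
coef(lm(GPA2 ~ IQ2))["IQ2"]                     # simple-regression slope, inflated above 0.5
coef(lm(GPA2 ~ age2))["age2"]                   # simple-regression slope, inflated above 0.5
coef(lm(GPA2 ~ IQ2 + age2))[c("IQ2", "age2")]   # multiple regression recovers roughly 0.5 and 0.5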
3 Difference due to unbalanced design
The effect of correlation is particularly clear when the experimental design is not balanced and the independent variables correlate. In this case you can have effects like Simpson's paradox.
Code for the first image:
layout(matrix(1:2,1))
# sample of 1k people with different ages and IQ
IQ <- rnorm(10^3,100,15)
age <- sample(15:20,10^3,replace=TRUE)
# hypothetical model for GPA
set.seed(1)
GPA_offset <- 2
IQ_slope <- 1/100
age_slope <- 1/8
interaction <- -1/500
noise <- rnorm(10^3,0,0.05)
GPA <- GPA_offset +
IQ_slope * (IQ-100) +
age_slope * (age - 17.5) +
interaction * (IQ-100) * (age - 17.5) +
noise
# plotting with fitted models
cols <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,0.5)
cols2 <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,1)
plot(IQ,GPA,
col = cols[age-14], bg = cols[age-14], pch = 21, cex=0.5,
xlim = c(50,210), ylim = c(1.4,2.8))
mod <- lm(GPA ~ IQ*age)
for (i in c(15:20)) {
xIQ <- c(60,140)
yGPA <- coef(mod)[1] + coef(mod)[3] * i + (coef(mod)[2] + coef(mod)[4] * i) * xIQ
lines(xIQ, yGPA,col=cols2[i-14],lwd = 2)
text(xIQ[2], yGPA[2], paste0("age = ", i, " yrs"), pos=4, col=cols2[i-14],cex=0.7)
}
title("regression \n with \n two independent variables")
cols <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,0.5)
plot(IQ,GPA,
col = cols[age-14], bg = cols[age-14], pch = 21, cex=0.5,
xlim = c(50,210), ylim = c(1.4,2.8))
mod <- lm(GPA ~ IQ+age)
for (i in c(15:20)) {
xIQ <- c(60,140)
yGPA <- coef(mod)[1] + coef(mod)[3] * i + (coef(mod)[2] ) * xIQ
lines(xIQ, yGPA,col=cols2[i-14],lwd = 2)
text(xIQ[2], yGPA[2], paste0("age = ", i, " yrs"), pos=4, col=cols2[i-14],cex=0.7)
}
title("two regressions \n with \n one independent variable") | regression with multiple independent variables vs multiple regressions with one independent variable | Note at first I understood your question as 'making multiple regressions with one variable' this gives rise to part 1 in which I explain the effect of an interaction term. In the image of part one the | regression with multiple independent variables vs multiple regressions with one independent variable
Note: at first I understood your question as 'making multiple regressions with one variable'; this gives rise to part 1, in which I explain the effect of an interaction term. In the figure for part 1, the left image relates to doing six different simple regressions (a different one for each single age class, resulting in six lines with different slopes).
But in hindsight it seems like your question relates more to 'two simple regressions versus one multiple regression'. While the interaction effect might play a role there as well (because a single simple regression does not allow you to include the interaction term while multiple regression does), the effects that are more commonly associated with it (the correlation between the regressors) are described in parts 2 and 3.
1 Difference due to interaction term
Below is a sketch of a hypothetical relationship for GPA as function of age and IQ. Added to this are the fitted lines for the two different situations.
Right image: If you add together the effects of two single simple linear regressions (with one independent variable each) then you can see this as obtaining a relationship for 1) the slope of GPA as function of IQ and 2) the slope of GPA as function of age. Together this relates to the curves of the one relation shifting up or down as function of the other independent parameter.
Left image: However, when you do a regression with the two independent variables at once then the model may also take into account a variation of the slope as a function of both age and IQ (when an interaction term is included).
For instance in the hypothetical case below the increase of GPA as function of increase in IQ is not the same for each age and the effect of IQ is stronger at lower age than at higher age.
2 Difference due to correlation
What if IQ and age are slightly correlated in practice?
The above explains the difference based on the consideration of the additional interaction term.
When IQ and age are correlated then the single regressions with IQ and age will partly measure effects of each other and this will be counted twice when you add the effects together.
You can consider single regression as perpendicular projection on the regressor vectors, but multiple regression will project on the span of vectors and use skew coordinates. See https://stats.stackexchange.com/a/124892/164061
The difference between multiple regression and single linear regressions can be seen as adding the additional transformation $(X^TX)^{-1}$.
Single linear regression
$$\hat \alpha = X^T Y$$
which is just the correlation (when scaled by the variance of each column in $X$) between the outcome $Y$ and the regressors $X$
Multiple linear regression
$$\hat \beta = (X^TX)^{-1} X^T Y$$
which includes a term $(X^TX)^{-1}$ which can be seen as a transformation of coordinates to undo the effect of counting the overlap of the effects multiple times.
See more here: https://stats.stackexchange.com/a/364566/164061 where the image below is explained
With single linear regression you use the effects $\alpha$ (based on perpendicular projections) while you should be using the effects $\beta$ (which incorporate the fact that the two effects of IQ and age might overlap).
3 Difference due to unbalanced design
The effect of correlation is particularly clear when the experimental design is not balanced and the independent variables correlate. In this case you can have effects like Simpson's paradox.
Code for the first image:
layout(matrix(1:2,1))
# sample of 1k people with different ages and IQ
IQ <- rnorm(10^3,100,15)
age <- sample(15:20,10^3,replace=TRUE)
# hypothetical model for GPA
set.seed(1)
GPA_offset <- 2
IQ_slope <- 1/100
age_slope <- 1/8
interaction <- -1/500
noise <- rnorm(10^3,0,0.05)
GPA <- GPA_offset +
IQ_slope * (IQ-100) +
age_slope * (age - 17.5) +
interaction * (IQ-100) * (age - 17.5) +
noise
# plotting with fitted models
cols <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,0.5)
cols2 <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,1)
plot(IQ,GPA,
col = cols[age-14], bg = cols[age-14], pch = 21, cex=0.5,
xlim = c(50,210), ylim = c(1.4,2.8))
mod <- lm(GPA ~ IQ*age)
for (i in c(15:20)) {
xIQ <- c(60,140)
yGPA <- coef(mod)[1] + coef(mod)[3] * i + (coef(mod)[2] + coef(mod)[4] * i) * xIQ
lines(xIQ, yGPA,col=cols2[i-14],lwd = 2)
text(xIQ[2], yGPA[2], paste0("age = ", i, " yrs"), pos=4, col=cols2[i-14],cex=0.7)
}
title("regression \n with \n two independent variables")
cols <- hsv(0.2+c(0:5)/10,0.5+c(0:5)/10,0.7-c(0:5)/40,0.5)
plot(IQ,GPA,
col = cols[age-14], bg = cols[age-14], pch = 21, cex=0.5,
xlim = c(50,210), ylim = c(1.4,2.8))
mod <- lm(GPA ~ IQ+age)
for (i in c(15:20)) {
xIQ <- c(60,140)
yGPA <- coef(mod)[1] + coef(mod)[3] * i + (coef(mod)[2] ) * xIQ
lines(xIQ, yGPA,col=cols2[i-14],lwd = 2)
text(xIQ[2], yGPA[2], paste0("age = ", i, " yrs"), pos=4, col=cols2[i-14],cex=0.7)
}
title("two regressions \n with \n one independent variable") | regression with multiple independent variables vs multiple regressions with one independent variable
Note at first I understood your question as 'making multiple regressions with one variable' this gives rise to part 1 in which I explain the effect of an interaction term. In the image of part one the |
52,268 | regression with multiple independent variables vs multiple regressions with one independent variable | To explain a little more. Multiple regression tests for the unique contribution of each predictor. So let's take your example and assume that IQ and age are correlated.
If you run a regression with IQ only the contribution of IQ can be visualized like this (red part):
But once you add age to the analysis, it looks something like that:
As you can see the unique contribution (red part) of IQ is smaller, hence beta for IQ will dicrease in this analysis.
I hope this makes it clear why both analysis answer different question: First analysis, using only IQ as the predictor, tells you how much IQ contributes to predict GPA in total, while in the second analysis you can see the unique contribution of IQ to explain variation in GPA apart from age.
Keep in mind, that this is a simple exmaple and there can be other things going on like moderation, mediation or suppression which can change your interpretation of the results. | regression with multiple independent variables vs multiple regressions with one independent variable | To explain a little more. Multiple regression tests for the unique contribution of each predictor. So let's take your example and assume that IQ and age are correlated.
If you run a regression with IQ | regression with multiple independent variables vs multiple regressions with one independent variable
To explain a little more. Multiple regression tests for the unique contribution of each predictor. So let's take your example and assume that IQ and age are correlated.
If you run a regression with IQ only the contribution of IQ can be visualized like this (red part):
But once you add age to the analysis, it looks something like that:
As you can see, the unique contribution (red part) of IQ is smaller, hence the beta for IQ will decrease in this analysis.
I hope this makes it clear why the two analyses answer different questions: the first analysis, using only IQ as the predictor, tells you how much IQ contributes to predicting GPA in total, while in the second analysis you can see the unique contribution of IQ to explaining variation in GPA apart from age.
Keep in mind that this is a simple example and there can be other things going on, like moderation, mediation or suppression, which can change your interpretation of the results. | regression with multiple independent variables vs multiple regressions with one independent variable
To explain a little more. Multiple regression tests for the unique contribution of each predictor. So let's take your example and assume that IQ and age are correlated.
If you run a regression with IQ |
52,269 | regression with multiple independent variables vs multiple regressions with one independent variable | You can do that. It answers a different question.
If you include both independent variables then the results for each are controlling for the other. If you do them separately then they are not. | regression with multiple independent variables vs multiple regressions with one independent variable | You can do that. It answers a different question.
If you include both independent variables then the results for each are controlling for the other. If you do them separately then they are not. | regression with multiple independent variables vs multiple regressions with one independent variable
You can do that. It answers a different question.
If you include both independent variables then the results for each are controlling for the other. If you do them separately then they are not. | regression with multiple independent variables vs multiple regressions with one independent variable
You can do that. It answers a different question.
If you include both independent variables then the results for each are controlling for the other. If you do them separately then they are not. |
52,270 | regression with multiple independent variables vs multiple regressions with one independent variable | What this would do is answer drastically different questions.
Multiple regressions of one independent variable will give you an understanding of
how the target variable varies with each variable taken on its own
A regression with multiple independent variables would give you
coefficient estimates that let you know how the target variable
varies for a given change in the independent variable - controlling
for the other independent variables in the regression.
In the first case you would not be taking the impact of certain factors such as wealth, gender, ... into account when looking at the age coefficient on IQ.
If, for example, there is a disproportionate number of wealthy young people who have access to better education, better nutrition, etc., that will be implicitly absorbed into the "age" coefficient of your one-variable regression. The regression might show that young people are "smarter", which might be true given your dataset, but the underlying factor might be attributable to wealth instead. | regression with multiple independent variables vs multiple regressions with one independent variable | What this would do is answer drastically different questions.
Multiple regressions of one independent variable will give you an understand of
the target variable varies with each output of each vari | regression with multiple independent variables vs multiple regressions with one independent variable
What this would do is answer drastically different questions.
Multiple regressions of one independent variable will give you an understanding of
how the target variable varies with each variable taken on its own
A regression with multiple independent variables would give you
coefficient estimates that let you know how the target variable
varies for a given change in the independent variable - controlling
for the other independent variables in the regression.
In the first case you would not be taking the impact of certain factors such as wealth, gender, ... into account when looking at the age coefficient on IQ.
If, for example, there is a disproportionate number of wealthy young people who have access to better education, better nutrition, etc., that will be implicitly absorbed into the "age" coefficient of your one-variable regression. The regression might show that young people are "smarter", which might be true given your dataset, but the underlying factor might be attributable to wealth instead. | regression with multiple independent variables vs multiple regressions with one independent variable
What this would do is answer drastically different questions.
Multiple regressions of one independent variable will give you an understand of
the target variable varies with each output of each vari |
52,271 | regression with multiple independent variables vs multiple regressions with one independent variable | Your question says "Which method is better?". Better what for? If you want to predict GPA you might want to use both variables. If your question is about the relation between IQ and GPA, then you have no reason to add age to the Model. Hence, it depends on your research question what Model suits better. One point that appears unmentioned, is that not only beta but also the p values can change after addition of another predictor, leading to another interpretation of the results. | regression with multiple independent variables vs multiple regressions with one independent variable | Your question says "Which method is better?". Better what for? If you want to predict GPA you might want to use both variables. If your question is about the relation between IQ and GPA, then you have | regression with multiple independent variables vs multiple regressions with one independent variable
Your question says "Which method is better?". Better what for? If you want to predict GPA you might want to use both variables. If your question is about the relation between IQ and GPA, then you have no reason to add age to the Model. Hence, it depends on your research question what Model suits better. One point that appears unmentioned, is that not only beta but also the p values can change after addition of another predictor, leading to another interpretation of the results. | regression with multiple independent variables vs multiple regressions with one independent variable
Your question says "Which method is better?". Better what for? If you want to predict GPA you might want to use both variables. If your question is about the relation between IQ and GPA, then you have |
52,272 | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set? | Yes, but the most focused treatment takes a different approach. You need to choose a model type that will train each class on an "is or is-not" basis. This is also called "one-vs-all". Your result will be, effectively, a separate set of weights for each of the five classes, i.e. five models that will decide whether or not the observation (one datum) is a member of that class. If none of the five sub-models claims the item, it winds up in the "garbage" class.
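A minimal sketch of that decision rule (not from the original answer; it uses made-up two-dimensional features and logistic sub-models instead of a neural network or SVM, and the class names and the 0.5 threshold are only placeholders):
set.seed(7)
classes <- c("glass", "paper", "metal", "plastic", "cardboard")   # hypothetical class labels
centers <- data.frame(f1 = c(0, 4, 0, 4, 8), f2 = c(0, 0, 4, 4, 2))
train <- data.frame(class = rep(classes, each = 40),
                    f1 = rnorm(200, rep(centers$f1, each = 40)),
                    f2 = rnorm(200, rep(centers$f2, each = 40)))
models <- lapply(classes, function(k) glm(I(class == k) ~ f1 + f2, data = train, family = binomial))
names(models) <- classes
new_item <- data.frame(f1 = 8.2, f2 = 2.1)                        # made-up features of a new object
scores <- sapply(classes, function(k) predict(models[[k]], newdata = new_item, type = "response"))
if (max(scores) < 0.5) "garbage: no sub-model claims the item" else classes[which.max(scores)]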
SVM (support vector machine) is good for this, given compatible input. Although, there are more sophisticated treatments as well. | How can I create a neural network that can recognize objects without having data for objects that ar | Yes, but the most focused treatment takes a different approach. You need to choose a model type that will train each class on an "is or is-not" basis. This is also called "one-vs-all". Your result wi | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set?
Yes, but the most focused treatment takes a different approach. You need to choose a model type that will train each class on an "is or is-not" basis. This is also called "one-vs-all". Your result will be, effectively, a separate set of weights for each of the five classes, i.e. five models that will decide whether or not the observation (one datum) is a member of that class. If none of the five sub-models claims the item, it winds up in the "garbage" class.
SVM (support vector machine) is good for this, given compatible input. Although, there are more sophisticated treatments as well. | How can I create a neural network that can recognize objects without having data for objects that ar
Yes, but the most focused treatment takes a different approach. You need to choose a model type that will train each class on an "is or is-not" basis. This is also called "one-vs-all". Your result wi |
52,273 | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set? | Yes, and this is called one-class classification: you train a model only on the positive data and try to detect the outliers.
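A minimal sketch of that idea (not from the original answer; it assumes the e1071 package is available and uses made-up two-dimensional features in place of image features):
library(e1071)
X_known <- matrix(rnorm(200), ncol = 2)                 # features of the known "positive" items only
oc_model <- svm(X_known, y = NULL, type = "one-classification", nu = 0.05)
X_new <- rbind(c(0.1, -0.2),                            # an ordinary-looking new point
               c(6.0, 6.0))                             # a point far away from the training data
predict(oc_model, X_new)                                # TRUE = resembles the training class, FALSE = outlier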
check this it might help: https://medium.com/squad-engineering/one-class-classification-for-images-with-deep-features-69182fb4c9c5 | How can I create a neural network that can recognize objects without having data for objects that ar | Yes, and this is called: one-class classification when you train a model only on the positive data and try to predict the outliers.
check this it might help: https://medium.com/squad-engineering/one-c | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set?
Yes, and this is called: one-class classification when you train a model only on the positive data and try to predict the outliers.
check this it might help: https://medium.com/squad-engineering/one-class-classification-for-images-with-deep-features-69182fb4c9c5 | How can I create a neural network that can recognize objects without having data for objects that ar
Yes, and this is called: one-class classification when you train a model only on the positive data and try to predict the outliers.
check this it might help: https://medium.com/squad-engineering/one-c |
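A small sketch of one-class classification in R, assuming the e1071 package is available (the linked article uses Python and deep features; here a one-class SVM on the built-in iris measurements only illustrates the mechanics of training on positives and flagging everything else):
library(e1071)   # assumed to be installed
# Treat one species as the only "known" class and train on it alone
known   <- subset(iris, Species == "versicolor")[, 1:4]
unknown <- subset(iris, Species != "versicolor")[, 1:4]
fit <- svm(known, type = "one-classification", nu = 0.1, kernel = "radial")
# predict() returns TRUE for "looks like the training class", FALSE for outliers
mean(predict(fit, known))     # share of known items accepted
mean(predict(fit, unknown))   # share of unseen classes (wrongly) accepted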
52,274 | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set? | I think you need to look at this problem in a different aspect. Based on your case, instead of training your model to recognize 'if this item is recyclable', you can approach this problem as 'if this item is not recyclable'. The reason is that you only have 5 recyclable items, which is far fewer than what exists in reality. So you can look at this problem in an unsupervised or a semi-supervised learning way, with the assumption that you have access to the items that are NOT recyclable. In detail, two approaches are suggested:
unsupervised learning: design your problem as an outlier detection problem. Build the training set with mostly non-recyclable items and very few recyclable items. Follow approaches such as k-means clustering, Gaussian mixture models, etc.
semi-supervised learning: in this one, you are doing a regression on only the non-recyclable items. If you provide an item that is recyclable to your trained model, the model will predict a value with a large deviation from your predefined value. Please refer to one-class SVM or one-class neural networks. In your case, for image-based detection, a convolutional neural network will probably be better... | How can I create a neural network that can recognize objects without having data for objects that ar | I think you need to look at this problem in a different aspect. Based on your case, instead of training your model to recognize 'if this item is recyclable', you can approach this problem as 'if this | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set?
I think you need to look at this problem in a different aspect. Based on your case, instead of training your model to recognize 'if this item is recyclable', you can approach this problem as 'if this item is not recyclable'. The reason is that you only have 5 recyclable items, which is far fewer than what exists in reality. So you can look at this problem in an unsupervised or a semi-supervised learning way, with the assumption that you have access to the items that are NOT recyclable. In detail, two approaches are suggested:
unsupervised learning: design your problem as an outlier detection problem. Build the training set with mostly non-recyclable items and very few recyclable items. Follow approaches such as k-means clustering, Gaussian mixture models, etc.
semi-supervised learning: in this one, you are doing a regression on only the non-recyclable items. If you provide an item that is recyclable to your trained model, the model will predict a value with a large deviation from your predefined value. Please refer to one-class SVM or one-class neural networks. In your case, for image-based detection, a convolutional neural network will probably be better... | How can I create a neural network that can recognize objects without having data for objects that ar
I think you need to look at this problem in a different aspect. Based on your case, instead of training your model to recognize 'if this item is recyclable', you can approach this problem as 'if this |
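A rough base R sketch of the outlier-detection framing above, using a simple Gaussian model and Mahalanobis distances in place of the k-means or mixture models mentioned (the 2-D features, sample sizes, and the 99% cut-off are all arbitrary choices for illustration):
set.seed(2)
normal_items <- matrix(rnorm(200 * 2), ncol = 2)               # items we have plenty of data for
new_items    <- rbind(matrix(rnorm(10 * 2, 0, 1), ncol = 2),   # items similar to the training data
                      matrix(rnorm(5 * 2, 6, 1), ncol = 2))    # items unlike anything seen before
mu    <- colMeans(normal_items)
sigma <- cov(normal_items)
# Squared Mahalanobis distance is roughly chi-squared with 2 df under the Gaussian model
d2     <- mahalanobis(new_items, center = mu, cov = sigma)
cutoff <- qchisq(0.99, df = 2)
data.frame(distance = round(d2, 1), flagged_as_outlier = d2 > cutoff)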
52,275 | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set? | I would suggest you to get more data and build a binary classifier.
First combine all of your data into 1 class (recyclable)
Then crawl the internet and find items that are not recyclable. We should find items that people usually throw in the trash, but try not to get irrelevant images such as cats and dogs. | How can I create a neural network that can recognize objects without having data for objects that ar | I would suggest you to get more data and build a binary classifier.
First combine all of your data into 1 class (recyclable)
Then crawl the internet and find items that are not recyclable. We should find items th | How can I create a neural network that can recognize objects without having data for objects that aren't in the classification set?
I would suggest you to get more data and build a binary classifier.
First combine all of your data into 1 class (recyclable)
Then crawl the internet and find items that are not recyclable. We should find items that people usually throw in the trash, but try not to get irrelevant images such as cats and dogs. | How can I create a neural network that can recognize objects without having data for objects that ar
I would suggest you to get more data and build a binary classifier.
First combine all of your data into 1 class (recyclable)
Then crawl the internet and find items that are not recyclable. We should find items th |
52,276 | Does it make sense to do Cross Validation with a Small Sample? | I have concerns about involving 250 predictors when you have 16 samples. However, let's set that aside for now and focus on cross-validation.
You don't have much data, so any split from the full set to the training and validation set is going to result in very few observations on which you can train. However, there is something called leave-one-out cross validation (LOOCV) that might work for you. You have 16 observations. Train on 15 and validate on the other one. Repeat this until you have trained on every set of 15 with the 16th sample left out. The software you use should have a function to do this for you. For instance, Python's sklearn package has utilities for LOOCV. I'll include some code from the sklearn website.
# https://scikit-learn.org/stable/modules/generated/
# sklearn.model_selection.LeaveOneOut.html
#
>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
Do you, by any chance, work in genetics? | Does it make sense to do Cross Validation with a Small Sample? | I have concerns about involving 250 predictors when you have 16 samples. However, let's set that aside for now and focus on cross-validation.
You don't have much data, so any split from the full set t | Does it make sense to do Cross Validation with a Small Sample?
I have concerns about involving 250 predictors when you have 16 samples. However, let's set that aside for now and focus on cross-validation.
You don't have much data, so any split from the full set to the training and validation set is going to result in very few observations on which you can train. However, there is something called leave-one-out cross validation (LOOCV) that might work for you. You have 16 observations. Train on 15 and validate on the other one. Repeat this until you have trained on every set of 15 with the 16th sample left out. The software you use should have a function to do this for you. For instance, Python's sklearn package has utilities for LOOCV. I'll include some code from the sklearn website.
# https://scikit-learn.org/stable/modules/generated/
# sklearn.model_selection.LeaveOneOut.html
#
>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
... print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
Do you, by any chance, work in genetics? | Does it make sense to do Cross Validation with a Small Sample?
I have concerns about involving 250 predictors when you have 16 samples. However, let's set that aside for now and focus on cross-validation.
You don't have much data, so any split from the full set t |
52,277 | Does it make sense to do Cross Validation with a Small Sample? | I'm being asked to perform CV on the set.
I'm going to assume that this cross validation will be for internal validation (part of verification) of the performance of the model you get from your 16 x 250 data set.
That is, you are not going to do any data-driven hyperparameter optimization (which can also use cross validation results).
Yes, cross validation does make sense here. The results will be very uncertain due to the fact that only 16 samples contribute to the validation results. But: given your small data set, repeated k-fold (8 fold would probably be the best choice) or similar resampling validation (out-of-bootstrap, repeated set validation) is the best you can do in this situation.
This large uncertainty, BTW, also means that data-driven optimization is basically impossible with such a small data set: this uncertainty due to the limited number of tested cases depends on the absolute number of tested cases - in validation there is no way to mitigate the small sample size (and unlike in training not even having fewer features can help).
As few cases and many features in training come with the risk of overfitting, it is important to check the stability of the modeling. This can be done in a very straightforward fashion from repeated (aka iterated) cross validation: any difference in the prediction for the same case between the runs (repetitions/iterations) cannot be due to the tested case, but must be due to differences in the model (i.e. the training does not lead to stable models).
Have a look at our paper for more details: Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6
There are only 120 combinations of 2 cases out of 16, so you may want to consider running all those combinations instead of randomly assigned folds.
In contrast to @Dave and @oloney, I do not recommend leave-one-out CV, for two reasons:
LOO does not allow the aforementioned measurement of stability (each surrogate model is tested with exactly one case: we cannot distinguish whether variation is due to the case or due to the model). But checking stability is really crucial with such a small cases : features ratio.
The second reason refers to classification only: LOO on a classification task will always test a case that belongs to a class underrepresented in the respective training split. For very small sample sizes, this can cause huge pessimistic bias. If that's the case for you, you're probably better off doing a stratified resampling validation that does not (or hardly) disturb the relative frequencies. | Does it make sense to do Cross Validation with a Small Sample? | I'm being asked to perform CV on the set.
I'm going to assume that this cross validation will be for internal validation (part of verification) of the performance of the model you get from your 16 x | Does it make sense to do Cross Validation with a Small Sample?
I'm being asked to perform CV on the set.
I'm going to assume that this cross validation will be for internal validation (part of verification) of the performance of the model you get from your 16 x 250 data set.
That is, you are not going to do any data-driven hyperparameter optimization (which can also use cross validation results).
Yes, cross validation does make sense here. The results will be very uncertain due to the fact that only 16 samples contribute to the validation results. But: given your small data set, repeated k-fold (8 fold would probably be the best choice) or similar resampling validation (out-of-bootstrap, repeated set validation) is the best you can do in this situation.
This large uncertainty, BTW, also means that data-driven optimization is basically impossible with such a small data set: this uncertainty due to the limited number of tested cases depends on the absolute number of tested cases - in validation there is no way to mitigate the small sample size (and unlike in training not even having fewer features can help).
As few cases and many features in training come with the risk of overfitting, it is important to check the stability of the modeling. This can be done in a very straightforward fashion from repeated (aka iterated) cross validation: any difference in the prediction for the same case between the runs (repetitions/iterations) cannot be due to the tested case, but must be due to differences in the model (i.e. the training does not lead to stable models).
Have a look at our paper for more details: Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6
There are only 120 combinations of 2 cases out of 16, so you may want to consider running all those combinations instead of randomly assigned folds.
In contrast to @Dave and @oloney, I do not recommend leave-one-out CV, for two reasons:
LOO does not allow the aforementioned measurement of stability (each surrogate model is tested with exactly one case: we cannot distinguish whether variation is due to the case or due to the model). But checking stability is really crucial with such a small cases : features ratio.
The second reason refers to classification only: LOO on a classification task will always test a case that belongs to a class underrepresented in the respective training split. For very small sample sizes, this can cause huge pessimistic bias. If that's the case for you, you're probably better off doing a stratified resampling validation that does not (or hardly) disturb the relative frequencies. | Does it make sense to do Cross Validation with a Small Sample?
I'm being asked to perform CV on the set.
I'm going to assume that this cross validation will be for internal validation (part of verification) of the performance of the model you get from your 16 x |
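A base R sketch of the repeated 8-fold cross validation and the stability check described above, on simulated data of the same 16 x 250 shape (the pick-one-feature-inside-each-fold model is only a placeholder so the example stays self-contained; it is not a modelling recommendation):
set.seed(3)
n <- 16; p <- 250
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] + rnorm(n)             # assume only feature 1 carries signal
k <- 8; reps <- 20
pred <- matrix(NA, n, reps)        # out-of-fold prediction for each case, per repetition
for (r in 1:reps) {
  folds <- sample(rep(1:k, length.out = n))
  for (f in 1:k) {
    train <- folds != f
    # toy model: the single feature most correlated with y within the training part
    j   <- which.max(abs(cor(X[train, ], y[train])))
    fit <- lm(y[train] ~ X[train, j])
    pred[!train, r] <- coef(fit)[1] + coef(fit)[2] * X[!train, j]
  }
}
mean((pred - y)^2)                 # repeated-CV error (very uncertain with n = 16)
summary(apply(pred, 1, sd))        # stability: spread of predictions for the same case across repetitions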
52,278 | Does it make sense to do Cross Validation with a Small Sample? | The theory behind cross validation works all the way down to the case where $k = n$, which is called leave-one-out cross-validation. LOOCV is the best choice when $n$ is small. The upside to using cross validation is that your estimate of generalization error will be unbiased and you'll be able to form non-parametric confidence intervals for estimated parameters. The downside is that it doesn't magically create samples from nothing; the generalization error will probably be very large, and the confidence intervals will be very wide.
If you're planning to use CV for model selection or feature selection, you probably won't have much luck with 16 observations and 250 features. Let's say you use BIC for model selection, and you consider all 250 models, each with a single predictor. You can use CV to estimate and draw a confidence interval around the BIC for each model, but you'll likely find the confidence intervals overlap considerably. There may be a "best" model with BIC $ = 10 \pm 50$ (lower is better), but if the other 249 models have BIC $= 11 \pm 50$, then it's extremely unlikely that the "best" model is actually the best. The upside is that CV will let you estimate confidence intervals, so you'll know if this is the case or not. The downside is that it won't necessarily allow you to choose a single best model with any degree of confidence. | Does it make sense to do Cross Validation with a Small Sample? | The theory behind cross validation works all the way down to the case where $k = n$, which is called leave-one-out cross-validation. LOOCV is the best choice when $n$ is small. The upside to using cross | Does it make sense to do Cross Validation with a Small Sample?
The theory behind cross validation works all the way down to the case where $k = n$, which is called leave-one-out cross-validation. LOOCV is the best choice when $n$ is small. The upside to using cross validation is that your estimate of generalization error will be unbiased and you'll be able to form non-parametric confidence intervals for estimated parameters. The downside is that it doesn't magically create samples from nothing; the generalization error will probably be very large, and the confidence intervals will be very wide.
If you're planning to use CV for model selection or feature selection, you probably won't have much luck with 16 observations and 250 features. Let's say you use BIC for model selection, and you consider all 250 models, each with a single predictor. You can use CV to estimate and draw a confidence interval around the BIC for each model, but you'll likely find the confidence intervals overlap considerably. There may be a "best" model with BIC $ = 10 \pm 50$ (lower is better), but if the other 249 models have BIC $= 11 \pm 50$, then it's extremely unlikely that the "best" model is actually the best. The upside is that CV will let you estimate confidence intervals, so you'll know if this is the case or not. The downside is that it won't necessarily allow you to choose a single best model with any degree of confidence. | Does it make sense to do Cross Validation with a Small Sample?
The theory behind cross validation works all the way down to the case where $k = n$, which is called leave-one-out cross-validation. LOOCV is the best choice when $n$ is small. The upside to using cross |
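A quick base R sketch of the model-selection point above: with 16 observations, the BIC values of 250 competing single-predictor models tend to be packed so closely together that the "best" one is rarely convincingly best (purely simulated data; the weak signal in predictor 1 is an assumption):
set.seed(4)
n <- 16; p <- 250
X <- matrix(rnorm(n * p), n, p)
y <- 0.5 * X[, 1] + rnorm(n)
bics <- sapply(1:p, function(j) BIC(lm(y ~ X[, j])))   # one BIC per single-predictor model
which.min(bics)        # the "winning" predictor is often not predictor 1
sort(bics)[1:5]        # the top models are usually nearly tied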
52,279 | Does it make sense to do Cross Validation with a Small Sample? | No, not only does cross-validation make no sense here, but no analysis makes any sense at all. You cannot validly analyze 250 features on a sample size of 16. Cross-validation cannot validly help you with that.
Let's go to the raw, absolute minimum: suppose you had only one predictor and you want to estimate an outcome with only 16 samples. That alone would be almost definitely an insufficient sample size for any valid conclusions.
A sample is supposedly a representative sample from some larger population with the hope that you would want to extrapolate your findings to the larger population. To what kind of larger population could you validly extrapolate 16 cases? I do not know of any statistical test (not to talk of machine learning algorithm) for which a sample of 16 would be valid--and I'm still talking about the minimum case of just one predictor. 250 predictors make your sample 250 times as insufficient as for one predictor, so you can only imagine the magnitude of the problem. (That is why I am not even bothering to mention the possibility of dimension reduction--which you should probably do with 250 predictors--because that would not solve the problem even if you could reduce all 250 predictors into just one dimension.)
CV would make the problem only worse by reducing your insufficient sample from 16 to something smaller. Even if you carried out leave-one-out cross-validation, each of the 16 training samples of size 15 would be invalid. 15 wrongs do not magically make a right.
Sorry; this is probably not what you want to hear, but I think your best solution is to collect a truly large sample. Remember that as far as the computer is concerned, machine learning is just playing with numbers. If you give it data, it will try to give you a response according to some mathematical rules. The computer cannot tell you whether its response is good or completely worthless--only a proper study design can assure that. | Does it make sense to do Cross Validation with a Small Sample? | No, not only does cross-validation make no sense here, but no analysis makes any sense at all. You cannot validly analyze 250 features on a sample size of 16. Cross-validation cannot validly help you | Does it make sense to do Cross Validation with a Small Sample?
No, not only does cross-validation make no sense here, but no analysis makes any sense at all. You cannot validly analyze 250 features on a sample size of 16. Cross-validation cannot validly help you with that.
Let's go to the raw, absolute minimum: suppose you had only one predictor and you want to estimate an outcome with only 16 samples. That alone would be almost definitely an insufficient sample size for any valid conclusions.
A sample is supposedly a representative sample from some larger population with the hope that you would want to extrapolate your findings to the larger population. To what kind of larger population could you validly extrapolate 16 cases? I do not know of any statistical test (not to talk of machine learning algorithm) for which a sample of 16 would be valid--and I'm still talking about the minimum case of just one predictor. 250 predictors make your sample 250 times as insufficient as for one predictor, so you can only imagine the magnitude of the problem. (That is why I am not even bothering to mention the possibility of dimension reduction--which you should probably do with 250 predictors--because that would not solve the problem even if you could reduce all 250 predictors into just one dimension.)
CV would make the problem only worse by reducing your insufficient sample from 16 to something smaller. Even if you carried out leave-one-out cross-validation, each of the 16 training samples of size 15 would be invalid. 15 wrongs do not magically make a right.
Sorry; this is probably not what you want to hear, but I think your best solution is to collect a truly large sample. Remember that as far as the computer is concerned, machine learning is just playing with numbers. If you give it data, it will try to give you a response according to some mathematical rules. The computer cannot tell you whether its response is good or completely worthless--only a proper study design can assure that. | Does it make sense to do Cross Validation with a Small Sample?
No, not only does cross-validation make no sense here, but no analysis makes any sense at all. You cannot validly analyze 250 features on a sample size of 16. Cross-validation cannot validly help you |
52,280 | Binomial distribution intituition for N | Note that the values of $X \sim Binomial(n,p)$ correspond to number of "positive" trials, not probability. As $n$ grows, the values of $\hat{p} = X/n$ converge to the true $p$, hence the probability as a long-run frequency definition. Values of $X = n\hat{p}$ clearly don't converge.
Same distinction might help understanding the variance: variance of the proportion of positive trials decreases, $Var(X/n) = np(1-p) / n^2 = p(1-p)/n$, so the proportion estimates get more precise. Variance of the actual number of positive outcomes increases, however. | Binomial distribution intituition for N | Note that the values of $X \sim Binomial(n,p)$ correspond to number of "positive" trials, not probability. As $n$ grows, the values of $\hat{p} = X/n$ converge to the true $p$, hence the probability a | Binomial distribution intituition for N
Note that the values of $X \sim Binomial(n,p)$ correspond to number of "positive" trials, not probability. As $n$ grows, the values of $\hat{p} = X/n$ converge to the true $p$, hence the probability as a long-run frequency definition. Values of $X = n\hat{p}$ clearly don't converge.
Same distinction might help understanding the variance: variance of the proportion of positive trials decreases, $Var(X/n) = np(1-p) / n^2 = p(1-p)/n$, so the proportion estimates get more precise. Variance of the actual number of positive outcomes increases, however. | Binomial distribution intituition for N
Note that the values of $X \sim Binomial(n,p)$ correspond to number of "positive" trials, not probability. As $n$ grows, the values of $\hat{p} = X/n$ converge to the true $p$, hence the probability a |
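A short base R simulation of the point above: as $n$ grows, the spread of the proportion $X/n$ shrinks toward $p$, while the spread of the count $X$ itself keeps growing (p = 0.3 and the sample sizes are arbitrary choices):
set.seed(5)
p <- 0.3
for (n in c(10, 100, 1000, 10000)) {
  x <- rbinom(1e4, size = n, prob = p)   # 10,000 draws of X ~ Binomial(n, p)
  cat(sprintf("n = %5d   var(X) = %9.1f   var(X/n) = %.6f\n",
              n, var(x), var(x / n)))
}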
52,281 | Binomial distribution intituition for N | The variance of the distribution increases with N as you are measuring the total number of successes, not the ratio. Imagine you throw a coin once. The variance is tiny as all possible results are 0 heads and 1 heads (1/2 of a success from the mean).
If you throw it a million times, you won't often get results that close to the mean.
However, if your variable of interest is the proportion of heads (rather than the raw number), the variance will keep decreasing as you flip the coin more and more times (proportional to 1/N). | Binomial distribution intituition for N | The variance of the distribution increases with N as you are measuring the total number of successes, not the ratio. Imagine you throw a coin once. The variance is tiny as all possible results are 0 h | Binomial distribution intituition for N
The variance of the distribution increases with N as you are measuring the total number of successes, not the ratio. Imagine you throw a coin once. The variance is tiny as all possible results are 0 heads and 1 heads (1/2 of a success from the mean).
If you throw it a million times, you won't often get results that close to the mean.
However, if your variable of interest is the proportion of heads (rather than the raw number), the variance will keep decreasing as you flip the coin more and more times (proportional to 1/N). | Binomial distribution intituition for N
The variance of the distribution increases with N as you are measuring the total number of successes, not the ratio. Imagine you throw a coin once. The variance is tiny as all possible results are 0 h |
52,282 | Binomial distribution intituition for N | What binomial distribution describes, is the distribution of "successes" in $n$ Bernoulli trials, each having probability of success $p$. The most popular handbook example, is that you have a biased coin with probability of tossing heads equal to $p$, you throw it $n$ times and count the total number of heads tossed.
In a single coin toss the variance is $p(1-p)$; if you toss it twice, it is $p(1-p) + p(1-p)$ by the property of variance of uncorrelated variables that $\mathrm{Var}(A+B) = \mathrm{Var}(A) + \mathrm{Var}(B)$; if you toss it $n$ times, it varies by $\sum_{i=1}^n p(1-p) = np(1-p)$.
Say that you record results of tossing some coin $n$ times, then your friend tosses it $m$ times. You repeat this experiment many times and in each row of your notebook you record the successes by you in one column and your friend's results in the second column. In the end you observe that on average you observed $np \pm x$ heads, while your friend $mp \pm y$ heads. If you summed together the results rowwise, on average you would observe $np+mp = (n+m)p$ heads for both of you. How much would the sum vary? It wouldn't be $(n+m)p \pm x$, nor would it be $(n+m)p \pm y$, since in both cases this would mean that the additional tosses cannot make the result more extreme. This wouldn't make sense. Say that you never tossed more than $k$ heads, while your friend never tossed more than $r$ heads; then obviously if you sum the number of heads tossed by both of you, the result can be greater than $k$ or $r$. | Binomial distribution intituition for N | What binomial distribution describes, is the distribution of "successes" in $n$ Bernoulli trials, each having probability of success $p$. The most popular handbook example, is that you have a biased c | Binomial distribution intituition for N
What binomial distribution describes, is the distribution of "successes" in $n$ Bernoulli trials, each having probability of success $p$. The most popular handbook example, is that you have a biased coin with probability of tossing heads equal to $p$, you throw it $n$ times and count the total number of heads tossed.
In a single coin toss the variance is $p(1-p)$; if you toss it twice, it is $p(1-p) + p(1-p)$ by the property of variance of uncorrelated variables that $\mathrm{Var}(A+B) = \mathrm{Var}(A) + \mathrm{Var}(B)$; if you toss it $n$ times, it varies by $\sum_{i=1}^n p(1-p) = np(1-p)$.
Say that you record results of tossing some coin $n$ times, then your friend tosses it $m$ times. You repeat this experiment many times and in each row of your notebook you record the successes by you in one column and your friend's results in the second column. In the end you observe that on average you observed $np \pm x$ heads, while your friend $mp \pm y$ heads. If you summed together the results rowwise, on average you would observe $np+mp = (n+m)p$ heads for both of you. How much would the sum vary? It wouldn't be $(n+m)p \pm x$, nor would it be $(n+m)p \pm y$, since in both cases this would mean that the additional tosses cannot make the result more extreme. This wouldn't make sense. Say that you never tossed more than $k$ heads, while your friend never tossed more than $r$ heads; then obviously if you sum the number of heads tossed by both of you, the result can be greater than $k$ or $r$. | Binomial distribution intituition for N
What binomial distribution describes, is the distribution of "successes" in $n$ Bernoulli trials, each having probability of success $p$. The most popular handbook example, is that you have a biased c |
52,283 | Why do two implementations of the Anderson-Darling test produce such different p-values? | The Anderson-Darling test is a good test--but it has to be correctly applied and, as in most distributional tests, there is a subtle pitfall. Many analysts have fallen for it.
Distributional tests are Cinderella tests. In the fairy tale, the prince's men searched for a mysterious princess by comparing the feet of young ladies to the glass slipper Cinderella had left behind as she fled the ball. Any young lady whose foot fit the slipper would be a "significant" match to the slipper. This worked because the slipper had a specific shape, length, and width and few feet in the realm would likely fit it.
Imagine a twist on this fairy tale in which Cinderella's ugly stepsisters got wind of the search. Before the prince's men arrived, the stepsisters visited Ye Olde Paylesse Glasse Slipper Store and chose a slipper off the rack to match the width and length of the eldest sister's foot. Because glass slippers are not malleable and this one was not specifically tailored to the foot, it was not perfect--but at least the shoe fit.
Returning to their home just as the prince's men arrived, the older sister distracted the men while the younger surreptitiously substituted the slipper she had just purchased for the one brought by the men. When the men offered this slipper to the older sister, they were astonished that it fit her warty foot. Nevertheless, because she had passed the test she was promptly brought before the prince.
I hope the prince, like you, was sufficiently sharp-eyed and clever to recognize that the ugly lady brought before him could not possibly be the same beautiful girl he had danced with at the ball. He wondered, though, how the slipper could possibly fit her foot.
That is precisely what this question asks: how can the goftest version of the A-D test seem like the dataset is such a good match to the Normal distribution "slipper" while the nortest version indicates it is not? The answer is that the goftest version was offered a distribution constructed to match the data while the nortest version anticipates that subterfuge and compensates accordingly, as did the prince.
The goftest version of the Anderson-Darling test asks you to provide the glass slipper in the form of "distributional parameters," which are perfect analogs of length and width. This version compares your distribution to the data and reports how well it fits. As applied in the question, though, the distributional parameters were tailored to the data: they were estimated from them. In exactly the same way the ugly stepsister's foot turned out to be a good match to the slipper, so will these data appear to be a good match to a Normal distribution, even when they are (to any sufficiently sharp-eyed prince) obviously non-Normal.
The nortest version assumes you are going to cheat in this way and it automatically compensates for that. It reports the same degree of fit--that's why the two A-D statistics of 0.8892 shown in the question are the same--but it assesses its goodness in a manner that compensates for the cheating.
It is possible to adjust the goftest version to produce a correct p-value. One way is through simulation: by repeatedly offering samples from a Normal distribution to the test, we can discover the distribution of its test statistic (or, equivalently, the distribution of the p-values it assigns to that statistic). Only p-values that are extremely low relative to this null distribution should count as evidence that the shoe fits.
Here is a histogram of 100,000 iterations of this simulation.
P-values are supposed to have a uniform distribution (when the null hypothesis is true), in which case the histogram would be almost level. This distribution is decidedly non-uniform.
The vertical line is at $0.42,$ the p-value reported for the original data. The values smaller than it are shown in red as "extreme." Only $0.0223$ of them are this extreme. (The first three decimal places of this number should be correct.) This is the correct p-value for this test. It is in close agreement with the value of $0.0251$ reported by nortest. (In this case, because I performed such a large simulation, the simulated p-value should be considered more accurate than the computed p-value.)
The R code that produced the simulation and the figure is reproduced below, but with the simulation size set to 1,000 rather than 100,000 so that it will execute in less than one second.
library(ggplot2)
x = iris$Sepal.Length
p.value <- goftest::ad.test(x, "pnorm", mean = mean(x), sd = sd(x))$p.value
#
# Simulation study.
#
mu <- mean(x)
sigma <- sd(x)
sim <- replicate(1e3, {
x.sim <- rnorm(length(x), mu, sigma)
goftest::ad.test(x.sim, "pnorm", mean = mean(x.sim), sd = sd(x.sim))$p.value
})
p.value.corrected <- mean(sim <= p.value)
message("Corrected p-value is ", p.value.corrected,
" +/- ", signif(2 * sd(sim <= p.value) / sqrt(length(sim)), 2))
#
# Display the p-value distribution.
#
X <- data.frame(p.value=sim, Status=ifelse(sim <= p.value, "Extreme", "Not extreme"))
ggplot(X, aes(p.value)) +
xlab("goftest::ad.test p-value") + ylab("Count") +
geom_histogram(aes(fill=Status), binwidth=0.05, boundary=0, alpha=0.65) +
geom_vline(xintercept=p.value, color="#404040", size=1) +
ggtitle("A-D 'p-values' for Simulated Normal Data",
"`goftest` calculations") | Why do two implementations of the Anderson-Darling test produce such different p-values? | The Anderson-Darling test is a good test--but it has to be correctly applied and, as in most distributional tests, there is a subtle pitfall. Many analysts have fallen for it.
Distributional tests a | Why do two implementations of the Anderson-Darling test produce such different p-values?
The Anderson-Darling test is a good test--but it has to be correctly applied and, as in most distributional tests, there is a subtle pitfall. Many analysts have fallen for it.
Distributional tests are Cinderella tests. In the fairy tale, the prince's men searched for a mysterious princess by comparing the feet of young ladies to the glass slipper Cinderella had left behind as she fled the ball. Any young lady whose foot fit the slipper would be a "significant" match to the slipper. This worked because the slipper had a specific shape, length, and width and few feet in the realm would likely fit it.
Imagine a twist on this fairy tale in which Cinderella's ugly stepsisters got wind of the search. Before the prince's men arrived, the stepsisters visited Ye Olde Paylesse Glasse Slipper Store and chose a slipper off the rack to match the width and length of the eldest sister's foot. Because glass slippers are not malleable and this one was not specifically tailored to the foot, it was not perfect--but at least the shoe fit.
Returning to their home just as the prince's men arrived, the older sister distracted the men while the younger surreptitiously substituted the slipper she had just purchased for the one brought by the men. When the men offered this slipper to the older sister, they were astonished that it fit her warty foot. Nevertheless, because she had passed the test she was promptly brought before the prince.
I hope the prince, like you, was sufficiently sharp-eyed and clever to recognize that the ugly lady brought before him could not possibly be the same beautiful girl he had danced with at the ball. He wondered, though, how the slipper could possibly fit her foot.
That is precisely what this question asks: how can the goftest version of the A-D test seem like the dataset is such a good match to the Normal distribution "slipper" while the nortest version indicates it is not? The answer is that the goftest version was offered a distribution constructed to match the data while the nortest version anticipates that subterfuge and compensates accordingly, as did the prince.
The goftest version of the Anderson-Darling test asks you to provide the glass slipper in the form of "distributional parameters," which are perfect analogs of length and width. This version compares your distribution to the data and reports how well it fits. As applied in the question, though, the distributional parameters were tailored to the data: they were estimated from them. In exactly the same way the ugly stepsister's foot turned out to be a good match to the slipper, so will these data appear to be a good match to a Normal distribution, even when they are (to any sufficiently sharp-eyed prince) obviously non-Normal.
The nortest version assumes you are going to cheat in this way and it automatically compensates for that. It reports the same degree of fit--that's why the two A-D statistics of 0.8892 shown in the question are the same--but it assesses its goodness in a manner that compensates for the cheating.
It is possible to adjust the goftest version to produce a correct p-value. One way is through simulation: by repeatedly offering samples from a Normal distribution to the test, we can discover the distribution of its test statistic (or, equivalently, the distribution of the p-values it assigns to that statistic). Only p-values that are extremely low relative to this null distribution should count as evidence that the shoe fits.
Here is a histogram of 100,000 iterations of this simulation.
P-values are supposed to have a uniform distribution (when the null hypothesis is true), in which case the histogram would be almost level. This distribution is decidedly non-uniform.
The vertical line is at $0.42,$ the p-value reported for the original data. The values smaller than it are shown in red as "extreme." Only $0.0223$ of them are this extreme. (The first three decimal places of this number should be correct.) This is the correct p-value for this test. It is in close agreement with the value of $0.0251$ reported by nortest. (In this case, because I performed such a large simulation, the simulated p-value should be considered more accurate than the computed p-value.)
The R code that produced the simulation and the figure is reproduced below, but with the simulation size set to 1,000 rather than 100,000 so that it will execute in less than one second.
library(ggplot2)
x = iris$Sepal.Length
p.value <- goftest::ad.test(x, "pnorm", mean = mean(x), sd = sd(x))$p.value
#
# Simulation study.
#
mu <- mean(x)
sigma <- sd(x)
sim <- replicate(1e3, {
x.sim <- rnorm(length(x), mu, sigma)
goftest::ad.test(x.sim, "pnorm", mean = mean(x.sim), sd = sd(x.sim))$p.value
})
p.value.corrected <- mean(sim <= p.value)
message("Corrected p-value is ", p.value.corrected,
" +/- ", signif(2 * sd(sim <= p.value) / sqrt(length(sim)), 2))
#
# Display the p-value distribution.
#
X <- data.frame(p.value=sim, Status=ifelse(sim <= p.value, "Extreme", "Not extreme"))
ggplot(X, aes(p.value)) +
xlab("goftest::ad.test p-value") + ylab("Count") +
geom_histogram(aes(fill=Status), binwidth=0.05, boundary=0, alpha=0.65) +
geom_vline(xintercept=p.value, color="#404040", size=1) +
ggtitle("A-D 'p-values' for Simulated Normal Data",
"`goftest` calculations") | Why do two implementations of the Anderson-Darling test produce such different p-values?
The Anderson-Darling test is a good test--but it has to be correctly applied and, as in most distributional tests, there is a subtle pitfall. Many analysts have fallen for it.
Distributional tests a |
52,284 | Derivation of the closed-form solution to minimizing the least-squares cost function | Our loss function is $RSS(\beta) = (y - X\beta)^T(y -X\beta)$. Expanding this and using the fact that $(u - v)^T = u^T - v^T$, we have
$$
RSS(\beta) = y^Ty - y^TX\beta - \beta^TX^Ty + \beta^T X^T X \beta.
$$
Noting that $y^TX\beta$ is a scalar, and for any scalar $r \in \mathbb R$ we have $r = r^T$ we have $y^T X \beta = (y^T X \beta)^T = \beta^T X^T y$ so all together
$$
RSS(\beta) = y^T y - 2 \beta^T X^T y + \beta^T X^T X \beta.
$$
Now we'll differentiate with respect to $\beta$:
$$
\frac{\partial RSS}{\partial \beta} = \frac{\partial}{\partial \beta} y^T y - 2 \frac{\partial}{\partial \beta} \beta^T X^T y + \frac{\partial}{\partial \beta} \beta^T X^T X \beta
$$
$$
= 0 - 2 X^T y + 2 X^T X \beta.
$$
If you haven't seen derivatives with respect to a vector before, the Matrix Cookbook is a popular reference.
We want to find the minimum of $RSS$ so we'll set the derivative equal to $0$. This leads us to
$$
\frac{\partial RSS}{\partial \beta} \stackrel{\text{set}}= 0 \implies -2X^T y + 2X^T X \beta = 0
$$
$$
\implies X^T y - X^T X \beta = 0 \implies X^T(y - X \beta) = 0.
$$
Now we do use the assumption that $X$ is full column rank, which means we know $X^T X$ is positive definite and therefore invertible. This means
$$
X^Ty = X^T X \beta \implies (X^T X)^{-1}X^T y = \hat \beta
$$
where we achieved this by left-multiplying by $(X^T X)^{-1}$. | Derivation of the closed-form solution to minimizing the least-squares cost function | Our loss function is $RSS(\beta) = (y - X\beta)^T(y -X\beta)$. Expanding this and using the fact that $(u - v)^T = u^T - v^T$, we have
$$
RSS(\beta) = y^Ty - y^TX\beta - \beta^TX^Ty + \beta^T X^T X \ | Derivation of the closed-form solution to minimizing the least-squares cost function
Our loss function is $RSS(\beta) = (y - X\beta)^T(y -X\beta)$. Expanding this and using the fact that $(u - v)^T = u^T - v^T$, we have
$$
RSS(\beta) = y^Ty - y^TX\beta - \beta^TX^Ty + \beta^T X^T X \beta.
$$
Noting that $y^TX\beta$ is a scalar, and for any scalar $r \in \mathbb R$ we have $r = r^T$ we have $y^T X \beta = (y^T X \beta)^T = \beta^T X^T y$ so all together
$$
RSS(\beta) = y^T y - 2 \beta^T X^T y + \beta^T X^T X \beta.
$$
Now we'll differentiate with respect to $\beta$:
$$
\frac{\partial RSS}{\partial \beta} = \frac{\partial}{\partial \beta} y^T y - 2 \frac{\partial}{\partial \beta} \beta^T X^T y + \frac{\partial}{\partial \beta} \beta^T X^T X \beta
$$
$$
= 0 - 2 X^T y + 2 X^T X \beta.
$$
If you haven't seen derivatives with respect to a vector before, the Matrix Cookbook is a popular reference.
We want to find the minimum of $RSS$ so we'll set the derivative equal to $0$. This leads us to
$$
\frac{\partial RSS}{\partial \beta} \stackrel{\text{set}}= 0 \implies -2X^T y + 2X^T X \beta = 0
$$
$$
\implies X^T y - X^T X \beta = 0 \implies X^T(y - X \beta) = 0.
$$
Now we do use the assumption that $X$ is full column rank, which means we know $X^T X$ is positive definite and therefore invertible. This means
$$
X^Ty = X^T X \beta \implies (X^T X)^{-1}X^T y = \hat \beta
$$
where we achieved this by left-multiplying by $(X^T X)^{-1}$. | Derivation of the closed-form solution to minimizing the least-squares cost function
Our loss function is $RSS(\beta) = (y - X\beta)^T(y -X\beta)$. Expanding this and using the fact that $(u - v)^T = u^T - v^T$, we have
$$
RSS(\beta) = y^Ty - y^TX\beta - \beta^TX^Ty + \beta^T X^T X \ |
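A small base R check of the closed form derived above: the normal-equations solution $(X^TX)^{-1}X^Ty$ matches the coefficients returned by lm() (a random full-column-rank design is simulated here, so $X^TX$ is invertible):
set.seed(6)
n <- 100; k <- 3
X <- cbind(1, matrix(rnorm(n * (k - 1)), n))   # design matrix with an intercept column
beta <- c(2, -1, 0.5)
y <- X %*% beta + rnorm(n)
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y   # (X'X)^{-1} X'y
cbind(closed_form = drop(beta_hat), lm_fit = coef(lm(y ~ X - 1)))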
52,285 | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$? | Let $\mu = E(X)$. Then
$$Var(X) = E\left((X - \mu)(X - \mu)^T\right) = E\left(XX^T - \mu X^T - X \mu^T +
\mu \mu^T\right) \\ = E(XX^T) - \mu\mu^T$$
which generalizes the well-known scalar equality $Var(Z) = E(Z^2) - E(Z)^2$.
The natural estimator of $\Sigma := Var(X)$ is $\hat \Sigma = \frac 1{n-1}XX^T - \hat \mu \hat \mu^T$.
In many situations we can take $\mu = 0$ without any loss of generality. One common example is PCA. If we center our columns then we find that $\hat \mu = 0$ so our estimate of the variance is simply $\frac 1{n-1}XX^T$. The univariate analogue of this is the familiar $s^2 = \frac 1{n-1} \sum_i x_i^2$ when $\bar x = 0$.
As @Christoph Hanck points out in the comments, you need to distinguish between estimates and parameters here. There is only one definition of $\Sigma$, namely $E((X - \mu)(X - \mu)^T)$. So $\frac 1{n-1}XX^T$ is absolutely not the correct definition of the population covariance, but if $\mu=0$ it is an unbiased estimate for it, i.e. $Var(X) = E(\frac 1{n-1}XX^T)$. | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$? | Let $\mu = E(X)$. Then
$$Var(X) = E\left((X - \mu)(X - \mu)^T\right) = E\left(XX^T - \mu X^T - X \mu^T +
\mu \mu^T\right) \\ = E(XX^T) - \mu\mu^T$$
which generalizes the well-known scalar equality $ | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$?
Let $\mu = E(X)$. Then
$$Var(X) = E\left((X - \mu)(X - \mu)^T\right) = E\left(XX^T - \mu X^T - X \mu^T +
\mu \mu^T\right) \\ = E(XX^T) - \mu\mu^T$$
which generalizes the well-known scalar equality $Var(Z) = E(Z^2) - E(Z)^2$.
The natural estimator of $\Sigma := Var(X)$ is $\hat \Sigma = \frac 1{n-1}XX^T - \hat \mu \hat \mu^T$.
In many situations we can take $\mu = 0$ without any loss of generality. One common example is PCA. If we center our columns then we find that $\hat \mu = 0$ so our estimate of the variance is simply $\frac 1{n-1}XX^T$. The univariate analogue of this is the familiar $s^2 = \frac 1{n-1} \sum_i x_i^2$ when $\bar x = 0$.
As @Christoph Hanck points out in the comments, you need to distinguish between estimates and parameters here. There is only one definition of $\Sigma$, namely $E((X - \mu)(X - \mu)^T)$. So $\frac 1{n-1}XX^T$ is absolutely not the correct definition of the population covariance, but if $\mu=0$ it is an unbiased estimate for it, i.e. $Var(X) = E(\frac 1{n-1}XX^T)$. | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$?
Let $\mu = E(X)$. Then
$$Var(X) = E\left((X - \mu)(X - \mu)^T\right) = E\left(XX^T - \mu X^T - X \mu^T +
\mu \mu^T\right) \\ = E(XX^T) - \mu\mu^T$$
which generalizes the well-known scalar equality $ |
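A quick numerical check of the result above in base R, with the data matrix arranged as n rows by p columns (the layout cov() expects, i.e. the transpose of the $p \times n$ convention in the question): after centering the columns so that $\hat \mu = 0$, the cross-product of the centered matrix divided by $n-1$ reproduces cov().
set.seed(7)
X  <- matrix(rnorm(50 * 3), nrow = 50)         # n = 50 observations, p = 3 variables
Xc <- scale(X, center = TRUE, scale = FALSE)   # subtract column means, so the mean term vanishes
all.equal(crossprod(Xc) / (nrow(X) - 1), cov(X), check.attributes = FALSE)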
52,286 | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$? | COMMENT:
@Chacone's answer is great as is, but I think there is one step that is left unexplained, and it is clearer with expectation notation. To reflect @GeoMatt22's comment in the following proof $X$ is a $p\times 1$ random vector:
$$
\begin{align}
\text{Cov}(X)&=\mathbb E\left[\,\left(X- \mathbb E[X] \right) \, \left(X- \mathbb E[X] \right)^\top \right]\\[2ex]
&= \mathbb E\left[\,XX^\top - X\,\mathbb E[X]^\top - \mathbb E[X] \,X^\top + \mathbb E[X] \mathbb E\, [X]^\top \right]\\[2ex]
&= \mathbb E\left[\,XX^\top\right] - \mathbb E [X]\,\mathbb E[X]^\top - \mathbb E[X] \,\mathbb E [X]^\top + \mathbb E[X] \mathbb E\, [X]^\top \\[2ex]
&= \mathbb E[XX^\top] \; -\; \mathbb E[X]\, \mathbb E[X]^\top
\end{align}
$$
As for the transition to the sample estimate of the population covariance using this alternate (raw moment) formula, the denominators will need adjusting. In what follows $X$ is a $p \times n$ data matrix (following @GeoMatt22's comment):
$$\begin{align}
\sigma^2(X)&=\frac{XX^\top}{n-1} \; -\; \frac{n}{n-1}\; \begin{bmatrix}\bar X_1\\ \bar X_2\\ \vdots \\ \bar X_p \end{bmatrix}\,
\; \begin{bmatrix}\bar X_1 & \bar X_2 & \cdots & \bar X_p \end{bmatrix}
\end{align}
$$
because it is necessary to multiply by $\frac{n}{n-1}$ to get rid of the biased $n$ denominator, and replace it with $n-1$. Evidently anything after the minus sign disappears if the mean is zero.
Here is an illustrative example in R:
> set.seed(0)
> # Sampling three random vectors: X1, X2 and X3:
> X1 = 1:5
> X2 = rnorm(5, 5, 1)
> X3 = runif(5)
> # Forming a matrix with each row representing a sample from a random vector:
> (X = (rbind(X1,X2,X3)))
[,1] [,2] [,3] [,4] [,5]
X1 1.00000000 2.0000000 3.0000000 4.0000000 5.0000000
X2 6.26295428 4.6737666 6.3297993 6.2724293 5.4146414
X3 0.06178627 0.2059746 0.1765568 0.6870228 0.3841037
> # Taking the estimate of the expectation for each random vector, bar Xi:
> mu = rowMeans(X)
> # Calculating manually the variance with the alternate formula
> (man.cov = ((X %*% t(X)) / (ncol(X) - 1)) - (ncol(X)/(ncol(X) - 1)) * (mu %*% t(mu)))
X1 X2 X3
X1 2.50000000 -0.02449075 0.28142079
X2 -0.02449075 0.53366886 0.02019664
X3 0.28142079 0.02019664 0.05940930
> # Comparing to the built-in formula:
> cov(t(X))
X1 X2 X3
X1 2.50000000 -0.02449075 0.28142079
X2 -0.02449075 0.53366886 0.02019664
X3 0.28142079 0.02019664 0.05940930
> # are the same...
> all.equal(man.cov, cov(t(X)))
[1] TRUE | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$? | COMMENT:
@Chacone's answer is great as is, but I think there is one step that is left unexplained, and it is clearer with expectation notation. To reflect @GeoMatt22's comment in the following proof $ | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$?
COMMENT:
@Chacone's answer is great as is, but I think there is one step that is left unexplained, and it is clearer with expectation notation. To reflect @GeoMatt22's comment in the following proof $X$ is a $p\times 1$ random vector:
$$
\begin{align}
\text{Cov}(X)&=\mathbb E\left[\,\left(X- \mathbb E[X] \right) \, \left(X- \mathbb E[X] \right)^\top \right]\\[2ex]
&= \mathbb E\left[\,XX^\top - X\,\mathbb E[X]^\top - \mathbb E[X] \,X^\top + \mathbb E[X] \mathbb E\, [X]^\top \right]\\[2ex]
&= \mathbb E\left[\,XX^\top\right] - \mathbb E [X]\,\mathbb E[X]^\top - \mathbb E[X] \,\mathbb E [X]^\top + \mathbb E[X] \mathbb E\, [X]^\top \\[2ex]
&= \mathbb E[XX^\top] \; -\; \mathbb E[X]\, \mathbb E[X]^\top
\end{align}
$$
As for the transition to the sample estimate of the population covariance using this alternate (raw moment) formula, the denominators will need adjusting. In what follows $X$ is a $p \times n$ data matrix (following @GeoMatt22's comment):
$$\begin{align}
\sigma^2(X)&=\frac{XX^\top}{n-1} \; -\; \frac{n}{n-1}\; \begin{bmatrix}\bar X_1\\ \bar X_2\\ \vdots \\ \bar X_p \end{bmatrix}\,
\; \begin{bmatrix}\bar X_1 & \bar X_2 & \cdots & \bar X_p \end{bmatrix}
\end{align}
$$
because it is necessary to multiply by $\frac{n}{n-1}$ to get rid of the biased $n$ denominator, and replace it with $n-1$. Evidently anything after the minus sign disappears if the mean is zero.
Here is an illustrative example in R:
> set.seed(0)
> # Sampling three random vectors: X1, X2 and X3:
> X1 = 1:5
> X2 = rnorm(5, 5, 1)
> X3 = runif(5)
> # Forming a matrix with each row representing a sample from a random vector:
> (X = (rbind(X1,X2,X3)))
[,1] [,2] [,3] [,4] [,5]
X1 1.00000000 2.0000000 3.0000000 4.0000000 5.0000000
X2 6.26295428 4.6737666 6.3297993 6.2724293 5.4146414
X3 0.06178627 0.2059746 0.1765568 0.6870228 0.3841037
> # Taking the estimate of the expectation for each random vector, bar Xi:
> mu = rowMeans(X)
> # Calculating manually the variance with the alternate formula
> (man.cov = ((X %*% t(X)) / (ncol(X) - 1)) - (ncol(X)/(ncol(X) - 1)) * (mu %*% t(mu)))
X1 X2 X3
X1 2.50000000 -0.02449075 0.28142079
X2 -0.02449075 0.53366886 0.02019664
X3 0.28142079 0.02019664 0.05940930
> # Comparing to the built-in formula:
> cov(t(X))
X1 X2 X3
X1 2.50000000 -0.02449075 0.28142079
X2 -0.02449075 0.53366886 0.02019664
X3 0.28142079 0.02019664 0.05940930
> # are the same...
> all.equal(man.cov, cov(t(X)))
[1] TRUE | Why can the covariance matrix be computed as $\frac{X X^T}{n-1}$?
COMMENT:
@Chacone's answer is great as is, but I think there is one step that is left unexplained, and it is clearer with expectation notation. To reflect @GeoMatt22's comment in the following proof $ |
52,287 | The p value for the random forest regression model | When in doubt, simulate or permute.
In this specific case:
Randomly permute your dependent variable.
Fit a random forest.
Note the % variance explained.
Do steps 1-3 multiple times, say 1,000-10,000 times. You now have an empirical distribution of % variance explained through a random forest, under the null hypothesis of no relationship between your independent and dependent variable.
Insert the actual % variance explained in your original model into this distribution, and note which proportion of permutation-based "null" % variance explained values exceeds this true value. This proportion is your p value.
If you did the same thing in a standard linear regression model, you would (asymptotically) get the p value for the classical F test for variance explained.
As others write, your reviewer does not sound overly statistically savvy, but the approach I'm outlining above makes sense and should satisfy him. It's better than getting into an anonymous argument over the statistical competence of a reviewer, anyway. | The p value for the random forest regression model | When in doubt, simulate or permute.
In this specific case:
Randomly permute your dependent variable.
Fit a random forest.
Note the % variance explained.
Do steps 1-3 multiple times, say 1,000-10,000 | The p value for the random forest regression model
When in doubt, simulate or permute.
In this specific case:
Randomly permute your dependent variable.
Fit a random forest.
Note the % variance explained.
Do steps 1-3 multiple times, say 1,000-10,000 times. You now have an empirical distribution of % variance explained through a random forest, under the null hypothesis of no relationship between your independent and dependent variable.
Insert the actual % variance explained in your original model into this distribution, and note which proportion of permutation-based "null" % variance explained values exceeds this true value. This proportion is your p value.
If you did the same thing in a standard linear regression model, you would (asymptotically) get the p value for the classical F test for variance explained.
As others write, your reviewer does not sound overly statistically savvy, but the approach I'm outlining above makes sense and should satisfy him. It's better than getting into an anonymous argument over the statistical competence of a reviewer, anyway. | The p value for the random forest regression model
When in doubt, simulate or permute.
In this specific case:
Randomly permute your dependent variable.
Fit a random forest.
Note the % variance explained.
Do steps 1-3 multiple times, say 1,000-10,000 |
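A sketch of the permutation procedure above in R, assuming the randomForest package and a stand-in data set; the % variance explained is computed from the out-of-bag predictions, and only 200 permutations are used here to keep the example fast (the answer suggests 1,000-10,000):
library(randomForest)   # assumed to be installed
set.seed(8)
dat <- na.omit(airquality)                 # stand-in regression data
y   <- dat$Ozone
X   <- dat[, c("Solar.R", "Wind", "Temp")]
var_explained <- function(x, y) {
  fit <- randomForest(x, y)
  1 - mean((y - fit$predicted)^2) / var(y) # fit$predicted holds the out-of-bag predictions
}
observed <- var_explained(X, y)
null <- replicate(200, var_explained(X, sample(y)))   # refit after permuting the outcome
p_value <- mean(null >= observed)
c(observed = observed, p_value = p_value)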
52,288 | what is suitable probability distribution for count data | Two common distributions for count data are the poisson or the negative-binomial one. If we fit these to your data, the NB works a lot better:
library(MASS) # for fitdistr()
xx <- 0:20
counts <- c(49, 36, 42, 26, 22, 22, 8, 12, 2, 4, 7, 0, 1, 1, 1, 1, 2, 1, 0, 1, 0)
obs <- rep(xx,counts)
poisson.density <- length(obs)*dpois(xx,mean(obs))
nb <- fitdistr(obs,"negative binomial")
nb.density <- length(obs)*dnbinom(xx,size=nb$estimate["size"],mu=nb$estimate["mu"])
foo <- barplot(counts,names.arg=xx,ylim=range(c(counts,poisson.density)))
lines(foo[,1],poisson.density,lwd=2)
lines(foo[,1],nb.density,lwd=2,col="red")
legend("topright",lwd=2,col=c("black","red"),legend=c("Poisson","Negative Binomial"))
However, there seems to be a peak at zero. Might there be a separate process that leads to zeros? If so, you might want to look at zero-inflation, e.g., the zero-inflated Poisson (ZIP).
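One way to check this (a sketch, not part of the original answer; it assumes the pscl package, though other packages offer similar models) is to fit an intercept-only zero-inflated Poisson to the obs vector built above and compare it with the NB fit, e.g. via AIC:
library(pscl)                        # provides zeroinfl(); an assumption, other packages work too
zip <- zeroinfl(obs ~ 1, dist = "poisson")
summary(zip)
AIC(nb); AIC(zip)                    # lower is better; nb is the fitdistr() fit from above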
You say that you have already removed weekends. You might still have intra-weekly seasonality (e.g., on Fridays), so you might want to consider modeling these in a regression framework, e.g., using Poisson regression or negative binomial regression. Then again, this might be overkill.
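If you go the regression route, a sketch along these lines (assuming a hypothetical data frame daily with one row per day and columns count and weekday, neither of which appears in the question):
library(MASS)                                    # for glm.nb()
fit_pois <- glm(count ~ weekday, family = poisson, data = daily)
fit_nb   <- glm.nb(count ~ weekday, data = daily)
AIC(fit_pois, fit_nb)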
Finally, if all you are interested in is simulation, you could simply resample from the data you already have. Or fit a semiparametric kernel density to your observed frequencies and sample from that. | what is suitable probability distribution for count data | Two common distributions for count data are the poisson or the negative-binomial one. If we fit these to your data, the NB works a lot better:
library(MASS) # for fitdistr()
xx <- 0:20
counts <- c(49 | what is suitable probability distribution for count data
Two common distributions for count data are the poisson or the negative-binomial one. If we fit these to your data, the NB works a lot better:
library(MASS) # for fitdistr()
xx <- 0:20
counts <- c(49, 36, 42, 26, 22, 22, 8, 12, 2, 4, 7, 0, 1, 1, 1, 1, 2, 1, 0, 1, 0)
obs <- rep(xx,counts)
poisson.density <- length(obs)*dpois(xx,mean(obs))
nb <- fitdistr(obs,"negative binomial")
nb.density <- length(obs)*dnbinom(xx,size=nb$estimate["size"],mu=nb$estimate["mu"])
foo <- barplot(counts,names.arg=xx,ylim=range(c(counts,poisson.density)))
lines(foo[,1],poisson.density,lwd=2)
lines(foo[,1],nb.density,lwd=2,col="red")
legend("topright",lwd=2,col=c("black","red"),legend=c("Poisson","Negative Binomial"))
However, there seems to be a peak at zero. Might there be a separate process that leads to zeros? If so, you might want to look at zero-inflation, e.g., the zero-inflated Poisson (ZIP).
You say that you have already removed weekends. You might still have intra-weekly seasonality (e.g., on Fridays), so you might want to consider modeling these in a regression framework, e.g., using Poisson regression or negative binomial regression. Then again, this might be overkill.
Finally, if all you are interested in is simulation, you could simply resample from the data you already have. Or fit a semiparametric kernel density to your observed frequencies and sample from that. | what is suitable probability distribution for count data
Two common distributions for count data are the poisson or the negative-binomial one. If we fit these to your data, the NB works a lot better:
library(MASS) # for fitdistr()
xx <- 0:20
counts <- c(49 |
52,289 | Principle of Analogy and Method of Moments | The least squares estimator in the classical linear regression model is a Method of Moments estimator.
The model is
$$\mathbf y = \mathbf X\beta + \mathbf u$$
Instead of minimizing the sum of squared residuals, we can obtain the OLS estimator by noting that under the assumptions of the specific model, it holds that ("orthogonality condition")
$$E(\mathbf X' \mathbf u)= \mathbf 0$$
$$\implies E(\mathbf X'( \mathbf y - \mathbf X\beta))=\mathbf 0 \implies E(\mathbf X'\mathbf y)=E(\mathbf X'\mathbf X)\beta$$
$$\implies \beta = \left[E(\mathbf X'\mathbf X)\right]^{-1}E(\mathbf X'\mathbf y)$$
So if we knew the true expected values (and our assumptions were correct), we could calculate the true value of the unknown coefficient.
With non-experimental data, we don't. But we know that if our sample is ergodic-stationary (and i.i.d. samples are), then expected values are consistently estimated by their sample analogues, the corresponding sample means. Hence we have an "acceptable" estimator in
$$\hat \beta = \left[((1/n)\mathbf X'\mathbf X)\right]^{-1}((1/n)\mathbf X'\mathbf y) = (\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf y $$
which is the same estimator we will obtain if we minimize the sum of squared residuals.
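A quick numerical check in R (a sketch with simulated data, not part of the original answer) that the sample-analogue estimator coincides with the least-squares fit:
set.seed(123)
n <- 200
X <- cbind(1, rnorm(n), rnorm(n))                  # design matrix with an intercept column
y <- X %*% c(1, 2, -0.5) + rnorm(n)
beta_mom <- solve(crossprod(X), crossprod(X, y))   # [(1/n) X'X]^{-1} (1/n) X'y; the 1/n factors cancel
cbind(beta_mom, coef(lm(y ~ X[, 2] + X[, 3])))     # the two columns agree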
If you reverse the calculations, noting that the residuals are a function of $\hat \beta$, i.e. $\mathbf {\hat u} = \mathbf {\hat u}(\hat \beta)$, you will find that $\mathbf X' \mathbf {\hat u} (\hat \beta) = \mathbf 0$. Divide by $n$ for this to look like a sample mean.
So "we choose those estimates that make the sample obey what we assumed the population obeys". And we do that because we accept that the sample is representative of the population, so it should "behave like" the population (as we assumed the latter to behave).
As for GMM, each orthogonality condition is an equation. If you have $m$ equations and $k<m$ unknown coefficients, then the system of equations is "over-identified" and no exact solution exists; there is nothing more to it.
Hansen contrasted this with the original use of "Method of Moments": if a distribution is characterized by, say, three unknown parameters, the MoM tactic is to use the sample to estimate the first three moments of the distribution (which appear in equations involving these parameters) and so obtain an exactly identified system of equations. See how this works in this answer, as an example.
The model is
$$\mathbf y = \mathbf X\beta + \mathbf u$$
Instead of minimizing the sum of squared r | Principle of Analogy and Method of Moments
Least squares estimator in the classical linear regression model is a Method of Moments estimator.
The model is
$$\mathbf y = \mathbf X\beta + \mathbf u$$
Instead of minimizing the sum of squared residuals, we can obtain the OLS estimator by noting that under the assumptions of the specific model, it holds that ("orthogonality condition")
$$E(\mathbf X' \mathbf u)= \mathbf 0$$
$$\implies E(\mathbf X'( \mathbf y - \mathbf X\beta))=\mathbf 0 \implies E(\mathbf X'\mathbf y)=E(\mathbf X'\mathbf X)\beta$$
$$\implies \beta = \left[E(\mathbf X'\mathbf X)\right]^{-1}E(\mathbf X'\mathbf y)$$
So if we knew the true expected values (and our assumptions were correct), we could calculate the true value of the unknown coefficient.
With non-experimental data, we don't. But we know that if our sample is ergodic-stationary (and i.i.d. samples are), then expected values are consistently estimated by their sample analogues, the corresponding sample means. Hence we have an "acceptable" estimator in
$$\hat \beta = \left[((1/n)\mathbf X'\mathbf X)\right]^{-1}((1/n)\mathbf X'\mathbf y) = (\mathbf X'\mathbf X)^{-1}\mathbf X'\mathbf y $$
which is the same estimator we will obtain if we minimize the sum of squared residuals.
If you reverse the calculations, and noting that the residuals are a function of $\hat \beta$, $\mathbf {\hat u} = \mathbf {\hat u(\hat \beta)}$ you will find that $\mathbf X' \mathbf {\hat u} (\hat \beta) = \mathbf 0$. Divide by $n$ for this to look like a sample mean.
So "we choose those estimates that make the sample obey what we assumed the population obeys". And we do that because we accept that the sample is representative of the population, so it should "behave like" the population (as we assumed the latter to behave).
As for $GMM$, each orthogonality condition is an equation. If you have $m$ equations and $k<m$ unknown coefficients, then the system of equations is "over-identified" and no exact solution exists, there is nothing more to it.
Hansen contrasted this with the original use of "Method of Moments": if a distribution is characterized by, say, three unknown parameters, the MoM tactic is to estimate using the sample the first three moments of the distribution (which appear in equations involving these parameters), and obtain an exactly-identified system of equations. See how this works in this answer, as an example. | Principle of Analogy and Method of Moments
Least squares estimator in the classical linear regression model is a Method of Moments estimator.
The model is
$$\mathbf y = \mathbf X\beta + \mathbf u$$
Instead of minimizing the sum of squared r |
52,290 | Is AIC a measure of goodness of fit? [duplicate] | Just to expand a little on Hossein's answer: AIC is a measure of relative goodness of fit. If you take a model and calculate its AIC then you might get a value of, say, 2000. That number on its own is meaningless, and tells you nothing about how well your model fits. However, say you then fit another model which contains one more explanatory variable. When you calculate the AIC again, you see that it has dropped to 1500. That is now evidence that model 2 is a better fit to the data than model 1.
AIC is useful for comparing models, but it does not tell you anything about the goodness of fit of a single, isolated model. | Is AIC a measure of goodness of fit? [duplicate] | Just to expand a little on Hossein's answer: AIC is a measure of relative goodness of fit. If you take a model and calculate its AIC then you might get a value of, say, 2000. That number on its own is | Is AIC a measure of goodness of fit? [duplicate]
Just to expand a little on Hossein's answer: AIC is a measure of relative goodness of fit. If you take a model and calculate its AIC then you might get a value of, say, 2000. That number on its own is meaningless, and tells you nothing about how well your model fits. However, say you then fit another model which contains one more explanatory variable. When you calculate the AIC again, you see that it has dropped to 1500. That is now evidence that model 2 is a better fit to the data than model 1.
AIC is useful for comparing models, but it does not tell you anything about the goodness of fit of a single, isolated model. | Is AIC a measure of goodness of fit? [duplicate]
Just to expand a little on Hossein's answer: AIC is a measure of relative goodness of fit. If you take a model and calculate its AIC then you might get a value of, say, 2000. That number on its own is |
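A tiny R illustration of this kind of relative comparison (a sketch only; any two nested models would do):
fit1 <- lm(mpg ~ wt, data = mtcars)
fit2 <- lm(mpg ~ wt + hp, data = mtcars)
AIC(fit1, fit2)   # only the difference matters; the model with the lower AIC is preferred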
52,291 | Is AIC a measure of goodness of fit? [duplicate] | AIC, like many other model quality measures, has two parts: goodness of fit and model simplicity. If you only measure the quality of a model by its goodness of fit, it favors overfitted models. On the other hand, if you only measure the model quality by its simplicity, it favors underfitted models. Therefore, AIC considers both criteria in evaluating a model.
AIC like many other model quality measures has two parts: goodness of fit and model simplicity. If you only measure the quality of a model by its goodness of fit, it favors overfitted models. On the other hand, if you only measure the model quality by its simplicity, it favors underfitted models. Therefore, AIC considers both criteria in evaluating a model. | Is AIC a measure of goodness of fit? [duplicate]
AIC like many other model quality measures has two parts: goodness of fit and model simplicity. If you only measure the quality of a model by its goodness of fit, it favors overfitted models. On the o |
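The two parts are visible in the definition AIC = -2 log-likelihood + 2k; a quick R check (sketch only):
fit <- lm(mpg ~ wt + hp, data = mtcars)
ll  <- logLik(fit)
-2 * as.numeric(ll) + 2 * attr(ll, "df")   # fit term plus complexity penalty
AIC(fit)                                   # same value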
52,292 | Need a smoother fit curve | This seems to be a case of dose-response modelling. There is an excellent paper by Ritz et al. (2015) that describes how these analyses can be performed using R. An introduction is provided here.
Using your data and the R package drc (which is the package described in the paper by Ritz et al.), I fitted a 4-parameter log-logistic function to the data.
# Load package
library(drc)
# The data
XY <- structure(list(Values = c(91.8, 95.3, 99.8, 123.3, 202.9, 619.8,
1214.2, 1519.1, 1509.2, 1523.3, 1595.2, 1625.1), Concn = c(1000,
300, 100, 30, 10, 3, 1, 0.3, 0.1, 0.03, 0.01, 0)), .Names = c("Values",
"Concn"), class = "data.frame", row.names = c(NA, -12L))
# Fit a four-parameter log-logistic function to the data
fit <- drm(Values~Concn, data = XY, fct = LL.4())
# Plot the fit
plot(fit, type = "confidence", broken = TRUE, col = "grey50", lwd = 2)
plot(fit, type = "obs", broken = TRUE, pch = 1, lwd = 2, col = "blue", add = TRUE)
The fit looks pretty good to me. | Need a smoother fit curve | This seems to be a case of dose-response modelling. There is an excellent paper by Ritz et al. (2015) that describes how these analyses can be performed using R. An introduction is provided here.
Usin | Need a smoother fit curve
This seems to be a case of dose-response modelling. There is an excellent paper by Ritz et al. (2015) that describes how these analyses can be performed using R. An introduction is provided here.
Using your data and the R package drc (which is the package described in the paper by Ritz et al.), I fitted a 4-parameter log-logistic function to the data.
# Load package
library(drc)
# The data
XY <- structure(list(Values = c(91.8, 95.3, 99.8, 123.3, 202.9, 619.8,
1214.2, 1519.1, 1509.2, 1523.3, 1595.2, 1625.1), Concn = c(1000,
300, 100, 30, 10, 3, 1, 0.3, 0.1, 0.03, 0.01, 0)), .Names = c("Values",
"Concn"), class = "data.frame", row.names = c(NA, -12L))
# Fit a four-parameter log-logistic function to the data
fit <- drm(Values~Concn, data = XY, fct = LL.4())
# Plot the fit
plot(fit, type = "confidence", broken = TRUE, col = "grey50", lwd = 2)
plot(fit, type = "obs", broken = TRUE, pch = 1, lwd = 2, col = "blue", add = TRUE)
The fit looks pretty good to me. | Need a smoother fit curve
This seems to be a case of dose-response modelling. There is an excellent paper by Ritz et al. (2015) that describes how these analyses can be performed using R. An introduction is provided here.
Usin |
52,293 | Need a smoother fit curve | I also think the original plot is smooth to some extent already. Another approach is to have ggplot do the smoothing for you. Not sure if this is what you want.
library(ggplot2)
# Note: the Concn == 0 row becomes -Inf under log() and is dropped with a warning
ggplot(XY, aes(x=log(Concn), y = Values)) + geom_smooth(method="loess") | Need a smoother fit curve | I also think the original plot is smooth to some extent already. Another approach is to have ggplot do the smoothing for you. Not sure if this is what you want.
ggplot(XY, aes(x=log(Concn), y = Values | Need a smoother fit curve
I also think the original plot is smooth to some extent already. Another approach is to have ggplot do the smoothing for you. Not sure if this is what you want.
ggplot(XY, aes(x=log(Concn), y = Values)) + geom_smooth(method="loess") | Need a smoother fit curve
I also think the original plot is smooth to some extent already. Another approach is to have ggplot do the smoothing for you. Not sure if this is what you want.
ggplot(XY, aes(x=log(Concn), y = Values |
52,294 | Need a smoother fit curve | You've plotted your curve as a linear interpolation between eleven data points. Even if the true underlying curve is smooth, drawing it by taking eleven sample points and interpolating linearly is going to look pointy.
You need more sample points when drawing the curve. Create a sequence of x-values to use as sample points:
x <- seq(from = min(XY$Concn), to = max(XY$Concn), length.out = 250)
My go to is to create 250 sample points. Then feed these into your predict function and you will get a (within human perception) smooth rendering of the curve. | Need a smoother fit curve | You've plotted your curve as a linear interpolation between eleven data points. Even if the true underlying curve smooth, drawing it by taking eleven sample points and interpolating linearly is going | Need a smoother fit curve
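To make the predict step concrete, a sketch (it assumes the fitted model object from the question is called fit, which does not appear in this answer):
x_grid <- seq(from = min(XY$Concn), to = max(XY$Concn), length.out = 250)
lines(x_grid, predict(fit, newdata = data.frame(Concn = x_grid)), lwd = 2)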
You've plotted your curve as a linear interpolation between eleven data points. Even if the true underlying curve is smooth, drawing it by taking eleven sample points and interpolating linearly is going to look pointy.
You need more sample points when drawing the curve. Create a sequence of x-values to use as sample points:
x <- seq(from = min(XY$Concn), to = max(XY$Concn), length.out = 250)
My go to is to create 250 sample points. Then feed these into your predict function and you will get a (within human perception) smooth rendering of the curve. | Need a smoother fit curve
You've plotted your curve as a linear interpolation between eleven data points. Even if the true underlying curve smooth, drawing it by taking eleven sample points and interpolating linearly is going |
52,295 | Need a smoother fit curve | Your function looks pretty smooth to me, and what I think you want is more flexibility.
So here's a spline with 8 degrees of freedom:
library(splines)
my_glm <- glm(Values ~ ns(Concn, df = 8), data = XY)
plot(XY$Values ~ XY$Concn , data = XY, col = 4,
main = "XY Std curve", log = "x")
lines(XY$Concn, predict(my_glm))
[Can't get the picture to upload at the moment]
Now that's flexible and goes right through all the points. But then I see you've got this nice theoretical model that must have come from your subject matter:
Values ~ (ymax* Concn / (ec50 + Concn)) + Ns*XY$Concn + ymin
and I'm kind of jealous. Is there anything else that might explain bias in this theoretical model (if the discrepancy we're seeing is indeed bias)? Even if you can't explain away the bias, it sure is simple and does a pretty nice job explaining the variation. In fact, the R-squared is above 99.9%:
cor(XY$Values, predict(my_glm))^2 | Need a smoother fit curve | Your function looks pretty smooth to me, and what I think you want is more flexibility.
So here's a spline with 8 degrees of freedom:
library(splines)
my_glm <- glm(Values ~ ns(Concn, df = 8), data = | Need a smoother fit curve
Your function looks pretty smooth to me, and what I think you want is more flexibility.
So here's a spline with 8 degrees of freedom:
library(splines)
my_glm <- glm(Values ~ ns(Concn, df = 8), data = XY)
plot(XY$Values ~ XY$Concn , data = XY, col = 4,
main = "XY Std curve", log = "x")
lines(XY$Concn, predict(my_glm))
[Can't get the picture to upload at the moment]
Now that's flexible and goes right through all the points. But then I see you've got this nice theoretical model that must have come from your subject matter:
Values ~ (ymax* Concn / (ec50 + Concn)) + Ns*XY$Concn + ymin
and I'm kind of jealous. Is there anything else that might explain bias in this theoretical model (if the discrepancy we're seeing is indeed bias)? Even if you can't explain away the bias, it sure is simple and does a pretty nice job explaining the variation. In fact, the R-squared is above 99.9%:
cor(XY$Values, predict(my_glm))^2 | Need a smoother fit curve
Your function looks pretty smooth to me, and what I think you want is more flexibility.
So here's a spline with 8 degrees of freedom:
library(splines)
my_glm <- glm(Values ~ ns(Concn, df = 8), data = |
52,296 | Bayesian inference: numerically sampling from the posterior predictive | If you can simulate values from $P(x_{\text{new}}|\theta)$, you can simply take your $N$ samples from the posterior and, for each posterior sample $\theta_i$, generate $x_{\text{new},i}$ from this model to get a sample $\{x_{\text{new},i}\}_{i=1}^N$ from the posterior predictive.
This amounts to obtaining a collection $\{x_{\text{new},i}, \theta_i\}$ and discarding the value $\theta_i$, thus marginalizing over the vector of model parameters. | Bayesian inference: numerically sampling from the posterior predictive | If you can simulate values from $P(x_{\text{new}}|\theta)$, you can simply use your $N$ samples from posterior predictive and generate $x_{\text{new},i}$ for each posterior sample from this model to g | Bayesian inference: numerically sampling from the posterior predictive
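As a minimal sketch (an illustrative normal model with unknown mean and known unit variance; mu_post is assumed to hold the posterior draws — none of this is in the original answer):
x_new <- rnorm(length(mu_post), mean = mu_post, sd = 1)   # one draw of x_new per posterior draw of the mean
hist(x_new)                                               # empirical posterior predictive distribution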
If you can simulate values from $P(x_{\text{new}}|\theta)$, you can simply take your $N$ samples from the posterior and, for each posterior sample $\theta_i$, generate $x_{\text{new},i}$ from this model to get a sample $\{x_{\text{new},i}\}_{i=1}^N$ from the posterior predictive.
This amounts to obtaining a collection $\{x_{\text{new},i}, \theta_i\}$ and discarding the value $\theta_i$, thus marginalizing over the vector of model parameters. | Bayesian inference: numerically sampling from the posterior predictive
If you can simulate values from $P(x_{\text{new}}|\theta)$, you can simply use your $N$ samples from posterior predictive and generate $x_{\text{new},i}$ for each posterior sample from this model to g |
52,297 | Bayesian inference: numerically sampling from the posterior predictive | Here is an instantiated example of the answer provided by lbelzile. The application is linear regression, and the goal is to find the posterior predicted distribution of $y$ values at a probed $x$ value:
http://doingbayesiandataanalysis.blogspot.com/2016/10/posterior-predictive-distribution-for.html
The key idea is that at every step in the MCMC chain, use that step's parameter values to randomly generate a $y$ value from the model.
Edit in response to comment: Below is an extended excerpt from the blog post.
Suppose you've done a (robust) Bayesian multiple linear regression, and now you want the posterior distribution on the predicted value of $y$ for some probe value of $\langle x_1,x_2,x_3,\ldots\rangle$. That is, not the posterior distribution on the mean of the predicted value, but the posterior distribution on the predicted value itself. I showed how to do this for simple linear regression in a previous post; in this post I show how to do it for multiple linear regression. (A lot of commenters and emailers have asked me to do this.)
The basic idea is simple: At each step in the MCMC chain, use the parameter values to randomly generate a simulated datum $y$ at the probed value of $x$. Then examine the resulting distribution of simulated $y$ values; that is the posterior distribution of the predicted $y$ values.
To implement the idea, the first programming choice is whether to simulate the $y$ value with JAGS (or Stan or whatever) while it is generating the MCMC chain, or to simulate the $y$ value after the MCMC chain has previously been generated. There are pros and cons of each option. Generating the value by JAGS has the benefit of keeping the code that generates the $y$ value close to the code that expresses the model, so there is less chance of mistakenly simulating data by a different model than is being fit to the data. On the other hand, this method requires us to pre-specify all the $x$ values we want to probe. If you want to choose the probed $x$ values after JAGS has already generated the MCMC chain, then you'll need to re-express the model outside of JAGS, in R, and run the risk of mistakenly expressing it differently (e.g., using precision instead of standard deviation, or thinking that y=rt(...) in R will use the same syntax as y~dt(...) in JAGS). I will show an implementation in which JAGS simulated the $y$ values while generating the MCMC chain.
[... example in original post not copied here ...]
The JAGS model specification looks like the following. Notice at the very end the randomly generated $y$ values, denoted yP[i].
# Standardize the data:
data {
ym <- mean(y)
ysd <- sd(y)
for ( i in 1:Ntotal ) {
zy[i] <- ( y[i] - ym ) / ysd
}
for ( j in 1:Nx ) {
xm[j] <- mean(x[,j])
xsd[j] <- sd(x[,j])
for ( i in 1:Ntotal ) {
zx[i,j] <- ( x[i,j] - xm[j] ) / xsd[j]
}
# standardize the probe values:
for ( i in 1:Nprobe ) {
zxProbe[i,j] <- ( xProbe[i,j] - xm[j] ) / xsd[j]
}
}
}
# Specify the model for standardized data:
model {
for ( i in 1:Ntotal ) {
zy[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zx[i,1:Nx] ) , 1/zsigma^2 , nu )
}
# Priors vague on standardized scale:
zbeta0 ~ dnorm( 0 , 1/2^2 )
for ( j in 1:Nx ) {
zbeta[j] ~ dnorm( 0 , 1/2^2 )
}
zsigma ~ dunif( 1.0E-5 , 1.0E+1 )
nu ~ dexp(1/30.0)
# Transform to original scale:
beta[1:Nx] <- ( zbeta[1:Nx] / xsd[1:Nx] )*ysd
beta0 <- zbeta0*ysd + ym - sum( zbeta[1:Nx] * xm[1:Nx] / xsd[1:Nx] )*ysd
sigma <- zsigma*ysd
# Predicted y values at xProbe:
for ( i in 1:Nprobe ) {
zyP[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zxProbe[i,1:Nx] ) ,
1/zsigma^2 , nu )
yP[i] <- zyP[i] * ysd + ym
}
} | Bayesian inference: numerically sampling from the posterior predictive | Here is an instantiated example of the answer provided by lbelzile. The application is linear regression, and the goal is to find the posterior predicted distribution of $y$ values at a probed $x$ val | Bayesian inference: numerically sampling from the posterior predictive
Here is an instantiated example of the answer provided by lbelzile. The application is linear regression, and the goal is to find the posterior predicted distribution of $y$ values at a probed $x$ value:
http://doingbayesiandataanalysis.blogspot.com/2016/10/posterior-predictive-distribution-for.html
The key idea is that at every step in the MCMC chain, use that step's parameter values to randomly generate a $y$ value from the model.
Edit in response to comment: Below is an extended excerpt from the blog post.
Suppose you've done a (robust) Bayesian multiple linear regression, and now you want the posterior distribution on the predicted value of $y$ for some probe value of $⟨x_1,x_2,x_3,...⟩$. That is, not the posterior distribution on the mean of the predicted value, but the posterior distribution on the predicted value itself. I showed how to do this for simple linear regression in a previous post; in this post I show how to do it for multiple linear regression. (A lot of commenters and emailers have asked me to do this.)
The basic idea is simple: At each step in the MCMC chain, use the parameter values to randomly generate a simulated datum $y$ at the probed value of $x$. Then examine the resulting distribution of simulated $y$ values; that is the posterior distribution of the predicted $y$ values.
To implement the idea, the first programming choice is whether to simulate the $y$ value with JAGS (or Stan or whatever) while it is generating the MCMC chain, or to simulate the $y$ value after the MCMC chain has previously been generated. There are pros and cons of each option. Generating the value by JAGS has the benefit of keeping the code that generates the $y$ value close to the code that expresses the model, so there is less chance of mistakenly simulating data by a different model than is being fit to the data. On the other hand, this method requires us to pre-specify all the $x$ values we want to probe. If you want to choose the probed $x$ values after JAGS has already generated the MCMC chain, then you'll need to re-express the model outside of JAGS, in R, and run the risk of mistakenly expressing it differently (e.g., using precision instead of standard deviation, or thinking that y=rt(...) in R will use the same syntax as y~dt(...) in JAGS). I will show an implementation in which JAGS simulated the $y$ values while generating the MCMC chain.
[... example in original post not copied here ...]
The Jags model specification looks like the following. Notice at the very end the randomly generated $y$ values, denoted yP[i].
# Standardize the data:
data {
ym <- mean(y)
ysd <- sd(y)
for ( i in 1:Ntotal ) {
zy[i] <- ( y[i] - ym ) / ysd
}
for ( j in 1:Nx ) {
xm[j] <- mean(x[,j])
xsd[j] <- sd(x[,j])
for ( i in 1:Ntotal ) {
zx[i,j] <- ( x[i,j] - xm[j] ) / xsd[j]
}
# standardize the probe values:
for ( i in 1:Nprobe ) {
zxProbe[i,j] <- ( xProbe[i,j] - xm[j] ) / xsd[j]
}
}
}
# Specify the model for standardized data:
model {
for ( i in 1:Ntotal ) {
zy[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zx[i,1:Nx] ) , 1/zsigma^2 , nu )
}
# Priors vague on standardized scale:
zbeta0 ~ dnorm( 0 , 1/2^2 )
for ( j in 1:Nx ) {
zbeta[j] ~ dnorm( 0 , 1/2^2 )
}
zsigma ~ dunif( 1.0E-5 , 1.0E+1 )
nu ~ dexp(1/30.0)
# Transform to original scale:
beta[1:Nx] <- ( zbeta[1:Nx] / xsd[1:Nx] )*ysd
beta0 <- zbeta0*ysd + ym - sum( zbeta[1:Nx] * xm[1:Nx] / xsd[1:Nx] )*ysd
sigma <- zsigma*ysd
# Predicted y values at xProbe:
for ( i in 1:Nprobe ) {
zyP[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zxProbe[i,1:Nx] ) ,
1/zsigma^2 , nu )
yP[i] <- zyP[i] * ysd + ym
}
} | Bayesian inference: numerically sampling from the posterior predictive
Here is an instantiated example of the answer provided by lbelzile. The application is linear regression, and the goal is to find the posterior predicted distribution of $y$ values at a probed $x$ val |
52,298 | Variance of Cohen's $d$ for within subjects designs | There is no known derivation of the variance of $d_{av}$. In fact, this is not how one should compute the $d$ value for a within-subjects design (neither with $d_{rm}$ nor with $d_{av}$).
There are two approaches for computing a $d$ value for a within-subjects design. The first uses change score standardization and is given by $$d_c = \frac{M_d}{SD_d},$$ where $M_d$ is the mean change and $SD_d$ is the SD of the change scores (which is equal to $SD_d = \sqrt{SD_1^2 + SD_2^2 - 2 \times r \times SD_1SD_2}$). The large-sample variance of $d_c$ is $$Var[d_c] = \frac{1}{n} + \frac{d_c^2}{2 \times n}.$$
The second approach uses raw score standardization. Here, we standardize based on the SD of either the pre- or the post-test scores (typically, the SD of the pre-test scores is used). So, we compute $$d_r = \frac{M_d}{SD_1}.$$ The large-sample variance of $d_r$ is $$Var[d_r] = \frac{2(1-r)}{n} + \frac{d_r^2}{2 \times n}.$$
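A small R helper implementing both estimators and their large-sample variances (a sketch for paired pre/post vectors; not from the original answer):
d_within <- function(pre, post) {
  n   <- length(pre)
  d_c <- mean(post - pre) / sd(post - pre)              # change-score standardization
  d_r <- mean(post - pre) / sd(pre)                     # raw-score standardization
  r   <- cor(pre, post)
  c(d_c = d_c, var_d_c = 1 / n + d_c^2 / (2 * n),
    d_r = d_r, var_d_r = 2 * (1 - r) / n + d_r^2 / (2 * n))
}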
We do not take the average of $SD_1$ and $SD_2$ in the second approach (and if anything, we would compute $\sqrt{(SD_1^2 + SD_2^2)/2}$ but again we also do not do that). If you would do this, then it becomes nearly impossible to derive the variance, because $SD_1^2 + SD_2^2$ does not follow a (scaled) $\chi^2$ distribution, since the pre- and post-test scores are not independent. If you look at the answer given here (which is about the independent samples case), you will see that the derivation of the variance involves the non-central t-distribution, which is also involved in deriving the two variance equations above. However, to get that non-central t, you need (the square-root of) a (scaled) $\chi^2$ distribution in the denominator. | Variance of Cohen's $d$ for within subjects designs | There is no know derivation of the variance of $d_{av}$. In fact, this is not how one should compute the $d$ value for a within-subjects design (neither with $d_{rm}$ or the $d_{av}$).
There are two a | Variance of Cohen's $d$ for within subjects designs
There is no known derivation of the variance of $d_{av}$. In fact, this is not how one should compute the $d$ value for a within-subjects design (neither with $d_{rm}$ nor with $d_{av}$).
There are two approaches for computing a $d$ value for a within-subjects design. The first uses change score standardization and is given by $$d_c = \frac{M_d}{SD_d},$$ where $M_d$ is the mean change and $SD_d$ is the SD of the change scores (which is equal to $SD_d = \sqrt{SD_1^2 + SD_2^2 - 2 \times r \times SD_1SD_2}$). The large-sample variance of $d_c$ is $$Var[d_c] = \frac{1}{n} + \frac{d_c^2}{2 \times n}.$$
The second approach uses raw score standardization. Here, we standardize based on the SD of either the pre- or the post-test scores (typically, the SD of the pre-test scores is used). So, we compute $$d_r = \frac{M_d}{SD_1}.$$ The large-sample variance of $d_r$ is $$Var[d_r] = \frac{2(1-r)}{n} + \frac{d_r^2}{2 \times n}.$$
We do not take the average of $SD_1$ and $SD_2$ in the second approach (and if anything, we would compute $\sqrt{(SD_1^2 + SD_2^2)/2}$ but again we also do not do that). If you would do this, then it becomes nearly impossible to derive the variance, because $SD_1^2 + SD_2^2$ does not follow a (scaled) $\chi^2$ distribution, since the pre- and post-test scores are not independent. If you look at the answer given here (which is about the independent samples case), you will see that the derivation of the variance involves the non-central t-distribution, which is also involved in deriving the two variance equations above. However, to get that non-central t, you need (the square-root of) a (scaled) $\chi^2$ distribution in the denominator. | Variance of Cohen's $d$ for within subjects designs
There is no know derivation of the variance of $d_{av}$. In fact, this is not how one should compute the $d$ value for a within-subjects design (neither with $d_{rm}$ or the $d_{av}$).
There are two a |
52,299 | Variance of Cohen's $d$ for within subjects designs | Structural equation modeling (SEM) is your friend in this case. We need the sample means and the covariance matrix as inputs. We may label the parameters (m1, m2, sd1, and sd2) and define $d_{av}$=(m2-m1)/((sd1+sd2)/2). SEM automatically calculates $Var(d_{av})$ by taking the correlation between the pre- and post-test scores into account.
The followings are the R code using the lavaan package. You may refer to Chapter 3 in Cheung (2015) for the details.
library("lavaan")
## Sample covariance matrix on pre- and post-test scores
lower <- '10
8 12'
( Cov <- getCov(lower, diag=TRUE, names=c("x_pre","x_post")) )
## Means of the pre- and post-tests
Mean <- c(10, 13)
## Sample size
N <- 50
model1 <- '# Label the sds with sd1 and sd2
eta_pre =~ sd1*x_pre
eta_post =~ sd2*x_post
# Fix the error variances at 0
x_pre ~~ 0*x_pre
x_post ~~ 0*x_post
# Label the means with m1 and m2
x_pre ~ m1*1
x_post ~ m2*1
# Define d_av
d_av := (m2-m1)/((sd1+sd2)/2)'
## Fit the model
fit1 <- cfa(model1, sample.cov=Cov, sample.mean=Mean,
sample.nobs=N, std.lv=TRUE,
sample.cov.rescale=FALSE)
## Display d_av and its SE
parameterEstimates(fit1)[12, -c(1,2,3)]
## Output
label est se z pvalue ci.lower ci.upper
12 d_av 0.905 0.131 6.9 0 0.648 1.163
Reference
Cheung, M. W.-L. (2015). Meta-analysis: A structural equation modeling approach. Chichester, West Sussex: John Wiley & Sons, Inc. | Variance of Cohen's $d$ for within subjects designs | Structural equation modeling (SEM) is your friend in this case. We need the sample means and the covariance matrix as inputs. We may label the parameters (m1, m2, sd1, and sd2) and define $d_{av}$=(m2 | Variance of Cohen's $d$ for within subjects designs
Structural equation modeling (SEM) is your friend in this case. We need the sample means and the covariance matrix as inputs. We may label the parameters (m1, m2, sd1, and sd2) and define $d_{av}$=(m2-m1)/((sd1+sd2)/2). SEM automatically calculates $Var(d_{av})$ by taking the correlation between the pre- and post-test scores into account.
The followings are the R code using the lavaan package. You may refer to Chapter 3 in Cheung (2015) for the details.
library("lavaan")
## Sample covariance matrix on pre- and post-test scores
lower <- '10
8 12'
( Cov <- getCov(lower, diag=TRUE, names=c("x_pre","x_post")) )
## Means of the pre- and post-tests
Mean <- c(10, 13)
## Sample size
N <- 50
model1 <- '# Label the sds with sd1 and sd2
eta_pre =~ sd1*x_pre
eta_post =~ sd2*x_post
# Fix the error variances at 0
x_pre ~~ 0*x_pre
x_post ~~ 0*x_post
# Label the means with m1 and m2
x_pre ~ m1*1
x_post ~ m2*1
# Define d_av
d_av := (m2-m1)/((sd1+sd2)/2)'
## Fit the model
fit1 <- cfa(model1, sample.cov=Cov, sample.mean=Mean,
sample.nobs=N, std.lv=TRUE,
sample.cov.rescale=FALSE)
## Display d_av and its SE
parameterEstimates(fit1)[12, -c(1,2,3)]
## Output
label est se z pvalue ci.lower ci.upper
12 d_av 0.905 0.131 6.9 0 0.648 1.163
Reference
Cheung, M. W.-L. (2015). Meta-analysis: A structural equation modeling approach. Chichester, West Sussex: John Wiley & Sons, Inc. | Variance of Cohen's $d$ for within subjects designs
Structural equation modeling (SEM) is your friend in this case. We need the sample means and the covariance matrix as inputs. We may label the parameters (m1, m2, sd1, and sd2) and define $d_{av}$=(m2 |
52,300 | What is collinearity and how does it differ from multicollinearity? | In statistics, the terms collinearity and multicollinearity are overlapping. Collinearity is a linear association between two explanatory variables. Multicollinearity in a multiple regression model refers to highly linearly related associations between two or more explanatory variables.
In the case of perfect multicollinearity, the design matrix $X$ has less than full rank, and therefore the moment matrix $X^{\mathsf{T}}X$ cannot be inverted. Under these circumstances, for a general linear model $y = X \beta + \epsilon$, the ordinary least-squares estimator $\hat{\beta}_{OLS} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y$ does not exist. | What is collinearity and how does it differ from multicollinearity? | In statistics, the terms collinearity and multicollinearity are overlapping. Collinearity is a linear association between two explanatory variables. Multicollinearity in a multiple regression model ar
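A short R illustration of the perfect-collinearity case (a sketch with simulated data):
set.seed(1)
x1 <- rnorm(20); x2 <- 2 * x1                 # x2 is an exact linear function of x1
y  <- 1 + x1 + rnorm(20)
X  <- cbind(1, x1, x2)
qr(crossprod(X))$rank                         # 2 < 3: the moment matrix is rank deficient
try(solve(crossprod(X)))                      # inversion fails, the matrix is singular
coef(lm(y ~ x1 + x2))                         # lm() returns NA for the aliased column x2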
In statistics, the terms collinearity and multicollinearity are overlapping. Collinearity is a linear association between two explanatory variables. Multicollinearity in a multiple regression model refers to highly linearly related associations between two or more explanatory variables.
In case of perfect multicollinearity the design matrix $X$ has less than full rank, and therefore the moment matrix $X^{\mathsf{T}}X$ cannot be matrix inverted. Under these circumstances, for a general linear model $y = X \beta + \epsilon$, the ordinary least-squares estimator $\hat{\beta}_{OLS} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y$ does not exist. | What is collinearity and how does it differ from multicollinearity?
In statistics, the terms collinearity and multicollinearity are overlapping. Collinearity is a linear association between two explanatory variables. Multicollinearity in a multiple regression model ar |