Does triple interaction need to include all main effect variables?
Consider your proposed model for the expectation of the response (assuming also an intercept term):
$$\newcommand{\E}{\operatorname{E}}
\E Y = \beta_0 + \beta_1 A + \beta_2 B + \beta_3 D + \beta_4 AB + \beta_5 AD + \beta_6 ABD
$$
When $D=0$
$$
\E Y = \beta_0 + \beta_1 A + \beta_2 B + \beta_4 AB
$$
When $D=1$
$$
\E Y = (\beta_0 + \beta_3) + (\beta_1 +\beta_5) A + \beta_2 B + (\beta_4 + \beta_6) AB
$$
When $D=0$ & $A=0$
$$
\E Y = \beta_0 + \beta_2 B
$$
When $D=1$ & $A=0$
$$
\E Y = (\beta_0 + \beta_3) + \beta_2 B
$$
So you're imposing a constraint that when $A$ is zero, the slopes for $B$ are equal whatever the value of $D$. Omitting the $BD$ term while including the $ABD$ term is not, therefore, to say "the effect of $B$ will not be affected by whether $D=1$ or $D=0$", but rather "the effect of $B$ will not be affected by whether $D=1$ or $D=0$ in the special case that $A=0$". Note that it requires the scale on which you measure $A$ to have a meaningful zero point, at least for the nonce. If that's what you want, fine—but I'd guess it isn't.
Such considerations arise whenever the Marginality Principle is violated. See Venables (1998), "Exegeses on linear models", S-Plus Users' Conference, Washington DC; also Including the interaction but not the main effects in a model, & Do all interaction terms need their individual terms in a regression model?. Valid reasons for fitting a model that violates the MP include both believing in the model (say to use it for prediction) & disbelieving in it (say to test it); but there's no sense in imposing arbitrary constraints or in testing arbitrary hypotheses.
Less fundamental, & checkable, but worth noting, is that the effect of $A$ on the change in slopes of $B$ from $D=0$ to $D=1$ is also assumed linear, so forcing it through the origin can move your regression line further away from the data when that assumption is wrong. Even when the coefficient of the $BD$ term is known to be zero, including it can allow a locally better-fit model when the sample values of $A$ are far from zero.
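To see the constraint concretely, here is a small R sketch on simulated data (my own toy example with made-up coefficients, including a nonzero $BD$ effect) comparing the model that omits $BD$ but keeps $ABD$ with the full factorial model:

```r
set.seed(1)
n <- 200
A <- rnorm(n, mean = 5)            # A far from zero, so its "zero point" matters
B <- rnorm(n)
D <- rbinom(n, 1, 0.5)
y <- 1 + A + B + D + A*B + A*D + 2*B*D + A*B*D + rnorm(n)   # true model has a BD term

fit_noBD <- lm(y ~ A + B + D + A:B + A:D + A:B:D)  # ABD without BD: violates marginality
fit_full <- lm(y ~ A * B * D)                      # full factorial model
anova(fit_noBD, fit_full)                          # the omitted BD term shows up here
```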
Testing if frequency of event occurrences over week days is uniform
The test used will determine how to assess how much data are needed. However, standard tests, such as the $\chi^2$, would seem to be inferior or inappropriate, for two reasons:
The alternative hypothesis is more specific than mere lack of independence: it focuses on a high count during one particular day.
More importantly, the hypothesis was inspired by the data itself.
Let's examine these in turn and then draw conclusions.
Standard tests may lack power
For reference, here is a standard test of independence:
x <- c(3,2,1,2,1,2,6) # The data
chisq.test(x, simulate.p.value=TRUE, B=9999)
X-squared = 7.2941, df = NA, p-value = 0.3263
(The p-value of $0.33$ is computed via simulation because the $\chi^2$ approximation to the distribution of the test statistic begins breaking down with such small counts.)
If--before seeing the data--it had been hypothesized that weekends might provoke more errors, then it would be more powerful to compare the Saturday+Sunday total to the Monday-Friday total, rather than using the $\chi^2$ statistic. Although we can analyze this special test fully (and obtain analytical results), it's simplest and more flexible just to perform a quick simulation. (The following is R code for $100,000$ iterations; it takes under a second to execute.)
n.iter <- 1e5 # Number of iterations
set.seed(17) # Start a reproducible simulation
n <- sum(x) # Sum of all data
sim <- rmultinom(n.iter, n, rep(1, length(x))) # The simulated data, in columns
x.satsun <- sum(x[6:7]) # The test statistic
sim.satsun <- colSums(sim[6:7, ]) # The simulation distribution
cat(mean(c(sim.satsun >= x.satsun, 1))) # Estimated p-value
0.08357916
The output, shown on the last line, is the p-value of this test. It is much smaller than the $\chi^2$ p-value previously computed. This result would be considered significant by anyone needing 90% confidence, whereas few people would consider the $\chi^2$ p-value significant. That's evidence of the greater power to detect a difference.
Greater power is important: it leads to much smaller sample sizes. But I won't develop this idea, due to the conclusions in the next section.
A data-generated hypothesis gives false confidence
It is a much more serious issue that the hypothesis was inspired by the data. What we really need to test is this:
If there were no association between events and day of the week, what are the chances that the analyst would nevertheless have observed an unusual pattern "at face value"?
Although this is not definitively answerable, because we have no way to model the analyst's thought process, we can still make progress by considering some realistic possibilities. To be honest about it, we must contemplate patterns other than the one that actually appeared. For instance, if there had been 8 events on Wednesday and no more than 3 on any other day, it's a good bet that such a pattern would have been noted (leading to a hypothesis that Wednesdays are somehow error-inducing).
Other patterns I believe likely to be noted by any observant, interested analyst would include all apparent clusters of data, including:
Any single day with a high count.
Any two adjacent days with a high count.
Any three adjacent days with a high count.
"Adjacent" of course means in a circular sense: Sunday is adjacent to Monday even though those days are far apart in the data listing. Other patterns are possible, such as two separate days with high counts. I will not attempt an exhaustive list; these three patterns will suffice to make the point.
It is useful to evaluate the chance that a perfectly random dataset would have evoked notice in this sense. We can evaluate that chance by simulating many random datasets and counting any that look at least as unusual as the actual data on any of these criteria. Since we already have our simulation, the analysis is a matter of a few seconds' more work:
stat <- function(y) {
  y.2 <- c(y[-1], y[1]) + y          # Totals of adjacent days
  y.3 <- y.2 + c(y[-(1:2)], y[1:2])  # Totals of 3-day groups
  c(max(y), max(y.2), max(y.3))      # Largest values for 1, 2, 3 days
}
sim.stat <- apply(sim, 2, stat)
x.stat <- stat(x)
extreme <- colSums(sim.stat >= x.stat) >= 1
cat(p.value <- mean(c(extreme, 1)))
0.3889561
This result is a much more realistic assessment of the situation than we have seen before. It suggests there is almost no objective evidence that events are related to day of week.
Conclusions
The best solution, then, might be to conclude there likely is not anything unusual going on. Keep monitoring the events, but do not worry about how much time will be needed to produce "significant" results.
Testing if frequency of event occurrences over week days is uniform
It seems to me that you want to test the null hypothesis that errors occur regardless of day of the week. To test this, you can perform a chi-square test that compares the actual observed number of errors on each day of the week with the expected number under the null hypothesis. That expected number for each day is the total number of errors observed (17) divided by 7.
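Using the daily counts quoted in the other answer (3, 2, 1, 2, 1, 2, 6, which sum to 17), this is one line in R; uniform expected proportions are the default:

```r
x <- c(3, 2, 1, 2, 1, 2, 6)       # observed errors per weekday (total = 17)
chisq.test(x, p = rep(1/7, 7))    # expected count for each day is 17/7
```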
What is the point in regression through the origin? [duplicate]
Zero intercept models are seldom used in practice. In theory, you would use a zero intercept model if you knew that the model line has to go through 0. For example, if you are modelling GDP against population, presumably when there is 0 population, there is 0 GDP. A zero intercept model would make sense.
Except ... regression models don't usually hold over a wide range of values of the independent variables. The linearity of a GDP by population model is going to break down way before population hits 0 -- imagine one financial model that works both for China and Tuvalu! Makes no sense.
So in practice, we usually let the intercept float and focus on the other parameters. Nevertheless, as a training exercise, it doesn't hurt to get students to go through the math.
@gung's comment has reminded me of another issue here. If an intercept term is included in the model, the least squares estimate of the slope parameter will be unbiased, whether the true value of the intercept is 0 or not. You lose one degree of freedom for error, but that's a small price to pay for the protection against bias.
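A quick R sketch of that bias point (simulated data with arbitrary true values of my choosing, not anything from the question):

```r
set.seed(42)
x <- runif(200, 10, 20)
y <- 5 + 2 * x + rnorm(200)    # true intercept 5, true slope 2

coef(lm(y ~ x))        # intercept included: slope estimate close to 2
coef(lm(y ~ x - 1))    # forced through the origin: slope estimate biased upward
```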
What math/stats knowledge does learning Bayesian probability require?
You need working knowledge of calculus, like being able to take integrals, and not like knowing the Weierstrass theorem. For instance, if you can take this integral without looking at any references, with a pen and a paper, you're probably equipped to take the course:
$\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}|x^3|dx$
Knowing linear algebra helps too, but you can pick it up on the way. I'm not talking about anything crazy, it's simple matrix manipulations, pretty much within what's described in Algebra.B section of Madelung's Die Mathematischen Hilfsmittel des Physikers book. It's available online here, and is an awesome little book on applied math. For instance, if you can solve this equation, you're good to go:
$\det\left|\begin{matrix}1-\lambda& 2\\2 &1-\lambda\end{matrix}\right|=0$
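If you want to check your pen-and-paper answers, both can be verified numerically in R (the closed-form values in the comments below, $2\sqrt{2/\pi}$ and $\lambda\in\{-1,3\}$, are my own additions, not part of the original answer):

```r
# The expectation E|X^3| for X ~ N(0,1); the exact answer is 2*sqrt(2/pi) ~ 1.5958
integrate(function(x) dnorm(x) * abs(x)^3, -Inf, Inf)

# Roots of det(A - lambda*I) = 0 for the 2x2 matrix above: lambda = 3 and -1
eigen(matrix(c(1, 2, 2, 1), nrow = 2))$values
```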
You don't need measure theory and real analysis, but it helps to know them. The key is not to enroll in courses taught at math dept for math majors. Take a course specifically designed for applied folks, maybe for psychologists or other math-challenged constituents. Courses taught for physicists could be a good compromise: they have enough math to actually gain useful skills, but they don't bother with proofs and other crazy stuff mathematicians are obsessed about.
What math/stats knowledge does learning Bayesian probability require?
I was in a similar boat, having been a Math/CS double major but needing to learn a lot of Bayesian probability for work. What I'd recommend:
Practical ability to do and understand integrals
An understanding of numerical methods for approximating integrals (sampling, Monte Carlo method) - see the sketch after this list.
Knowing the R programming language is very useful - most texts I've come across do the examples in R.
General understanding of model evaluation - you would certainly get this in stats, but you can get it in other fields (psychology, biology, machine learning) as well.
There are lots of general modeling "tricks" that I didn't see studying pure math that would have been nice to know before jumping into Bayesian probability, such as regularization and parameter selection.
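As a rough illustration of the Monte Carlo point above (a toy example of my own, not from the answer): an expectation with no convenient closed form can be approximated by averaging over draws from the distribution.

```r
# Monte Carlo approximation of E[g(X)] with X ~ N(0,1) and g(x) = exp(-x^2);
# the exact value, 1/sqrt(3), happens to be available here so the approximation can be checked.
set.seed(1)
draws <- rnorm(1e6)
mean(exp(-draws^2))   # ~ 0.577
1 / sqrt(3)           # exact value for comparison
```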
Since you are a math major, you probably will see real analysis anyway, but I did not really find it that helpful for this. I did not find my background in measure theory helpful either. It very well might be that the class offered at your school is built with the assumed stats background - in which case, I recommend getting a book about Bayesian statistics/probability (I found this one useful) and working through it yourself.
Kolmogorov-Smirnov vs Mann-Whitney U When There Are Ties
I'm not sure what the basis is for your colleague's claim -- but they should support the claims they make before you accept them as true -- there's an astonishing amount of misinformed folklore about. (How do they know that this is true? Do you have good reason to think it must be true in your case?)
Both tests assume$^\dagger$ continuous distributions and both are impacted by ties (however, it's relatively easy to deal with ties in the Mann-Whitney and some software will do so automatically).
--
$\dagger$ Edit: To support my claim of the assumption of continuity in respect of the Mann-Whitney (since whuber says I am wrong on this point, I had better justify it), I refer to the beginning of Mann and Whitney (1947):
1. Summary. Let $x$ and $y$ be two random variables with continuous cumulative distribution functions $f$ and $g$.
So for Mann and Whitney's version of the test, they do explicitly assume continuity - and not idly, since they do rely on it in their derivation. However, it's possible (as I mention later) to deal with ties in the Mann-Whitney by working out the distribution of the test statistic at the null under the pattern of ties, or by correctly computing the effect of ties on the variance of the statistic under the normal approximation (what's usually referred to as the 'adjustment for ties').
--
For both tests, if the effect of the ties is not properly dealt with, both kinds of error rate are impacted - their type I error rates are lowered, and lowering the significance level necessarily lowers power ($=1-\beta$).
It's not 100% clear to me which test might be the most impacted, nor under what circumstances, but offhand I'd have expected the greater sensitivity generally went with the KS test* - and this is even before one 'adjusts' the Mann-Whitney for ties (i.e. if you used the normal approximation and used the variance for the no ties case).
*(personally, I'd use simulation suited to the specific instance to see what the properties would be under the sorts of conditions you see, at those sample sizes.)
Below is an illustration of the impact on the distribution of p-values under identical population distributions with a moderate level of ties$^\ddagger$, with sample sizes of 33 and 67, under the default settings in R (which for the Mann-Whitney uses the normal approximation with correct calculation of variance in the presence of ties for this sample size):
For the tests to work 'as advertized' under the null, these distributions should look close to uniform. As you see, the Mann-Whitney (at least when properly calculating the variance of the sum of the ranks under the presence of ties, as here) is indeed very close to uniform. Since (as we can see) for the Kolmogorov-Smirnov test the proportion of p-values below $\alpha$ will be much smaller than $\alpha$, the test is highly conservative, with corresponding effects on power. [If anything, the effect is somewhat stronger than I'd have anticipated.]
$\ddagger\,$(the impact on the variance of the test statistic is fairly small in percentage terms)
Further, if your interest lies in a location-shift alternative, the Mann-Whitney would have greater power against that alternative to start with, so even if it did lose more power as a result of the discreteness (which I doubt), it may still have more power afterward.
You don't say how heavily tied your data are, nor in what sort of pattern. If both tests are more impacted than you're prepared to accept, you can work with the permutation distribution of either test statistic for your data (or with the permutation distribution of some other statistic, including a difference in sample medians if you wish).
In spite of many books (especially in some particular areas of application) stating that it is, the Mann-Whitney is not actually a test for a difference in medians. However, if you additionally assume that the population distributions are the same under the null, and restrict the alternative to a location-shift, then it's a test for difference in any reasonable location measure - population medians, population lower quartiles, even population means (if they exist).
Indeed, one needn't restrict oneself to location shift alternatives. Assuming identical distributions under the null against an alternative that will move medians (or any other measure of location) will work; so for example, it would work perfectly well that way as a test of medians under an assumption of scale-shift. We must keep in mind however, that the Mann-Whitney is a far more general test than that and that when we rely on an assumption to make it a test for medians or whatever, we do actually lean on our assumption for the conclusion to make it mean what we want it to.
In short, which test do I trust?
Don't simply trust what anyone says (including me!) - unless they have solid evidence (I haven't brought any that's directly relevant to your situation, and none relating to power, because I haven't seen your pattern of ties and I am not 100% sure whether you're only interested in location shifts).
What kind of data do you have (what are you measuring, how are you measuring it, and how do ties arise)? What are you interested in finding out? Why do you mention medians?
Use simulation to find out how any tests you contemplate behave in circumstances similar to yours, and decide for yourself whether there's a problem to worry about. For both tests, see what the impact of ties is on the test, both under the null and under alternatives you care about; then, in the case of the Mann-Whitney, see the effect of the adjustment for ties, and compare it with dealing with the exact permutation distribution (or in large samples like yours, with the randomization distribution). For the KS you can look at the exact permutation distribution as well.
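A minimal version of that kind of null simulation (my own sketch: ties created by rounding, sample sizes matching those used in the illustration above) might look like this:

```r
# Both samples drawn from the same distribution, rounded to induce ties
set.seed(1)
n.sim <- 2000
p.mw <- p.ks <- numeric(n.sim)
for (i in 1:n.sim) {
  a <- round(rnorm(33), 1)
  b <- round(rnorm(67), 1)
  p.mw[i] <- suppressWarnings(wilcox.test(a, b)$p.value)  # normal approx., tie-adjusted
  p.ks[i] <- suppressWarnings(ks.test(a, b)$p.value)      # warns about ties
}
mean(p.mw <= 0.05)   # compare with the nominal 0.05 level
mean(p.ks <= 0.05)   # the discussion above suggests this will be well below 0.05
```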
Time-series forecasting (in C#)
Let's take your questions one at a time:
Is there any method to determine optimal smoothing parameters without testing all of them?
You can cast your problem in a state space framework and then numerically optimize your parameters using standard numerical libraries. Forecasting with Exponential Smoothing - The State Space Approach by Hyndman et al. would be a good place to start.
Using R.NET to use the forecast package of R will be faster?
Hard to say. I have no experience with R.NET. Using, e.g., ets() (which does use a state space approach) in the forecast package directly will certainly be faster, especially if, as you do, you specify the model rather than letting ets() find the (hopefully) best one.
If so, should I:
Use daily or monthly data?
This should really depend on what you want to do with the forecast. What forecast granularity do you really need to make decisions? Sometimes, it is better to forecast higher-frequency data and then aggregate, but usually, I'd rather aggregate the history and forecast on the granularity I plan on using later on.
Plus, daily data will likely be intermittent, in which case you can't use Holt(-Winters) or ARIMA, but should go with Croston's method. This may be helpful. Intermittent demands are usually harder to forecast.
EDIT: you write that you need to determine safety amounts. Well, now you will actually need to think about your supply chain. Maybe forecasting is not your problem at all - if sales are all 0 or 1 and you can replenish stocks within a day, your best strategy would be to always have 1 unit on hand and replenish that after every sale, forgetting entirely about forecasting.
If that is not the case (you write that you have seasonality on an aggregate level), you may need to do something ad-hoc, since I don't think there is anything on seasonal intermittent demand out there. You could aggregate data to get seasonal forecasts, then push those down to the SKU level to get forecasts on that level (e.g., by distributing the aggregate forecasts according to historical proportions), finally get safety amounts by taking quantiles of, e.g., the Poisson distribution. As I said, this is pretty ad-hoc, with little statistical grounding, but it should get you 90% there - and given that forecasting is an inexact science, the last 10% may not be feasible, anyway.
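A toy sketch of that last step in R (the forecast value is entirely made up; it would come from the disaggregated forecast for one SKU and day):

```r
lambda <- 0.3            # hypothetical expected daily demand for one SKU
qpois(0.95, lambda)      # stock level covering 95% of daily demand under a Poisson model
```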
Make also an auto.arima? How to determine which model is better?
Yes, try that one, too. Use a holdout sample to determine which model is better, as you describe in your comment, which is very good practice.
Look also at averages of forecasts from different methods - often, such averages yield better forecasts than the component forecasts. EDIT: That is, fit both a Holt-Winters and an auto.arima model, calculate forecasts from both models, and then, for each time bucket in the future, take the average of the two forecasts from the two models. You can do this with even more models, too - averaging seems to work best if the component models are "very different". Essentially, you are reducing the variance of your forecasts.
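A sketch of that averaging idea using the forecast package (object names are mine; any monthly series will do in place of USAccDeaths):

```r
library(forecast)
y <- USAccDeaths
h <- 12
fc.ets   <- forecast(ets(y), h = h)
fc.arima <- forecast(auto.arima(y), h = h)
fc.avg   <- (fc.ets$mean + fc.arima$mean) / 2   # simple average, bucket by bucket
```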
Is my method of backtesting (make a model only with data previous to that point) valid to determine if a model is better than another?
As I wrote above: yes, it is. This is out-of-sample testing, which is really the best way to assess forecast accuracy and method quality.
EDIT: How can I do that (get predictions over the last 12 data without considering them in the model) in R?
Unfortunately, there is no way to take an ets()-fitted object and update it with a new data point (as in update() for lm()-fitted models). You will need to call ets() twelve times.
You could, of course, fit the first model and then re-use the model ets() chose in this first fit for subsequent refits. This model is reported in the components part of the ets() result. For instance, taking the first five years of the USAccDeaths dataset:
fit <- ets(ts(USAccDeaths[1:60],start=c(1973,1),frequency=12))
Refit using the same model:
refit <- ets(ts(USAccDeaths[1:61], start=c(1973,1), frequency=12),
             model=paste(fit$components[1:3], collapse=""))
This will make refitting quite a lot faster, but of course the refit may not find the MSE-optimal model any more. Then again, the MSE-optimal model should not change too much if you add just a few more observations.
As always, I highly recommend this free online forecasting textbook.
How to find set of directions in Stahel-Donoho outlyingness measure?
Summary: the only 'correct' definition is the original one [0][1]; the other ones were designed to solve a problem with it that happens in a specific context, and would probably have been best called 'pseudo-Stahel-Donoho' distances because they don't, in general, have the same interpretation.
Now, I will show you how to obtain the standard SD distances in R, explain why they are computed the way they are, then explain what these pseudo-SD 'distances' are and why the need for them arose.
Given an $n$ by $p$ data matrix $X=\{x_1,\ldots,x_n\}$ whose entries lie in general (linear) position in $\mathbb{R}^p$, the original definition is to obtain a $p$-vector $d$ as the vector of coefficients of the hyperplane through $p$ data points chosen randomly out of your sample of $n$ observations. In R this is done so:
moar_directions<-function(x){
  n<-nrow(x)
  p<-ncol(x)
  P<-x[sample(n,p),,drop=FALSE]
  G<-rep(NA,p)
  E<-matrix(1,p,1)
  if((qrP<-qr(P))$rank==p) G<-solve(qrP,E)
  return(G)
}
n<-100
p<-5
x<-matrix(rnorm(n*p),nc=p)
moar_directions(x)
(see for example line 13 of robustbase::adjOutlyingness here, which you can check by typing:
library(robustbase)
body(robustbase::adjOutlyingness)
and [2]).
Then, projecting the data onto the direction $d$ is done as follows:
x%*%moar_directions(x)
The SD outlyingness of $x_i$ with respect to $X=\{x_1,\ldots,x_n\}$ on a single projection unto $d$ is ([0][1]):
$$O(x_i,d,X)=\frac{|x_i'd-\mbox{med}_i(x_i'd)|}{\mbox{mad}_i(x_i'd)}\;\;\;(0)$$
which in R is obtained as:
w<-x%*%moar_directions(x)
abs(w-median(w))/mad(w)
The SD outlyingness is then computed as the maximum outlyingness over all the members of $B_p^n$, where $B_p^n$ is the set of all ${n\choose p}$ such directions $d$ in an $n$ by $p$ data matrix whose entries lie in general (linear) position in $\mathbb{R}^p$. Often, $|B_p^n|={n\choose p}$ will be too large, and it will not be possible to consider all its members, in which case one can sample randomly $K$ directions from it to compute an approximation to the SD outlyingness of $x_i$ w.r.t. $X$:
$$O(x_i,X)=\underset{\{d_k\}_{k=1}^K\in B_p^n}{\max}\frac{|x_i'd_k-\mbox{med}_i(x_i'd_k)|}{\mbox{mad}_i(x_i'd_k)}$$
which in R is obtained as:
library(matrixStats)
K<-100
AllThoseProj<-matrix(NA,n,K)
for(i in 1:K){
  w<-x%*%moar_directions(x)
  AllThoseProj[,i]<-abs(w-median(w))/mad(w)
}
rowMaxs(AllThoseProj)
(For a couple of lines I will use K=100 as my definition of 'many', then later explain what is meant by 'many' in this context.)
To motivate the original definition, notice[3] that:
$$d(x_i,X)=\underset{d\in B_p}{\max}\frac{|x_i'd-\mbox{mean}_i(x_i'd)|}{\mbox{sd}_i(x_i'd)}\;\;(1)$$
where
$$d^2(x_i,X)=(x_i-\mbox{mean}_i(x_i))'\mbox{Cov}_i(x_i)^{-1}(x_i-\mbox{mean}_i(x_i))$$
is the vector of squared Mahalanobis distances. So the original SD outlyingness index was designed as a way to compute a consistent estimator of $d^2(x_i)$ with a 50% breakdown point (see [1]); note that equality (1) only holds when the directions $d$ are defined as above.
Now, I will delve a bit more into what is meant by 'many' in 'Often, $|B_p^n|$ will be too large, and ... one can sample randomly many directions'.
The notion of many that should be used here depends on the objective being pursued. If you are using the SD to approximate the vector of $d(x_i,X)$'s, then $K$ in the high hundreds should already give a good approximation. If you are using the SD as a robust alternative to the vector of $d(x_i,X)$'s, then a much higher value of $K$ will be necessary (see page 13 of [0] for a discussion of this).
Now, I will delve a bit more into these pseudo-SD distances.
The problem with the way $d$ is defined above is that it is only uniquely defined in settings where $n>p$. In the last decade a lot of research started to be done on high dimensional robustness. The idea built up incrementally to search for outliers through projection pursuit (as was done with the SD distances) but using a type of projection that would also 'work' when $p>n$. There are many such notions, but, gradually, a consensus emerged around the idea of using directions through two points (see [4] for a late implementation of this idea). Directions through two points are defined as (see [4], and line 52 of body(rrcov:::extradir) in package rrcov for example):
moar_directions_2points<-function(x){
  n<-nrow(x)
  p<-ncol(x)
  P<-x[sample(n,2),,drop=FALSE]
  G<-rep(NA,p)
  E<-P[1,]-P[2,]
  N<-sqrt(crossprod(E))
  if(N>1e-8) G<-E/N
  return(G)
}
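Plugging these two-point directions into the same projection loop as before gives the pseudo-SD outlyingness; this assembly of the pieces above is mine, not code taken from any package:

```r
K<-250                         # the 'many' used by rrcov for two-point directions
PseudoProj<-matrix(NA,n,K)
for(i in 1:K){
  w<-x%*%moar_directions_2points(x)
  PseudoProj[,i]<-abs(w-median(w))/mad(w)
}
rowMaxs(PseudoProj)            # pseudo-SD outlyingness of each observation
```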
Now, if we substitute in equation (0) the original definition of the directions $d$ by the ones above, the new outlyingness index (let's call it pseudo-SD) is no longer (except when $p=2$) a consistent estimator of the vector of $d(x_i,X)$'s, so it is a bit hard to interpret what it is. On the other hand, it can still be computed when $p>n$, in which case the vector of $d(x_i)$'s is not even defined anyway. The fact that we lost consistency by using directions through 2 data points also means we no longer have a target (a quantity we are trying to estimate, like we did with the vector of $d(x_i,X)$'s in the case of the SD), so the question of what is meant by 'many' here is not really addressable. rrcov, for example, uses 'many' (in the case of directions through two points) to mean 250.
References:
[0] Stahel, W. (1981). Breakdown of Covariance Estimators. Research Report 31, Fachgruppe für Statistik, ETH Zürich.
[1] Donoho, D.L. (1982). Breakdown Properties of Multivariate Location Estimators. Ph.D. qualifying paper, Harvard University.
[2] Hubert, M. and Van der Veeken, S. (2007). Outlier detection for skewed data. Journal of Chemometrics, 22(3-4), 235-246.
[3] Rousseeuw, P.J. and Leroy, A.M. (1987). Robust Regression and Outlier Detection. Wiley, New York.
[4] Hubert, M., Rousseeuw, P.J. and Vanden Branden, K. (2005). ROBPCA: A New Approach to Robust Principal Component Analysis. Technometrics, 47(1).
Summary: the only 'correct' definition is the original one [0][1], the other ones were designed to solve a problem with it that happens in a specific context and would probably have been best called 'pseudo-Stahel-Donoho' distances because they don't, in general, have the same interpretation.
Now, I will show you how to obtain the standard SD distances in R, explain why they are computed the way they are, then explain what these pseudo-SD 'distances' are and why the need for them arose.
Given an $n$ by $p$ data matrix $X=\{x_1,\ldots,x_n\}$ whose entries lie in general (linear) position in $\mathbb{R}^p$, the original definition is to obtain a $p$-vector $d$ as the vector of coefficients of the hyperplane through $p$ data points chosen randomly out of your sample of $n$ observations. In R this is done so:
moar_directions<-function(x){
n<-nrow(x)
p<-ncol(x)
P<-x[sample(n,p),,drop=FALSE]
G<-rep(NA,p)
E<-matrix(1,p,1)
if((qrP<-qr(P))$rank==p) G<-solve(qrP,E)
return(G)
}
n<-100
p<-5
x<-matrix(rnorm(n*p),nc=p)
moar_directions(x)
(see for example line 13 of robustbase::adjOutlyingness here, which you can check by tipping:
library(robustbase)
body(robustbase::adjOutlyingness)
and [2]).
Then, projecting the data unto the direction normal to $d$ is done so:
x%*%moar_directions(x)
The SD outlyingness of $x_i$ with respect to $X=\{x_1,\ldots,x_n\}$ on a single projection unto $d$ is ([0][1]):
$$O(x_i,d,X)=\frac{|x_i'd-\mbox{med}_i(x_i'd)|}{\mbox{mad}_i(x_i'd)}\;\;\;(0)$$
which in R is obtained as:
w<-x%*%moar_directions(x)
abs(w-median(w))/mad(w)
And computing the SD outlyingness as the maximum outlyingness over all the members of $B_p^n$, where
$B_p^n$ is the set of all ${n\choose p}$ such directions $d$ in a $n$ by $p$ data matrix whose entries lie in general (linear) position in $\mathbb{R}^p$. Often, $|B_p^n|={n\choose p}$ will be too large, and it will not be possible to consider all its members in which case one can sample randomly $K$ directions from it to compute an approximation to the SD outlyigness of $x_i$ w.r.t. $X$:
$$O(x_i,X)=\underset{\{d_k\}_{k=1}^K\in B_p^n}{\max}\frac{|x_i'd_k-\mbox{med}_i(x_i'd_k)|}{\mbox{mad}_i(x_i'd_k)}$$
which is R is obtained as:
library(matrixStats)
K<-100
AllThoseProj<-matrix(NA,n,K)
for(i in 1:K){
w<-moar_directions(x)
AllThoseProj[,i]<-abs(w-median(w))/mad(w)
}
rowMaxs(AllThoseProj)
(I will use for a couple of lines K=100 as my definition of 'many' then latter explain what is meant by many in this context)
To motivate the original definition, notice[3] that:
$$d(x_i,X)=\underset{d\in B_p}{\max}\frac{|x_i'd_k-\mbox{mean}_i(x_i'd_k)|}{\mbox{sd}_i(x_i'd_k)}\;\;(1)$$
where
$$d^2(x_i,X)=(x_i-\mbox{mean}_i(x_i))'\mbox{Cov}_i(x_i)^{-1}(x_i-\mbox{mean}_i(x_i))$$
is the vector of squared Mahalanobis distances. So the original SD outlyingness index was designed as a way to compute
a consistent estimator of $d^2(x_i)$ with 50% breakdown point (see [1]) and
equality (1) only holds when the directions $d$ are defined as above.
Now, I will delve a bit more on what is meant by 'many' in ' Often, $|B_p^n|$ will be too large, and ... one can sample randomly many directions'
The notion of many that should be used here depends on the objective being pursued. If you are using the SD to approximate the vector of $d(x_i,X)$'s, then $K$ in the high hundreds should already give a good approximation. If you are using the SD as a robust alternative to the vector of $d(x_i,X)$'s, then a much higher value of $K$ will be necessary (see page 13 of [0] for a discussion of this).
Now, I will delve a bit more into these pseudo-SD distances.
The problem with the way $d$ is defined above is that it is only uniquely defined in settings where $n>p$. In the last decade a lot of research started to be done on high dimensional robustness. The idea built up incrementally to search for outliers through projection pursuit (as was done with the SD distances) but using a type of projection that would also 'work' when $p>n$. There are many such notions, but, gradually, a consensus emerged around the idea of using directions through two points (see [4] for a late implementation of this idea). Directions through two points are defined as (see [4], and line 52 of body(rrcov:::extradir) in package rrcov for example):
moar_directions_2points<-function(x){
n<-nrow(x)
p<-ncol(x)
P<-x[sample(n,2),,drop=FALSE]
G<-rep(NA,p)
E<-G
E<-P[1,]-P[2,]
N<-sqrt(drop(crossprod(E)))   # drop() so that N is a plain scalar rather than a 1x1 matrix
if(N>1e-8) G<-E/N
return(G)
}
Now, if we substitute in equation (0) the original definition of the directions $d$ by the ones above, the new outlyingness index (let's call it pseudo-SD) is no longer (except when $p=2$) a consistent estimator of the vector of $d(x_i,X)$'s, so it is a bit hard to interpret what it is. On the other hand, it can still be computed when $p>n$ in which case the vector of $d(x_i)$'s is not even defined anyway. The fact that we lost consistency by using directions through 2 data points also means we no longer have a target (a quantity we are trying to estimate, like we did with the vector of $d(x_i,X)$'s in the case of the SD) so the question of what is meant by 'many' here is not really addressable. rrcov for example uses 'many' (in the case of directions through two points) to mean 250.
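To make that concrete, here is a minimal sketch (mine, and only a sketch — not the exact code used by rrcov or robustbase) of such a pseudo-SD index, built from moar_directions_2points() and formula (0); unlike the hyperplane-based version it also runs when $p>n$:
pseudo_sd <- function(x, K = 250) {
  out <- matrix(NA, nrow(x), K)
  for (k in 1:K) {
    w <- x %*% moar_directions_2points(x)   # project onto a two-point direction
    out[, k] <- abs(w - median(w)) / mad(w)
  }
  apply(out, 1, max, na.rm = TRUE)
}
x_wide <- matrix(rnorm(20 * 50), nc = 50)    # p > n example
head(pseudo_sd(x_wide))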
References:
[0] Stahel W. (1981).
Breakdown of Covariance Estimators.
Research Report 31, Fachgrupp fur Statistik, E.T.H. Zurich.
[1] Donoho. D.L. (1982).
Breakdown properties of multivariate location estimators
Ph.D. Qualifying Paper Harvard University.
[2] Hubert, M. and Van der Veeken, S. (2007). Outlier detection for skewed data. Journal of chemometrics vol:22 issue:3-4 pages:235-246.
[3] Rousseeuw, P.J. and Leroy, A.M. (1987).
Robust Regression and Outlier Detection. Wiley, New York.
[4] Hubert, M., Rousseeuw P. J. and Vanden Branden, K. (2005). ROBPCA: A New Approach to Robust Principal Component Analysis. Technometrics
Volume 47, Issue 1.
54,410 | MANOVA when sample size is smaller than the number of DVs | Any method will be "sensitive to a small number of cases", meaning that a small study has lower power to detect features of interest than a larger one. With multivariate data, you are not only comparing means -- you are typically estimating a covariance matrix as well, and this requires more data to do with any accuracy.
It's worth considering what happens in MANOVA. Basically, the algorithm looks for linear combinations of the variables that do a good job of distinguishing the groups. Effectively, it has dimension reduction built in. If your variables are actually independent, MANOVA will not do better than a bunch of univariate comparisons (t-tests).
Let's look at your suggestions:
Remove variables. Certainly, if you think some variables won't distinguish the groups, then remove them.
Reduce dimensionality. Best option, if you have subject-matter reasons for combining the variables in some way. Perhaps the sum of your variables would be a good measure of something. Instead of letting MANOVA find the best linear combination, you supply one, based on theoretical considerations, and do a t-test of the scores.
Duplicate cases. This won't work. You can't make a singular matrix become non-singular by duplicating rows.
Use an alternative to MANOVA not sensitive to a small number of cases. The two-sample t-test can actually detect group differences on very small samples. You could do t-tests on all your variables, but to be honest, you would need to adjust the significance level using a Bonferroni approach; a minimal sketch of this is given below.
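Here is that sketch in R; the data frame and variable names are hypothetical stand-ins, not the original data.
set.seed(42)
dat <- data.frame(group = rep(c("A", "B"), each = 4),                 # 8 cases, two groups
                  matrix(rnorm(8 * 10), nrow = 8,
                         dimnames = list(NULL, paste0("y", 1:10))))   # 10 DVs
pvals <- sapply(paste0("y", 1:10),
                function(v) t.test(dat[[v]] ~ dat$group)$p.value)     # one t-test per DV
p.adjust(pvals, method = "bonferroni")   # adjusted p-values ("holm" is never less powerful)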
54,411 | Fuzzy RDD issue | The Fuzzy RD design can be conceptualized as a local IV model (that is, an instrumental variables regression with weights that decline as observations move away from the cutoff). You need to instrument for the treated indicator with a dummy for being above the cutoff, while controlling for the running variable $Z$ and the interaction of the above-the-cutoff dummy and $Z$. This can be found on page 958 of the 2nd edition of "Adult" Wooldridge. You don't have weights, and you are missing these interactions in your two models.
Here's a simulation in Stata that demonstrates this equivalence. We start by installing two RD commands and making some fake data:
. clear
. /* install two commands that do fuzzy RD */
. capture net install rdrobust, from(http://www-personal.umich.edu/~cattaneo/rdrobust/stata) replace
. capture ssc install rd
. /* Generate Fake Data and Weights */
. mat c=(1,.5\.5,1)
. set seed 10011979
. drawnorm e pretest, n(1000) corr(c) clear
(obs 1000)
. gen z=pretest-0 // z is running variable
. gen above=z>0 // above is above-the-cutoff indicator
. gen treated=cond(uniform()<.8,above,1-above) // treated indicator
. gen y=z-z^3+treated+e // define outcome y
. gen w=max(0,1-abs(z)) // define triangle kernel weight
Here's the IV estimate. Note how you can do the interaction on the fly by using the factor variable notation. I am not using powers of $Z$ in my model, just a simple linear term:
. /* IV Version */
. ivregress 2sls y (treated=i.above) z c.z#i.above [pw=w]
(sum of wgt is 3.8525e+02)
Instrumental variables (2SLS) regression Number of obs = 703
Wald chi2(3) = 464.46
Prob > chi2 = 0.0000
R-squared = 0.3792
Root MSE = .9368
------------------------------------------------------------------------------
| Robust
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
treated | 1.641632 .3639524 4.51 0.000 .9282981 2.354965
z | .9815473 .2558127 3.84 0.000 .4801637 1.482931
|
above#c.z |
1 | -.4895004 .3488331 -1.40 0.161 -1.173201 .1942
|
_cons | -.2436488 .1800297 -1.35 0.176 -.5965006 .109203
------------------------------------------------------------------------------
Instrumented: treated
Instruments: z 1.above#c.z 1.above
There are two user-written commands that estimate fuzzy RD models:
. /* FRD Versions */
. rd y treated z, bw(1) z0(0) kernel(triangle)
Three variables specified; jump in treatment
at Z=0 will be estimated. Local Wald Estimate
is the ratio of jump in outcome to jump in treatment.
Assignment variable Z is z
Treatment variable X_T is treated
Outcome variable y is y
Estimating for bandwidth 1
Estimating for bandwidth .5
Estimating for bandwidth 2
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
numer | .7219002 .1669301 4.32 0.000 .3947233 1.049077
denom | .4397455 .0724289 6.07 0.000 .2977876 .5817035
lwald | 1.641632 .3642115 4.51 0.000 .9277902 2.355473
numer50 | .6432259 .240981 2.67 0.008 .1709118 1.11554
denom50 | .3158332 .1027168 3.07 0.002 .1145121 .5171544
lwald50 | 2.0366 .7731264 2.63 0.008 .5212996 3.5519
numer200 | 1.518334 .1283546 11.83 0.000 1.266764 1.769905
denom200 | .4920938 .0529886 9.29 0.000 .3882381 .5959496
lwald200 | 3.085456 .3284832 9.39 0.000 2.441641 3.729272
------------------------------------------------------------------------------
The first lwald coefficient is the FRD treatment effect. Here's another command that does FRD:
. rdrobust y z, fuzzy(treated) kernel(triangular) h(1) bwselect(IK)
Preparing data.
Computing variance-covariance matrix.
Computing RD estimates.
Estimation completed.
Sharp RD estimates using local polynomial regression.
Cutoff c = 0 | Left of c Right of c Number of obs = 1000
----------------------+---------------------- NN matches = 3
Number of obs | 373 330 BW type = Manual
Order loc. poly. (p) | 1 1 Kernel type = Triangular
Order bias (q) | 2 2
BW loc. poly. (h) | 1.000 1.000
BW bias (b) | 1.000 1.000
rho (h/b) | 1.000 1.000
Structural Estimates. Outcome: y. Running variable: z. Instrument: treated.
--------------------------------------------------------------------------------------
Method | Coef. Std. Err. z P>|z| [95% Conf. Interval]
----------------------+---------------------------------------------------------------
Conventional | 1.6416 .36912 4.4474 0.000 .918162 2.3651
Robust | - - 2.9955 0.003 .584165 2.79549
--------------------------------------------------------------------------------------
First-Stage Estimates. Outcome: treated. Running variable: z.
--------------------------------------------------------------------------------------
Method | Coef. Std. Err. z P>|z| [95% Conf. Interval]
----------------------+---------------------------------------------------------------
Conventional | .43975 .07148 6.1523 0.000 .299654 .579837
Robust | - - 2.7248 0.006 .083139 .509228
--------------------------------------------------------------------------------------
The conventional coefficient above is the FRD treatment effect. Both FRD estimates and their standard errors match the locally weighted IV (LWIV) estimate from ivregress above.
Now for your second question. Here I may be on shakier grounds since I am less familiar with the literature. I am assuming that you want to estimate a single model for men and women to get a single estimate of the effect. There are two options to accomplish this. One is to estimate the two models and re-weight the estimates. It seems prudent not to weight by the overall gender-specific sample size, nor by the treated sample size. Personally, I like to make the weights proportional to the number of units within some range of the discontinuity for each group to make sure that the observations too far from the cutoff don't matter in determining the weights. You can use the bandwidth for that. Because the estimates from the two individual discontinuities will be independent, once you have variances for each estimate it is easy to get a variance for the combined estimate since the covariance is zero.
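In symbols (my notation, not from the original answer): with gender-specific estimates $\hat\tau_m$ and $\hat\tau_f$, variances $v_m$ and $v_f$, and weights $w_m+w_f=1$ chosen as described above, the combined estimate and its variance are
$$\hat\tau = w_m\hat\tau_m + w_f\hat\tau_f, \qquad \widehat{\mbox{Var}}(\hat\tau) = w_m^2 v_m + w_f^2 v_f,$$
with no covariance term because the two samples are disjoint.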
The other option is to re-center all the observations by gender, pool them, and then apply an estimator for a single discontinuity with the running variable now a relative one rather than an absolute one. The resulting estimate implicitly weights the various discontinuity estimates by the number of observations at the discontinuity in each case.
I think I prefer the former approach, since it allows the bandwidth to vary by gender and for any heterogeneity in the treatment effect to emerge.
54,412 | How to interpret ANOVA output when comparing two nested mixed-effect models? | The Chisq value is the test statistic of the likelihood ratio test (LRT) being applied to the two models. This value is computed as twice the difference in the log-likelihoods of the two models (the log likelihood is in column logLik). Asymptotically, the log-likelihood ratio follows a Chi-square distribution with degrees of freedom equal to the difference in degrees of freedom of the two models; here this is $|6 - 7| = 1$ and is shown in the Chi Df column. As such, the probability of observing a test statistic as extreme as the observed (0.0452) value if the two models were equivalent can be computed from that Chi-square distribution. This probability is the value 0.8315.
You can compute this yourself as
> pchisq(0.0452, df = 1, lower.tail=FALSE)
[1] 0.8316367
(This doesn't quite match as I used the rounded values printed to the console whereas the software will have used higher precision.)
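If you want the unrounded statistic, you can recompute it directly from the two fitted objects. The object names below (frm.oage and frm.full) are placeholders for your two nested fits:
chisq <- as.numeric(2 * (logLik(frm.full) - logLik(frm.oage)))      # twice the logLik difference
df    <- attr(logLik(frm.full), "df") - attr(logLik(frm.oage), "df")
pchisq(chisq, df = df, lower.tail = FALSE)
## note: anova() refits REML fits with ML before comparing models that differ in
## their fixed effects, so do the same if your models were fitted with REML = TRUE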
In more practical terms, this just reaffirms your interpretation of the relative merits of the two models that you made via AIC. The log-likelihoods of the two models are almost exactly equal, indicating the two models do a similar job of fitting the data. The LRT is telling you that you'd be very likely to observe a test statistic (Chisq) as large as the one reported if the two models provided the same fit. Hence you fail to reject the null hypothesis that the likelihoods of the two models are equivalent.
frm.oage is "better" in the sense that it does as well as the more complex model with one fewer parameter. The AICs of the two models differ by almost 2 AIC units, which is, from the definition of AIC, what you'd expect if you added a redundant parameter with no additional explanatory power to the model.
This is not unexpected; AIC is computed from the log-likelihood of a model. All indications from the comparison of the "fits" of the two models are that they are essentially the same.
54,413 | Determine when time-series should be logged (or any other transformation) and applied automatically | As @Irishstat points out, you could use the Box-Cox power transformation, which is a more general family of transformations that includes the log transformation as a special case. R's forecast package has two functions, BoxCox.lambda and BoxCox, that you can use to determine whether your data need a transformation: if the estimated lambda is close to 1 your data need no transformation, otherwise apply the corresponding power transformation.
Using your data
x <- ts(c(5403.676, 6773.505, 7231.117, 7835.552, 5236.710, 5526.619, 6555.782, 11464.727, 7210.069, 7501.610, 8670.903, 10872.935, 8209.023, 8153.393, 10196.448, 13244.502, 8356.733, 10188.442, 10601.322, 12617.821, 11786.526, 10044.987, 11006.005, 15101.946, 10992.273, 11421.189, 10731.312),frequency =4)
library(forecast)   # provides BoxCox.lambda() and BoxCox()
lambda <- BoxCox.lambda(x, method=c("guerrero"))
lambda
0.3855427
x.transform <- BoxCox(x,lambda)
plot(x.transform)
BoxCox.lambda() estimated a lambda of 0.3855. You could use this value in the BoxCox function as shown above.
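A follow-up sketch (not part of the original answer): instead of modelling the transformed series by hand you can pass the estimated lambda straight to the forecast-package model functions, which back-transform the forecasts for you; InvBoxCox() undoes a manual transformation.
fit <- auto.arima(x, lambda = lambda)      # model estimated on the Box-Cox scale
fc  <- forecast(fit, h = 8)                # forecasts returned on the original scale
x_back <- InvBoxCox(x.transform, lambda)   # invert the manual transformation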
Let us know if you find this post useful.
54,414 | Determine when time-series should be logged (or any other transformation) and applied automatically | Power Transformations found via a Box-Cox test http://onlinestatbook.com/2/transformations/box-cox.html are useful/correct when a linear relationship is found between the expected value and the variability of the model errors. It has little to do with the variability of the original series. The range of transformations runs from none to a reciprocal. Care should be taken to account for pulse outliers, as untreated they can distort the Box-Cox conclusions. Furthermore, note that the error variance may also change in discrete steps quite free of the expected value. The appropriate remedy in this case is to use Generalized Least Squares or, as it is often known, Weighted Least Squares.
You might look very closely at my response to Seeking certain type of ARIMA explanation
UPON RECEIPT OF DATA (some 27 quarterly observations starting at 2008 Q1):
The ACF of the original series suggests a fairly strong seasonal structure. AUTOBOX automatically identified a model (shown here) which yielded an ACF of the error process suggesting model sufficiency. The model includes an identified intervention at period 21 (2013 quarter 1) of the 27 observations. A plot of the actual and the cleansed series highlights the anomaly. The actual/fit/forecast graph is here, with forecasts here. In summary, there was no need for any variance-stabilization transformation for this data set. The optimal Box-Cox coefficient requires a model and in this case is 1.0. If you don't specify a model, as is possible with boxcoxfit, then in the absence of a good ARIMA structure and the identified anomaly at period 21 you might get a lambda like .52, which is probably the result of an incorrect model.
54,415 | Tournament Plotting: Who is good against whom? | I'm guessing you have all the pairwise win-rates? Then perhaps plot them in a grid, with colors indicating win-rate. An implementation in Python:
from itertools import product
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
np.random.seed(34563)
# Create win_rates. Symmetric with .5's down the diagonal
win_rates = np.random.uniform(size=(10, 10))
for i, j in product(range(10), repeat=2):
if i == j:
win_rates[i, j] = .5
if i > j:
win_rates[i, j] = 1 - win_rates[j, i]
def plot_heatmap(win_rates):
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.imshow(win_rates, cmap=cm.bwr_r, interpolation='none')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
m = cm.ScalarMappable(cmap=cm.bwr_r)
m.set_array(win_rates)
fig.colorbar(m)
return fig
A natural ordering is simply by placement in the tournament. First let's try by overall win rate, here assuming that everyone plays an equal number of games against everyone else.
overall_win_rates = np.mean(win_rates, axis=1)
sorted_indexes = np.argsort(-overall_win_rates)
win_rates = win_rates[sorted_indexes] # Sort rows
win_rates = win_rates[:, sorted_indexes] # Sort columns
overall_win_rates = overall_win_rates[sorted_indexes]
fig = plot_heatmap(win_rates)
Clustering could also be used to order the contestants. Input the matrix of win-rates into some clustering algorithm (see scikit-learn for a variety of them in Python) and put people in the same group next to each other in the ordering. This will cluster people based on their win-rates versus particular people, which is how I would hope to extract some semblance of playing styles. Combine that ordering with a plot like the one above.
This is an example using affinity propagation. I'm actually not very familiar with this algorithm, but it's one of the few that doesn't have the number of clusters as a parameter, which is nice for the application.
from sklearn import cluster
ap = cluster.AffinityPropagation()
clusters = ap.fit_predict(win_rates)
sorted_indexes = np.lexsort([-overall_win_rates, clusters])
win_rates = win_rates[sorted_indexes]
win_rates = win_rates[:, sorted_indexes]
overall_win_rates = overall_win_rates[sorted_indexes]
fig = plot_heatmap(win_rates)
Here, the top rows are a cluster, the next three are a cluster, and the last four are a cluster. Though keep in mind that this is just using the defaults provided, in practice you probably would want to tune the clustering parameters.
Within clusters the rows are sorted by overall win rate. One of several improvements you could make to this figure is an extra column or some way to show overall win rate on the figure, and possibly an indicator for clusters as well.
54,416 | Tournament Plotting: Who is good against whom? | I would recommend using a fluctuation diagram over a heatmap. Here is an example:
Fluctuation diagrams use area to represent count, instead of color, which is higher on Cleveland's hierarchy of visual skills.
Sort the players by number of wins, both vertically and horizontally.
It is also possible to include some indicators of confidence in the counts by using a fluctuation diagram. In the plot above, a permutation test was done to determine whether the actual count was about equal to the expected value, given the marginal frequency (e.g. how often the player wins regardless of who the opponent is), or above or below it.
In the plot above, solid grey boxes indicate the actual count. The grey outline indicates the expected count if both variables were independent (what you would see if both players had an equal chance of being selected). Boxes are recolored if they are significantly bigger or smaller than the expected count.
Oh, plot was computed using R. If anyone wants the code I can send it.
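Since the original code isn't posted, here is a rough ggplot2 stand-in (mine, not the author's) for the basic idea of mapping count to area; wins is a hypothetical data frame of pairwise win counts.
library(ggplot2)
set.seed(1)
wins <- expand.grid(winner = paste0("P", 1:6), loser = paste0("P", 1:6))
wins$n <- rpois(nrow(wins), 5)              # fake win counts
wins$n[wins$winner == wins$loser] <- NA     # no self-matches
ggplot(na.omit(wins), aes(loser, winner)) +
  geom_point(aes(size = n), shape = 15) +   # square symbols, area roughly ~ count
  scale_size_area() +
  theme_minimal()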
54,417 | R and JMP produce different regression results due to sum of squares calculation and other factors | Figured it out. After condition <- mydata$condition in my original R code, adding these lines (instead of what I had there originally) makes the R results identical to the JMP results:
# change contrasts from the R defaults:
options(contrasts=c("contr.sum", "contr.poly"))
# center income:
income_c <- scale(income, scale=F, center=T)
# use centered income instead of uncentered income, which I was using before:
psc.income.regress <- lm(psc ~ income_c * condition)
# gives the coefficients, t, and p seen in JMP output:
summary(psc.income.regress)
# gives the SS and F seen in JMP output:
drop1(psc.income.regress, ~., test="F")
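For what it's worth, an alternative route to the same Type III tests (a sketch for completeness, not part of the original answer) is car::Anova(), which relies on the same contr.sum contrasts set above:
library(car)
Anova(psc.income.regress, type = 3)   # Type III SS and F tests; should agree with drop1() above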
54,418 | R and JMP produce different regression results due to sum of squares calculation and other factors | JMP centers polynomials by default. You can override that default by unclicking that option under the red triangle by "Model Specification" in the "Fit Model" dialog box. Doing so will produce the same results for the Type III sums-of-squares as R, when you use the car package and appropriate contrasts. In other words, if you wish to reproduce JMP's default results in R, you must center your numerical predictor in R first. If you wish to reproduce R's default in JMP, you must unselect the "Center Polynomials" option in JMP first.
54,419 | Book for introductory nonparametric econometrics/statistics | I would recommend two books if you are interested in smoothing techniques, especially in density estimation and regression (rather than in tests that don’t require classical normality assumptions, which are often based on ranks rather than the raw data):
Nonparametric and Semiparametric Models by Härdle, Müller, Sperlich, and Werwatz
Li and Racine's Nonparametric Econometrics: Theory and Practice
The first is much slimmer, a bit more introductory, with lots of examples and illustrations. It covers histograms, nonparametric density estimation, nonparametric regression, semiparametric and generalized regression models, single index models, generalized partial linear models, additive models and their marginal effects and generalized additive models.
The second tome covers nonparametric kernel methods, semiparametric methods, consistent model specification tests, nonparametric nearest neighbor and series methods, and some time series, simultaneous equations, and panel data models at the end. There is not too much about QR in this book. Koenker's QR would make a nice supplement.
It is also worth mentioning some other books. While comprehensive and worth reading later, I found Pagan and Ullah to be a difficult first introduction to this material. I have heard good things about Yatchew's Semiparametric Regression book, but I have not read it myself.
54,420 | Book for introductory nonparametric econometrics/statistics | There is a recent title by Henderson/Parmeter: Applied Nonparametric Econometrics. As the name suggests, its focus is more applied than that of Li and Racine, although it does devote quite some attention to theoretical underpinnings, too.
Compared to the titles mentioned by @Dimitriy, it (unsurprisingly given its more recent date) surveys many of the recent developments in for example the area of nonparametric instrumental variables or panel data estimation.
What I additionally like a lot (probably in line with many CVers) is its usage of R as well as...cross-validation ;-).
54,421 | Book for introductory nonparametric econometrics/statistics | The online StatSoft textbook is a good place to start.
54,422 | How to represent categorical data in a pie graph form? | A pie chart - if it's suitable at all - is only suitable for mutually exclusive categories, since it represents a division of a whole into mutually exclusive subsets.
So, don't use a pie chart for categories that are not mutually exclusive. You're misleading people by doing so. (Why would you want to actively - and seemingly deliberately - mislead people about the information you're presenting?)
If you know the overlap in subsets, you might be able to use area to represent proportion, with overlapping area representing the intersection of the categories, but it won't look like a pie.
Alternatively, if possible, use the information you have to break the data into mutually exclusive subsets, which can then be represented as a pie chart. (Where possible, arrange those subsets so adjacent subsets combine to give other meaningful subsets)
This 'break into mutually exclusive subsets' and 'represent as a pie chart' will generally be less informative than other choices that represent all available combinations (think along the lines of a Venn-diagram, but with area also representing probability).
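A tiny R illustration of the 'mutually exclusive subsets' idea, with hypothetical data: cross-tabulating two overlapping yes/no attributes gives four mutually exclusive cells whose shares sum to one and could therefore be charted directly.
has_A <- c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE)
has_B <- c(TRUE, FALSE, TRUE, TRUE, FALSE, TRUE)
tab <- table(A = has_A, B = has_B)   # four mutually exclusive combinations
prop.table(tab)                      # proportions sum to 1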
54,423 | Comparing mclust() and k-means centroids | Gaussian Mixture Modeling is not the same as k-means.
None of the models has a 1:1 correspondence to k-means. The closest is probably
"EII" = spherical, equal volume
but Mclust will still use a soft assignment, whereas k-means uses a hard assignment. There is a closer relationship between EII and fuzzy c-means (although I don't think they are the same either, due to different weighting functions).
Also note that k-means is commonly randomly seeded, so different runs of k-means will often yield different results, too!
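To see the (lack of) correspondence directly, a small sketch along these lines can be used; the data set here is just a stand-in for your own numeric matrix.
library(mclust)
set.seed(1)
X  <- scale(iris[, 1:4])                          # any numeric data matrix
km <- kmeans(X, centers = 3, nstart = 25)
mc <- Mclust(X, G = 3, modelNames = "EII")        # spherical, equal volume
table(kmeans = km$cluster, mclust = mc$classification)
adjustedRandIndex(km$cluster, mc$classification)  # usually similar, rarely identical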
54,424 | Does it make sense to do CV-error-weighted model averaging? | There already exists a method very similar to what you're describing (weighted averages of predictions using cross-validated scores as weights). It is called SuperLearner.
http://biostats.bepress.com/ucbbiostat/paper222/
(This book focuses on another method called TMLE, but succinctly describes SuperLearner and its theory in a few chapters)
Googling SuperLearner will get you the latest research on that.
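For orientation, a minimal call to the R implementation looks roughly like this; the package and wrapper names are real, but Y and X are placeholders for your own outcome vector and predictor data frame.
library(SuperLearner)
sl <- SuperLearner(Y = Y, X = X, family = gaussian(),
                   SL.library = c("SL.mean", "SL.glm", "SL.glmnet"))
sl$coef                          # weights given to each candidate learner
predict(sl, newdata = X)$pred    # weighted-average predictions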
54,425 | Does it make sense to do CV-error-weighted model averaging? | I argue that an average model with weights as functions of cross-validation scores makes sense.
Yes, but...
... only if you can be sure that your cross-validation measurement of the performance is good enough. In practice, e.g. I deal with sample sizes that are far too small to reliably compare models. Related questions and answers come up regularly.
IMHO, the basic rules for data-driven model optimization applies here as well.
So my advice would be: before trying to use any such scheme, make sure your measurement of model performance is precise enough to allow meaningful distinction between the models (or cases, if you boost).
Yet most people who do cross-validation to tune hyperparameters just choose the value of the hyperparameter that minimizes cross-validation error.
I suspect that far more people do so than have sufficient data to apply such a scheme reliably. See e.g. @DikranMarsupial's paper for a start:
G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. (pdf)
In what circumstances might we expect a cross-validation-error-weighted average model to perform better than a single "best" model?
I think this question is ill-posed as it is: by definition I'd say that you cannot top the best model. It doesn't matter how that model is derived.
I guess a better question may be in which situations we have to expect that
on the one hand, we cannot rely on getting a sufficiently good single model, but
on the other hand cross-validation is reliable enough to allow us to derive good weights for the averaging.
I'd guess that this margin is rather narrow (if it actually exists in practice): Averaging (weighted or not) can only help to reduce errors caused by model instability. Good performance estimates need sufficient sample size. So situations with high-dimensional data and sample sizes large enough to allow good performance estimation while not yet reaching the sample size sufficient to produce stable models would be the candidate. I'd go for situations that actually require complex models, because otherwise reducing model complexity/typical regularization would be used.
I'd also guess that the weighting needs a whole lot of care to avoid overfitting (compare to the problems of boosting).
54,426 | Introductory Text for GAM | Buja et al., (1989) give a nice overview of GAMs in the context of other additive nonparametric smoothing models. If I recall correctly, most of their examples can be easily done in R using existing packages.
Buja, A., Hastie, T., and Tibshirani, R. (1989). Linear Smoothers and Additive Models. The Annals of Statistics, 17(2):453–510.
54,427 | Introductory Text for GAM | Hastie & Tibshirani's original textbook is still a great read IMO: Hastie, T. & Tibshirani, R. (1990) Generalized Additive Models, Chapman & Hall. I personally found it much easier to follow than Simon Wood's text, even if the latter is more up-to-date.
54,428 | Introductory Text for GAM | I thought Michael Clark's overview is excellent:
https://m-clark.github.io/generalized-additive-models/
A great short (very short) introduction to get you started.
54,429 | Introductory Text for GAM | I came across this introductory video on GAM that I thought was helpful and easy to follow:
https://www.youtube.com/watch?v=nXDYapfalt4
54,430 | Introductory Text for GAM | Not a text but I found this to be a great resource for GAMs using R: https://noamross.github.io/gams-in-r-course/
It leads you through the nature of fitting a GAM without getting into the maths involved.
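For orientation, the kind of model these resources walk through looks like this in mgcv; this is a minimal sketch and the simulated data step is just an illustrative assumption:
library(mgcv)
set.seed(1)
dat <- data.frame(x = runif(200))
dat$y <- sin(2 * pi * dat$x) + rnorm(200, sd = 0.3)
fit <- gam(y ~ s(x), data = dat, method = "REML")  # one smooth term, smoothness chosen by REML
summary(fit)
plot(fit)                                          # the estimated smooth of x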
54,431 | Introductory Text for GAM | See chapter 3 in "Mixed effects models and extensions in ecology with R.
A Zuur, EN Ieno, N Walker, AA Saveliev, GM Smith"
Ch3 provides a really basic intro to GAM in both the GAM and MGCV packages. In further chapters the book covers a lot of the things you might do with GAM and goes into GAMM, and zero-inflated GAM.
It's a great intro and also helpful for advanced applications.
54,432 | How can I estimate model predicted means (a.k.a. marginal means, lsmeans, or EM means) from a GEE model fitted in R? | The emmeans package now provides estimated marginal means for GEE models:
library(geepack)
library(emmeans)
warp.gee <- geeglm(breaks ~ tension, id=wool, family=gaussian, data=warpbreaks)
emmeans(warp.gee, ~tension)
tension lsmean SE df asymp.LCL asymp.UCL
L 36.38889 5.774705 Inf 25.07067 47.70710
M 26.38889 1.689200 Inf 23.07812 29.69966
H 21.66667 2.042753 Inf 17.66294 25.67039
plot(emmeans(warp.gee, ~tension), horizontal=FALSE, ylab="Estimated mean")
54,433 | How can I estimate model predicted means (a.k.a. marginal means, lsmeans, or EM means) from a GEE model fitted in R? | The LSmeans function in the doBy package may be helpful.
Here is a simple modification of an example in the vignette.
library(doBy)
library(geepack)
warp.gee <- geeglm(breaks ~ tension, id=wool, family=gaussian, data=warpbreaks)
LSmeans(warp.gee,effect="tension")
54,434 | How can I estimate model predicted means (a.k.a. marginal means, lsmeans, or EM means) from a GEE model fitted in R? | The spind package offers a predict function similar to other uses of predict:
https://www.rdocumentation.org/packages/spind/versions/2.1.3/topics/predict.GEE
# load packages
library(geepack)
library(spind)
n <- nrow(warpbreaks) # number of cases
trainIndex <- sample(1:n, n*.60) # random subset of 60% of cases (training)
testIndex <- setdiff(1:n, trainIndex) # rest of cases not in training (testing)
# model fit
warp.gee.fit <- geeglm(breaks ~ tension, id=wool, family=gaussian, data=warpbreaks[trainIndex,])
# predict test cases
predict(warp.gee.fit, warpbreaks[testIndex,])
54,435 | How to handle unseen features in a Naive Bayes classifier? | Typically one would use Laplace smoothing, essentially adding an artificial observation of every feature for every class. This is done to avoid the issue of having never observed a feature in one class causing a zero that propagates. This is also called a uniform prior.
For a feature never seen ever in any training data, the "uniform prior" means everything will have the same probability (hence uniform without data), and so it will have no impact on which class you select.
In terms of making the decision for your classifier, this would have the equivalent result of just throwing away the novel feature! So that is what you should do. Technically, keeping it would change the probability slightly, but Naive Bayes doesn't give good probabilities in the first place, so it's not worth worrying about.
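For concreteness, here is a minimal R sketch of add-one (Laplace) smoothing for a single categorical feature; the counts are made up for illustration:
counts <- matrix(c(10, 0, 5,                     # feature levels A, B, C seen in class 1
                   2,  4, 9),                    # and in class 2
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("class1", "class2"), c("A", "B", "C")))
alpha <- 1                                        # one artificial observation per level and class
smoothed <- (counts + alpha) / (rowSums(counts) + alpha * ncol(counts))
smoothed                                          # no zero estimates, so nothing propagates a zero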
However, I would not like to do that, since I am trying to calculate the actual probability score associated with classes. Probabilities should take a hit when there are unseen features, but I am not sure how to do that mathematically.
This is a good intuition and correct. But in general, we can't do much when we encounter unobserved features as we intrinsically have no knowledge about them! All you can really do is pick a prior belief and run with that when you don't have data.
If you truly want good probabilities, start looking at logistic regression. It's not perfect either, but the probabilities are much more reasonable than what Naive Bayes will give you.
54,436 | Island Hopping with Metropolis Algorithm | 1. The problem is not about ergodicity
No, this is not related to ergodicity. In the chain without cycling around, one can still move from any island to any other island, and (provided that there are differences in the populations!) the chain is not periodic (because one sometimes stays put), so the chain is ergodic, hence it has a unique stationary distribution. The problem is that the stationary distribution is wrong, i.e., detailed balance is satisfied but not with the intended distribution.
2. Metropolis-Hastings
Your idea is correct. In Metropolis-Hastings, one modifies the acceptance probability to take into account the asymmetry in proposal distributions: the proposal is accepted with probability $\min(r,1)$, where
\begin{equation}
r=\frac{p(x')}{p(x)}\,\frac{g(x'\rightarrow x)}{g(x \rightarrow x')}.
\end{equation}
where $p$ is the target distribution, $x$ is the old state (island), $x'$ is the proposed state and $g(y\rightarrow z)$ is the probability (density in the continuous case) of proposing $z$ when the chain is at $y$. In your cyclic chain $g$ is always $1/2$, so the latter factor equals $1$ and we are left with the plain Metropolis algorithm.
In the new chain, $g$ is 1 when moving away from either boundary island. In all other cases $g=0.5$. Thus, the latter factor in the acceptance probability must be taken into account whenever either the old island or the new island is a boundary island (the factor is $0.5$ or $2$, respectively).
3. Implementation
Only the relevant modified parts (proposal and acceptance) attached. This computes directly the ratio of $g$s for the special cases it differs from $1$.
proposal_ratio = 1 #The ratio of proposal probabilities, change for special cases
# proposal distribution is (-1, +1) coordinates, taking into account the list boundaries
if cur_pos<len(A)-1 and cur_pos>0:
proposed_positions = [cur_pos-1, cur_pos+1]
elif cur_pos == len(A)-1:
proposed_positions =[cur_pos-1]
proposal_ratio = 0.5
elif cur_pos == 0 :
proposed_positions = [cur_pos+1]
proposal_ratio = 0.5
proposed_pos = random.choice(proposed_positions)
if proposed_pos == 0 or proposed_pos == len(A)-1:
proposal_ratio = 2
# decide whether you will move
r = (float(A[proposed_pos])/A[cur_pos]) * proposal_ratio
if r>=1: # definitely move
cur_pos = proposed_pos
else: # move with prob r
u = random.random() # sample from uniform distribution from 0 to 1
if r > u: # move to proposed
cur_pos = proposed_pos
else: # stay at cur_pos
pass
54,437 | scatterplot smoothing in r with big dataset: different methods | It's actually efficient and accurate to smooth the response with a moving-window mean: this can be done on the entire dataset with a fast Fourier transform in a fraction of a second. For plotting purposes, consider subsampling both the raw data and the smooth. You can further smooth the subsampled smooth. This will be more reliable than just smoothing the subsampled data.
Control over the strength of smoothing is achieved in several ways, adding flexibility to this approach:
A larger window increases the smooth.
Values in the window can be weighted to create a continuous smooth.
The lowess parameters for smoothing the subsampled smooth can be adjusted.
Example
First let's generate some interesting data. They are stored in two parallel arrays, times and x (the binary response).
set.seed(17)
n <- 300000
times <- cumsum(sort(rgamma(n, 2)))
times <- times/max(times) * 25
x <- 1/(1 + exp(-seq(-1,1,length.out=n)^2/2 - rnorm(n, -1/2, 1))) > 1/2
Here is the running mean applied to the full dataset. A fairly sizable window half-width (of $1172$) is used; this can be increased for stronger smoothing. The kernel has a Gaussian shape to make the smooth reasonably continuous. The algorithm is fully exposed: here you see the kernel explicitly constructed and convolved with the data to produce the smoothed array y.
k <- min(ceiling(n/256), n/2) # Window size
kernel <- c(dnorm(seq(0, 3, length.out=k)))
kernel <- c(kernel, rep(0, n - 2*length(kernel) + 1), rev(kernel[-1]))
kernel <- kernel / sum(kernel)
y <- Re(convolve(x, kernel))
Let's subsample the data at intervals of a fraction of the kernel half-width to assure nothing gets overlooked:
j <- floor(seq(1, n, k/3)) # Indexes to subsample
In the example j has only $768$ elements representing all $300,000$ original values.
The rest of the code plots the subsampled raw data, the subsampled smooth (in gray), a lowess smooth of the subsampled smooth (in red), and a lowess smooth of the subsampled data (in blue). The last, although very easy to compute, will be much more variable than the recommended approach because it is based on a tiny fraction of the data.
plot(times[j], x[j], col="#00000040", xlab="x", ylab="y")
a <- times[j]; b <- y[j] # Subsampled data
lines(a, b, col="Gray")
f <- 1/6 # Strength of the lowess smooths
lines(lowess(a, f=f)$y, lowess(b, f=f)$y, col="Red", lwd=2)
lines(lowess(times[j], f=f)$y, lowess(x[j], f=f)$y, col="Blue")
The red line (lowess smooth of the subsampled windowed mean) is a very accurate representation of the function used to generate the data. The blue line (lowess smooth of the subsampled data) exhibits spurious variability.
54,438 | scatterplot smoothing in r with big dataset: different methods | I found the locfit function from the locfit package to be a good solution - it is quick and made it easy to plot by group (https://cran.r-project.org/web/packages/locfit/locfit.pdf). The scale, alpha, deg, kern, kt, acri and basis parameters control the amount of smoothing. I used it with geom_smooth: geom_smooth(method='locfit', method.args = list(deg=1, alpha=0.3))
54,439 | How to calculate SD of sample for one new observation? | Yes, it's possible. Taken from this post, the updated mean and standard deviation (SD) can be calculated as follows:
$$
\bar{X}_{n} = \frac{1}{n}\left(X_{n} + (n - 1)\bar{X}_{n-1}\right)
$$
Where $n$ is the sample size (including the new observation), $X_{n}$ is the value of the new observation, and $\bar{X}_{n-1}$ is the mean of the $n-1$ first observations.
For the standard deviation, we have:
$$
s_{n} = \sqrt{\frac{n-2}{n-1}s_{n-1}^{2}+\frac{1}{n}\left(X_{n} - \bar{X}_{n-1}\right)^{2}}
$$
Where $s_{n-1}^{2}$ denotes the variance of the $n-1$ first observations.
Using the numbers $n=25, \bar{X}_{n-1}=74, s_{n-1}^{2}=8.9^{2}=79.21, X_{n}=64$, we get:
$$
\begin{align*}
\bar{X}_{n} &= 73.6 \\
s_{n} &= 8.939216
\end{align*}
$$
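A quick R check of these numbers, with the two update equations above coded directly:
update_mean <- function(prev_mean, x_new, n) (x_new + (n - 1) * prev_mean) / n
update_sd   <- function(prev_sd, prev_mean, x_new, n)
  sqrt((n - 2) / (n - 1) * prev_sd^2 + (x_new - prev_mean)^2 / n)
update_mean(74, 64, 25)      # 73.6
update_sd(8.9, 74, 64, 25)   # 8.939216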
54,440 | Poisson probability of observing at least one zero out of k independent cases | The answer depends on your question.
Here's how I've interpreted it:
You've got 5 chromosomes, you're interested in the event that any one of them is zero independently of the rest, and you're calling this the "total probability". You'd like to know what combination of individual rates ensures that this total probability is zero or negligible.
The key is what you mean by negligible.
If you are willing to accept a 1% chance that none of them are zero, then your answer is lambda=6.3, with all chromosomes having a uniform rate.
> sum(dpois(x=0,lambda=rep(6.3,5))) # uniform rate for all 5 chromosomes
[1] 0.009182 # less than 1% total probability
For a 5% chance, the answer is lambda=4.7.
The individual rates, however, could be smaller or larger, provided they contribute to keep the total within tolerance. For example:
> sum(dpois(x=0,lambda=c(6,6,7,8,8))) # non-uniform rates
[1] 0.00654 # also less than 1% total probability
Solution (uniform rates):
Define a function that does the calculation. Here I've assumed all rates are equal, though you can set lambda directly to rate and pass in any vector of rates.
all_chrom_zero <- function(rate) {
sum(dpois(x=0,lambda=rep(rate,5)))
}
Then use sapply to run it over a list of trial rates, and choose the smallest rate that comes in under your confidence threshold -- in this case 1%
tol <- 0.01
wt<-seq(0,10,0.1) # candidate rates
min(wt[which(sapply(wt,all_chrom_zero)<0.01)])
[1] 6.3
The above code has been structured to take a matrix without too much difficulty (using sapply). So you could do away with the uniform assumption entirely and actually run your code to simulate over various 5-tuples of rates.
This would be a plus if your biological model has information about the distribution of the rates of the individual chromosomes. If not, you could draw random samples from a Gaussian distribution for each chromosome's rate, each centred at the uniform solution, and run the calculations over these, or if you like brute force calculations, you could (just for fun!) also run the calculation over a 5D grid and look for the "equiprobable" curves at various tolerance levels.
Background for sapply
Using sapply allows one to avoid explicit looping in R and is quite powerful for building simulations. If you want a bit of background, I've referenced some useful primers in an answer here.
54,441 | Poisson probability of observing at least one zero out of k independent cases | ## Just repeating what you did for completion
> wt <- c(2.2,6.4,3.4,10.2,4.45)
> p <- ppois(q=0, lambda=wt)
>
> sum(ppois(q=0,lambda=wt))
[1] 0.1575537
>
> tol <- .Machine$double.eps # 2.220446e-16
> wt <- seq(0,250, by=0.01)
> wt[which( ppois(q=0, lambda=wt) < tol )[1]]
[1] 36.05
So when the Poisson rate is about 36, the probability that the random variable is 0 is minuscule. Of course you can consider different tolerance levels (I used the smallest number that my computer's R could handle, which is pretty extreme).
This is a pretty ad hoc way of going about it, and you can probably formalize it with a hypothesis test.
54,442 | How can one construct a cumulative probability distribution function from 2 others? | The question asks for the expected time to complete both of two independent tasks. Call these times $X_1$ and $X_2$: they are random variables supported on $[0,\infty)$.
Let $F_i$ be the cumulative distribution functions (CDF) of the $X_i$:
$$F_i(x) = \Pr(X_i\le x).$$
The time to complete both tasks is $Y =\max(X_1,X_2)$. Its CDF is given by
$$\eqalign{
F_Y(y) = \Pr(Y\le y) &= \Pr(X_1\le y\text{ and }X_2\le y) \\&= \Pr(X_1\le y)\Pr(X_2\le y) \\&= F_1(y)F_2(y).}$$
All equalities arise from definitions: of the CDF, of $Y$, and of independence.
Assuming both variables $X_1$ and $X_2$ are absolutely continuous, one way to obtain the expectation of $Y$ is to integrate over the joint distribution of $(X_1,X_2)$, as suggested in the question. There are much easier ways--as explained below--but to show it can be done this way, let's split the integral into one integral over $\{(x_1,x_2)\,|\, x_1\ge x_2\}$ and another over $\{(x_1,x_2)\,|\, x_1\lt x_2\}$, because in the first case $\max(x_1,x_2)=x_1$ and in the second case $\max(x_1,x_2)=x_2$:
$$\eqalign{
\mathbb{E}[Y] &= \mathbb{E}[\max(X_1,X_2)] = \iint_{\mathbb{R}^2} \max(x_1,x_2) dF_1(x_1)dF_2(x_2)\\
&= \int_\mathbb{R}\int_{-\infty}^{x_1} x_1dF_2(x_2)dF_1(x_1)
+ \int_\mathbb{R}\int_{-\infty}^{x_2} x_2dF_1(x_1)dF_2(x_2) \\
&= \int_\mathbb{R}x_1 F_2(x_1)dF_1(x_1)+\int_\mathbb{R}x_2 F_1(x_2)dF_2(x_2) \\
&= \int_\mathbb{R}y\left(F_2(y)f_1(y) + F_1(y)f_2(y)\right)dy
} $$
(writing $dF_i(y) = f_i(y)dy$).
However, this formula is easier to obtain by noting that the expectation of $Y$ can equally well be found via a single integral from the product rule
$$d\left(F_1(y)F_2(y)\right) = \left( F_2(y)f_1(y) + F_1(y)f_2(y)\right)dy,$$
whence
$$\mathbb{E}(Y) = \int_\mathbb{R}y \left(d F_Y(y)\right) = \int_\mathbb{R} y\left( F_2(y)f_1(y) + F_1(y)f_2(y)\right)dy.$$
An even more general and simpler expression for the expectation obtains by integrating the survival function $1-F_Y$, because these variables have nonnegative support:
$$\mathbb{E}(Y) = \int_0^\infty(1 - F_Y(y))dy = \int_0^\infty(1 - F_1(y)F_2(y))dy.$$
This stands up to units analysis: the units in the integrand are probability (from $1 - F_1(y)F_2(y)$) times units of $Y$ (from the $dy$ term), whence the integral is in the units of $Y$.
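As a numerical sanity check, here is a small R sketch for two independent exponential task times (the rates 1 and 2 are illustrative assumptions), comparing the survival-function integral with a direct simulation:
set.seed(1)
S <- function(y) 1 - pexp(y, rate = 1) * pexp(y, rate = 2)   # 1 - F1(y)F2(y)
integrate(S, 0, Inf)$value                                   # 7/6, about 1.1667
mean(pmax(rexp(1e6, rate = 1), rexp(1e6, rate = 2)))         # simulation, close to the same value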
54,443 | Synthetic Control Method | Let's start with a standard regression setup where you are trying to estimate the effect in a fixed-effect model when you only have a single country that received "treatment" (entering the Euro). This strategy assumes that, conditional on observables, the average of the other countries serves as a reasonable counterfactual to what your treated country would have done if it hadn't received the treatment (ie stayed out of the euro). There is likely bias in this estimate as the treatment wasn't random, but probably related to factors that also affect the outcome you care about.
SCM says that instead of using all the other units, let's find a weighted average of them that looks really similar during the pre-treatment. SCM defines "similar" as having a very similar path of your outcome variable pre-treatment while having similar pre-treatment values for covariates that appear to be related to your outcome variable. The "treatment effect" is then the difference between the actual path of your treated unit and the path of the weighted average post-treatment (the weights are fixed from the pre-treatment optimization). Inference is done by permutation tests (if we invented "fake" treatments and estimated SCM on the other units that weren't actually treated, what would be typical "effects" that we would see). SCM will consistently estimate effects in some settings where the FE model would yield biased estimates (Abadie et al. show it for a general factor model, which allows for non-parallel trends between treatment and non-treated units).
The main points SCM needs are:
Lots of pre-treatment periods. The bias in the SCM estimate goes to zero as the number of pre-treatment periods grows large in relation to the "error" in the model. Since usually SCM is used on aggregate data, the "error" is not so much sampling error but model mis-specification (inability of the synthetic control to be a good counterfactual). This is sort of similar to root-N-consistency with other estimators.
The unobserved factors that you worry about should have had some effect on the outcome during the pre-treatment period. This is an assumption that one needs to argue holds in your context.
The synthetic control needs to match the treated unit well during the pre-treatment period. You can gauge this by looking at the fake permutations. Generally you need your treated unit to be in the convex hull of observations during pre-treatment.
54,444 | Synthetic Control Method | The impact on growth of being a euro member versus not being a member is the difference between the growth rate as a member and the growth rate if the country were not a member. This difference is calculated over different periods after joining the eurozone. The problem is that the growth rate if the country is not a member is not observed. It is a "counterfactual," so it has to be estimated.
For this estimation, the synthetic control algorithm builds a control as a weighted average of countries that did not join the eurozone (the "pool of donors"). How are the weights calculated? By minimizing a Euclidean distance between variables X of the synthetic control and the treated unit, subject to the constraints for the weights (each weight belongs to the interval [0,1] and the sum of all weights is 1). Variables X are presumably not affected by the treatment (joining the euro zone). This minimization problem has a quadratic objective function with linear constraints (linear quadratic minimization problem).
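A minimal sketch of that constrained quadratic minimization using the quadprog package; the toy donor matrix X0, the treated unit x1, and the small ridge term are illustrative assumptions, not part of the original answer:
library(quadprog)
set.seed(1)
X0 <- matrix(rnorm(5 * 8), nrow = 5)             # 5 covariates for 8 donor countries (made up)
x1 <- rowMeans(X0[, 1:3]) + rnorm(5, sd = 0.1)   # covariates of the treated unit (made up)
J <- ncol(X0)
Dmat <- crossprod(X0) + diag(1e-8, J)            # X0'X0, with a tiny ridge for positive definiteness
dvec <- drop(crossprod(X0, x1))
Amat <- cbind(rep(1, J), diag(J))                # first column: sum(w) = 1; remaining columns: w >= 0
bvec <- c(1, rep(0, J))
w <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution
round(w, 3)                                      # weights defining the synthetic control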
Thus we can say that the synthetic control is similar to the treated unit in the sense that both have similar variables X (in a Euclidean metric sense). But as the pool of donors did not receive the treatment (did not join the eurozone), the growth rate of the synthetic control after joining the euro is an estimate of the counterfactual.
54,445 | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test? | E.L. Lehmann, J.P. Romano. Testing Statistical Hypotheses. 3rd ed. Springer, 2005. P. 136:
P(-), P(+) and P(0) denote the probabilities of preference for product
[A over B, B over A, or A=B, tie], ... The hypothesis to be tested H0: P(+)=P(-)
... The problem reduces to that of testing the hypothesis P=1/2 in a
binomial distribution with n-z [n = sample size, z = number of ties]
trials ... The unbiased test is obtained therefore by disregarding the
number of cases with no preference (ties), and applying the sign test
to the remaining data.
The power of the test depends strongly on P(0) ... For large P(0), the
number n-z of trials in the conditional [on z] binomial distribution
can be expected to be small, and the test will thus have little power
... A sufficiently high value of P(0), regardless of the value of
P(+)/P(-), implies that the population as a whole is largely
indifferent with respect to the product.
As an alternative treatment of ties, it is sometimes proposed to
assign each tie at random (with probability 1/2 each) to either plus
or minus ... The hypothesis H0 becomes P(+)+1/2P(0)=1/2 ... This test
can be viewed also as a randomized test ... and it is unbiased for
testing H0 in its original form ... [But] Since the test involves
randomization other than on the boundaries of the rejection region, it
is less powerful than the [original test disregarding ties]..., so that the
random breaking of ties results in a loss of power.
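A small R illustration of the recommended treatment, with made-up counts (30 prefer A, 20 prefer B, 50 ties): the ties are simply dropped and a sign test is applied to the remaining cases:
binom.test(30, 30 + 20, p = 0.5)   # sign test on the n - z untied cases only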
P(-), P(+) and P(0) denote the probabilities of preference for product
[A over B, B over A, or A=B, tie], | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test?
E.L. Lehmann, J.P. Romano. Testing Statistical Hypotheses. 3rd ed. Springer, 2005. P. 136:
P(-), P(+) and P(0) denote the probabilities of preference for product
[A over B, B over A, or A=B, tie], ... The hypothesis to be tested H0: P(+)=P(-)
... The problem reduces to that of testing the hypothesis P=1/2 in a
binomial distribution with n-z [n = sample size, z = number of ties]
trials ... The unbiased test is obtained therefore by disregarding the
number of cases with no preference (ties), and applying the sign test
to the remaining data.
The power of the test depends strongly on P(0) ... For large P(0), the
number n-z of trials in the conditional [on z] binomial distribution
can be expected to be small, and the test will thus have little power
... A sufficiently high value of P(0), regardless of the value of
P(+)/P(-), implies that the population as a whole is largely
indifferent with respect to the product.
As an alternative treatment of ties, it is sometimes proposed to
assign each tie at random (with probability 1/2 each) to either plus
or minus ... The hypothesis H0 becomes P(+)+1/2P(0)=1/2 ... This test
can be viewed also as a randomized test ... and it is unbiased for
testing H0 in its original form ... [But] Since the test involves
randomization other than on the boundaries of the rejection region, it
is less powerful than the [original test disregarding ties]..., so that the
random breaking of ties results in a loss of power. | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test?
E.L. Lehmann, J.P. Romano. Testing Statistical Hypotheses. 3rd ed. Springer, 2005. P. 136:
P(-), P(+) and P(0) denote the probabilities of preference for product
[A over B, B over A, or A=B, tie], |
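A minimal Python sketch of the two tie treatments quoted in the answer above (idx 54,445); the preference counts and the random seed are hypothetical, and scipy's exact binomial test stands in for the sign test:

import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n_plus, n_minus, n_zero = 18, 7, 30   # prefer A, prefer B, ties (hypothetical)

# Unbiased treatment from the quote: disregard the ties and apply the sign
# test to the remaining n - z = n_plus + n_minus observations.
p_drop_ties = binomtest(n_plus, n_plus + n_minus, p=0.5).pvalue

# Alternative sometimes proposed: assign each tie at random (prob. 1/2 each).
extra_plus = rng.binomial(n_zero, 0.5)
p_random_ties = binomtest(n_plus + extra_plus, n_plus + n_minus + n_zero, p=0.5).pvalue

print(p_drop_ties, p_random_ties)

As the quote notes, the randomized version is unbiased for the original H0 but tends to lose power relative to simply dropping the ties.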
54,446 | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test? | I don't see how this would be helpful, or even possible. McNemar's test only uses the discordant pairs. The Wikipedia page states:
The McNemar test statistic is:
$$
\chi^2 = {(b-c)^2 \over b+c}.
$$
I have a lengthy explanation of McNemar's test here: What is the difference between McNemar's test and the chi-squared test, and how do you know when to use each? The whole post may be of value, but borrowing the example there:
\begin{array}{rrrrrr}
& &{\rm After} & & & \\
& &{\rm No} &{\rm Yes} & &{\rm total} \\
{\rm Before}&{\rm No} &1157 &35 & &1192 \\
&{\rm Yes} &220 &13 & &233 \\
& & & & & \\
&{\rm total} &1377 &48 & &1425 \\
\end{array}
McNemar's test is the binomial test of $220/(220+35)$; the concordant pairs (i.e., $1157$, and $13$) don't show up. | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test? | I don't see how this would be helpful, or even possible. McNemar's test only uses the discordant pairs. The Wikipedia page states:
The McNemar test statistic is:
$$
\chi^2 = {(b-c)^2 \over | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test?
I don't see how this would be helpful, or even possible. McNemar's test only uses the discordant pairs. The Wikipedia page states:
The McNemar test statistic is:
$$
\chi^2 = {(b-c)^2 \over b+c}.
$$
I have a lengthy explanation of McNemar's test here: What is the difference between McNemar's test and the chi-squared test, and how do you know when to use each? The whole post may be of value, but borrowing the example there:
\begin{array}{rrrrrr}
& &{\rm After} & & & \\
& &{\rm No} &{\rm Yes} & &{\rm total} \\
{\rm Before}&{\rm No} &1157 &35 & &1192 \\
&{\rm Yes} &220 &13 & &233 \\
& & & & & \\
&{\rm total} &1377 &48 & &1425 \\
\end{array}
McNemar's test is the binomial test of $220/(220+35)$; the concordant pairs (i.e., $1157$, and $13$) don't show up. | Can McNemar's test be improved upon by adjustments for zeros like those in a sign test?
I don't see how this would be helpful, or even possible. McNemar's test only uses the discordant pairs. The Wikipedia page states:
The McNemar test statistic is:
$$
\chi^2 = {(b-c)^2 \over |
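A small Python sketch of the calculation in the answer above (idx 54,446): only the discordant counts b = 35 and c = 220 from the example table enter the test; scipy is assumed for the chi-squared tail and the exact binomial version:

from scipy.stats import binomtest, chi2

b, c = 35, 220                                # discordant pairs from the table above
chi2_stat = (b - c) ** 2 / (b + c)            # McNemar's chi-squared statistic
p_asymptotic = chi2.sf(chi2_stat, df=1)       # 1 degree of freedom
p_exact = binomtest(c, b + c, p=0.5).pvalue   # the binomial test of 220/(220+35)

print(chi2_stat, p_asymptotic, p_exact)       # the concordant 1157 and 13 never appear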
54,447 | MCMC Modelling - can this even be solved? | I'll record my modelling thoughts as I create it:
import pymc as pm
How can I recreate this data? Well, N_T cars enter. Do I know N_T? No, so it's a random variable. For simplicity, I'll say it's a discrete uniform with max 1000 (you can change this to, say, a Poisson)
N_T = pm.DiscreteUniform('N_T', 0, 1000)
Of these N_T cars that enter, I know a car will enter A with probability p. Thus the number of cars through A is a binomial rv. But do I know p for certain? Probably not, you mentioned it was a historical average, so we should model this stochastically (using the beta-binomial model):
p_A = pm.Beta('p_A', 50, 50) #centered at 0.5, but with some variance. This depends on your historical data.
N_A = pm.Binomial('N_A', N_T, p_A)
Next, only a fraction C of the cars are recorded by A's observer. Let's just assume that C is known with complete accuracy, i.e. there is no uncertainty about what C is:
C=0.95
Then the number of cars recorded is again a binomial, with N_A trials. Furthermore, suppose observer A records 20 cars.
obs_A = pm.Binomial('obs', N_A, C, observed=True, value = 20)
Let's fire the MCMC cannon:
mcmc = pm.MCMC( [obs_A, N_A, p_A, N_T] )
mcmc.sample(10000, 5000)
from pymc.Matplot import plot as mcplot
mcplot(mcmc)
And I get this posterior for N_T: | MCMC Modelling - can this even be solved? | I'll record my modelling thoughts as I create it:
import pymc as pm
How can I recreate this data? Well, N_T cars enter. Do I know N_T? No, so it's a random variable. For simplicity, I'll say its a di | MCMC Modelling - can this even be solved?
I'll record my modelling thoughts as I create it:
import pymc as pm
How can I recreate this data? Well, N_T cars enter. Do I know N_T? No, so it's a random variable. For simplicity, I'll say it's a discrete uniform with max 1000 (you can change this to, say, a Poisson)
N_T = pm.DiscreteUniform('N_T', 0, 1000)
Of these N_T cars that enter, I know a car will enter A with probability p. Thus the number of cars through A is a binomial rv. But do I know p for certain? Probably not, you mentioned it was a historical average, so we should model this stochastically (using the beta-binomial model):
p_A = pm.Beta('p_A', 50, 50) #centered at 0.5, but with some variance. This depends on your historical data.
N_A = pm.Binomial('N_A', N_T, p_A)
Next, only a fraction C of the cars are recorded by A's observer. Let's just assume that C is known with complete accuracy, i.e. there is no uncertainty about what C is:
C=0.95
Then the number of cars recorded is again a binomial, with N_A trials. Furthermore, suppose observer A records 20 cars.
obs_A = pm.Binomial('obs', N_A, C, observed=True, value = 20)
Let's fire the MCMC cannon:
mcmc = pm.MCMC( [obs_A, N_A, p_A, N_T] )
mcmc.sample(10000, 5000)
from pymc.Matplot import plot as mcplot
mcplot(mcmc)
And I get this posterior for N_T: | MCMC Modelling - can this even be solved?
I'll record my modelling thoughts as I create it:
import pymc as pm
How can I recreate this data? Well, N_T cars enter. Do I know N_T? No, so it's a random variable. For simplicity, I'll say its a di |
54,448 | One sample median test: Wilcoxon, sign test or chi squared | One-sample sign test tests that the median in the population equals the value.
One-sample Wilcoxon test tests that the distribution in the population is symmetric around the value. More technically, that the sum of two randomly chosen deviations from the value has equal probability of being positive or negative. Note that rejecting this null hypothesis does not preclude the value from being the mean or median of the population. The rejection implies two possible reasons: either the distribution is symmetric about some other value or the distribution is not symmetric at all.
So if we do assume a symmetric shape of the population distribution, then the Wilcoxon tests that the mean (=median) in the population equals the value (it is this test, then, that is the nonparametric alternative to the one-sample t-test, which assumes normality). If you assume the symmetry and hence test for the mean (=median), then Wilcoxon is more powerful, as a median test, than the more universal sign test above. | One sample median test: Wilcoxon, sign test or chi squared | One-sample sign test tests that the median in the population equals the value.
One-sample Wilcoxon test tests that the distribution in the population is symmetric around the value. More technically, t | One sample median test: Wilcoxon, sign test or chi squared
One-sample sign test tests that the median in the population equals the value.
One-sample Wilcoxon test tests that the distribution in the population is symmetric around the value. More technically, that the sum of two randomly chosen deviations from the value has equal probability of being positive or negative. Note that rejecting this null hypothesis does not preclude the value from being the mean or median of the population. The rejection implies two possible reasons: either the distribution is symmetric about some other value or the distribution is not symmetric at all.
So if we do assume a symmetric shape of the population distribution, then the Wilcoxon tests that the mean (=median) in the population equals the value (it is this test, then, that is the nonparametric alternative to the one-sample t-test, which assumes normality). If you assume the symmetry and hence test for the mean (=median), then Wilcoxon is more powerful, as a median test, than the more universal sign test above. | One sample median test: Wilcoxon, sign test or chi squared
One-sample sign test tests that the median in the population equals the value.
One-sample Wilcoxon test tests that the distribution in the population is symmetric around the value. More technically, t |
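A short Python sketch contrasting the two one-sample tests described above (idx 54,448); the data are simulated (a skewed exponential sample) and the hypothesized value m0 is set to the true median, so a Wilcoxon rejection here reflects asymmetry rather than a shifted median:

import numpy as np
from scipy.stats import binomtest, wilcoxon

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=40)   # right-skewed sample (hypothetical)
m0 = 2.0 * np.log(2)                      # true median of this exponential

# One-sample sign test: H0 is that the population median equals m0.
n_pos = int(np.sum(x > m0))
n_neg = int(np.sum(x < m0))               # exact ties with m0 would be dropped
p_sign = binomtest(n_pos, n_pos + n_neg, p=0.5).pvalue

# One-sample Wilcoxon signed-rank test: H0 is symmetry around m0.
p_wilcoxon = wilcoxon(x - m0).pvalue

print(p_sign, p_wilcoxon)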
54,449 | R-squared to compare forecasting techniques | In-sample fit such as $R^2$ is even more frowned upon as a measure of model quality in forecasting than in other statistical subdisciplines, for all the well-known reasons (if you make your model more and more complex, you will get better and better in-sample fits... but ever worse out-of-sample forecast accuracy). If at all, people will rather use information criteria, such as AIC or BIC.
However, the gold standard in forecast model assessment and model selection is using a hold-out sample. Suppose you have $n$ historical data points. Remove the last $k$ (say, $k=10$, depending on your data and the forecast horizon you actually are interested in). Fit your model (exponential smoothing, whatever) on the first $n-k$ observations, use the fitted model to forecast into the last $k$ observations, and compare this forecast with the hold-out actuals. Do this for a couple of models and choose the one with the best performance on the hold-out. Finally, refit this model using all $n$ observations to create your actual forecast.
There are many, many different error measures you can use in the comparison step, and they will not all yield the same result. Look at multiple measures and see whether the results are roughly consistent. This page explains some forecast error measures. And the entire free open source forecasting textbook is an extremely good source of information for forecasting. | R-squared to compare forecasting techniques | In-sample fit such as $R^2$ is even more frowned upon as a measure of model quality in forecasting than in other statistical subdisciplines, for all the well-known reasons (if you make your model more | R-squared to compare forecasting techniques
In-sample fit such as $R^2$ is even more frowned upon as a measure of model quality in forecasting than in other statistical subdisciplines, for all the well-known reasons (if you make your model more and more complex, you will get better and better in-sample fits... but ever worse out-of-sample forecast accuracy). If at all, people will rather use information criteria, such as AIC or BIC.
However, the gold standard in forecast model assessment and model selection is using a hold-out sample. Suppose you have $n$ historical data points. Remove the last $k$ (say, $k=10$, depending on your data and the forecast horizon you actually are interested in). Fit your model (exponential smoothing, whatever) on the first $n-k$ observations, use the fitted model to forecast into the last $k$ observations, and compare this forecast with the hold-out actuals. Do this for a couple of models and choose the one with the best performance on the hold-out. Finally, refit this model using all $n$ observations to create your actual forecast.
There are many, many different error measures you can use in the comparison step, and they will not all yield the same result. Look at multiple measures and see whether the results are roughly consistent. This page explains some forecast error measures. And the entire free open source forecasting textbook is an extremely good source of information for forecasting. | R-squared to compare forecasting techniques
In-sample fit such as $R^2$ is even more frowned upon as a measure of model quality in forecasting than in other statistical subdisciplines, for all the well-known reasons (if you make your model more |
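A Python sketch of the hold-out procedure described above (idx 54,449). The series, the hold-out size k and the smoothing parameter are all hypothetical, and simple exponential smoothing is written out by hand so the example stays self-contained:

import numpy as np

rng = np.random.default_rng(2)
y = 100 + np.cumsum(rng.normal(0, 5, size=60))   # hypothetical history, n = 60
k = 10
train, test = y[:-k], y[-k:]

def ses_forecast(series, alpha=0.3):
    # simple exponential smoothing; the flat h-step-ahead forecast is the last level
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

candidates = {
    "naive (last value)": np.repeat(train[-1], k),
    "ses (alpha=0.3)": np.repeat(ses_forecast(train, 0.3), k),
    "in-sample mean": np.repeat(train.mean(), k),
}
for name, forecast in candidates.items():
    mae = np.mean(np.abs(test - forecast))
    rmse = np.sqrt(np.mean((test - forecast) ** 2))
    print(f"{name}: MAE={mae:.2f}  RMSE={rmse:.2f}")

Whichever candidate wins on the hold-out would then be refit on all n observations before producing the real forecast, as the answer describes.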
54,450 | When to divide data into training & test set in logistic regression? | I do not think you need to divide the set if you are interested in the significance of a coefficient and not in prediction. Cross validation is used to judge the prediction error outside the sample used to estimate the model. Typically, the objective will be to tune some parameter that is not being estimated from the data.
For example, if you were interested in prediction, I would advise you to use regularized logistic regression. This is similar to logistic regression, except for the fact that coefficients (as a whole) are biased towards 0. The level of bias is determined by a penalty parameter that is typically fine tuned via cross validation. The idea is to choose the penalty parameter that minimizes the out of sample error (which is measured via cross validation.) When building a predictive model, it is acceptable (and desirable) to introduce some bias into the coefficients if said bias causes a much larger drop in the variance of the prediction (hence, resulting in a better model for predictive purposes.)
What you are trying to do is inference. You want an unbiased estimate of a coefficient (supposedly to judge the effect that changing one variable may have on another). The best way to obtain this is to have a well-specified model and a sample as large as possible. Hence, I would not split the sample. If you are interested in sampling variation, you should try a bootstrap or a jackknife procedure instead.
EDIT:
Short version: You want an unbiased model. Cross validation can help you find a good predictive model, which are often biased. Hence, I do not think cross validation is helpful in this situation. | When to divide data into training & test set in logistic regression? | I do not think you need to divide the set if you are interested in the significance of a coefficient and not in prediction. Cross validation is used to judge the prediction error outside the sample u | When to divide data into training & test set in logistic regression?
I do not think you need to divide the set if you are interested in the significance of a coefficient and not in prediction. Cross validation is used to judge the prediction error outside the sample used to estimate the model. Typically, the objective will be to tune some parameter that is not being estimated from the data.
For example, if you were interested in prediction, I would advise you to use regularized logistic regression. This is similar to logistic regression, except for the fact that coefficients (as a whole) are biased towards 0. The level of bias is determined by a penalty parameter that is typically fine tuned via cross validation. The idea is to choose the penalty parameter that minimizes the out of sample error (which is measured via cross validation.) When building a predictive model, it is acceptable (and desirable) to introduce some bias into the coefficients if said bias causes a much larger drop in the variance of the prediction (hence, resulting in a better model for predictive purposes.)
What you are trying to do is inference. You want an unbiased estimate of a coefficient (supposedly to judge the effect that changing one variable may have on another). The best way to obtain this is to have a well-specified model and a sample as large as possible. Hence, I would not split the sample. If you are interested in sampling variation, you should try a bootstrap or a jackknife procedure instead.
EDIT:
Short version: You want an unbiased model. Cross validation can help you find a good predictive model, which are often biased. Hence, I do not think cross validation is helpful in this situation. | When to divide data into training & test set in logistic regression?
I do not think you need to divide the set if you are interested in the significance of a coefficient and not in prediction. Cross validation is used to judge the prediction error outside the sample u |
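A Python sketch of the bootstrap suggested in the answer above (idx 54,450) for judging the sampling variation of a logistic regression coefficient; the data are simulated and statsmodels is assumed:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))           # true slope 0.8 (hypothetical)
y = rng.binomial(1, p)
X = sm.add_constant(x)

beta_hat = sm.Logit(y, X).fit(disp=0).params[1]   # slope on the full sample

boot_slopes = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)              # resample rows with replacement
    boot_slopes.append(sm.Logit(y[idx], X[idx]).fit(disp=0).params[1])

print(beta_hat, np.percentile(boot_slopes, [2.5, 97.5]))   # percentile interval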
54,451 | When to divide data into training & test set in logistic regression? | (1) Split sample is likely not the conventional way to approach this problem. Obviously conventions differ by fields of research and subject area. But I don't think it is unreasonable to say that bootstrapping for optimism would be the standard here, and I think you would have to justify in some detail if you were planning on using alternative methods.
(2) You're right, you might most probably don't need to validate model if you're only planning on looking at the association/coefficients. But you should know that the coefficients (and their p-values) are only valid for the pre-specified model. If you've included splines, variable selection etc. these values are inflated and might well have limited meaning. The validation process attempts to estimate the over-fitting of the model - the degree of optimism. It validates the model building process, not the model. If there is no model building - only a pre-specified model - not that useful for you. If there is model building - not unhelpful to have some estimate of how much it lead to over-fitting. | When to divide data into training & test set in logistic regression? | (1) Split sample is likely not the conventional way to approach this problem. Obviously conventions differ by fields of research and subject area. But I don't think it is unreasonable to say that boot | When to divide data into training & test set in logistic regression?
(1) Split sample is likely not the conventional way to approach this problem. Obviously conventions differ by fields of research and subject area. But I don't think it is unreasonable to say that bootstrapping for optimism would be the standard here, and I think you would have to justify in some detail if you were planning on using alternative methods.
(2) You're right, you most probably don't need to validate the model if you're only planning on looking at the association/coefficients. But you should know that the coefficients (and their p-values) are only valid for the pre-specified model. If you've included splines, variable selection etc., these values are inflated and might well have limited meaning. The validation process attempts to estimate the over-fitting of the model - the degree of optimism. It validates the model-building process, not the model. If there is no model building - only a pre-specified model - it is not that useful for you. If there is model building - it is not unhelpful to have some estimate of how much it led to over-fitting. | When to divide data into training & test set in logistic regression?
(1) Split sample is likely not the conventional way to approach this problem. Obviously conventions differ by fields of research and subject area. But I don't think it is unreasonable to say that boot |
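A rough Python sketch of the "bootstrapping for optimism" mentioned above (idx 54,451), with AUC as the performance measure; the data are simulated, scikit-learn is assumed, and a full validation routine would handle more details than this sketch:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # only the first predictor matters

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)                         # bootstrap resample
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])    # same model, original data
    optimism.append(auc_boot - auc_orig)

print(apparent, apparent - np.mean(optimism))                # optimism-corrected AUC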
54,452 | When to divide data into training & test set in logistic regression? | Why not use cross validation, maybe with a higher X, like 10X. LOOCV might also be interesting but that could go really slowly.
You could alternatively do some kind of more fancy custom CV where you leave one of the 420 positive events out, and the same proportion of the negative events (1/420 of them to preserve the relative proportion?) out at a given round. You would then have 420 CV iterations to calculate stats on, and you only give up training on a single positive sample at each round. That way you can get away with smaller training/testing splits. You could modify that to have fewer CV iterations if 420 would be too slow, maybe leave out 5 positives at a time, and 5/420 negatives? | When to divide data into training & test set in logistic regression? | Why not use cross validation, maybe with a higher X, like 10X. LOOCV might also be interesting but that could go really slowly.
You could alternatively do some kind of more fancy custom CV where you | When to divide data into training & test set in logistic regression?
Why not use cross validation, maybe with a higher X, like 10X. LOOCV might also be interesting but that could go really slowly.
You could alternatively do some kind of more fancy custom CV where you leave one of the 420 positive events out, and the same proportion of the negative events (1/420 of them to preserve the relative proportion?) out at a given round. You would then have 420 CV iterations to calculate stats on, and you only give up training on a single positive sample at each round. That way you can get away with smaller training/testing splits. You could modify that to have fewer CV iterations if 420 would be too slow, maybe leave out 5 positives at a time, and 5/420 negatives? | When to divide data into training & test set in logistic regression?
Why not use cross validation, maybe with a higher X, like 10X. LOOCV might also be interesting but that could go really slowly.
You could alternatively do some kind of more fancy custom CV where you |
54,453 | Fitting data sample to a distribution | The bins in your histogram are a bit too wide (the default has too few bins), making it hard to discern the shape clearly.
The QQ-plot suggests a finite mixture of perhaps two or three components, possibly of something right-skew, perhaps like gamma distributions. | Fitting data sample to a distribution | The bins in your histogram are a bit too wide (the default has too few bins), making it hard to discern the shape clearly.
The QQ-plot suggests that a finite mixture of perhaps two or three component | Fitting data sample to a distribution
The bins in your histogram are a bit too wide (the default has too few bins), making it hard to discern the shape clearly.
The QQ-plot suggests a finite mixture of perhaps two or three components, possibly of something right-skew, perhaps like gamma distributions. | Fitting data sample to a distribution
The bins in your histogram are a bit too wide (the default has too few bins), making it hard to discern the shape clearly.
The QQ-plot suggests that a finite mixture of perhaps two or three component |
54,454 | Fitting data sample to a distribution | Have you tried a gamma distribution? That is a little more flexible than a log-normal and typically has thinner tails, which appear to be where your fit is lacking.
If that doesn't work, you might want to click through this list on wikipedia. There are many choices for distributions with positive support, and of course each one has its pluses and minuses depending on your application. It looks like you're doing something with financial data, so the Pareto distribution may also be worth exploring. That tends to have thick tails though, so maybe try the gamma first.
A piece of advice though: you're not going to find a closed-form distribution that fits your data perfectly. Different distributions will miss in different ways, so you'll need to ask yourself exactly how good and where in the distribution the fit has to be. The log-normal fit in your pictures may not actually be that bad depending on your priorities. If you're doing regression analysis, then maybe it works. If you're modeling tail-risk, then maybe not. Also, do you really need a closed form distribution? If distribution fit qua fit is your goal, then maybe the empirical distribution is the way to go. | Fitting data sample to a distribution | Have you tried a gamma distribution? That is a little more flexible than a log-normal and typically has thinner tails, which appear to be where your fit is lacking.
If that doesn't work, you might wa | Fitting data sample to a distribution
Have you tried a gamma distribution? That is a little more flexible than a log-normal and typically has thinner tails, which appear to be where your fit is lacking.
If that doesn't work, you might want to click through this list on wikipedia. There are many choices for distributions with positive support, and of course each one has its pluses and minuses depending on your application. It looks like you're doing something with financial data, so the Pareto distribution may also be worth exploring. That tends to have thick tails though, so maybe try the gamma first.
A piece of advice though: you're not going to find a closed-form distribution that fits your data perfectly. Different distributions will miss in different ways, so you'll need to ask yourself exactly how good and where in the distribution the fit has to be. The log-normal fit in your pictures may not actually be that bad depending on your priorities. If you're doing regression analysis, then maybe it works. If you're modeling tail-risk, then maybe not. Also, do you really need a closed form distribution? If distribution fit qua fit is your goal, then maybe the empirical distribution is the way to go. | Fitting data sample to a distribution
Have you tried a gamma distribution? That is a little more flexible than a log-normal and typically has thinner tails, which appear to be where your fit is lacking.
If that doesn't work, you might wa |
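A Python sketch of trying a gamma fit next to a log-normal fit, as the answer above (idx 54,454) suggests; the data here are simulated stand-ins and scipy is assumed:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)   # hypothetical positive data

gamma_params = stats.gamma.fit(data, floc=0)        # fix the location at 0
lognorm_params = stats.lognorm.fit(data, floc=0)

loglik_gamma = np.sum(stats.gamma.logpdf(data, *gamma_params))
loglik_lognorm = np.sum(stats.lognorm.logpdf(data, *lognorm_params))

# With the location fixed, both families have two free parameters, so the
# log-likelihoods (or equivalently AIC) are directly comparable.
print(loglik_gamma, loglik_lognorm)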
54,455 | What are pros and cons of empirical Bayesian methods? | So we are clear, the idea is that I have data $Y \sim f(Y \mid \theta)$ and have a prior $\theta \sim \pi(\theta \mid \eta)$. Then the joint is
$$
J(Y, \theta \mid \eta) = f(Y\mid \theta)\pi(\theta\mid \eta)
$$
and the marginal of $Y$ is
$$
m(Y\mid\eta)=\int f(Y\mid\theta) \pi(\theta\mid\eta) \ d\theta.
$$
The empirical Bayes approach, rather than specifying the value of $\eta$ or placing a prior on $\eta$, estimates
$$
\hat \eta = \arg \max_\eta m(Y\mid \eta).
$$
Then, we draw inferences about $\theta$ from the "posterior" $\pi(\theta \mid Y, \hat \eta)$.
This describes parametric empirical Bayes. Maybe someone else can describe the situation for nonparametric empirical Bayes; I haven't dealt with it personally. The primary alternative to EB is to place a prior on $\eta$.
Some Pros
The procedure is, in principle, automatic. No work need to be done in eliciting a prior on $\eta$. Contrast this with choosing $\eta$ according to your prior knowledge about $\theta$, or using a hyperprior $\eta \sim \lambda(\eta \mid \gamma)$ (which will require specifying a value of $\gamma$). Subjectivity is always creeping in with these alternative approaches.
In practice, it can be very annoying to have to specify a prior. It can cause a lot of work on the part of the scientist. Empirical Bayes can shift the workload to the computer.
Related to 1., I've found that this can provide some stabilization of our results. Normally I would try to place a prior on $\eta$, but if my prior is too vague or out of line with the data, I find you can get some strange results. I'm more likely to get a sane answer with empirical Bayes. (Note: This is my personal experience with the models I've worked with; it is easy to imagine EB overfitting for the same reason ML results in overfitting).
Some Cons
It is not always easy to implement. What you can get away with in implementation depends on what problem you are looking at - if you are in an ML setting and are doing some variational approximation for inference you can often do some approximate EB, but if you are doing MCMC it can be quite difficult to implement in a computationally attractive way. Under MCMC you can try to fake things with a stochastic search algorithm, but as far as I know the theory behind this hasn't really been done.
By plugging in fixed point estimate $\hat \eta$ in place of $\eta$ and drawing inference from $\pi(\theta \mid Y, \hat \eta)$ as though we had specified $\hat \eta$ from the beginning, we are neglecting our uncertainty about $\eta$. There are ways to try to fix this, but mostly people just hope that it doesn't make a big difference. But if it really didn't make a difference, why not just put a hyperprior on $\eta$? This is especially suspect because the amount of information in the data about $\eta$ is often quite small.
It isn't clear what exactly we are doing from a statistical perspective. It isn't really Bayesian; at best, it is an approximation to Bayesian analysis. Hypothetically, if there was a prior on $\eta$ and it was tightly concentrated, then EB would be an approximation to fully Bayesian inference, but this typically isn't the case. So what the heck is this procedure doing? It seems to me that if I'm using this I'm usually either being a fake Bayesian or I have some reason to believe that the frequentist properties of the method are good. The principled Bayesian approach would be to put a prior on $\eta$, and this can work better in practice.
Hope that helps. I actually like EB quite a bit as a method for finding procedures and evaluating them according to their frequentist properties when I'm wearing my statistics hat. It gives frequentists a nice tool for constructing methods with "sharing of information" in hierarchical models. Occasionally the properties of EB estimators are provably good (e.g. the Stein shrinkage estimator can be derived from an EB standpoint). In ML, of course, you often just don't really care where procedures come from and just use whatever works. | What are pros and cons of empirical Bayesian methods? | So we are clear, the idea is that I have data $Y \sim f(Y \mid \theta)$ and have a prior $\theta \sim \pi(\theta \mid \eta)$. Then the joint is
$$
J(Y, \theta \mid \eta) = f(Y\mid \theta)\pi(\theta\mi | What are pros and cons of empirical Bayesian methods?
So we are clear, the idea is that I have data $Y \sim f(Y \mid \theta)$ and have a prior $\theta \sim \pi(\theta \mid \eta)$. Then the joint is
$$
J(Y, \theta \mid \eta) = f(Y\mid \theta)\pi(\theta\mid \eta)
$$
and the marginal of $Y$ is
$$
m(Y\mid\eta)=\int f(Y\mid\theta) \pi(\theta\mid\eta) \ d\theta.
$$
The empirical Bayes approach, rather than specifying the value of $\eta$ or placing a prior on $\eta$, estimates
$$
\hat \eta = \arg \max_\eta m(Y\mid \eta).
$$
Then, we draw inferences about $\theta$ from the "posterior" $\pi(\theta \mid Y, \hat \eta)$.
This describes parametric empirical Bayes. Maybe someone else can describe the situation for nonparametric empirical Bayes; I haven't dealt with it personally. The primary alternative to EB is to place a prior on $\eta$.
Some Pros
The procedure is, in principle, automatic. No work need to be done in eliciting a prior on $\eta$. Contrast this with choosing $\eta$ according to your prior knowledge about $\theta$, or using a hyperprior $\eta \sim \lambda(\eta \mid \gamma)$ (which will require specifying a value of $\gamma$). Subjectivity is always creeping in with these alternative approaches.
In practice, it can be very annoying to have to specify a prior. It can cause a lot of work on the part of the scientist. Empirical Bayes can shift the workload to the computer.
Related to 1., I've found that this can provide some stabilization of our results. Normally I would try to place a prior on $\eta$, but if my prior is too vague or out of line with the data, I find you can get some strange results. I'm more likely to get a sane answer with empirical Bayes. (Note: This is my personal experience with the models I've worked with; it is easy to imagine EB overfitting for the same reason ML results in overfitting).
Some Cons
It is not always easy to implement. What you can get away with in implementation depends on what problem you are looking at - if you are in an ML setting and are doing some variational approximation for inference you can often do some approximate EB, but if you are doing MCMC it can be quite difficult to implement in a computationally attractive way. Under MCMC you can try to fake things with a stochastic search algorithm, but as far as I know the theory behind this hasn't really been done.
By plugging in fixed point estimate $\hat \eta$ in place of $\eta$ and drawing inference from $\pi(\theta \mid Y, \hat \eta)$ as though we had specified $\hat \eta$ from the beginning, we are neglecting our uncertainty about $\eta$. There are ways to try to fix this, but mostly people just hope that it doesn't make a big difference. But if it really didn't make a difference, why not just put a hyperprior on $\eta$? This is especially suspect because the amount of information in the data about $\eta$ is often quite small.
It isn't clear what exactly we are doing from a statistical perspective. It isn't really Bayesian; at best, it is an approximation to Bayesian analysis. Hypothetically, if there was a prior on $\eta$ and it was tightly concentrated, then EB would be an approximation to fully Bayesian inference, but this typically isn't the case. So what the heck is this procedure doing? It seems to me that if I'm using this I'm usually either being a fake Bayesian or I have some reason to believe that the frequentist properties of the method are good. The principled Bayesian approach would be to put a prior on $\eta$, and this can work better in practice.
Hope that helps. I actually like EB quite a bit as a method for finding procedures and evaluating them according to their frequentist properties when I'm wearing my statistics hat. It gives frequentists a nice tool for constructing methods with "sharing of information" in hierarchical models. Occasionally the properties of EB estimators are provably good (e.g. the Stein shrinkage estimator can be derived from an EB standpoint). In ML, of course, you often just don't really care where procedures come from and just use whatever works. | What are pros and cons of empirical Bayesian methods?
So we are clear, the idea is that I have data $Y \sim f(Y \mid \theta)$ and have a prior $\theta \sim \pi(\theta \mid \eta)$. Then the joint is
$$
J(Y, \theta \mid \eta) = f(Y\mid \theta)\pi(\theta\mi |
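A minimal Python sketch of the parametric empirical Bayes recipe in the answer above (idx 54,455), specialised -- as an assumption for illustration -- to a beta-binomial model: y_i ~ Binomial(n_i, theta_i), theta_i ~ Beta(a, b), with eta = (a, b) estimated by maximising the marginal likelihood; the counts are made up:

import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

y = np.array([3, 7, 5, 9, 2, 6])        # hypothetical successes
n = np.array([10, 12, 10, 15, 8, 11])   # hypothetical trial counts

def neg_log_marginal(log_eta):
    a, b = np.exp(log_eta)              # work on the log scale to keep a, b > 0
    # log m(y | a, b) = sum_i [ betaln(y_i + a, n_i - y_i + b) - betaln(a, b) ],
    # dropping the binomial coefficients, which do not depend on (a, b).
    return -np.sum(betaln(y + a, n - y + b) - betaln(a, b))

res = minimize(neg_log_marginal, x0=np.log([1.0, 1.0]))
a_hat, b_hat = np.exp(res.x)

# "Posterior" for each theta_i with eta plugged in at its estimate:
posterior_means = (y + a_hat) / (n + a_hat + b_hat)
print(a_hat, b_hat, posterior_means)

Note that, exactly as the cons above warn, the uncertainty in (a_hat, b_hat) is ignored once it is plugged in.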
54,456 | Entropy of generalized distributions? | Typical Shannon entropy, on discrete set of probabilities, needs to be positive, as it is average of non-negative numbers, i.e.
$$\sum_i p_i \log\left(\tfrac{1}{p_i}\right).$$
Differential entropy need not be positive. It is
$$\int p(x) \log\left(\tfrac{1}{p(x)}\right) dx,$$
which does not need to be positive. $p(x)$ is a probability density, so it can be greater than one, making $\log(\tfrac{1}{p(x)})$ negative. In fact, differential entropy can be viewed as Shannon entropy in the limit of infinitesimally small boxes of width $\epsilon$, with $\log(1/\epsilon)$ subtracted; otherwise the limit diverges:
$$
\lim_{\epsilon\to 0} \sum_i p_{[i\epsilon, (i+1)\epsilon]}
\log\left(\tfrac{1}{p_{[i\epsilon, (i+1)\epsilon]}}\right)
$$
$$
\approx
\lim_{\epsilon\to 0}
\sum_{i} p(i \epsilon)\epsilon
\log\left(\tfrac{1}{p(i \epsilon)\epsilon}\right)
$$
$$
=
\lim_{\epsilon\to 0}
\left(\sum_{i} p(i \epsilon)\epsilon
\log\left(\tfrac{1}{p(i \epsilon)}\right)
+ \log(1/\epsilon) \right)
$$
$$
=
\int_x
p(x)
\log\left(\tfrac{1}{p(x)}\right) dx
+ \lim_{\epsilon\to 0}\log(1/\epsilon)
$$
For Dirac delta differential entropy is $-\infty$, so you are right. | Entropy of generalized distributions? | Typical Shannon entropy, on discrete set of probabilities, needs to be positive, as it is average of non-negative numbers, i.e.
$$\sum_i p_i \log\left(\tfrac{1}{p_i}\right).$$
Differential entropy need no | Entropy of generalized distributions?
Typical Shannon entropy, on discrete set of probabilities, needs to be positive, as it is average of non-negative numbers, i.e.
$$\sum_i p_i \log\left(\tfrac{1}{p_i}\right).$$
Differential entropy need not be positive. It is
$$\int p(x) \log\left(\tfrac{1}{p(x)}\right) dx,$$
which does not need to be positive. $p(x)$ is a probability density, so it can be greater than one, making $\log(\tfrac{1}{p(x)})$ negative. In fact, differential entropy can be viewed as Shannon entropy in the limit of infinitesimally small boxes of width $\epsilon$, with $\log(1/\epsilon)$ subtracted; otherwise the limit diverges:
$$
\lim_{\epsilon\to 0} \sum_i p_{[i\epsilon, (i+1)\epsilon]}
\log\left(\tfrac{1}{p_{[i\epsilon, (i+1)\epsilon]}}\right)
$$
$$
\approx
\lim_{\epsilon\to 0}
\sum_{i} p(i \epsilon)\epsilon
\log\left(\tfrac{1}{p(i \epsilon)\epsilon}\right)
$$
$$
=
\lim_{\epsilon\to 0}
\left(\sum_{i} p(i \epsilon)\epsilon
\log\left(\tfrac{1}{p(i \epsilon)}\right)
+ \log(1/\epsilon) \right)
$$
$$
=
\int_x
p(x)
\log\left(\tfrac{1}{p(x)}\right) dx
+ \lim_{\epsilon\to 0}\log(1/\epsilon)
$$
For Dirac delta differential entropy is $-\infty$, so you are right. | Entropy of generalized distributions?
Typical Shannon entropy, on discrete set of probabilities, needs to be positive, as it is average of non-negative numbers, i.e.
$$\sum_i p_i \log\left(\tfrac{1}{p_i}\right).$$
Differential entropy need no |
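A quick numerical check (in Python, with scipy assumed) of the limit argument in the answer above (idx 54,456): for a standard normal, the Shannon entropy of the ε-discretisation minus log(1/ε) should approach the differential entropy 0.5·log(2πe) ≈ 1.4189 as ε shrinks:

import numpy as np
from scipy.stats import norm

for eps in [0.5, 0.1, 0.01]:
    edges = np.arange(-10.0, 10.0 + eps, eps)
    p = np.diff(norm.cdf(edges))            # probability mass in each box
    p = p[p > 0]
    H_discrete = -np.sum(p * np.log(p))     # Shannon entropy of the discretisation
    print(eps, H_discrete - np.log(1.0 / eps))

print(0.5 * np.log(2 * np.pi * np.e))       # differential entropy of N(0, 1)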
54,457 | What can be inferred from a 95% confidence interval on a correlation coefficient? | All you can say is the sample Pearson's correlation coefficient (r) in contained in the interval from 0.24 to 0.78. You are 95% confident that you will detect a significantly different correlation when testing values outside this interval. What this means is that variable X has some degree of positive linear relationship to variable Y in your sample. (I hesitate to use qualitative descriptors of this "strength" of the relationship because: 1) this is somewhat an outdated way to think of it, 2) what may be a strong correlation in one discipline may be weak in another, and 3) I have no idea of the sample size used to calculate the correlation coefficient.) If this experiment were conducted several independent times, with random sampling over the same population, then 95% (in the long run) will contain the population parameter, rho. | What can be inferred from a 95% confidence interval on a correlation coefficient? | All you can say is the sample Pearson's correlation coefficient (r) in contained in the interval from 0.24 to 0.78. You are 95% confident that you will detect a significantly different correlation whe | What can be inferred from a 95% confidence interval on a correlation coefficient?
All you can say is the sample Pearson's correlation coefficient (r) is contained in the interval from 0.24 to 0.78. You are 95% confident that you will detect a significantly different correlation when testing values outside this interval. What this means is that variable X has some degree of positive linear relationship to variable Y in your sample. (I hesitate to use qualitative descriptors of this "strength" of the relationship because: 1) this is a somewhat outdated way to think of it, 2) what may be a strong correlation in one discipline may be weak in another, and 3) I have no idea of the sample size used to calculate the correlation coefficient.) If this experiment were conducted several independent times, with random sampling over the same population, then 95% of the resulting intervals (in the long run) will contain the population parameter, rho. | What can be inferred from a 95% confidence interval on a correlation coefficient?
All you can say is the sample Pearson's correlation coefficient (r) is contained in the interval from 0.24 to 0.78. You are 95% confident that you will detect a significantly different correlation whe
54,458 | What can be inferred from a 95% confidence interval on a correlation coefficient? | I make this comment from the perspective of someone who is analytical but who is not an expert in statistics. One of the reasons for doing a linear regression is to get an answer to the question as to whether the values of two variables, x and y, are independent of each other. Alternatively, the data set may contain evidence of some linkage between them. If the confidence interval of "r" CONTAINS zero, that suggests that x and y are unrelated and that the calculated regression equation is of no value. If the confidence interval on "r" DOES NOT CONTAIN zero, there is a reason to believe there is reason to suspect that the value of x is somehow linked to the value of y. In this case, if you are building a statistical or mathematical model that includes both x and y as variables, you might want to include something that represents this linkage...it might improve the predictiveness of the model.
As a caveat, because I am not a statistics expert, I could have this wrong. | What can be inferred from a 95% confidence interval on a correlation coefficient? | I make this comment from the perspective of someone who is analytical but who is not an expert in statistics. One of the reasons for doing a linear regression is to get an answer to the question as | What can be inferred from a 95% confidence interval on a correlation coefficient?
I make this comment from the perspective of someone who is analytical but who is not an expert in statistics. One of the reasons for doing a linear regression is to get an answer to the question as to whether the values of two variables, x and y, are independent of each other. Alternatively, the data set may contain evidence of some linkage between them. If the confidence interval of "r" CONTAINS zero, that suggests that x and y are unrelated and that the calculated regression equation is of no value. If the confidence interval on "r" DOES NOT CONTAIN zero, there is reason to suspect that the value of x is somehow linked to the value of y. In this case, if you are building a statistical or mathematical model that includes both x and y as variables, you might want to include something that represents this linkage...it might improve the predictiveness of the model.
As a caveat, because I am not a statistics expert, I could have this wrong. | What can be inferred from a 95% confidence interval on a correlation coefficient?
I make this comment from the perspective of someone who is analytical but who is not an expert in statistics. One of the reasons for doing a linear regression is to get an answer to the question as |
54,459 | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Randomization/Bootstrap) | I am not a big expert on statistical testing, but the approach you are considering decidedly does not make sense. Imagine that the groups are indeed identical (i.e. null hypothesis is true). Then you will observe p<0.05 in exactly 5% of the cases, and e.g. p<0.01 in 1% of the cases (those would be false positives). So following your logic, you would reject the null.
I am not aware of any problems with Wilcoxon-Mann-Whitney test in case of different numbers of observations. So one option you have is to run the ranksum test as usual, without any further complications.
However, if you do feel concerned about the very different $N$, you can try a simple permutation test: pool both groups together (obtaining $81+5110=5191$ numbers) and randomly select $81$ values as group A and all the rest as group B. Then take the difference between the means (or medians) of A and B (let's call it $\mu$), and repeat this many many times. This will give you a distribution $p(\mu)$. At the same time for your actual groups X and Y you have some fixed empirical value of $\mu^*$. Now you can check if $\mu^*$ lies in the 95% percentile interval of $p(\mu)$. If it does not, you can reject the null with p<0.05. | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Rand | I am not a big expert on statistical testing, but the approach you are considering decidedly does not make sense. Imagine that the groups are indeed identical (i.e. null hypothesis is true). Then you | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Randomization/Bootstrap)
I am not a big expert on statistical testing, but the approach you are considering decidedly does not make sense. Imagine that the groups are indeed identical (i.e. null hypothesis is true). Then you will observe p<0.05 in exactly 5% of the cases, and e.g. p<0.01 in 1% of the cases (those would be false positives). So following your logic, you would reject the null.
I am not aware of any problems with Wilcoxon-Mann-Whitney test in case of different numbers of observations. So one option you have is to run the ranksum test as usual, without any further complications.
However, if you do feel concerned about the very different $N$, you can try a simple permutation test: pool both groups together (obtaining $81+5110=5191$ numbers) and randomly select $81$ values as group A and all the rest as group B. Then take the difference between the means (or medians) of A and B (let's call it $\mu$), and repeat this many many times. This will give you a distribution $p(\mu)$. At the same time for your actual groups X and Y you have some fixed empirical value of $\mu^*$. Now you can check if $\mu^*$ lies in the 95% percentile interval of $p(\mu)$. If it does not, you can reject the null with p<0.05. | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Rand
I am not a big expert on statistical testing, but the approach you are considering decidedly does not make sense. Imagine that the groups are indeed identical (i.e. null hypothesis is true). Then you |
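A Python sketch of the permutation test described in the answer above (idx 54,459), for two groups of the sizes mentioned there (81 and 5110); the data themselves are simulated:

import numpy as np

rng = np.random.default_rng(6)
x = rng.lognormal(mean=0.2, sigma=1.0, size=81)     # small group (hypothetical)
y = rng.lognormal(mean=0.0, sigma=1.0, size=5110)   # large group (hypothetical)

observed = x.mean() - y.mean()
pooled = np.concatenate([x, y])

null_diffs = np.empty(10000)
for i in range(null_diffs.size):
    perm = rng.permutation(pooled)                  # random relabelling of the pool
    null_diffs[i] = perm[:81].mean() - perm[81:].mean()

# Two-sided p-value: how often a random split is at least as extreme as observed.
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(observed, p_value)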
54,460 | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Randomization/Bootstrap) | Your approach does not make sense. The usual Wilcoxon-test will answer you with high power. Your approach looses this advantage. It may however be reasonable to be afrait of too much power, because even irrelevant differences will show up significant, which would in fact distract the scientist interested in a relevant qualitative statement of a test. | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Rand | Your approach does not make sense. The usual Wilcoxon-test will answer you with high power. Your approach looses this advantage. It may however be reasonable to be afrait of too much power, because ev | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Randomization/Bootstrap)
Your approach does not make sense. The usual Wilcoxon test will answer you with high power. Your approach loses this advantage. It may however be reasonable to be afraid of too much power, because even irrelevant differences will show up as significant, which would in fact distract the scientist interested in a relevant qualitative statement from the test. | How to compare two non-normally distributed samples with very different sizes? (Mann-Whitney vs Rand
Your approach does not make sense. The usual Wilcoxon test will answer you with high power. Your approach loses this advantage. It may however be reasonable to be afraid of too much power, because ev
54,461 | What is the name of a plot of - log 10 of p values in multiple testing? | Geneticists call this a "Manhattan plot". Usually the bars are thicker, with no gap in between, so it looks (sort of kind of) like the New York skyline. | What is the name of a plot of - log 10 of p values in multiple testing? | Geneticists call this a "Manhattan plot". Usually the bars are thicker, with no gap in between, so it looks (sort of kind of) like the New York skyline. | What is the name of a plot of - log 10 of p values in multiple testing?
Geneticists call this a "Manhattan plot". Usually the bars are thicker, with no gap in between, so it looks (sort of kind of) like the New York skyline. | What is the name of a plot of - log 10 of p values in multiple testing?
Geneticists call this a "Manhattan plot". Usually the bars are thicker, with no gap in between, so it looks (sort of kind of) like the New York skyline. |
54,462 | What is the name of a plot of - log 10 of p values in multiple testing? | A plot of vertical lines is often called a "Needle Plot". See Graphics with R (section 3.7.3) or SAS doc.
Normally there is a shared baseline for the needles, making it conceptually like a bar chart. Your code suggests a baseline of 0 segments(seq(sigs), 0, seq(sigs), -log(sigs, base=10)), but your image looks more like lines centered at 0, segments(seq(sigs), log(sigs, base=10), seq(sigs), -log(sigs, base=10)). | What is the name of a plot of - log 10 of p values in multiple testing? | A plot of vertical lines is often called a "Needle Plot". See Graphics with R (section 3.7.3) or SAS doc.
Normally there is a shared baseline for the needles, making it conceptually like a bar chart. | What is the name of a plot of - log 10 of p values in multiple testing?
A plot of vertical lines is often called a "Needle Plot". See Graphics with R (section 3.7.3) or SAS doc.
Normally there is a shared baseline for the needles, making it conceptually like a bar chart. Your code suggests a baseline of 0 segments(seq(sigs), 0, seq(sigs), -log(sigs, base=10)), but your image looks more like lines centered at 0, segments(seq(sigs), log(sigs, base=10), seq(sigs), -log(sigs, base=10)). | What is the name of a plot of - log 10 of p values in multiple testing?
A plot of vertical lines is often called a "Needle Plot". See Graphics with R (section 3.7.3) or SAS doc.
Normally there is a shared baseline for the needles, making it conceptually like a bar chart. |
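A small matplotlib sketch (Python is assumed here, rather than the R segments calls discussed above) of a needle/Manhattan-style plot of -log10 p-values with a shared baseline at 0; the p-values are simulated and the dashed Bonferroni line is just one possible reference:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
pvals = rng.uniform(size=200)
pvals[95:100] = 10.0 ** -rng.uniform(4, 8, size=5)   # a few strong signals

x = np.arange(len(pvals))
plt.vlines(x, 0, -np.log10(pvals), linewidth=1)      # needles from the 0 baseline
plt.axhline(-np.log10(0.05 / len(pvals)), linestyle="--")   # Bonferroni threshold
plt.xlabel("test index")
plt.ylabel("-log10(p)")
plt.show()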
54,463 | Convert a categorical variable to a numerical variable prior to regression | 1) Why do you want to convert race into numbers? I'm assuming you want to do something like a regression model, is that correct? I'm going to assume you're asking how to handle "categorical data" (categories like different races) in regression.
So, you want numerical variables, and you could just assign a number to each race. But, if you choose White=1, Black=2, Asian=3 then does it really make sense that the distance between Whites and Blacks is exactly half the distance between Whites and Asians? And, is that ordering even correct? Probably not.
Instead, what you do is create dummy variables. Let's say you have just those three races. Then, you create two dummy variables: White, Black. You could also use White, Asian or Black, Asian; the key is that you always create one fewer dummy variable than categories. Now, the White variable is 1 if the individual is white and is 0 otherwise, and the Black variable is 1 if the individual is black and is 0 otherwise. If you now fit a regression model, the coefficient for White tells you the average difference between asians and whites (note that the Asian dummy variable was not used, so asians become the baseline we compare to). The coefficient for Black tells you the average difference between asians and blacks.
Note: If you're using software to fit your regression model, you probably don't have to worry about all this. You just tell your software that the variable is categorical, and it handles all these details.
2) You don't need to worry about this, at least if you're doing a regression. Running the regression model will tell you coefficients for each variable as well as their standard errors, and that information tells you which variables are most important. If you want help interpreting those coefficients, that's a whole new topic. | Convert a categorical variable to a numerical variable prior to regression | 1) Why do you want to convert race into numbers? I'm assuming you want to do something like a regression model, is that correct? I'm going to assume you're asking how to handle "categorical data" (c | Convert a categorical variable to a numerical variable prior to regression
1) Why do you want to convert race into numbers? I'm assuming you want to do something like a regression model, is that correct? I'm going to assume you're asking how to handle "categorical data" (categories like different races) in regression.
So, you want numerical variables, and you could just assign a number to each race. But, if you choose White=1, Black=2, Asian=3 then does it really make sense that the distance between Whites and Blacks is exactly half the distance between Whites and Asians? And, is that ordering even correct? Probably not.
Instead, what you do is create dummy variables. Let's say you have just those three races. Then, you create two dummy variables: White, Black. You could also use White, Asian or Black, Asian; the key is that you always create one fewer dummy variable than categories. Now, the White variable is 1 if the individual is white and is 0 otherwise, and the Black variable is 1 if the individual is black and is 0 otherwise. If you now fit a regression model, the coefficient for White tells you the average difference between asians and whites (note that the Asian dummy variable was not used, so asians become the baseline we compare to). The coefficient for Black tells you the average difference between asians and blacks.
Note: If you're using software to fit your regression model, you probably don't have to worry about all this. You just tell your software that the variable is categorical, and it handles all these details.
2) You don't need to worry about this, at least if you're doing a regression. Running the regression model will tell you coefficients for each variable as well as their standard errors, and that information tells you which variables are most important. If you want help interpreting those coefficients, that's a whole new topic. | Convert a categorical variable to a numerical variable prior to regression
1) Why do you want to convert race into numbers? I'm assuming you want to do something like a regression model, is that correct? I'm going to assume you're asking how to handle "categorical data" (c |
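A Python sketch of the dummy coding described in the answer above (idx 54,463), using pandas and statsmodels (both assumed); the tiny GPA/race table is made up, and Asian ends up as the omitted baseline category:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gpa":  [3.6, 3.1, 3.8, 2.9, 3.4, 3.7, 3.0, 3.5],
    "race": ["White", "Black", "Asian", "White", "Black", "Asian", "White", "Black"],
})

# Option 1: build the 0/1 dummies explicitly -- one fewer dummy than categories.
dummies = pd.get_dummies(df["race"], drop_first=True)   # drops "Asian"
print(dummies.head())

# Option 2: tell the software the variable is categorical and let it handle it.
fit = smf.ols("gpa ~ C(race)", data=df).fit()
print(fit.params)   # each coefficient is an average difference from the Asian baseline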
54,464 | Convert a categorical variable to a numerical variable prior to regression | Answer for your questions:
1) how do I convert the race into numbers to make my model more accurate?
-> I think answer lies in which tool you are using for analysis. Most of the tool have facility to convert attributes/factor in appropriate inputs. To explain your first question you can refer following link:
You can find your answer precisely here: http://www.ats.ucla.edu/stat/r/dae/logit.htm
It's self-explanatory article on admission based on GPA and ranks.
I am just recreating example from there. Tool used in this blog is R, freeware statistical analysis tool.
Data would look like this:
## admit gre gpa rank
## 1 0 380 3.61 3
## 2 1 660 3.67 3
## 3 1 800 4.00 1
## 4 1 640 3.19 4
## 5 0 520 2.93 4
## 6 1 760 3.00 2
Admit is output, 1 means student got admission. Now lets make rank as category:
mydata$rank <- factor(mydata$rank)
You can use other input into factor/category using above method. Now we will prepare a regression model for above table.
mylogit <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")
Above function will prepare a logistic regression model where we are checking whether admission depends on GRE,GPA or rank. Using summary function you get see the results.
summary(mylogit)
2) How do I find which factor make the most contribution to estimate students final GPA, so I can put more weight on it?
-> You don't have to give weight before hand, regression table will give you weight (co-efficient) for each input along with its statistical significance.
I hope I have cleared your answer. | Convert a categorical variable to a numerical variable prior to regression | Answer for your questions:
1) how do I convert the race into numbers to make my model more accurate?
-> I think answer lies in which tool you are using for analysis. Most of the tool have facility to | Convert a categorical variable to a numerical variable prior to regression
Answer for your questions:
1) how do I convert the race into numbers to make my model more accurate?
-> I think answer lies in which tool you are using for analysis. Most of the tool have facility to convert attributes/factor in appropriate inputs. To explain your first question you can refer following link:
You can find your answer precisely here: http://www.ats.ucla.edu/stat/r/dae/logit.htm
It's self-explanatory article on admission based on GPA and ranks.
I am just recreating example from there. Tool used in this blog is R, freeware statistical analysis tool.
Data would look like this:
## admit gre gpa rank
## 1 0 380 3.61 3
## 2 1 660 3.67 3
## 3 1 800 4.00 1
## 4 1 640 3.19 4
## 5 0 520 2.93 4
## 6 1 760 3.00 2
Admit is output, 1 means student got admission. Now lets make rank as category:
mydata$rank <- factor(mydata$rank)
You can use other input into factor/category using above method. Now we will prepare a regression model for above table.
mylogit <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial")
Above function will prepare a logistic regression model where we are checking whether admission depends on GRE,GPA or rank. Using summary function you get see the results.
summary(mylogit)
2) How do I find which factor make the most contribution to estimate students final GPA, so I can put more weight on it?
-> You don't have to give weight before hand, regression table will give you weight (co-efficient) for each input along with its statistical significance.
I hope I have cleared your answer. | Convert a categorical variable to a numerical variable prior to regression
Answer for your questions:
1) how do I convert the race into numbers to make my model more accurate?
-> I think answer lies in which tool you are using for analysis. Most of the tool have facility to |
54,465 | Which neural network is better? | Hint: The main difference between using one network for all the diseases is that in that case, the hidden layer might learn some features that can be shared between the disease-predicting outputs.
Why do you think training individual models will take more time? | Which neural network is better? | Hint: The main difference between using one network for all the diseases is that in that case, the hidden layer might learn some features that can be shared between the disease-predicting outputs.
Why | Which neural network is better?
Hint: The main difference between using one network for all the diseases is that in that case, the hidden layer might learn some features that can be shared between the disease-predicting outputs.
Why do you think training individual models will take more time? | Which neural network is better?
Hint: The main difference between using one network for all the diseases is that in that case, the hidden layer might learn some features that can be shared between the disease-predicting outputs.
Why |
54,466 | Which neural network is better? | If the diseases are mutually exclusive (i.e. the probability of having more than one disease at the same time is negligible), e.g. for differential diagnosis, then using a single network with a softmax activation function in the output layer would be a good idea. That would make the shared hidden units more effective, if some combination of hidden units suggested that disease A was likely, that would force the outputs for the other diseases closer to zero.
Personally I try not to worry about training time (provided it remains feasible) as it is better to get a good answer slowly than a bad answer quickly. This is especially true in a medical context where errors may have severe consequences. | Which neural network is better? | If the diseases are mutually exclusive (i.e. the probability of having more than one disease at the same time is negligible), e.g. for differential diagnosis, then using a single network with a softma | Which neural network is better?
If the diseases are mutually exclusive (i.e. the probability of having more than one disease at the same time is negligible), e.g. for differential diagnosis, then using a single network with a softmax activation function in the output layer would be a good idea. That would make the shared hidden units more effective, if some combination of hidden units suggested that disease A was likely, that would force the outputs for the other diseases closer to zero.
Personally I try not to worry about training time (provided it remains feasible) as it is better to get a good answer slowly than a bad answer quickly. This is especially true in a medical context where errors may have severe consequences. | Which neural network is better?
If the diseases are mutually exclusive (i.e. the probability of having more than one disease at the same time is negligible), e.g. for differential diagnosis, then using a single network with a softma |
54,467 | Which neural network is better? | Depends on how your data's distributed and if your outcomes are correlated (or not).
Why not try out both methods and convince yourself which one is better and why. There's no free lunch and sometimes the results can be completely different from your intuitions / hypothesis.
Here are some general hints:
Make sure you get your experiment methodology right and split your data into training+validation / testing or cross-validation / testing.
Log everything: all (hyper)parameters and loss evolution (convergence) for all experiments. Trial and error. Cause and effect. So change only one thing at a time.
You might also want to look into things like dropout and observe & test its effects on the network (look into ensembles).
The independent networks need not be fully independent.
Finally:
Here are some questions you might want to ask yourself after all the effort: is there any relationship between the various outcomes (diseases) or not? If not, is there any point to have shared parameters with other NN models? What is the optimal architecture then? If this were a real world setting how could I make sure that I could add new diseases easily to the system? Perhaps there might be groups of diseases that correlate well? How could I find these groups by using a single network? What are the features that correlate with these groups?
This is probably more than you need for the homework, the point I am trying to make is that you need to have some hypothesis, test it, then repeat.
Nobody can tell you anything for sure. Until you test it, it's all speculation.
Good luck and enjoy the process. I hope my answer will motivate you to try and find out more by yourself. | Which neural network is better? | Depends on how your data's distributed and if your outcomes are correlated (or not).
Why not try out both methods and convince yourself which one is better and why. There's no free lunch and sometime | Which neural network is better?
Depends on how your data's distributed and if your outcomes are correlated (or not).
Why not try out both methods and convince yourself which one is better and why. There's no free lunch and sometimes the results can be completely different from your intuitions / hypothesis.
Here are some general hints:
Make sure you get your experiment methodology right and split your data into training+validation / testing or cross-validation / testing.
Log everything: all (hyper)parameters and loss evolution (convergence) for all experiments. Trial and error. Cause and effect. So change only one thing at a time.
You might also want to look into things like dropout and observe & test it's effects (look into ensembles) on the network.
The independent networks need not be fully independent.
Finally:
Here are some questions you might want to ask yourself after all the effort: is there any relationship between the various outcomes (diseases) or not? If not, is there any point to have shared parameters with other NN models? What is the optimal architecture then? If this were a real world setting how could I make sure that I could add new diseases easily to the system? Perhaps there might be groups of diseases that correlate well? How could I find these groups by using a single network? What are the features that correlate with these groups?
This is probably more than you need for the homework, the point I am trying to make is that you need to have some hypothesis, test it, then repeat.
Nobody can tell you anything for sure. Until you test it, it's all speculation.
Good luck and enjoy the process. I hope my answer will motivate you to try and find out more by yourself. | Which neural network is better?
Depends on how your data's distributed and if your outcomes are correlated (or not).
Why not try out both methods and convince yourself which one is better and why. There's no free lunch and sometime |
54,468 | Hypothesis testing for a correlation that is zero or negative | Strange that no direct answer to the original question has been given (even though @Nick Stauner and @Glen_b nicely elaborated on possibly superior alternatives). The wikipedia article discusses various methods, including the following, which is probably the most direct answer.
A one-sided hypothesis test on a correlation can be performed via t as a test statistic. Here,
$$t = r\sqrt{\frac{n-2}{1-r^2}}$$
with the critical value found via $t_{\alpha,n-2}$ (in the more common two-sided case, only $\alpha$ is changed). So if your alternative is that the correlation is smaller than 0 (with $H_0$ covering zero and positive values), the test rejects if the t resulting from plugging your n and r into the above formula is smaller than the lower-tail critical value determined by your n and desired $\alpha$.
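As a quick illustration, base R's cor.test uses exactly this t statistic; a minimal sketch with made-up data (alternative = "less" corresponds to the alternative of a negative correlation, alternative = "greater" to a positive one):
set.seed(1)
x <- rnorm(40); y <- -0.4 * x + rnorm(40)   # made-up data
cor.test(x, y, alternative = "less")        # one-sided t test against the alternative rho < 0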
(Admittedly, even this does not precisely answer the question in the sense that a correlation of exactly 0 is filed on the wrong side.)
Alternatively, a permutation test can be performed (see the wiki article). | Hypothesis testing for a correlation that is zero or negative | Strange that no direct answer to the original question has been given (even though @Nick Stauner and @Glen_b nicely elaborated on possibly superior alternatives). The wikipedia article discusses vario | Hypothesis testing for a correlation that is zero or negative
Strange that no direct answer to the original question has been given (even though @Nick Stauner and @Glen_b nicely elaborated on possibly superior alternatives). The wikipedia article discusses various methods, including the following, which is probably the most direct answer.
A one-sided hypothesis test on a correlation can be performed via t as a test statistic. Here,
t $= r\sqrt\frac{{n-2}}{1-r^2}$
with the critical value found via $t_{\alpha,n-2}$ (in the more common two-sided case, only $\alpha$ is changed). So for your $H_0$ that r is smaller than 0, the test rejects if the t resulting from plugging your n and r into the above formula is smaller than the critical value determined by your n and desired $\alpha$.
(Admittedly, even this does not precisely answer the question in the sense that a correlation of exactly 0 is filed on the wrong side.)
Alternatively, a permutation test can be performed (see the wiki article). | Hypothesis testing for a correlation that is zero or negative
Strange that no direct answer to the original question has been given (even though @Nick Stauner and @Glen_b nicely elaborated on possibly superior alternatives). The wikipedia article discusses vario |
54,469 | Hypothesis testing for a correlation that is zero or negative | You might achieve what you're really after (if it's not exactly what you've asked, which is interesting in its own right; +1 and welcome to CV!) rather simply by fitting a confidence interval (CI) around the correlation (I see @Glen_b suggested this in a comment too). If your correlation is significantly negative, a 95% CI would exclude positive values (and zero) with 95% confidence, which is usually enough for many statistical applications (e.g., in the social sciences, from whence I come brandishing a PhD). See also: When are confidence intervals useful?
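As a minimal R sketch of the CI approach (with made-up data) — cor.test reports the interval directly, and conf.level controls its width:
set.seed(1)
x <- rnorm(40); y <- -0.4 * x + rnorm(40)        # made-up data
cor.test(x, y, conf.level = 0.95)$conf.int       # two-sided 95% CI for the correlation
cor.test(x, y, alternative = "less")$conf.int    # one-sided interval of the form (-1, upper bound]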
I don't know if it's legit to just keep increasing (or decreasing) your confidence levels until your upper bound exceeds zero, but I'm curious enough myself that I'll offer this idea, risk a little rep, and eagerly await any critical comments the community might have for us. I.e., I don't see why you couldn't just take the confidence level at which your correlation estimate's CI touches zero as your estimate of $1-p$ for a test of whether your estimate is on the proper side of zero, but also below the other, more extreme bound...which means I still haven't answered your question exactly. Still, even if your estimate is above zero, you could calculate the level of confidence with which you can say future samples from the same distribution would exhibit correlations that are also above zero and below the upper bound of your CI...
This idea is due in part to my general preference for CIs over significance tests, which itself is due partly to a recent book (Cumming, 2012) I haven't actually read, to be honest—I've heard some pretty credible praise from those who have though—enough to recommend it myself, whether that's wise or otherwise. Speaking of "credible", if you like the CI idea, you might also consider calculating credible intervals—the Bayesian approach to estimating the probability given the fixed data of a random population parameter value being within the interval, as opposed to the CI's probability of the random data given a fixed population parameter...but I'm no Bayesian (yet), so I can't speak to that, or even be certain that I've described the credible interval interpretation with precise accuracy. You may prefer to see these questions:
What, precisely, is a confidence interval?
Possible dupe of ^: What does a confidence interval (vs. a credible interval) actually express?
Clarification on interpreting confidence intervals?
Interpreting a confidence interval.
Confidence intervals when using Bayes' theorem
What's the difference between a confidence interval and a credible interval?
Should I report credible intervals instead of confidence intervals?
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
As you can see, there's a lot of confusion about these matters, and many ways of explaining them.
Reference
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge. | Hypothesis testing for a correlation that is zero or negative | You might achieve what you're really after (if it's not exactly what you've asked, which is interesting in its own right; +1 and welcome to CV!) rather simply by fitting a confidence interval (CI) aro | Hypothesis testing for a correlation that is zero or negative
You might achieve what you're really after (if it's not exactly what you've asked, which is interesting in its own right; +1 and welcome to CV!) rather simply by fitting a confidence interval (CI) around the correlation (I see @Glen_b suggested this in a comment too). If your correlation is significantly negative, a 95% CI would exclude positive values (and zero) with 95% confidence, which is usually enough for many statistical applications (e.g., in the social sciences, from whence I come brandishing a PhD). See also: When are confidence intervals useful?
I don't know if it's legit to just keep increasing (or decreasing) your confidence levels until your upper bound exceeds zero, but I'm curious enough myself that I'll offer this idea, risk a little rep, and eagerly await any critical comments the community might have for us. I.e., I don't see why you couldn't just take the confidence level at which your correlation estimate's CI touches zero as your estimate of $1-p$ for a test of whether your estimate is on the proper side of zero, but also below the other, more extreme bound...which means I still haven't answered your question exactly. Still, even if your estimate is above zero, you could calculate the level of confidence with which you can say future samples from the same distribution would exhibit correlations that are also above zero and below the upper bound of your CI...
This idea is due in part to my general preference for CIs over significance tests, which itself is due partly to a recent book (Cumming, 2012) I haven't actually read, to be honest—I've heard some pretty credible praise from those who have though—enough to recommend it myself, whether that's wise or otherwise. Speaking of "credible", if you like the CI idea, you might also consider calculating credible intervals—the Bayesian approach to estimating the probability given the fixed data of a random population parameter value being within the interval, as opposed to the CI's probability of the random data given a fixed population parameter...but I'm no Bayesian (yet), so I can't speak to that, or even be certain that I've described the credible interval interpretation with precise accuracy. You may prefer to see these questions:
What, precisely, is a confidence interval?
Possible dupe of ^: What does a confidence interval (vs. a credible interval) actually express?
Clarification on interpreting confidence intervals?
Interpreting a confidence interval.
Confidence intervals when using Bayes' theorem
What's the difference between a confidence interval and a credible interval?
Should I report credible intervals instead of confidence intervals?
Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals
As you can see, there's a lot of confusion about these matters, and many ways of explaining them.
Reference
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge. | Hypothesis testing for a correlation that is zero or negative
You might achieve what you're really after (if it's not exactly what you've asked, which is interesting in its own right; +1 and welcome to CV!) rather simply by fitting a confidence interval (CI) aro |
54,470 | Hypothesis testing for a correlation that is zero or negative | The simplest way to do so (for Pearson correlation) is to use Fisher's z-transformation.
Let r be the correlation in question.
Let n be the sample size used to acquire the correlation.
tanh is the hyperbolic tangent
atanh or $\tanh^{-1}$ is the inverse hyperbolic tangent.
Let z = atanh(r); then z is approximately normally distributed with variance $\frac{1}{n-3}$.
Using this, you can construct a confidence interval
$ C.I.(\rho) = \tanh\left(\tanh^{-1}(\rho) \pm q \cdot \frac{1}{\sqrt{n-3}}\right) $, where $q$ is the value that describes the level of confidence you want (i.e., the value you would read from a normal distribution table (e.g., 1.96 for 95% confidence)),
If zero is in the confidence interval, then you would fail to reject the null hypothesis that the correlation is zero. Also, note that you cannot use this for correlations of $\pm 1$ because if they are one for data that is truly continuous, then you only need 3 data points to determine that.
For one-sided values, simply use the z-score you'd use for a 1-sided p-value for it, and then transform it back and see if your correlation is within the range of that interval.
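A minimal R sketch of the recipe above, with made-up values for r and n:
r <- -0.35; n <- 40                        # made-up sample correlation and sample size
z <- atanh(r)                              # Fisher z-transform
se <- 1 / sqrt(n - 3)
tanh(z + c(-1, 1) * qnorm(0.975) * se)     # approximate two-sided 95% interval, back on the correlation scale
pnorm(z / se)                              # approximate one-sided p-value for the alternative rho < 0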
Edit: You can use a 1-sided test using the same values. Also, I changed sample values $r$ to theoretical values $\rho$, since that's a more appropriate use of confidence intervals.
Source: http://en.wikipedia.org/wiki/Fisher_transformation | Hypothesis testing for a correlation that is zero or negative | The simplest way to do so (for Pearson correlation) is to use Fisher's z-transformation.
Let r be the correlation in question.
Let n be the sample size used to acquire the correlation.
tanh is the hyp | Hypothesis testing for a correlation that is zero or negative
The simplest way to do so (for Pearson correlation) is to use Fisher's z-transformation.
Let r be the correlation in question.
Let n be the sample size used to acquire the correlation.
tanh is the hyperbolic tangent
atanh or $\tanh^{-1}$ is the inverse hyperbolic tangent.
Let z = atanh(r), then z is normally distributed with variance $\frac{1}{n-3}$`
Using this, you can construct a confidence interval
$ C.I.(\rho) = \tanh\left(\tanh^{-1}(\rho) \pm q \cdot \frac{1}{\sqrt{n-3}}\right) $, where $q$ is the value that describes the level of confidence you want (i.e., the value you would read from a normal distribution table (e.g., 1.96 for 95% confidence)),
If zero is in the confidence interval, then you would fail to reject the null hypothesis that the correlation is zero. Also, note that you cannot use this for correlations of $\pm 1$ because if they are one for data that is truly continuous, then you only need 3 data points to determine that.
For one-sided values, simply use the z-score you'd use for a 1-sided p-value for it, and then transform it back and see if your correlation is within the range of that interval.
Edit: You can use a 1-sided test using the same values. Also, I changed sample values $r$ to theoretical values $\rho$, since that's a more appropriate use of confidence intervals.
Source: http://en.wikipedia.org/wiki/Fisher_transformation | Hypothesis testing for a correlation that is zero or negative
The simplest way to do so (for Pearson correlation) is to use Fisher's z-transformation.
Let r be the correlation in question.
Let n be the sample size used to acquire the correlation.
tanh is the hyp |
54,471 | What could it mean to "Rotate" a distribution? [duplicate] | The direct analogy is pretty clear:
To make it simple we'll assume it's for a continuous random variable on $(a,b)$. Without loss of generality, let $c=b-a$ and consider the corresponding variable on $(0,c)$; call that random variable $X$.
Now imagine a very thin rod of length $c$, whose density (mass per element of length) is variable in the x-direction (along its length) and consider that the rod happens to have the same material-density as a function of $x$ as the random variable has probability density as a function of $x$.
Then the moment of inertia of the rod about its mean (its second moment of mass about the center of mass) is the variance of $X$.
And hence what it 'means' to rotate a distribution is clear enough - it's quite literally rotating the 'rod' whose density represents probability-density. Variance is how 'hard' it would be to rotate the rod (low variance means 'easy to spin', high variance means it takes more push to spin it ... and stop it, if you spin it).
Think about what inertia (how hard it is to spin) reflects here, which is simply how close the mass is to the mean. The closer the mass is to the mean the easier it is to spin. If you made a physical object whose physical density represents the probability density and the random variable had low variance, the corresponding object would be easy to spin, because most of the mass would be close to the mean - both inertia and variance are how close the mass is to the mean, in a particular (and directly analogous) sense.
You don't actually 'spin' a probability density and imagine that to be physically difficult, any more than electricity is wet because of the water analogy. To expect that level of correspondence is to miss the point of such analogies (the aspects that correspond, correspond, but not every consequence of the correspondence in one realm carries over with it).
The point of saying the 'rod is hard to spin' is to give a pretty direct sense of what high variance is telling you about density. But to insist that the probability density itself spin is to miss the point. | What could it mean to "Rotate" a distribution? [duplicate] | The direct analogy is pretty clear:
To make it simple we'll assume it's for a continuous random variable on $(a,b)$. Without loss of generality, let $c=b-a$ and consider the corresponding variable on | What could it mean to "Rotate" a distribution? [duplicate]
The direct analogy is pretty clear:
To make it simple we'll assume it's for a continuous random variable on $(a,b)$. Without loss of generality, let $c=b-a$ and consider the corresponding variable on $(0,c)$; call that random variable $X$.
Now imagine a very thin rod of length $c$, whose density (mass per element of length) is variable in the x-direction (along its length) and consider that the rod happens to have the same material-density as a function of $x$ as the random variable has probability density as a function of $x$.
Then then second moment of inertia of the rod is the variance of $X$.
And hence what it 'means' to rotate a distribution is clear enough - it's quite literally rotating the 'rod' whose density represents probability-density. Variance is how 'hard' it would be to rotate the rod (low variance means 'easy to spin', high variance means it takes more push to spin it ... and stop it, if you spin it).
Think about what inertia (how hard it is to spin) reflects here, which is simply how close the mass is to the mean. The closer the mass is to the mean the easier it is to spin. If you made a physical object whose physical density represents the probability density and the random variable had low variance, the corresponding object would be easy to spin, because most of the mass would be close to the mean - both inertia and variance are how close the mass is to the mean, in a particular (and directly analogous) sense.
You don't actually 'spin' a probability density and imagine that to be physically difficult, any more than electricity is wet because of the water analogy. To expect that level of correspondence is to miss the point of such analogies (the aspects that correspond, correspond, but not every consequence of the correspondence in one realm carries over with it).
The point of saying the 'rod is hard to spin' is to give a pretty direct sense of what high variance is telling you about density. But to insist that the probability density itself spin is to miss the point. | What could it mean to "Rotate" a distribution? [duplicate]
The direct analogy is pretty clear:
To make it simple we'll assume it's for a continuous random variable on $(a,b)$. Without loss of generality, let $c=b-a$ and consider the corresponding variable on |
54,472 | What could it mean to "Rotate" a distribution? [duplicate] | If you have greater variance, then you need to expend a greater effort (meaning, spend more money collecting the data) to obtain a given level of precision. The precision for $\bar X$ is of course the reciprocal of its standard error, $\sqrt{n}/\sigma$. | What could it mean to "Rotate" a distribution? [duplicate] | If you have greater variance, then you need to expend a greater effort (meaning, spend more money collecting the data) to obtain a given level of precision. The precision for $\bar X$ is of course is | What could it mean to "Rotate" a distribution? [duplicate]
If you have greater variance, then you need to expend a greater effort (meaning, spend more money collecting the data) to obtain a given level of precision. The precision for $\bar X$ is of course is the inverse standard deviation, $\sqrt{n}/\sigma$. | What could it mean to "Rotate" a distribution? [duplicate]
If you have greater variance, then you need to expend a greater effort (meaning, spend more money collecting the data) to obtain a given level of precision. The precision for $\bar X$ is of course is |
54,473 | What could it mean to "Rotate" a distribution? [duplicate] | In signals, variance is essentially a measure of energy. Of course in this case they are related directly, not inversely. | What could it mean to "Rotate" a distribution? [duplicate] | In signals, variance is essentially a measure of energy. Of course in this case they are related directly, not inversely. | What could it mean to "Rotate" a distribution? [duplicate]
In signals, variance is essentially a measure of energy. Of course in this case they are related directly, not inversely. | What could it mean to "Rotate" a distribution? [duplicate]
In signals, variance is essentially a measure of energy. Of course in this case they are related directly, not inversely. |
54,474 | Is this statement about the conditional expectation of a sum true? | Suppose that $X$ and $Y$ are integrable. The conditional expectation $\mathrm{E}[X\mid Z]$ is a random variable that satisfies
$$
\int_A \mathrm{E}[X\mid Z]\,dP = \int_A X\,dP
$$
for every $A$ in $\sigma(Z)$, the sigma-field generated by $Z$, and is $\sigma(Z)$-measurable. Hence,
$$
\int_A \left(\mathrm{E}[X\mid Z] + \mathrm{E}[Y\mid Z]\right)\, dP = \int_A \mathrm{E}[X\mid Z]\,dP + \int_A\mathrm{E}[Y\mid Z]\, dP
$$
$$
= \int_A X\,dP + \int_A Y\,dP = \int_A \left(X+Y\right) \,dP \, .
$$
It follows that
$$
\mathrm{E}[X\mid Z] + \mathrm{E}[Y\mid Z] = \mathrm{E}[X+Y\mid Z]
$$
with probability one. | Is this statement about the conditional expectation of a sum true? | Suppose that $X$ and $Y$ are integrable. The conditional expectation $\mathrm{E}[X\mid Z]$ is a random variable that satisfies
$$
\int_A \mathrm{E}[X\mid Z]\,dP = \int_A X\,dP
$$
for every $A$ in $\ | Is this statement about the conditional expectation of a sum true?
Suppose that $X$ and $Y$ are integrable. The conditional expectation $\mathrm{E}[X\mid Z]$ is a random variable that satisfies
$$
\int_A \mathrm{E}[X\mid Z]\,dP = \int_A X\,dP
$$
for every $A$ in $\sigma(Z)$, the sigma-field generated by $Z$, and is $\sigma(Z)$-measurable. Hence,
$$
\int_A \left(\mathrm{E}[X\mid Z] + \mathrm{E}[Y\mid Z]\right)\, dP = \int_A \mathrm{E}[X\mid Z]\,dP + \int_A\mathrm{E}[Y\mid Z]\, dP
$$
$$
= \int_A X\,dP + \int_A Y\,dP = \int_A \left(X+Y\right) \,dP \, .
$$
It follows that
$$
\mathrm{E}[X\mid Z] + \mathrm{E}[Y\mid Z] = \mathrm{E}[X+Y\mid Z]
$$
with probability one. | Is this statement about the conditional expectation of a sum true?
Suppose that $X$ and $Y$ are integrable. The conditional expectation $\mathrm{E}[X\mid Z]$ is a random variable that satisfies
$$
\int_A \mathrm{E}[X\mid Z]\,dP = \int_A X\,dP
$$
for every $A$ in $\ |
54,475 | Jarque-Bera normality test in R | You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the "Jarque-Bera" test*.
Note that you reject when the p-value is small, which happens when the skewness and kurtosis differ from their expected values under normality.
The test statistic is of the form (from page 1 of Bowman and Shenton's paper):
$$\frac{n}{6} S^2 + \frac{n}{24} (K-3)^2\,,$$
where $S$ is the sample skewness and $K$ is the sample kurtosis (i.e. $K-3$ is 'excess kurtosis')
The null hypothesis is of normality, and rejection of the hypothesis (because of a significant p-value) leads to the conclusion that the distribution from which the data came is non-normal.
The test is specifically looking for skewness and kurtosis that differ from those of the normal (it squares the standardized deviations and sums them) and will tend to be significant when sample skewness or kurtosis values deviating from those of the normal are present.
Which is to say - when you get a significant test statistic with this test, it's explicitly because the sample skewness or kurtosis (or both) are different from what you expect to see with a sample from normal distribution.
Take care, however -- the asymptotic approximation on which the test is based comes in only very slowly (see the image near the bottom of this answer; also see here and here for some additional points). I wouldn't rely on it without simulating the distribution of the test statistic unless $n$ is a good deal larger than say 100.
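As a rough illustration of that point, here is an R sketch that simulates the statistic under normality at n = 30 (using one common variant of the sample skewness and kurtosis), so you can compare its behaviour with the nominal chi-squared(2) approximation:
set.seed(1)
n <- 30
jb <- replicate(1e4, {
  x <- rnorm(n)
  m <- mean(x); s2 <- mean((x - m)^2)
  S <- mean((x - m)^3) / s2^1.5            # sample skewness
  K <- mean((x - m)^4) / s2^2              # sample kurtosis
  n/6 * S^2 + n/24 * (K - 3)^2
})
mean(jb > qchisq(0.95, df = 2))            # actual rejection rate of a nominal 5% test under the null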
Here's an example of the joint distribution in normal samples at n=30 (simulated values):
-- as you see, not at all close to bivariate normal.
*(The development of the test precedes their 1980 paper; it shouldn't be named for them. D'Agostino & Pearson (1973), and then Bowman & Shenton (1975) were there well before for example, and the latter discussed the relevant issues in more detail (including the slow convergence and the shape of the joint distribution in small samples - though their diagrams seem as if they may contain an error), but one can readily see that the idea of basing a goodness of fit test on skewness and kurtosis together comes even earlier than those prior papers.) | Jarque-Bera normality test in R | You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the "Jarque-Bera" test*.
Note that you reject when the p-value is sma | Jarque-Bera normality test in R
You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the "Jarque-Bera" test*.
Note that you reject when the p-value is small, when happens when the skewness and kurtosis differ from their expected values under normality.
The test statistic is of the form (from page 1 of Bowman and Shenton's paper):
$$\frac{n}{6} S^2 + \frac{n}{24} (K-3)^2\,,$$
where $S$ is the sample skewness and $K$ is the sample kurtosis (i.e. $K-3$ is 'excess kurtosis')
The null hypothesis is of normality, and rejection of the hypothesis (because of a significant p-value) leads to the conclusion that the distribution from which the data came is non-normal.
The test is specifically looking for skewness and kurtosis that is different from that of the normal (it squares the standardized deviations and sums them) and will tend to be significant when skewness and kurtosis deviating from the values at the normal are present.
Which is to say - when you get a significant test statistic with this test, it's explicitly because the sample skewness or kurtosis (or both) are different from what you expect to see with a sample from normal distribution.
Take care, however -- the asymptotic approximation on which the test is based comes in only very slowly (see the image near the bottom of this answer; also see here and here for some additional points). I wouldn't rely on it without simulating the distribution of the test statistic unless $n$ is a good deal larger than say 100.
Here's an example of the joint distribution in normal samples at n=30 (simulated values):
-- as you see, not at all close to bivariate normal.
*(The development of the test precedes their 1980 paper; it shouldn't be named for them. D'Agostino & Pearson (1973), and then Bowman & Shenton (1975) were there well before for example, and the latter discussed the relevant issues in more detail (including the slow convergence and the shape of the joint distribution in small samples - though their diagrams seem as if they may contain an error), but one can readily see that the idea of basing a goodness of fit test on skewness and kurtosis together comes even earlier than those prior papers.) | Jarque-Bera normality test in R
You may have misunderstood something about hypothesis testing or maybe about goodness-of-fit tests, or perhaps specifically about the "Jarque-Bera" test*.
Note that you reject when the p-value is sma |
54,476 | Kaplan Meier survival estimate at t=infinite | When different from zero at the largest event time (i.e. when the largest observation is censored), the Kaplan-Meier estimator is usually undefined from that point on. There exist methods for completing the Kaplan-Meier estimator (for example, see here). In any case, the underlying survival function decreases to zero, $S(\infty) = 0$ even if $\hat{S}(\text{largest event time}) \neq 0$. Models that allow for $S(\infty) \neq 0$ are called cure models.
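A small R sketch (with made-up numbers) showing the point — when the largest observations are censored, the Kaplan-Meier curve stops above zero and is undefined beyond the last follow-up time:
library(survival)
time   <- c(2, 5, 7, 9, 12, 12)
status <- c(1, 1, 1, 1, 0, 0)              # the two longest follow-ups are censored
fit <- survfit(Surv(time, status) ~ 1)
summary(fit)                               # the last estimate stays above zero; nothing is estimated past t = 12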
EDIT related to your comment below
The first sentence of this paper, which shows how to fit a cure model, says "In survival analysis, it is usually assumed that if complete follow-up were possible for all individuals, each would eventually experience the event of interest." That is, $S(\infty) = 0$, by assumption. Almost any other paper on cure models starts in the same way. | Kaplan Meier survival estimate at t=infinite | When different from zero at the largest event time (i.e. when the largest observation is censored), the Kaplan-Meier estimator is usually undefined from that point on. There exist methods for completi | Kaplan Meier survival estimate at t=infinite
When different from zero at the largest event time (i.e. when the largest observation is censored), the Kaplan-Meier estimator is usually undefined from that point on. There exist methods for completing the Kaplan-Meier estimator (for example, see here). In any case, the underlying survival function decreases to zero, $S(\infty) = 0$ even if $\hat{S}(\text{largest event time}) \neq 0$. Models that allow for $S(\infty) \neq 0$ are called cure models.
EDIT related to your comment below
The first sentence of this paper, which shows how to fit a cure model, says "In survival analysis, it is usually assumed that if complete follow-up were possible for all individuals, each would eventually experience the event of interest." That is, $S(\infty) = 0$, by assumption. Almost any other paper on cure models starts in the same way. | Kaplan Meier survival estimate at t=infinite
When different from zero at the largest event time (i.e. when the largest observation is censored), the Kaplan-Meier estimator is usually undefined from that point on. There exist methods for completi |
54,477 | Kaplan Meier survival estimate at t=infinite | The Kaplan-Meier method computes the actual observed percent survival at each time a subject dies in your experiment. It describes your data, taking into account censoring. No theoretical model. No extrapolation.
Any assumption about survival at infinite times needs to be based on a model, so is beyond Kaplan and Meier.
Of course, it is hard to imagine any model of survival that doesn't end up at zero survival at long time points :) | Kaplan Meier survival estimate at t=infinite | The Kaplan-Meier method computes the actual observed percent survival at each time a subject dies in your experiment. It describes your data, taking into account censoring. No theoretical model. No ex | Kaplan Meier survival estimate at t=infinite
The Kaplan-Meier method computes the actual observed percent survival at each time a subject dies in your experiment. It describes your data, taking into account censoring. No theoretical model. No extrapolation.
Any assumption about survival at infinite times needs to be based on a model, so is beyond Kaplan and Meier.
Of course, it is hard to imagine any model of survival that doesn't end up at zero survival at long time points :) | Kaplan Meier survival estimate at t=infinite
The Kaplan-Meier method computes the actual observed percent survival at each time a subject dies in your experiment. It describes your data, taking into account censoring. No theoretical model. No ex |
54,478 | Using the normal equations to calculate coefficients in multiple linear regression | @MichaelMayer has it right. Try the following:
> y <- c(1,2,3,4,5)
> x0 <- c(1,1,1,1,1) # vector of ones representing the intercept
> x1 <- c(1,2,3,4,5)
> x2 <- c(1,4,5,7,9)
> Y <- as.matrix(y)
> X <- as.matrix(cbind(x0,x1,x2))
> beta = solve(t(X) %*% X) %*% (t(X) %*% Y) ;
Update: Even with the above changes you will still get slightly different results from lm due to roundoff error, because with the original data the estimated intercept and coefficient for x2 are essentially zero. In the example below, where those estimates are nonzero, the discrepancy is no longer visible.
x1 <- c(1,2,3,4,5)
x2 <- c(1,4,5,7,9)
y <- x1 + x2 + rnorm(5,mean=0,sd=0.3);
Y <- as.matrix(y);
X <- as.matrix(cbind(1,x1,x2));
beta = solve(t(X) %*% X) %*% (t(X) %*% Y) ; beta
model <- lm(y~1+x1+x2) ; model$coefficients
Output is:
[,1]
-0.2948504
x1 0.8081534
x2 1.1741777
(Intercept) x1 x2
-0.2948504 0.8081534 1.1741777
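As an aside on the roundoff point, two slightly more stable ways to obtain the same least-squares solution (sketches only, not required for the example above):
beta2 <- solve(crossprod(X), crossprod(X, Y))   # normal equations without forming an explicit inverse
beta3 <- qr.solve(X, Y)                         # QR-based least squares, closer to what lm uses internally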
For more info, see: https://en.wikipedia.org/wiki/Machine_epsilon | Using the normal equations to calculate coefficients in multiple linear regression | @MichaelMayer has it right. Try the following:
> y <- c(1,2,3,4,5)
> x0 <- c(1,1,1,1,1) # vector of ones representing the intercept
> x1 <- c(1,2,3,4,5)
> x2 <- c(1,4,5,7,9)
> Y <- as.matrix(y)
> X | Using the normal equations to calculate coefficients in multiple linear regression
@MichaelMayer has it right. Try the following:
> y <- c(1,2,3,4,5)
> x0 <- c(1,1,1,1,1) # vector of ones representing the intercept
> x1 <- c(1,2,3,4,5)
> x2 <- c(1,4,5,7,9)
> Y <- as.matrix(y)
> X <- as.matrix(cbind(x0,x1,x2))
> beta = solve(t(X) %*% X) %*% (t(X) %*% Y) ;
Update: Even with the above changes you will still get slightly different results when you use lm due to roundoff error. If the estimated intercept and coefficient for x2 are nonzero, this is no longer visible.
x1 <- c(1,2,3,4,5)
x2 <- c(1,4,5,7,9)
y <- x1 + x2 + rnorm(5,mean=0,sd=0.3);
Y <- as.matrix(y);
X <- as.matrix(cbind(1,x1,x2));
beta = solve(t(X) %*% X) %*% (t(X) %*% Y) ; beta
model <- lm(y~1+x1+x2) ; model$coefficients
Output is:
[,1]
-0.2948504
x1 0.8081534
x2 1.1741777
(Intercept) x1 x2
-0.2948504 0.8081534 1.1741777
For more info, see: https://en.wikipedia.org/wiki/Machine_epsilon | Using the normal equations to calculate coefficients in multiple linear regression
@MichaelMayer has it right. Try the following:
> y <- c(1,2,3,4,5)
> x0 <- c(1,1,1,1,1) # vector of ones representing the intercept
> x1 <- c(1,2,3,4,5)
> x2 <- c(1,4,5,7,9)
> Y <- as.matrix(y)
> X |
54,479 | Meaning of covariance matrix row sums | The sum of all the elements of the covariance matrix is the variance of the sum of the random variables involved:
$$\operatorname {Var} \left(\sum_i^nX_i\right) = \sum_i^n\operatorname {Var}(X_i) + 2\sum_{i\neq j}\operatorname {Cov}(X_i,X_j)$$
The sum of row $i$ is
$$\operatorname {Var}(X_i) + \sum_{i\neq j}\operatorname {Cov}(X_i,X_j)$$
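Both identities are easy to verify numerically in R with sample covariances (made-up data):
set.seed(1)
X <- matrix(rnorm(300), ncol = 3)            # 100 observations on 3 variables
all.equal(sum(cov(X)), var(rowSums(X)))      # grand sum of the covariance matrix equals the variance of the sum
rowSums(cov(X))                              # the row sums discussed here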
To the degree that you are willing to "split the difference" or "share the blame equally" between the two random variables that form each pair (and so "allocate" one $\operatorname {Cov}(X_i,X_j)$ to $X_i$ and the other to $X_j$), then (responding to comments) by dividing this sum by the sum of all the elements of the covariance matrix (i.e. by the Variance of the sum of rv's), you could argue that this ratio is indeed a measure of how much each r.v. contributes in relative terms to the variance of the sum. | Meaning of covariance matrix row sums | The sum of all the elements of the covariance matrix is the variance of the sum of the random variables involved:
$$\operatorname {Var} \left(\sum_i^nX_i\right) = \sum_i^n\operatorname {Var}(X_i) + 2\ | Meaning of covariance matrix row sums
The sum of all the elements of the covariance matrix is the variance of the sum of the random variables involved:
$$\operatorname {Var} \left(\sum_i^nX_i\right) = \sum_i^n\operatorname {Var}(X_i) + 2\sum_{i\neq j}\operatorname {Cov}(X_i,X_j)$$
The sum of row $i$ is
$$\operatorname {Var}(X_i) + \sum_{i\neq j}\operatorname {Cov}(X_i,X_j)$$
To the degree that you are willing to "split the difference" or "share the blame equally" between the two random variables that form each pair (and so "allocate" one $\operatorname {Cov}(X_i,X_j)$ to $X_i$ and the other to $X_j$), then (responding to comments) by dividing this sum by the sum of all the elements of the covariance matrix (i.e. by the Variance of the sum of rv's), you could argue that this ratio is indeed a measure of how much each r.v. contributes in relative terms to the variance of the sum. | Meaning of covariance matrix row sums
The sum of all the elements of the covariance matrix is the variance of the sum of the random variables involved:
$$\operatorname {Var} \left(\sum_i^nX_i\right) = \sum_i^n\operatorname {Var}(X_i) + 2\ |
54,480 | How to find the number of clusters in 1d data and the mean of each | Don't run clustering (such as k-means) on 1-dimensional data.
Why: 1-dimensional data can be sorted. Algorithms that exploit sorting are much more efficient than algorithms that do not exploit this.
Look at classic statistics
And forget about buzzwords such as "data mining" and "clustering"!
For your task, I recommend you use kernel density estimation. This is a well-proven technique from statistics, and very flexible. To cluster your data, look for maxima and minima in the density estimation to split your data. It's fast, and has a much stronger theoretical background than cluster analysis.
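A minimal R sketch of that recipe (with made-up prices); the split points are the local minima of the estimated density, and the bandwidth chosen by density() controls how many of them you find:
set.seed(1)
x <- c(rnorm(100, 10), rnorm(100, 20), rnorm(100, 35))    # made-up prices from three groups
d <- density(x)
splits <- d$x[which(diff(sign(diff(d$y))) == 2) + 1]      # local minima of the estimated density
groups <- cut(x, breaks = c(-Inf, splits, Inf))
tapply(x, groups, mean)                                   # one mean per detected cluster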
When to use cluster analysis
Essentially, use cluster analysis when your data are so large and complex that you cannot use classic statistical modeling anymore. When you have too many variables and too complex processes to model them. When density estimation no longer works. When you can no longer visualize the data.
Even in 2d data, don't do cluster analysis. Visualize your data, and manually mark your clusters. Methods such as k-means will produce a k-cluster result no matter what; even when there are no clusters in your data set! Because they blindly optimize some mathematical equation, without reality-checking it. If you manually cluster your data, your results will be much more meaningful. | How to find the number of clusters in 1d data and the mean of each | Don't run clustering (such as k-means) on 1-dimensional data.
Why: 1-dimensional data can be sorted. Algorithms that exploit sorting are much more efficient than algorithms that do not exploit this.
L | How to find the number of clusters in 1d data and the mean of each
Don't run clustering (such as k-means) on 1-dimensional data.
Why: 1-dimensional data can be sorted. Algorithms that exploit sorting are much more efficient than algorithms that do not exploit this.
Look at classic statistics
And forget about buzzwords such as "data mining" and "clustering"!
For your task, I recommend you use kernel density estimation. This is a well-proven technique from statistics, and very flexible. To cluster your data, look for maxima and minima in the density estimation to split your data. It's fast, and has a much stronger theoretical background than cluster analysis.
When to use cluster analysis
Essentially, use cluster analysis, when your data is so large and complex you cannot use classic statistical modeling anymore. When you have too many variables and too complex processes to model them. When density estimation no longer works. When you can no longer visualize the data.
Even in 2d data, don't do cluster analysis. Visualize your data, and manually mark your clusters. Methods such as k-means will produce a k-cluster result no matter what; even when there are no clusters in your data set! Because they blindly optimize some mathematical equation, without reality-checking it. If you manually cluster your data, your results will be much more meaningful. | How to find the number of clusters in 1d data and the mean of each
Don't run clustering (such as k-means) on 1-dimensional data.
Why: 1-dimensional data can be sorted. Algorithms that exploit sorting are much more efficient than algorithms that do not exploit this.
L |
54,481 | How to find the number of clusters in 1d data and the mean of each | You can just estimate the probability density function of price. If they are a mixture of normal distributions, hopefully you will observe several peaks in your mixture of Gaussian kernels. It can be implemented easily with Python, and I believe there are packages for other languages as well.
from scipy.stats import gaussian_kde
import numpy as np                               # needed for asarray and linspace below
import matplotlib.pyplot as plt
x = np.asarray(x)                                # x: array (or list) of prices
density = gaussian_kde(x)                        # kernel density estimate of the price distribution
Num_Price = 200                                  # number of grid points (an arbitrary choice)
xgrid = np.linspace(x.min(), x.max(), Num_Price)
plt.plot(xgrid, density(xgrid))
plt.show() | How to find the number of clusters in 1d data and the mean of each | You can just estimate the probability density function of price. If they are a mixture of normal distributions, hopefully you will observe several peaks in your mixture of Gaussian kernels. It can be | How to find the number of clusters in 1d data and the mean of each
You can just estimate the probability density function of price. If they are a mixture of normal distributions, hopefully you will observe several peaks in your mixture of Gaussian kernels. It can be implemented easily with Python, and I believe there are packages for other languages as well.
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
density = gaussian_kde(x) # x: list of price
xgrid = np.linspace(x.min(), x.max(), Num_Price)
plt.plot(xgrid, density(xgrid))
plt.show() | How to find the number of clusters in 1d data and the mean of each
You can just estimate the probability density function of price. If they are a mixture of normal distributions, hopefully you will observe several peaks in your mixture of Gaussian kernels. It can be |
54,482 | How to find the number of clusters in 1d data and the mean of each | The XMeans algorithm can be used to estimate the total number of clusters directly from the data, without human guidance. The Weka package has a Java implementation. An expectation maximization algorithm can also be used to automatically estimate the total number of clusters as well. There is a Weka implementation of that also. In addition, there is at least one individual contributor version (i.e., not sanctioned by mathworks.com) for performing Gaussian Mixture Model clustering in MATLAB also, if you prefer to do your analysis that way instead of in Java. | How to find the number of clusters in 1d data and the mean of each | The XMeans algorithm can be used to estimate the total number of clusters directly from the data, without human guidance. The Weka package has a Java implementation. An expectation maximization algo | How to find the number of clusters in 1d data and the mean of each
The XMeans algorithm can be used to estimate the total number of clusters directly from the data, without human guidance. The Weka package has a Java implementation. An expectation maximization algorithm can also be used to automatically estimate the total number of clusters as well. There is a Weka implementation of that also. In addition, there is at least one individual contributor version (i.e., not sanctioned by mathworks.com) for performing Gaussian Mixture Model clustering in MATLAB also, if you prefer to do your analysis that way instead of in Java. | How to find the number of clusters in 1d data and the mean of each
The XMeans algorithm can be used to estimate the total number of clusters directly from the data, without human guidance. The Weka package has a Java implementation. An expectation maximization algo |
54,483 | How to check for interaction of all pairs of variables, in linear regression in R? | The help for formula tells you how to do this. You should really read that. It's:
y ~ (x1 + x2 + ...)^2 | How to check for interaction of all pairs of variables, in linear regression in R? | The help for formula tells you how to do this. You should really read that. It's:
y ~ (x1 + x2 + ...)^2 | How to check for interaction of all pairs of variables, in linear regression in R?
The help for formula tells you how to do this. You should really read that. It's:
y ~ (x1 + x2 + ...)^2 | How to check for interaction of all pairs of variables, in linear regression in R?
The help for formula tells you how to do this. You should really read that. It's:
y ~ (x1 + x2 + ...)^2 |
54,484 | Frequency of time series in R | Most likely you have two seasonal periods: 48 (number of intervals per day) and 48x5 (number of intervals per week assuming a 5-day week).
The tbats() function from the forecast package in R will handle multiple seasonal periods. For example (where x is the data):
library(forecast)
x <- msts(x, seasonal.periods=c(48, 48*5))
fit <- tbats(x)
fc <- forecast(fit, h=48*5)
plot(fc)
An alternative (and the only easy option if there are missing data) is to use Fourier terms for the seasonal periods and ARMA errors to handle any remaining serial correlation. The ARIMA functions in R do not automatically handle multiple seasonal periods, but the following R code should work:
x <- ts(x, frequency=48)
seas1 <- fourier(x, K=3)
seas2 <- fourier(ts(x, freq=48*5), K=3)
fit <- auto.arima(x, xreg=cbind(seas1,seas2))
seas1.f <- fourierf(x, K=3, h=48*5)
seas2.f <- fourierf(ts(x, freq=48*5), K=3, h=48*5)
fc <- forecast(fit, xreg=cbind(seas1.f,seas2.f))
The number of Fourier terms (arbitrarily set to 3 for both seasonal periods in the above code) can be selected by minimizing the AIC. | Frequency of time series in R | Most likely you have two seasonal periods: 48 (number of intervals per day) and 48x5 (number of intervals per week assuming a 5-day week).
The tbats() function from the forecast package in R will hand | Frequency of time series in R
Most likely you have two seasonal periods: 48 (number of intervals per day) and 48x5 (number of intervals per week assuming a 5-day week).
The tbats() function from the forecast package in R will handle multiple seasonal periods. For example (where x is the data):
library(forecast)
x <- msts(x, seasonal.periods=c(48, 48*5))
fit <- tbats(x)
fc <- forecast(fit, h=48*5)
plot(fc)
Alternatively (and the only easy option if there are missing data) is to use Fourier terms for the seasonal periods and ARMA errors to handle any remaining serial correlation. The ARIMA functions in R do not automatically handle multiple seasonal periods, but the following R code should work:
x <- ts(x, frequency=48)
seas1 <- fourier(x, K=3)
seas2 <- fourier(ts(x, freq=48*5), K=3)
fit <- auto.arima(x, xreg=cbind(seas1,seas2))
seas1.f <- fourierf(x, K=3, h=48*5)
seas2.f <- fourierf(ts(x, freq=48*5), K=3, h=48*5)
fc <- forecast(fit, xreg=cbind(seas1.f,seas2.f))
The number of Fourier terms (arbitrarily set to 3 for both seasonal periods in the above code) can be selected by minimizing the AIC. | Frequency of time series in R
Most likely you have two seasonal periods: 48 (number of intervals per day) and 48x5 (number of intervals per week assuming a 5-day week).
The tbats() function from the forecast package in R will hand |
54,485 | How to interpret the significance code? | You still have a fair amount of studying to do (that can be a good thing).
First, as you have fit it, your model has 1 dependent variable and 15 independent variables, unless the definitions of those terms have changed. Confusion like this is one of the reasons that I wish the entire field would ban the terms "independent variable" and "dependent variable".
If you compute a single number from an ROC curve it is generally the area under the curve (or some function of that) and does not represent the percentage correct. The percent correct depends on what cutoff value you use. If you look at the percent correct each way for every reasonable cut-off value and plot those then that is the ROC curve. An area under the curve (AUC) of 1 would mean perfect prediction and an AUC of 0.5 means that your model is equivalent to tossing a coin (not good), yours is in between, so it is better than a coin, but not yet perfect. To get the percent correct you need to decide on a cutoff value (but even that will be deceiving since you are testing with the same data used to create the model and a strict cut-off is less informative than the actual predictions).
The significance codes (*, etc.) are just categorizations of the p-values.
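For reference, the legend R prints under a coefficient table maps those symbols to p-value thresholds:
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
so '***' means p < 0.001, '**' means p < 0.01, '*' means p < 0.05, and '.' means p < 0.1.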
Which terms to keep in the model depends on the science behind the data and the question that you want to answer. Remember that the values in the output only measure the effect of that term conditioned on all the other variables being in the model, remove one term and all the p-values on the other terms can change (and change quite significantly).
Keep studying. | How to interpret the significance code? | You still have a fair amount of studying to do (that can be a good thing).
First the way that you fit your model has 1 dependent variable and 15 independent, unless the definition of those terms have | How to interpret the significance code?
You still have a fair amount of studying to do (that can be a good thing).
First the way that you fit your model has 1 dependent variable and 15 independent, unless the definition of those terms have changed. Confusion like this is one of the reasons that I wish the entire field would ban the terms "independent variable" and "dependent variable".
If you compute a single number from an ROC curve it is generally the area under the curve (or some function of that) and does not represent the percentage correct. The percent correct depends on what cutoff value you use. If you look at the percent correct each way for every reasonable cut-off value and plot those then that is the ROC curve. An area under the curve (AUC) of 1 would mean perfect prediction and an AUC of 0.5 means that your model is equivalent to tossing a coin (not good), yours is in between, so it is better than a coin, but not yet perfect. To get the percent correct you need to decide on a cutoff value (but even that will be deceiving since you are testing with the same data used to create the model and a strict cut-off is less informative than the actual predictions).
The significance codes (*, etc.) are just categorizations of the p-values.
Which terms to keep in the model depends on the science behind the data and the question that you want to answer. Remember that the values in the output only measure the effect of that term conditioned on all the other variables being in the model, remove one term and all the p-values on the other terms can change (and change quite significantly).
Keep studying. | How to interpret the significance code?
You still have a fair amount of studying to do (that can be a good thing).
First the way that you fit your model has 1 dependent variable and 15 independent, unless the definition of those terms have |
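As a quick reference for the significance codes mentioned in this answer: the legend R prints under summary() output is 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1, and the same mapping can be reproduced with symnum(). A minimal R sketch with hypothetical p-values (the original regression output is not shown above):
# Hypothetical p-values, only to illustrate the cutpoints used by summary()
p <- c(0.0004, 0.023, 0.078, 0.51)
symnum(p, corr = FALSE, na = FALSE,
       cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
       symbols = c("***", "**", "*", ".", " "))
# gives "***", "*", ".", " " for these four values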
54,486 | Estimating hidden transfers of market share | Write your system explicitly for time $t$ as ("$L$" for "loss", as a positive quantity, and "$G$" for "gain")
$$ A_t - A_{t-1} = - L^A_{t} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_{t-1} = - L^B_{t} + G_{t}^{A\rightarrow B}+G_{t}^{C\rightarrow B}$$
$$ C_t - C_{t-1} = - L^C_{t} + G_{t}^{A\rightarrow C}+G_{t}^{B\rightarrow C}$$
The following three relations hold exactly:
$$ L^A_{t} = G_{t}^{A\rightarrow B} + G_{t}^{A\rightarrow C} $$
$$ L^B_{t} = G_{t}^{B\rightarrow A} + G_{t}^{B\rightarrow C} $$
$$ L^C_{t} = G_{t}^{C\rightarrow A} + G_{t}^{C\rightarrow B} $$
If you substitute in the first three you obtain
$$ A_t - A_{t-1} = - G_{t}^{A\rightarrow B} - G_{t}^{A\rightarrow C} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_{t-1} = - G_{t}^{B\rightarrow A} - G_{t}^{B\rightarrow C} + G_{t}^{A\rightarrow B}+G_{t}^{C\rightarrow B}$$
$$ C_t - C_{t-1} = - G_{t}^{C\rightarrow A} - G_{t}^{C\rightarrow B} + G_{t}^{A\rightarrow C}+G_{t}^{B\rightarrow C}$$
You have $6$ unknown quantities to estimate per time period. There is just not enough information to do that. So you need assumptions that will impose structure (=restrictions) on the situation, and will permit you to estimate something. What? Let's say you assume that there is a relatively stable "churn" from one company to another, as a linear function of their market share in the previous period. This assumption brings in a set of unknown coefficients to be estimated (which will then give you an estimate of "hidden transfers of market share"). Write $G_{t}^{A\rightarrow B} = a_bA_{t-1}$ (market share lost from $A$ to $B$ as a linear function of $A$'s market share in period $t-1$).
Your equations will become
$$ A_t - A_{t-1} = - a_bA_{t-1} - a_cA_{t-1} + b_aB_{t-1}+c_aC_{t-1} $$
$$ B_t - B_{t-1} = - b_aB_{t-1} - b_cB_{t-1} + a_bA_{t-1}+c_bC_{t-1}$$
$$ C_t - C_{t-1} = - c_aC_{t-1} - c_bC_{t-1} + a_cA_{t-1}+ b_cB_{t-1}$$
We have turned a set of mathematical identities into a model. It is doubtful that this model will hold exactly for each $t$, so you should add a stochastic error term. Rearranging we obtain a first-order Vector Autoregression (VAR):
$$ \left[ \begin{matrix}
A_t \\
B_t \\
C_t \\
\end{matrix} \right] = \left [\begin{matrix}
1-a_b-a_c & b_a & c_a \\
a_b & 1-b_a-b_c & c_b \\
a_c & b_c & 1-c_a-c_b \\
\end{matrix} \right] \left[ \begin{matrix}
A_{t-1} \\
B_{t-1} \\
C_{t-1} \\
\end{matrix} \right]+ \left[ \begin{matrix}
u^A_{t} \\
u^B_{t} \\
u^C_{t} \\
\end{matrix} \right] $$
or, to homogenize notation,
$$ \left[ \begin{matrix}
A_t \\
B_t \\
C_t \\
\end{matrix} \right] = \left [\begin{matrix}
\gamma_{11} & \gamma_{12} & \gamma_{13} \\
\gamma_{21} & \gamma_{22} & \gamma_{23} \\
\gamma_{31} & \gamma_{32} & \gamma_{33} \\
\end{matrix} \right] \left[ \begin{matrix}
A_{t-1} \\
B_{t-1} \\
C_{t-1} \\
\end{matrix} \right]+ \left[ \begin{matrix}
u^A_{t} \\
u^B_{t} \\
u^C_{t} \\
\end{matrix} \right] $$
subject to the equality restrictions
$$ \begin{matrix}
\gamma_{11} + \gamma_{21} + \gamma_{31} =1 \\
\gamma_{12} + \gamma_{22} + \gamma_{32} =1 \\
\gamma_{13} + \gamma_{23} + \gamma_{33} =1 \\
\end{matrix} $$
So you have essentially $6$ unknown coefficients and a sample of $T-1$ observations (for each company).
Note that these restrictions imply the "add up to unity" restriction $A_t+B_t+C_t =1$ for each $t$, so this last one does not impose any additional structure on the unknown coefficients -but it does imply a relation between the error terms, namely that $u^A_{t} + u^B_{t} +u^C_{t} =0$. Any additional assumptions on the three error terms should either come from knowledge of the specific real world phenomenon under study, and/or through a statistical specification search.
Then, an estimation for a hidden transfer of market share will be, for example
$$\hat G_{t}^{A\rightarrow B} = \hat \gamma_{21}A_{t-1}$$
etc.
Of course you may find that such a model does not fit your data sample well - for example you expect that all estimated coefficients should be positive and smaller than or equal to unity, but the estimation procedure may not give you that. But this is what we do: we come up with specification hypotheses and we test them against the data - "success" is never guaranteed. Then you should try to come up with a different model. | Estimating hidden transfers of market share | Write your system explicitly for time $t$ as ("$L$" for "loss", as a positive quantity, and "$G$" for "gain")
$$ A_t - A_{t-1} = - L^A_{t} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_ | Estimating hidden transfers of market share
Write your system explicitly for time $t$ as ("$L$" for "loss", as a positive quantity, and "$G$" for "gain")
$$ A_t - A_{t-1} = - L^A_{t} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_{t-1} = - L^B_{t} + G_{t}^{A\rightarrow B}+G_{t}^{C\rightarrow B}$$
$$ C_t - C_{t-1} = - L^C_{t} + G_{t}^{A\rightarrow C}+G_{t}^{B\rightarrow C}$$
The following three relations hold exactly:
$$ L^A_{t} = G_{t}^{A\rightarrow B} + G_{t}^{A\rightarrow C} $$
$$ L^B_{t} = G_{t}^{B\rightarrow A} + G_{t}^{B\rightarrow C} $$
$$ L^C_{t} = G_{t}^{C\rightarrow A} + G_{t}^{C\rightarrow B} $$
If you substitute in the first three you obtain
$$ A_t - A_{t-1} = - G_{t}^{A\rightarrow B} - G_{t}^{A\rightarrow C} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_{t-1} = - G_{t}^{B\rightarrow A} - G_{t}^{B\rightarrow C} + G_{t}^{A\rightarrow B}+G_{t}^{C\rightarrow B}$$
$$ C_t - C_{t-1} = - G_{t}^{C\rightarrow A} - G_{t}^{C\rightarrow B} + G_{t}^{A\rightarrow C}+G_{t}^{B\rightarrow C}$$
You have $6$ unknown quantities to estimate per time period. There is just not enough information to do that. So you need assumptions that will impose structure (=restrictions) on the situation, and will permit you to estimate something. What? Let's say you assume that there is a relatively stable "churn" from one company to another, as a linear function of their market share in the previous period. This assumption brings in a set of unknown coefficients to be estimated (which will then give you an estimate of "hidden transfers of market share"). Write $G_{t}^{A\rightarrow B} = a_bA_{t-1}$ (market share lost from $A$ to $B$ as a linear function of $A$'s market share in period $t-1$).
Your equations will become
$$ A_t - A_{t-1} = - a_bA_{t-1} - a_cA_{t-1} + b_aB_{t-1}+c_aC_{t-1} $$
$$ B_t - B_{t-1} = - b_aB_{t-1} - b_cB_{t-1} + a_bA_{t-1}+c_bC_{t-1}$$
$$ C_t - C_{t-1} = - c_aC_{t-1} - c_bC_{t-1} + a_cA_{t-1}+ b_cB_{t-1}$$
We have turned a set of mathematical identities into a model. It is doubtful that this model will hold exactly for each $t$, so you should add a stochastic error term. Rearranging we obtain a first-order Vector Autoregression (VAR):
$$ \left[ \begin{matrix}
A_t \\
B_t \\
C_t \\
\end{matrix} \right] = \left [\begin{matrix}
1-a_b-a_c & b_a & c_a \\
a_b & 1-b_a-b_c & c_b \\
a_c & b_c & 1-c_a-c_b \\
\end{matrix} \right] \left[ \begin{matrix}
A_{t-1} \\
B_{t-1} \\
C_{t-1} \\
\end{matrix} \right]+ \left[ \begin{matrix}
u^A_{t} \\
u^B_{t} \\
u^C_{t} \\
\end{matrix} \right] $$
or, to homogenize notation,
$$ \left[ \begin{matrix}
A_t \\
B_t \\
C_t \\
\end{matrix} \right] = \left [\begin{matrix}
\gamma_{11} & \gamma_{12} & \gamma_{13} \\
\gamma_{21} & \gamma_{22} & \gamma_{23} \\
\gamma_{31} & \gamma_{32} & \gamma_{33} \\
\end{matrix} \right] \left[ \begin{matrix}
A_{t-1} \\
B_{t-1} \\
C_{t-1} \\
\end{matrix} \right]+ \left[ \begin{matrix}
u^A_{t} \\
u^B_{t} \\
u^C_{t} \\
\end{matrix} \right] $$
subject to the equality restrictions
$$ \begin{matrix}
\gamma_{11} + \gamma_{21} + \gamma_{31} =1 \\
\gamma_{12} + \gamma_{22} + \gamma_{32} =1 \\
\gamma_{13} + \gamma_{23} + \gamma_{33} =1 \\
\end{matrix} $$
So you have essentially $6$ unknown coefficients and a sample of $T-1$ observations (for each company).
Note that these restrictions imply the "add up to unity" restriction $A_t+B_t+C_t =1$ for each $t$, so this last one does not impose any additional structure on the unknown coefficients -but it does imply a relation between the error terms, namely that $u^A_{t} + u^B_{t} +u^C_{t} =0$. Any additional assumptions on the three error terms should either come from knowledge of the specific real world phenomenon under study, and/or through a statistical specification search.
Then, an estimation for a hidden transfer of market share will be, for example
$$\hat G_{t}^{A\rightarrow B} = \hat \gamma_{21}A_{t-1}$$
etc.
Of course you may find that such a model does not fit your data sample well - for example you expect that all estimated coefficients should be positive and smaller than or equal to unity, but the estimation procedure may not give you that. But this is what we do: we come up with specification hypotheses and we test them against the data - "success" is never guaranteed. Then you should try to come up with a different model. | Estimating hidden transfers of market share
Write your system explicitly for time $t$ as ("$L$" for "loss", as a positive quantity, and "$G$" for "gain")
$$ A_t - A_{t-1} = - L^A_{t} + G_{t}^{B\rightarrow A}+G_{t}^{C\rightarrow A}$$
$$ B_t - B_ |
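A minimal R sketch of the estimation step described in this answer, using simulated shares rather than real data (the transition matrix below is purely hypothetical). Each equation is estimated by OLS without an intercept; because the shares sum exactly to one, the adding-up restrictions on the column sums are then satisfied automatically.
set.seed(1)
n_t <- 60
G <- matrix(c(0.90, 0.05, 0.10,
              0.06, 0.90, 0.05,
              0.04, 0.05, 0.85), 3, 3, byrow = TRUE)   # hypothetical transition coefficients
S <- matrix(NA, n_t, 3)
S[1, ] <- c(0.5, 0.3, 0.2)
for (t in 2:n_t) {                                     # simulate shares that stay on the simplex
  s <- as.vector(G %*% S[t - 1, ]) + rnorm(3, sd = 0.01)
  s <- pmax(s, 0)
  S[t, ] <- s / sum(s)
}
Y <- S[-1, ]; X <- S[-n_t, ]                           # shares at t and at t-1
gamma_hat <- t(sapply(1:3, function(i) coef(lm(Y[, i] ~ X - 1))))
colSums(gamma_hat)                                     # each column sums to one (adding-up restrictions)
transfer_A_to_B <- gamma_hat[2, 1] * X[, 1]            # estimated hidden transfer A -> B per period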
54,487 | Concentration bounds on a sequence of (0,1)-experiments | One of the question's queries is the probability that "the next step will be the last". The OP states this probability as $Pr[Y_{t+1} \geq (p+\delta) \cdot (t+1)|Y_t < (p+\delta) \cdot t]$, but this is not a correct representation. This is because we require that the process must stop the first time it reaches the relevant threshold. Left unstopped, the process may cross the threshold at time $t$, then fall below the threshold that holds for period $t+1$, and then cross the next threshold again, etc. Namely, we may have $Y_t < (p+\delta) \cdot t$, but $Y_{t-1} > (p+\delta) \cdot (t-1)$, in which case $Y_t$ (and of course $Y_{t+1}$) won't be observed since that process will have stopped at $Y_{t-1}$.
To compact notation, denote $h_t$ the threshold of step $t$. Then the appropriate statement of the required probability is
$$ Pr[Y_{t+1} \geq h_{t+1}\mid Y_{t+1}\; \text{is observed}]\equiv P^s_{t+1} $$
So first we need to calculate the continuation probability of the process.
Counting from $1$, the probability that $Y_2$ will be observed is equal to the probability that at the first step, the Bernoulli trial will give a zero value: $Pr[Y_{2} \;\text{is observed}] = Pr(Y_1 =0) =1-p$.
The probability that $Y_3$ is observed is equal to the probability that $Y_2$ does not reach or exceed $h_2$, given that $Y_2$ is observed.
$$P^c_3 \equiv Pr[Y_3 \;\text{is observed}] = Pr[Y_2 < h_2 \mid Y_2 \;\text{is observed}]$$
Applying Bayes' theorem, we have
$$Pr[Y_2 < h_2 \mid Y_2 \;\text{is observed}] = Pr[Y_2 \;\text{is observed} \mid Y_2 < h_2 ]\frac {Pr[Y_2 < h_2]}{Pr[Y_2 \;\text{is observed}]}$$
$$=1\cdot\frac {Pr[Y_2 < h_2]}{Pr[Y_2 \;\text{is observed}]} \Rightarrow P^c_3 = Pr[Y_2 < h_2]\cdot \left(P^c_2\right)^{-1}$$
So the probability that the $t+1$ step will be the last is
$$P^s_{t+1} = Pr[Y_{t+1}\; \text{is observed}\mid Y_{t+1} \geq h_{t+1}] \frac {Pr[Y_{t+1} \geq h_{t+1}]}{Pr[Y_{t+1}\; \text{is observed}]}$$
$$= Pr[Y_{t+1} \geq h_{t+1}]\cdot \left(P^c_{t+1}\right)^{-1}$$
Denote for compactness the unconditional probability of not reaching the threshold at trial $t$, $P^b_t$. These are probabilities regardless of whether the process may have crossed the threshold at some step before $t$. Then, using the recursive formula for $P^c_{t+1}$ we obtain
$$P^s_{t+1} = (1-P^b_{t+1})\frac{P^b_{t-1}\cdot P^b_{t-3}\cdot...}{P^b_{t}\cdot P^b_{t-2}\cdot...} \qquad [1]$$
Equation $[1]$ contains unconditional probabilities of sums (of different cardinality) of i.i.d. Bernoulli variables. The probability generating function of a Bernoulli r.v. is $G(z) = 1-p +pz$. The pgf of the sum of i.i.d. Bernoullis is just the product of the individual pgf's. So
$$G_{Y}(z;t) = (1-p+pz)^t$$
The probability of $Y_t$ having the integer value $0\le k\le t$ (its pmf, that is) is recovered through the $k$-th derivative of the pgf evaluated at zero
$$P(Y_t=k) = \frac {1}{k!}G^{(k)}_{Y}(0;t) = \left(\begin{matrix} t \\k
\end{matrix}\right) (1-p)^{t-k}p^k$$
and so
$$P(Y_t\le k) = \sum_{j=0}^k \left(\begin{matrix} t \\j
\end{matrix}\right) (1-p)^{t-j}p^j$$
In our case the threshold $h_t$ is not, in general, an integer. In order to cover both the integer and non-integer case, we use the ceiling function of $h_t-1$, $\lceil h_t-1 \rceil$, obtaining for example,
$$P^b_{t} = Pr[Y_{t} \le \lceil h_t-1 \rceil] = \sum_{j=0}^{\lceil h_t-1 \rceil} \left(\begin{matrix} t \\j
\end{matrix}\right) (1-p)^{t-j}p^j \qquad [2]$$
a formula through which we can obtain the various components of $[1]$, for each step, and be able to calculate the required probability $P^s_{t+1}$ for any $\{p,\delta, t\}$. | Concentration bounds on a sequence of (0,1)-experiments | One of the question's queries is the probability that "the next step will be the last". The OP states this probability as $Pr[Y_{t+1} \geq (p+\delta) \cdot (t+1)|Y_t < (p+\delta) \cdot t]$, but this i | Concentration bounds on a sequence of (0,1)-experiments
One of the question's queries is the probability that "the next step will be the last". The OP states this probability as $Pr[Y_{t+1} \geq (p+\delta) \cdot (t+1)|Y_t < (p+\delta) \cdot t]$, but this is not a correct representation. This is because we require that the process must stop the first time it reaches the relevant threshold. Left unstopped, the process may cross the threshold at time $t$, then fall below the threshold that holds for period $t+1$, and then cross the next threshold again, etc. Namely, we may have $Y_t < (p+\delta) \cdot t$, but $Y_{t-1} > (p+\delta) \cdot (t-1)$, in which case $Y_t$ (and of course $Y_{t+1}$) won't be observed since that process will have stopped at $Y_{t-1}$.
To compact notation, denote $h_t$ the threshold of step $t$. Then the appropriate statement of the required probability is
$$ Pr[Y_{t+1} \geq h_{t+1}\mid Y_{t+1}\; \text{is observed}]\equiv P^s_{t+1} $$
So first we need to calculate the continuation probability of the process.
Counting from $1$, the probability that $Y_2$ will be observed is equal to the probability that at the first step, the Bernoulli trial will give a zero value: $Pr[Y_{2} \;\text{is observed}] = Pr(Y_1 =0) =1-p$.
The probability that $Y_3$ is observed is equal to the probability that $Y_2$ does not reach or exceed $h_2$, given that $Y_2$ is observed.
$$P^c_3 \equiv Pr[Y_3 \;\text{is observed}] = Pr[Y_2 < h_2 \mid Y_2 \;\text{is observed}]$$
Applying Bayes' theorem, we have
$$Pr[Y_2 < h_2 \mid Y_2 \;\text{is observed}] = Pr[Y_2 \;\text{is observed} \mid Y_2 < h_2 ]\frac {Pr[Y_2 < h_2]}{Pr[Y_2 \;\text{is observed}]}$$
$$=1\cdot\frac {Pr[Y_2 < h_2]}{Pr[Y_2 \;\text{is observed}]} \Rightarrow P^c_3 = Pr[Y_2 < h_2]\cdot \left(P^c_2\right)^{-1}$$
So the probability that the $t+1$ step will be the last is
$$P^s_{t+1} = Pr[Y_{t+1}\; \text{is observed}\mid Y_{t+1} \geq h_{t+1}] \frac {Pr[Y_{t+1} \geq h_{t+1}]}{Pr[Y_{t+1}\; \text{is observed}]}$$
$$= Pr[Y_{t+1} \geq h_{t+1}]\cdot \left(P^c_{t+1}\right)^{-1}$$
Denote for compactness the unconditional probability of not reaching the threshold at trial $t$, $P^b_t$. These are probabilities regardless of whether the process may have crossed the threshold at some step before $t$. Then, using the recursive formula for $P^c_{t+1}$ we obtain
$$P^s_{t+1} = (1-P^b_{t+1})\frac{P^b_{t-1}\cdot P^b_{t-3}\cdot...}{P^b_{t}\cdot P^b_{t-2}\cdot...} \qquad [1]$$
Equation $[1]$ contains unconditional probabilities of sums (of different cardinality) of i.i.d. Bernoulli variables. The probability generating function of a Bernoulli r.v. is $G(z) = 1-p +pz$. The pgf of the sum of i.i.d. Bernoullis is just the product of the individual pgf's. So
$$G_{Y}(z;t) = (1-p+pz)^t$$
The probability of $Y_t$ having the integer value $0\le k\le t$ (its pmf, that is) is recovered through the $k$-th derivative of the pgf evaluated at zero
$$P(Y_t=k) = \frac {1}{k!}G^{(k)}_{Y}(0;t) = \left(\begin{matrix} t \\k
\end{matrix}\right) (1-p)^{t-k}p^k$$
and so
$$P(Y_t\le k) = \sum_{j=0}^k \left(\begin{matrix} t \\j
\end{matrix}\right) (1-p)^{t-j}p^j$$
In our case the threshold $h_t$ is not, in general, an integer. In order to cover both the integer and non-integer case, we use the ceiling function of $h_t-1$, $\lceil h_t-1 \rceil$, obtaining for example,
$$P^b_{t} = Pr[Y_{t} \le \lceil h_t-1 \rceil] = \sum_{j=0}^{\lceil h_t-1 \rceil} \left(\begin{matrix} t \\j
\end{matrix}\right) (1-p)^{t-j}p^j \qquad [2]$$
a formula through which we can obtain the various components of $[1]$, for each step, and be able to calculate the required probability $P^s_{t+1}$ for any $\{p,\delta, t\}$. | Concentration bounds on a sequence of (0,1)-experiments
One of the question's queries is the probability that "the next step will be the last". The OP states this probability as $Pr[Y_{t+1} \geq (p+\delta) \cdot (t+1)|Y_t < (p+\delta) \cdot t]$, but this i |
54,488 | Concentration bounds on a sequence of (0,1)-experiments | The expected time until the sequence terminates is infinite. In fact, not only is the expected value of $\text{min}\{t : Y_t>(p+\delta)t\}$ infinite for any $\delta>0$, but even the expected value of $\text{min}\{t : Y_t > pt + \delta\}$ is infinite.
This can be proven using the optional sampling theorem (also known as optional stopping theorem). Define $\tau = \text{min}\{t : Y_t > pt + \delta\}$, and define $Z_t=Y_t-pt$. Note that
$Z_t$ is a martingale, and
By definition, $Z_\tau\geq\delta>0$ and therefore $E[Z_\tau]>0$.
If $E[\tau]<\infty$ then the optional sampling theorem would imply that $E[Z_\tau]=Z_0=0$, which is a contradiction.
The situation for the random time you mention in the OP is even more severe.
Let $K_t$ denote the event $Y_t>(p+\delta)t$ and define $\sigma=\text{min}\{t : K_t\}$. It can be shown that $P[\sigma=\infty]>0$, meaning that this threshold might never be crossed.
Briefly, again we can do this by contradiction. First, assume $P[\sigma<\infty]=1$, and let $\sigma_2$ denote the second time that
$K_t$ occurs.
Note that, for any $t$, if $Y_t-Y_\sigma>(p+\delta)(t-\sigma)$, then, since $Y_\sigma>(p+\delta)\sigma$ by assumption, it must be the case that $Y_t>(p+\delta)t$. So, by the independence of the $Y_t$, it follows that
$P[\sigma_2-\sigma_1<\infty]=1$ and thus $P[\sigma_2<\infty]=1$ also. And the same goes for $\sigma_3$ and $\sigma_4$, and so on, which means that the event $K_t$ occurs infinitely often with probability $1$.
However, since each $Y_t$ is the sum of independent Bernoulli trials, it can be shown (e.g. via Chernoff bounds) that there exists a fixed constant $C$ (depending on the constants $p$ and $\delta$) such that
$$P[K_t]=P[Y_t-pt>\delta t]<e^{-Ct}.$$
From this it follows that
$$\sum_{t=1}^\infty P[K_t]<\infty,$$
and thus by the Borel-Cantelli lemma, with probability $1$, $K_t$ only occurs finitely often. Hence we have a contradiction.
As for calculating the probabilities that each step is the last (i.e. the probabilities $P[\sigma=t]$), I can't think of any easy way to do this without doing a simulation. That doesn't mean that there isn't any way to do it, of course.
If you are looking for more information related to this question, I would suggest looking into the continuous version of the process, which would be Brownian motion with drift. I imagine you can find somewhere a calculation of the probability that Brownian motion with drift crosses this kind of threshold.
To add: if you are interested in numerical values, they are pretty easy to compute, at least for moderate values of $T$. Here is a simple R function that computes the probability of escape, as well as the distribution conditional on survival up to time n:
simProb<-function(p,n,delta){
pp<-c(p,1-p)
thresh<- floor(1+(p+delta)*(1:n))
x<-1
res<-rep(0,n)
rsum<-1
for(i in 1:n){
x<-convolve(x,pp,t="o")
if(length(x)>thresh[i]){
res[i]<-sum(x[(thresh[i]+1):length(x)])*rsum
rsum<-rsum-res[i]
x<-x[1:(thresh[i])]
x<-x/sum(x)
}
}
list(p=res,x=x)
}
Here is what the survival probabilities are for some specific constants:
plot(1-cumsum(simProb(0.6,1000,0.01)$p),type="l",ylim=c(0,0.4))
You can also examine the survival probability as a function of $\delta$ like this:
delt<-0.001*(1:399)
surv<-sapply(delt, function(delta)1-sum(simProb(0.6,1000,delta)$p))
plot(delt,surv,type='l')
You can see how discontinuous this is: the reason is that, since $Y_t$ and $t$ are integer-valued the exit condition $Y_t>(p+\delta)t$ is effectively discontinuous as a function of $t$. For this reason, it is unlikely that there is a clean formula for the probability of survival up to time $t$. | Concentration bounds on a sequence of (0,1)-experiments | The expected time until the sequence is terminates is infinite. In fact, not only is the expected value of $\text{min}\{t : Y_t>(p+\delta)t\}$ infinite for any $\delta>0$, but even the expected value | Concentration bounds on a sequence of (0,1)-experiments
The expected time until the sequence terminates is infinite. In fact, not only is the expected value of $\text{min}\{t : Y_t>(p+\delta)t\}$ infinite for any $\delta>0$, but even the expected value of $\text{min}\{t : Y_t > pt + \delta\}$ is infinite.
This can be proven using the optional sampling theorem (also known as optional stopping theorem). Define $\tau = \text{min}\{t : Y_t > pt + \delta\}$, and define $Z_t=Y_t-pt$. Note that
$Z_t$ is a martingale, and
By definition, $Z_\tau\geq\delta>0$ and therefore $E[Z_\tau]>0$.
If $E[\tau]<\infty$ then the optional sampling theorem would imply that $E[Z_\tau]=Z_0=0$, which is a contradiction.
The situation for the random time you mention in the OP is even more severe.
Let $K_t$ denote the event $Y_t>(p+\delta)t$ and define $\sigma=\text{min}\{t : K_t\}$. It can be shown that $P[\sigma=\infty]>0$, meaning that this threshold might never be crossed.
Briefly, again we can do this by contradiction. First, assume $P[\sigma<\infty]=1$, and let $\sigma_2$ denote the second time that
$K_t$ occurs.
Note that, for any $t$, if $Y_t-Y_\sigma>(p+\delta)(t-\sigma)$, then, since $Y_\sigma>(p+\delta)\sigma$ by assumption, it must be the case that $Y_t>(p+\delta)t$. So, by the independence of the $Y_t$, it follows that
$P[\sigma_2-\sigma_1<\infty]=1$ and thus $P[\sigma_2<\infty]=1$ also. And the same goes for $\sigma_3$ and $\sigma_4$, and so on, which means that the event $K_t$ occurs infinitely often with probability $1$.
However, since each $Y_t$ is the sum of independent Bernoulli trials, it can be shown (e.g. via Chernoff bounds) that there exists a fixed constant $C$ (depending on the constants $p$ and $\delta$) such that
$$P[K_t]=P[Y_t-pt>\delta t]<e^{-Ct}.$$
From this it follows that
$$\sum_{t=1}^\infty P[K_t]<\infty,$$
and thus by the Borel-Cantelli lemma, with probability $1$, $K_t$ only occurs finitely often. Hence we have a contradiction.
As for calculating the probabilities that each step is the last (i.e. the probabilities $P[\sigma=t]$), I can't think of any easy way to do this without doing a simulation. That doesn't mean that there isn't any way to do it, of course.
If you are looking for more information related to this question, I would suggest looking into the continuous version of the process, which would be Brownian motion with drift. I imagine you can find somewhere a calculation of the probability that Brownian motion with drift crosses this kind of threshold.
To add: if you are interested in numerical values, they are pretty easy to compute, at least for moderate values of $T$. Here is a simple R function that computes the probability of escape, as well as the distribution conditional on survival up to time n:
simProb<-function(p,n,delta){
pp<-c(p,1-p)
thresh<- floor(1+(p+delta)*(1:n))
x<-1
res<-rep(0,n)
rsum<-1
for(i in 1:n){
x<-convolve(x,pp,t="o")
if(length(x)>thresh[i]){
res[i]<-sum(x[(thresh[i]+1):length(x)])*rsum
rsum<-rsum-res[i]
x<-x[1:(thresh[i])]
x<-x/sum(x)
}
}
list(p=res,x=x)
}
Here is what the survival probabilities are for some specific constants:
plot(1-cumsum(simProb(0.6,1000,0.01)$p),type="l",ylim=c(0,0.4))
You can also examine the survival probability as a function of $\delta$ like this:
delt<-0.001*(1:399)
surv<-sapply(delt, function(delta)1-sum(simProb(0.6,1000,delta)$p))
plot(delt,surv,type='l')
You can see how discontinuous this is: the reason is that, since $Y_t$ and $t$ are integer-valued the exit condition $Y_t>(p+\delta)t$ is effectively discontinuous as a function of $t$. For this reason, it is unlikely that there is a clean formula for the probability of survival up to time $t$. | Concentration bounds on a sequence of (0,1)-experiments
The expected time until the sequence is terminates is infinite. In fact, not only is the expected value of $\text{min}\{t : Y_t>(p+\delta)t\}$ infinite for any $\delta>0$, but even the expected value |
54,489 | More complicated classifier vs data preprocessing | You assume that you can separate the data linearly in a lower dimensional space after some nonlinear transformation.
Kernel methods are popular due to the exact opposite: the data might only be linearly separable in a higher dimensional feature space. For any data you provide, a higher dimensional feature space exists in which the data is linearly separable. Of course, you don't need to go to a higher dimensional space for all problems, but it is often necessary.
What you are interested in is VC dimension. The VC dimension of a model $\mathcal{F}$ is the maximum number of points that can be shattered by $\mathcal{F}$. The VC dimension of oriented hyperplanes in $\mathbb{R}^n$ is $n+1$. This shows that, for any amount of data points, a linear separation is always possible when we go to a feature space of sufficient dimensionality (but not lower!).
Kernel methods provide an efficient means to embed data in such higher dimensional feature spaces at low computational cost thanks to the kernel trick.
Explicitly transforming the data to a space in which it is linearly separable is not always possible. This is easy to see when you recall that some kernel functions, such as the popular RBF kernel, compute inner products in an infinite dimensional feature space. Obviously, you cannot create an infinite dimensional vector yourself. | More complicated classifier vs data preprocessing | You assume that you can separate the data linearly in a lower dimensional space after some nonlinear transformation.
Kernel methods are popular due to the exact opposite: the data might only be linea | More complicated classifier vs data preprocessing
You assume that you can separate the data linearly in a lower dimensional space after some nonlinear transformation.
Kernel methods are popular due to the exact opposite: the data might only be linearly separable in a higher dimensional feature space. For any data you provide, a higher dimensional feature space exists in which the data is linearly separable. Of course, you don't need to go to a higher dimensional space for all problems, but it is often necessary.
What you are interested in is VC dimension. The VC dimension of a model $\mathcal{F}$ is the maximum number of points that can be shattered by $\mathcal{F}$. The VC dimension of oriented hyperplanes in $\mathbb{R}^n$ is $n+1$. This shows that, for any amount of data points, a linear separation is always possible when we go to a feature space of sufficient dimensionality (but not lower!).
Kernel methods provide an efficient means to embed data in such higher dimensional feature spaces at low computational cost thanks to the kernel trick.
Explicitly transforming the data to a space in which it is linearly separable is not always possible. This is easy to see when you recall that some kernel functions, such as the popular RBF kernel, compute inner products in an infinite dimensional feature space. Obviously, you cannot create an infinite dimensional vector yourself. | More complicated classifier vs data preprocessing
You assume that you can separate the data linearly in a lower dimensional space after some nonlinear transformation.
Kernel methods are popular due to the exact opposite: the data might only be linea |
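To make the point above concrete, here is a small illustrative R sketch (not taken from the answer) with two concentric classes: no linear boundary in the original two-dimensional space separates them well, while an RBF-kernel SVM (a linear separator in the implicit feature space) handles them easily. It assumes the e1071 package is installed.
library(e1071)
set.seed(42)
n <- 200
x <- matrix(rnorm(2 * n), n, 2)
y <- factor(ifelse(rowSums(x^2) > 1.5, "outer", "inner"))   # concentric classes
fit_lin <- svm(x, y, kernel = "linear")
fit_rbf <- svm(x, y, kernel = "radial")
mean(predict(fit_lin, x) == y)   # limited by the absence of a linear boundary in R^2
mean(predict(fit_rbf, x) == y)   # close to 1: separable in the kernel's feature space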
54,490 | More complicated classifier vs data preprocessing | If you know what kind of non-linear transformation in the preprocessing could/should give you a linearly separable problem, then do it.
This is often the case if physical/chemical/biological/... relations about the type of data are known. E.g. you may have transmittance spectra but know that $-\log T = A \propto c$ (concentration). Or you have diffuse reflection data and know that Kubelka-Munk units are worth a try.
The reason for this is that such customized pre-processing often allows to use "external" knowledge about your data structure in a more specialized and direct way than a general-purpose non-linear model.
A second advantage of this is that a consistent interpretation of the data analysis is easier with pre-processing + linear model. This can be very useful as a sanity check, e.g. against knowledge from other studies about the application: Does the model use features that are already known to be related to the classes? Does the model use features that are possibly sensible? In some special cases such as the spectroscopic data I work with, I can even get indications of overfitting this way.
This is often the case if physical/chemical/biological/... relat | More complicated classifier vs data preprocessing
If you know what kind of non-linear transformation in the preprocessing could/should give you a linearly separable problem, then do it.
This is often the case if physical/chemical/biological/... relations about the type of data are known. E.g. you may have transmittance spectra but know that $-\log T = A \propto c$ (concentration). Or you have diffuse reflection data and know that Kubelka-Munk units are worth a try.
The reason for this is that such customized pre-processing often allows to use "external" knowledge about your data structure in a more specialized and direct way than a general-purpose non-linear model.
A second advantage of this is that a consistent interpretation of the data analysis is easier with pre-processing + linear model. This can be very useful as a sanity check, e.g. against knowledge from other studies about the application: Does the model use features that are already known to be related to the classes? Does the model use features that are possibly sensible? In some special cases such as the spectroscopic data I work with, I can even get indications of overfitting this way. | More complicated classifier vs data preprocessing
If you know what kind of non-linear transformation in the preprocessing could/should give you a linearly separable problem, then do it.
This is often the case if physical/chemical/biological/... relat |
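A tiny R sketch of the transmittance example from this answer (simulated data, not from the original post): transform transmittance to absorbance A = -log10(T), which is ideally proportional to concentration, and a plain linear model then suffices.
set.seed(1)
conc <- runif(50, 0.1, 3)                              # hypothetical concentrations
Tr   <- 10^(-0.5 * conc) * exp(rnorm(50, sd = 0.02))   # simulated transmittance (one channel)
A    <- -log10(Tr)                                     # the knowledge-based preprocessing step
summary(lm(conc ~ Tr))$r.squared                       # linear fit on raw transmittance
summary(lm(conc ~ A))$r.squared                        # linear fit after the transformation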
54,491 | More complicated classifier vs data preprocessing | Theoretically speaking, the two approaches are equivalent.
That said, you can tweak your data preprocessing much more than the kernel parameters in most SVM tools, so the second approach will afford you much more flexibility. | More complicated classifier vs data preprocessing | Theoretically speaking, the two approaches are equivalent.
That said, you can tweak your data preprocessing much more than the kernel parameters in most SVM tools, so the second approach will afford y | More complicated classifier vs data preprocessing
Theoretically speaking, the two approaches are equivalent.
That said, you can tweak your data preprocessing much more than the kernel parameters in most SVM tools, so the second approach will afford you much more flexibility. | More complicated classifier vs data preprocessing
Theoretically speaking, the two approaches are equivalent.
That said, you can tweak your data preprocessing much more than the kernel parameters in most SVM tools, so the second approach will afford y |
54,492 | Calculate odds ratio confidence intervals from plink output? | For the calculation of confidence intervals you'll need standard errors for the effects, but those are not available in the output. However, the standard errors can be estimated from the Wald statistics and odds ratios.
The calculation goes as follows:
Take a natural logarithm from the odds ratio. This gives you the beta from the logistic model. For example for the first row of your table: beta=ln(4.23)=1.442
The standard error for the beta is calculated by dividing the beta by the square root of the Wald statistic (STAT). Then take the absolute value of the result. Again, for the first row of your table: se=1.442/sqrt(61.5)=0.183.
The 95% confidence interval for the beta is then beta+/-1.96*se. The constant 1.96 comes from the normal distribution. Again, for the first row of data: 1.442-1.96*0.183 ... 1.442+1.96*0.183 = 1.081...1.802.
Last, you need to change the confidence interval of the beta to the confidence interval of the odds ratio. This happens simply by exponentiating the confidence interval of the beta. For the first line of data: 2.71828^1.081 = 2.949 and 2.71828^1.802 = 6.065.
So, your odds ratio for the first row of the table is 4.23 and its 95% confidence interval is 2.949-6.065. Because the confidence interval does not include one, the result is statistically significant. The results are subject to error due to rounding of the output from PLINK.
This calculation can be achieved in, e.g., Excel, but below is also an R function that does the same thing (just in case you also use R).
# The data
or<-structure(list(SNP1 = structure(c(1L, 1L, 1L, 1L), .Label = "rs1", class = "factor"),
SNP2 = structure(c(1L, 1L, 1L, 1L), .Label = "rs2", class = "factor"),
HAPLOTYPE = c(22L, 12L, 21L, 11L), F = c(0.00992, 0.038,
0.00015, 0.952), OR = c(4.23, 1.02, 5.22e-10, 0.762), STAT = c(61.5,
0.217, 453, 22.9), P = c(4.43e-15, 0.642, 1.77e-100, 1.73e-06
)), .Names = c("SNP1", "SNP2", "HAPLOTYPE", "F", "OR", "STAT",
"P"), class = "data.frame", row.names = c(NA, -4L))
# The function
orci<-function(or) {
or$beta<-log(or$OR)
or$se<-abs(or$beta/sqrt(or$STAT))
or$lower<-or$beta-1.96*or$se
or$upper<-or$beta+1.96*or$se
or$LOWER<-exp(or$lower)
or$UPPER<-exp(or$upper)
or$res<-paste(or$OR, " (", round(or$LOWER, 3), "-", round(or$UPPER, 3), ")", sep="")
return(or)
}
# The calculation
orci(or)
# The result
#SNP1 SNP2 HAPLOTYPE F OR STAT P beta se lower upper LOWER UPPER res
#1 rs1 rs2 22 0.00992 4.23e+00 61.500 4.43e-15 1.44220199 0.18390288 1.08175235 1.8026516 2.949844e+00 6.065710e+00 4.23 (2.95-6.066)
#2 rs1 rs2 12 0.03800 1.02e+00 0.217 6.42e-01 0.01980263 0.04251018 -0.06351733 0.1031226 9.384579e-01 1.108627e+00 1.02 (0.938-1.109)
#3 rs1 rs2 21 0.00015 5.22e-10 453.000 1.77e-100 -21.37335353 1.00420775 -23.34160072 -19.4051063 7.292419e-11 3.736538e-09 0.000000000522 (0-0)
#4 rs1 rs2 11 0.95200 7.62e-01 22.900 1.73e-06 -0.27180872 0.05679965 -0.38313603 -0.1604814 6.817202e-01 8.517337e-01 0.762 (0.682-0.852) | Calculate odds ratio confidence intervals from plink output? | For the calculation of confidence intervals you'll need standard errors for the effects, but those are not available in the output. However, the standard errors can be estimated from the Wald statisti | Calculate odds ratio confidence intervals from plink output?
For the calculation of confidence intervals you'll need standard errors for the effects, but those are not available in the output. However, the standard errors can be estimated from the Wald statistics and odds ratios.
The calculation goes as follows:
Take a natural logarithm from the odds ratio. This gives you the beta from the logistic model. For example for the first row of your table: beta=ln(4.23)=1.442
The standard error for the beta is calculated by dividing the beta by the square root of the Wald statistic (STAT). Then take the absolute value of the result. Again, for the first row of your table: se=1.442/sqrt(61.5)=0.183.
The 95% confidence interval for the beta is then beta+/-1.96*se. The constant 1.96 comes from the normal distribution. Again, for the first row of data: 1.442-1.96*0.183 ... 1.442+1.96*0.183 = 1.081...1.802.
Last, you need to change the confidence interval of the beta to the confidence interval of the odds ratio. This happens simply by exponentiating the confidence interval of the beta. For the first line of data: 2.71828^1.081 = 2.949 and 2.71828^1.802 = 6.065.
So, your odds ratio for the first row of the table is 4.23 and its 95% confidence interval is 2.949-6.065. Because the confidence interval does not include one, the result is statistically significant. The results are subject to error due to rounding of the output from PLINK.
This calculation can be achieved in, e.g., Excel, but below is also an R function that does the same thing (just in case you also use R).
# The data
or<-structure(list(SNP1 = structure(c(1L, 1L, 1L, 1L), .Label = "rs1", class = "factor"),
SNP2 = structure(c(1L, 1L, 1L, 1L), .Label = "rs2", class = "factor"),
HAPLOTYPE = c(22L, 12L, 21L, 11L), F = c(0.00992, 0.038,
0.00015, 0.952), OR = c(4.23, 1.02, 5.22e-10, 0.762), STAT = c(61.5,
0.217, 453, 22.9), P = c(4.43e-15, 0.642, 1.77e-100, 1.73e-06
)), .Names = c("SNP1", "SNP2", "HAPLOTYPE", "F", "OR", "STAT",
"P"), class = "data.frame", row.names = c(NA, -4L))
# The function
orci<-function(or) {
or$beta<-log(or$OR)
or$se<-abs(or$beta/sqrt(or$STAT))
or$lower<-or$beta-1.96*or$se
or$upper<-or$beta+1.96*or$se
or$LOWER<-exp(or$lower)
or$UPPER<-exp(or$upper)
or$res<-paste(or$OR, " (", round(or$LOWER, 3), "-", round(or$UPPER, 3), ")", sep="")
return(or)
}
# The calculation
orci(or)
# The result
#SNP1 SNP2 HAPLOTYPE F OR STAT P beta se lower upper LOWER UPPER res
#1 rs1 rs2 22 0.00992 4.23e+00 61.500 4.43e-15 1.44220199 0.18390288 1.08175235 1.8026516 2.949844e+00 6.065710e+00 4.23 (2.95-6.066)
#2 rs1 rs2 12 0.03800 1.02e+00 0.217 6.42e-01 0.01980263 0.04251018 -0.06351733 0.1031226 9.384579e-01 1.108627e+00 1.02 (0.938-1.109)
#3 rs1 rs2 21 0.00015 5.22e-10 453.000 1.77e-100 -21.37335353 1.00420775 -23.34160072 -19.4051063 7.292419e-11 3.736538e-09 0.000000000522 (0-0)
#4 rs1 rs2 11 0.95200 7.62e-01 22.900 1.73e-06 -0.27180872 0.05679965 -0.38313603 -0.1604814 6.817202e-01 8.517337e-01 0.762 (0.682-0.852) | Calculate odds ratio confidence intervals from plink output?
For the calculation of confidence intervals you'll need standard errors for the effects, but those are not available in the output. However, the standard errors can be estimated from the Wald statisti |
54,493 | How do difference-in-difference designs account for temporal autocorrelation | Your point about the standard errors in DiD is an important issue which has been mostly ignored or forgotten until a paper by Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" in the Quarterly Journal of Economics. In there they discuss several methods to overcome the autocorrelation problem which will answer your first question.
Specifically, they examine:
Parametric $\left(AR(1)\right)$ adaptations - These do not do particularly well, which they attribute to the under-estimation of the auto-correlation coefficient as a result of the short time series typical in DiD work, as well as an incorrectly-specified process.
Block bootstrap - This also performs poorly, which they attribute to the small number of blocks typical of DiD work [ Cameron, Gelbach, Miller would seem to be relevant here ].
What they call the "Empirical Variance-Covariance Matrix" - This avoids the SE inflation but also has low power and relies on the DGP being the same across all states.
An arbitrary V-C matrix of their own design - "This method, therefore, seems to work well, when the number of treated units is large enough."
Pre/Post (discarding all the extra year information) - They suggest doing this on the model residuals so as to allow controlling for some trend and covariate information, and suggest that it works well when the number of treated units is large.
Randomization inference - This is their main proposal, in section 4.6. "It removes the over-rejection problem and does so independently of sample size. Moreover, it appears to have power comparable to that of the other tests."
As concerns your second question I am a bit puzzled because once you have a control group you are essentially dealing with panel data and not a pure time-series anymore. In this sense DiD compares the intervention effect between the time-series data of a control and a treatment unit but maybe I misunderstood the question.
As a last suggestion: if you are interested in newer methods than DiD (but which are similar) have a look at the synthetic control method by Abadie and his co-authors. Especially if you have aggregate data this might be a particularly useful method for your work and it is something which is increasingly used in very recent economics papers. | How do difference-in-difference designs account for temporal autocorrelation | Your point about the standard errors in DiD is an important issue which has been mostly ignored or forgotten until a paper by Bertrand et al. (2004) "How Much Should We Trust Differences-In-Difference | How do difference-in-difference designs account for temporal autocorrelation
Your point about the standard errors in DiD is an important issue which has been mostly ignored or forgotten until a paper by Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" in the Quarterly Journal of Economics. In there they discuss several methods to overcome the autocorrelation problem which will answer your first question.
Specifically, they examine:
Parametric $\left(AR(1)\right)$ adaptations - These do not do particularly well, which they attribute to the under-estimation of the auto-correlation coefficient as a result of the short time series typical in DiD work, as well as an incorrectly-specified process.
Block bootstrap - This also performs poorly, which they attribute to the small number of blocks typical of DiD work [ Cameron, Gelbach, Miller would seem to be relevant here ].
What they call the "Empirical Variance-Covariance Matrix" - This avoids the SE inflation but also has low power and relies on the DGP being the same across all states.
An arbitrary V-C matrix of their own design - "This method, therefore, seems to work well, when the number of treated units is large enough."
Pre/Post (discarding all the extra year information) - They suggest doing this on the model residuals so as to allow controlling for some trend and covariate information, and suggest that it works well when the number of treated units is large.
Randomization inference - This is their main proposal, in section 4.6. "It removes the over-rejection problem and does so independently of sample size. Moreover, it appears to have power comparable to that of the other tests."
As concerns your second question I am a bit puzzled because once you have a control group you are essentially dealing with panel data and not a pure time-series anymore. In this sense DiD compares the intervention effect between the time-series data of a control and a treatment unit but maybe I misunderstood the question.
As a last suggestion: if you are interested in newer methods than DiD (but which are similar) have a look at the synthetic control method by Abadie and his co-authors. Especially if you have aggregate data this might be a particularly useful method for your work and it is something which is increasingly used in very recent economics papers. | How do difference-in-difference designs account for temporal autocorrelation
Your point about the standard errors in DiD is an important issue which has been mostly ignored or forgotten until a paper by Bertrand et al. (2004) "How Much Should We Trust Differences-In-Difference |
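For reference, one of the corrections discussed in this answer (cluster-robust standard errors, optionally two-way in the spirit of Cameron, Gelbach & Miller) can be sketched in R roughly as follows. Here df, y, treat_post, state and year are placeholders for your own data, and the cluster = ~ ... formula interface assumes a reasonably recent version of the sandwich package.
library(sandwich)
library(lmtest)
fit <- lm(y ~ treat_post + factor(state) + factor(year), data = df)   # two-way fixed effects DiD
coeftest(fit, vcov = vcovCL(fit, cluster = ~ state))                  # SEs clustered by state
coeftest(fit, vcov = vcovCL(fit, cluster = ~ state + year))           # two-way clustering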
54,494 | How do difference-in-difference designs account for temporal autocorrelation | Another paper that I hadn't thought about in this context until @Andy's comment is Cameron, Gelbach, Miller's "Robust Inference with Multi-Way Clustering".
Upon re-reading, it looks like they explicitly replicate the Bertrand (2004) strategy and handle it via their multi-way clustering (which is a generalization of their one-way clustering strategy from the much-more-famous previous CGM paper). | How do difference-in-difference designs account for temporal autocorrelation | Another paper that I hadn't thought about in this context until @Andy's comment is Cameron, Gelbach, Miller's "Robust Inference with Multi-Way Clustering".
Upon re-reading, it looks like they explicit | How do difference-in-difference designs account for temporal autocorrelation
Another paper that I hadn't thought about in this context until @Andy's comment is Cameron, Gelbach, Miller's "Robust Inference with Multi-Way Clustering".
Upon re-reading, it looks like they explicitly replicate the Bertrand (2004) strategy and handle it via their multi-way clustering (which is a generalization of their one-way clustering strategy from the much-more-famous previous CGM paper). | How do difference-in-difference designs account for temporal autocorrelation
Another paper that I hadn't thought about in this context until @Andy's comment is Cameron, Gelbach, Miller's "Robust Inference with Multi-Way Clustering".
Upon re-reading, it looks like they explicit |
54,495 | Logistic regression and clustering? | What you call "clustering" is also known as local regression, kernel regression or local likelihood smoothing. The overall framework is generalized additive modelling, and the definitive textbooks are Hastie & Tibshirani (1990) Generalized Additive Models, and Wood (2006) Generalized Additive Models: An Introduction With R.
GAMs extend on GLMs (including logistic regression) by allowing nonlinear trends to enter the model in a data-driven way. You can include such nonlinear trends manually via transformations, eg polynomial terms or spline terms, but this usually requires examining the data beforehand. This can be tedious if you have many variables and/or they're correlated with each other. The fact that using a local fit improved your model suggests that the relationship between your IV and DV is nonlinear. | Logistic regression and clustering? | What you call "clustering" is also known as local regression, kernel regression or local likelihood smoothing. The overall framework is generalized additive modelling, and the definitive textbooks are | Logistic regression and clustering?
What you call "clustering" is also known as local regression, kernel regression or local likelihood smoothing. The overall framework is generalized additive modelling, and the definitive textbooks are Hastie & Tibshirani (1990) Generalized Additive Models, and Wood (2006) Generalized Additive Models: An Introduction With R.
GAMs extend on GLMs (including logistic regression) by allowing nonlinear trends to enter the model in a data-driven way. You can include such nonlinear trends manually via transformations, eg polynomial terms or spline terms, but this usually requires examining the data beforehand. This can be tedious if you have many variables and/or they're correlated with each other. The fact that using a local fit improved your model suggests that the relationship between your IV and DV is nonlinear. | Logistic regression and clustering?
What you call "clustering" is also known as local regression, kernel regression or local likelihood smoothing. The overall framework is generalized additive modelling, and the definitive textbooks are |
54,496 | Distribution in logistic regression | Yes: the model is $\operatorname{logit} p_i = \beta_0 +\beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i}$.
That's true for bog-standard logistic regression anyway - the term is sometimes used where there's an extra parameter for dispersion, or for an estimating equation approach for which the Bernoulli model isn't assumed.
Re your comment: $\sum_{i=1}^{m_j} Y_{ij}$ has a binomial distribution $\operatorname{Bin}(m_j,p_j)$ for groups of $m_j$ people (from the original $n$) who have the same covariate pattern—the same blood pressure, pulse rate & glucose levels—& therefore the same probability $p_j$ of obesity. If no-one has the same covariate pattern, then there are $n$ groups, each with $m_j=1$, i.e. $n$ different Bernoulli distributions. To be clear, for each individual person $Y_i\sim\operatorname{Bin}(1,p_i)\equiv\operatorname{Bern}(p_i)$, & as @Frank says, there's no real need to consider people grouped together by covariate pattern, though it's sometimes useful for diagnostics.
To be really clear, if your model says this:–
Tom: 90 mmHg, 80 /min, 6 mmol/l => 60% chance of obesity
Dick: 90 mmHg, 80 /min, 6 mmol/l => 60%
Harry: 60 mmHg, 60 /min, 5 mmol/l => 20%
you can write this:–
$$Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}\sim \operatorname{Bin}(2,60\%)$$
$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$
or this:–
$$Y_{\mathrm{Tom}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$
$$Y_{\mathrm{Dick}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$
$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$
Note that $Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}+Y_{\mathrm{Harry}}$ is not binomially distributed because there's not a common probability for each person. | Distribution in logistic regression | Yes: the model is $\operatorname{logit} p_i = \beta_0 +\beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i}$.
That's true for bog-standard logistic regression anyway - the term is sometimes used where th | Distribution in logistic regression
Yes: the model is $\operatorname{logit} p_i = \beta_0 +\beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i}$.
That's true for bog-standard logistic regression anyway - the term is sometimes used where there's an extra parameter for dispersion, or for an estimating equation approach for which the Bernoulli model isn't assumed.
Re your comment: $\sum_{i=1}^{m_j} Y_{ij}$ has a binomial distribution $\operatorname{Bin}(m_j,p_j)$ for groups of $m_j$ people (from the original $n$) who have the same covariate pattern—the same blood pressure, pulse rate & glucose levels—& therefore the same probability $p_j$ of obesity. If no-one has the same covariate pattern, then there are $n$ groups, each with $m_j=1$, i.e. $n$ different Bernoulli distributions. To be clear, for each individual person $Y_i\sim\operatorname{Bin}(1,p_i)\equiv\operatorname{Bern}(p_i)$, & as @Frank says, there's no real need to consider people grouped together by covariate pattern, though it's sometimes useful for diagnostics.
To be really clear, if your model says this:–
Tom: 90 mmHg, 80 /min, 6 mmol/l => 60% chance of obesity
Dick: 90 mmHg, 80 /min, 6 mmol/l => 60%
Harry: 60 mmHg, 60 /min, 5 mmol/l => 20%
you can write this:–
$$Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}\sim \operatorname{Bin}(2,60\%)$$
$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$
or this:–
$$Y_{\mathrm{Tom}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$
$$Y_{\mathrm{Dick}}\sim \operatorname{Bin}(1,60\%)\equiv\operatorname{Bern}(60\%)$$
$$Y_{\mathrm{Harry}}\sim \operatorname{Bin}(1,20\%)\equiv\operatorname{Bern}(20\%)$$
Note that $Y_{\mathrm{Tom}}+Y_{\mathrm{Dick}}+Y_{\mathrm{Harry}}$ is not binomially distributed because there's not a common probability for each person. | Distribution in logistic regression
Yes: the model is $\operatorname{logit} p_i = \beta_0 +\beta_1 x_{1i} + \beta_2 x_{2i} + \beta_3 x_{3i}$.
That's true for bog-standard logistic regression anyway - the term is sometimes used where th |
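The last point of this answer is easy to verify numerically. A short R sketch that enumerates the eight outcomes for Tom, Dick and Harry and compares the exact distribution of their sum with a binomial that uses the average probability:
p <- c(Tom = 0.6, Dick = 0.6, Harry = 0.2)
outcomes <- as.matrix(expand.grid(Tom = 0:1, Dick = 0:1, Harry = 0:1))
probs <- apply(outcomes, 1, function(o) prod(ifelse(o == 1, p, 1 - p)))
exact <- tapply(probs, rowSums(outcomes), sum)                       # exact pmf of the sum
round(rbind(exact = exact, binomial = dbinom(0:3, 3, mean(p))), 4)   # the two rows differ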
54,497 | Distribution in logistic regression | As @Scortchi correctly notes, the answer is yes. However, I think this is not quite the right question.
I suspect what you are wondering about is the way that probability, $p_i$, is related to the explanatory variables. In generalized linear models, this is done via a link function. The default link function for binary GLiMs is the logit, however, if BMI is normally distributed, but was categorized as obese, not obese for the study, then your response variable depends on a hidden Gaussian variable, and a different link function is appropriate (namely the probit). For more on this topic, you may want to read my answer here: difference-between-logit-and-probit-models. | Distribution in logistic regression | As @Scortchi correctly notes, the answer is yes. However, I think this is not quite the right question.
I suspect what you are wondering about is the way that probability, $p_i$, is related to the | Distribution in logistic regression
As @Scortchi correctly notes, the answer is yes. However, I think this is not quite the right question.
I suspect what you are wondering about is the way that probability, $p_i$, is related to the explanatory variables. In generalized linear models, this is done via a link function. The default link function for binary GLiMs is the logit, however, if BMI is normally distributed, but was categorized as obese, not obese for the study, then your response variable depends on a hidden Gaussian variable, and a different link function is appropriate (namely the probit). For more on this topic, you may want to read my answer here: difference-between-logit-and-probit-models. | Distribution in logistic regression
As @Scortchi correctly notes, the answer is yes. However, I think this is not quite the right question.
I suspect what you are wondering about is the way that probability, $p_i$, is related to the |
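In R, switching between the two links is just a change of link function in the binomial family; a sketch with placeholder variable names (obese as the 0/1 outcome, dat as the data frame):
fit_logit  <- glm(obese ~ sbp + pulse + glucose, family = binomial(link = "logit"),  data = dat)
fit_probit <- glm(obese ~ sbp + pulse + glucose, family = binomial(link = "probit"), data = dat)
# Coefficients sit on different scales (logit estimates are typically about 1.6-1.8 times
# the probit ones), but the fitted probabilities are usually very close.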
54,498 | Robust Wald test for Poisson with Stata | As far as I know there is no ready made command for your purpose in Stata but it does not seem to be necessary because it can be easily implemented by hand. If you run your regression
poisson y x, vce(robust)
Create a local which holds the coefficient of your variable of interest (call it "bx", for instance) and then use the bootstrap command on the test statistic you posted.
bootstrap twald = ((_b[x] - `bx')/_se[x]), reps(800) nodots: poisson y x, vce(robust)
If you need your results to be replicable, set a seed first. Once you have the result you can compare the bootstrapped standard error of "twald" with the standard deviation of the standard normal which is 1 (if this is what you meant by comparing the refined t with the un-refined t). | Robust Wald test for Poisson with Stata | As far as I know there is no ready made command for your purpose in Stata but it does not seem to be necessary because it can be easily implemented by hand. If you run your regression
poisson y x, vce | Robust Wald test for Poisson with Stata
As far as I know there is no ready made command for your purpose in Stata but it does not seem to be necessary because it can be easily implemented by hand. If you run your regression
poisson y x, vce(robust)
Create a local which holds the coefficient of your variable of interest (call it "bx", for instance) and then use the bootstrap command on the test statistic you posted.
bootstrap twald = ((_b[x] - `bx')/_se[x]), reps(800) nodots: poisson y x, vce(robust)
If you need your results to be replicable, set a seed first. Once you have the result you can compare the bootstrapped standard error of "twald" with the standard deviation of the standard normal which is 1 (if this is what you meant by comparing the refined t with the un-refined t). | Robust Wald test for Poisson with Stata
As far as I know there is no ready made command for your purpose in Stata but it does not seem to be necessary because it can be easily implemented by hand. If you run your regression
poisson y x, vce |
54,499 | Robust Wald test for Poisson with Stata | The Robust Wald test in Stata is achieved by using the option robust after the glm or poisson command in Stata. This is not the same as bootstrapped standard error estimates, which are another type of "robust" estimate for standard errors. The nominal "robust standard errors" estimates you receive by using the robust command are the Huber White standard error estimates.
Huber White estimators are not resampling based, but use the empirical information to estimate $\widehat{\mbox{se}(\hat{\beta})}$. They account for unspecified sources of correlation in the data, heteroscedasticity, and model misspecification. The parameter estimates from such model coefficients are interpreted as "population averaged", which makes them useful for non-causal epidemiologic studies of association. | Robust Wald test for Poisson with Stata | The Robust Wald test in Stata is achieved by using the option robust after the glm or poisson command in Stata. This is not the same as bootstrapped standard error estimates, which are another type of | Robust Wald test for Poisson with Stata
The Robust Wald test in Stata is achieved by using the option robust after the glm or poisson command in Stata. This is not the same as bootstrapped standard error estimates, which are another type of "robust" estimate for standard errors. The nominal "robust standard errors" estimates you receive by using the robust command are the Huber White standard error estimates.
Huber White estimators are not resampling based, but use the empirical information to estimate $\widehat{\mbox{se}(\hat{\beta})}$. They account for unspecified sources of correlation in the data, heteroscedasticity, and model misspecification. The parameter estimates from such model coefficients are interpreted as "population averaged", which makes them useful for non-causal epidemiologic studies of association.
The Robust Wald test in Stata is achieved by using the option robust after the glm or poisson command in Stata. This is not the same as bootstrapped standard error estimates, which are another type of |
54,500 | Cluster standard error _versus_ fixed effects | Is the cluster something you're not really interested in - just an irritant? Use clustered standard errors. E.g. if you've got kids in classrooms, and want to know their mean score on a test, you can use clustered standard errors.
Is the cluster something you're interested in or want to remove? E.g. if you've got kids in classrooms, and you want to make one classroom the reference, use fixed effects. The mean in the fixed effects model will be the mean of the classroom that was the reference category. Any between classroom effects are removed.
If you wanted to look at (say) teacher experience and test scores, you can't use fixed effects, because when you control for classroom, you control for all differences between classes, including teacher experience. | Cluster standard error _versus_ fixed effects | Is the cluster something you're not really interested in - just an irritant? Use clustered standard errors. E.g. if you've got kids in classrooms, and want to know their mean score on a test, you can
Is the cluster something you're not really interested in - just an irritant? Use clustered standard errors. E.g. if you've got kids in classrooms, and want to know their mean score on a test, you can use clustered standard errors.
Is the cluster something you're interested in or want to remove? E.g. if you've got kids in classrooms, and you want to make one classroom the reference, use fixed effects. The mean in the fixed effects model will be the mean of the classroom that was the reference category. Any between classroom effects are removed.
If you wanted to look at (say) teacher experience and test scores, you can't use fixed effects, because when you control for classroom, you control for all differences between classes, including teacher experience. | Cluster standard error _versus_ fixed effects
Is the cluster something you're not really interested in - just an irritant? Use clustered standard errors. E.g. if you've got kids in classrooms, and want to know their mean score on a test, you can |
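A rough R sketch of the two options for the classroom example in this answer; score, x, teacher_exp, classroom and dat are placeholders, and the clustered-SE line assumes the sandwich and lmtest packages.
library(sandwich); library(lmtest)
# (1) Classroom as a nuisance: pooled regression with classroom-clustered SEs
m1 <- lm(score ~ x, data = dat)
coeftest(m1, vcov = vcovCL(m1, cluster = ~ classroom))
# (2) Classroom fixed effects: between-classroom differences are absorbed, so a
#     classroom-level regressor such as teacher_exp cannot be estimated here
m2 <- lm(score ~ x + factor(classroom), data = dat)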