Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1087 | 2 | null | 152 | 14 | null | There is a nice and reasonably recent discussion of this problem here:
>
Christian P. Robert Multimodality and label switching: a
discussion. Workshop on mixtures, ICMS March 3, 2010.
Essentially, there are several standard strategies, and each has pros and cons. The most obvious thing to do is to formulate the prior in such a way as to ensure there is only one posterior mode (e.g. order the means of the mixture components), but this turns out to have a strange effect on the posterior, and therefore isn't generally used. Next is to ignore the problem during sampling, and then post-process the output to re-label the components to keep the labels consistent. This is easy to implement and seems to work OK. The more sophisticated approaches re-label on-line, either by keeping to a single mode, or by deliberately randomly permuting the labels to ensure mixing over multiple modes. I quite like the latter approach, but this still leaves the problem of how to summarise the output meaningfully. However, I see that as a separate problem.
| null | CC BY-SA 4.0 | null | 2010-08-01T13:51:19.667 | 2019-05-30T06:16:09.020 | 2019-05-30T06:16:09.020 | 35989 | 643 | null |
1088 | 2 | null | 114 | 18 | null | In addition to those already mentioned, I like Rob Hyndman's blog:
[http://robjhyndman.com/researchtips/](http://robjhyndman.com/researchtips/)
I guess he's too modest to mention it himself! ;-)
| null | CC BY-SA 2.5 | null | 2010-08-01T14:16:22.900 | 2010-08-01T14:16:22.900 | null | null | 643 | null |
1089 | 2 | null | 726 | 11 | null | >
The true logic of this world is in the calculus of probabilities.
-- James Clerk Maxwell
| null | CC BY-SA 2.5 | null | 2010-08-01T14:21:18.177 | 2010-12-03T04:04:49.117 | 2010-12-03T04:04:49.117 | 795 | 643 | null |
1090 | 2 | null | 726 | 10 | null | A bit obscure this one, but a great quote about subjective probability:
>
... There is no way, however, in which the individual can avoid the
burden of responsibility for his own evaluations. The key cannot be
found that will unlock the enchanted garden wherein, among the
fairy-rings and the shrubs of magic wands, beneath the trees laden
with monads and noumena, blossom forth the flowers of probabilitas
realis. With these fabulous blooms safely in our button-holes we would
be spared the necessity of forming opinions, and the heavy loads we
bear upon our necks would be rendered superfluous once and for all.
Bruno de Finetti, Theory of Probability, Vol 2
| null | CC BY-SA 3.0 | null | 2010-08-01T14:25:28.323 | 2011-08-15T04:13:32.757 | 2011-08-15T04:13:32.757 | 1381 | 643 | null |
1091 | 2 | null | 575 | 2 | null | There are TWO options for the inferential F-tests in SPSS.
Multivariate does NOT assume sphericity, and so makes use of a different pairwise correlation for each pair of variables.
The "tests of within subjects effects", including any post hoc tests, assume sphericity and make some corrections for using a common correlation across all tests. These procedures are a legacy of the days when computation was expensive, and are a waste of time with modern computing facilities.
My recommendation is to take the omnibus MULTIVARIATE F for any repeated measures. Then follow up with post hoc pairwise t-tests, or ANOVAs with only 2 levels in each repeated-measures comparison if there are also between-subject factors. I would make the simple Bonferroni correction of dividing the alpha level by the number of tests.
Also be sure to look at the effect size [available in the options dialogue]. Large effect sizes that are 'close' to significant may be more worthy of attention [and future experiments] than small but significant effects.
A more sophisticated approach is available in the SPSS procedure MIXED, and also in less user-friendly [but free] packages such as R.
In summary: in SPSS, a multivariate F followed by pairwise post hocs with a Bonferroni correction should be sufficient for most needs.
| null | CC BY-SA 2.5 | null | 2010-08-01T18:14:31.930 | 2010-08-01T18:14:31.930 | null | null | null | null |
1092 | 2 | null | 1082 | 3 | null | Dawy et al. outline an algorithm in [Gene mapping and marker clustering using Shannon's mutual information](http://www.ece.iit.edu/~biitcomm/research/references/Zaher%20Dawy/Zaher%20Dawy%202006/Gene%20Mapping%20and%20Marker%20Clustering%20Using%20Shannon%92s%20Mutual%20Information.pdf) (2006). If you're using R, you may prefer the [BUS](http://www.bioconductor.org/packages/2.6/bioc/vignettes/BUS/inst/doc/bus.pdf) package in Bioconductor.
| null | CC BY-SA 2.5 | null | 2010-08-01T18:41:27.523 | 2010-08-01T18:41:27.523 | null | null | 251 | null |
1093 | 1 | 1098 | null | 3 | 1211 | I'm looking for a distribution to model a vector of $k$ binary random variables, $X_1, \ldots, X_k$. Suppose I have observed that $\sum_i X_i = n$. In this case I do not want to treat them as independent Bernoulli random variables. Instead, I would like something like the multinomial:
$P(X_1=x_1, \ldots, X_k=x_k) = f(x_1, \ldots, x_k; n, p_1, \ldots, p_k) = \frac{n!}{x_1! \cdots x_k!} \prod_{i=1}^k p_i^{x_i}$
but instead of the $x_i$ being nonnegative integers, I want them restricted to be either 0 or 1. I have been trying to see if the [multivariate hypergeometric](http://en.wikipedia.org/wiki/Hypergeometric_distribution) is appropriate, but I'm not sure.
Thanks in advance for any advice.
| Density function for a multivariate Bernoulli-like distribution | CC BY-SA 2.5 | null | 2010-08-01T22:31:15.863 | 2011-04-29T00:33:52.767 | 2011-04-29T00:33:52.767 | 3911 | 647 | [
"distributions"
] |
1094 | 2 | null | 1093 | 2 | null | Update
In light of your comments, here is an updated answer:
Approach 1: Difficult to implement/analyze
Consider the simple case of $k$ = 3 and $n$ = 2. In other words you toss 3 coins (with probabilities $p_1$, $p_2$ and $p_3$). Then, the required mass function for the above case is:
$p_1 p_2 (1-p_3) + p_1 (1-p_2) p_3 + (1-p_1) p_2 p_3$
The above reduces to the [binomial](http://en.wikipedia.org/wiki/Binomial_distribution) if the probabilities $p_i$ are all identical.
In the general case, you will have ${k \choose n}$ terms where each term is unique with a structure similar to the one above.
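As a quick sanity check, here is a brute-force enumeration of those ${k \choose n}$ terms in R for the $k=3$, $n=2$ case (the probability values below are illustrative, not taken from the question):
```
# Enumerate every 0/1 vector with sum n and add up the corresponding terms
p <- c(0.2, 0.5, 0.7)   # illustrative probabilities
k <- 3; n <- 2
outcomes <- t(combn(k, n, function(idx) replace(numeric(k), idx, 1)))
sum(apply(outcomes, 1, function(x) prod(p^x * (1 - p)^(1 - x))))
# equals p1*p2*(1-p3) + p1*(1-p2)*p3 + (1-p1)*p2*p3 = 0.38 here
```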
Approach 2: Easier to analyze/implement
Instead of the above, you could model each $X_i$ as a [Bernoulli variable](http://en.wikipedia.org/wiki/Bernoulli_trial) with probability $p_i$. You could then assume that $p_i$ follows a [Dirichlet distribution](http://en.wikipedia.org/wiki/Dirichlet_distribution).
You would then estimate the model parameters by constructing the posterior distribution for $p_i$ conditional on observing $n$ successes.
If you can normalize by $n$, and assuming that treating the results as probabilities/proportions makes sense in your context, you can use the [Dirichlet distribution](http://en.wikipedia.org/wiki/Dirichlet_distribution).
| null | CC BY-SA 2.5 | null | 2010-08-01T22:38:49.430 | 2010-08-02T02:01:37.240 | 2010-08-02T02:01:37.240 | null | null | null |
1095 | 1 | 1721 | null | 2 | 404 | This question concerns an implementation of the topmoumoute natural gradient (tonga) algorithm as described in page 5 in the paper Le Roux et al 2007 [http://research.microsoft.com/pubs/64644/tonga.pdf](http://research.microsoft.com/pubs/64644/tonga.pdf).
I understand that the basic idea is to augment stochastic gradient ascent with the covariance of the stochastic gradient estimates. Basically, the natural gradient approach multiplies a stochastic gradient with the inverse of the covariance of the gradient estimates in order to weight each component of the gradient by the variance of this component. We prefer moving into directions that show less variance during the stochastic gradient estimates:
$ng \propto C^{-1} g$
Since updating and inverting the covariance in an online optimisation setting is costly, the authors describe a ``cheap'' approximate update algorithm as described on page 5 as:
$C_t = \gamma \hat{C}_{t-1} + g_tg_t^T$ where $\hat{C}_{t-1}$ is the low rank approximation at time step $t-1$. Writing $\hat{C}_{t} = X_tX_t^T$ with $X_t = [\sqrt{\gamma} X_{t-1}\ \ g_t]$, they use an iterative update rule for the Gram matrix $G_t = X_t^T X_t$:
$$G_t = \begin{pmatrix} \gamma G_{t-1} & \sqrt{\gamma}\, X_{t-1}^T g_t \\ \sqrt{\gamma}\, g_t^T X_{t-1} & g_t^T g_t \end{pmatrix}$$
They then state ``To keep a low-rank estimate of $\hat{C}_{t} = X_tX_t^T$, we can compute its eigendecomposition and keep only the first k eigenvectors. This can be made low cost using its relation to that of the Gram matrix
$G_t = X_t^T X_t$:
$G_t = VDV^T$
$C_t = (X_tVD^{-\frac12})D(X_tVD^{-\frac12})^T$''
Because it's cheaper than updating and decomposing G at every step, they then suggest that you should update X for several steps using
$C_{t+b} = X_{t+b}X_{t+b}^T$ with $X_{t+b} = \left[\gamma U_t, \ \gamma^{\frac{b-1}{2}}g_{t+1},\ \ldots,\ \gamma^{-\frac12}g_{t+b-1}, \ \gamma^{\frac{t+b}{2}}g_{t+b}\right]$
I can see why you can get $C_t$ from $G_t$ using the eigendecomposition. But I'm unsure about their update rule for $X$. The authors don't explain where $U$ is coming from. I assume (by notation) this is the first k eigenvectors of $C_t$, correct? But if so, why would this formula be a good approximation of $X_t$? When I implement this update rule, $X_t$ does not seem to be a good approximation of the ''real'' $X_t$ (that you would get from $X_t = [\sqrt{\gamma} X_{t-1}\ \ g_t]$) at all. So why should I then be able to get a good approximation of $C^{-1}$ from this (I don't)? The authors are also not quite clear about how they keep $G_t$ from growing (the size of the matrix $G_t$ increases at each iterative update). I assume they replace $G_t$ by $\hat V\hat D\hat V^T$ with $\hat V$ and $\hat D$ being the first k components of the eigendecomposition?
So in summary:
- I tried implementing this update rule, but I'm not getting good results and am unsure my implementation is correct
- why should the update rule for $X_t$ be reasonable? Is $U_t$ really the first k eigenvectors of $C_t$? (Clearly I cannot let $X_t$ and $G_t$ grow for each observed gradient $g_t$)
- This is far fetched, but has anyone implemented this low rank approximate update of the covariance before and has some code to share so I can compare it to my implementation?
As a simple example if I simulate having 15 gradients in matlab like:
```
X = rand(15, 5);
c = cov(X);
e = eig(c);
%if the eigenvectors of C gave a good approximation of the original, this should give an approximation of c, no?:
c_r = e'*e; %no resemblance to c
```
So I'm quite certainly doing it wrong, I guess U might actually not be the eigenvectors of C, but then what is U?
Any suggestions or references would be most welcome!
(sorry about the terrible layout, it looks like only a subset of LaTeX is supported, no arrays for matrices, and embedded LaTeX doesn't look particularly good; all the formulas are much more readable on page 5 of the referenced paper :)
Also, is this considered off-topic here? It's really more related to optimisation and machine-learning...)
| Tonga: low rank approximation of the natural gradient, question regarding Le Roux et al. 2007 | CC BY-SA 2.5 | null | 2010-08-01T22:52:13.423 | 2011-04-29T00:34:34.327 | 2011-04-29T00:34:34.327 | 3911 | 282 | [
"algorithms"
] |
1097 | 2 | null | 181 | 653 | null | I realize this question has been answered, but I don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. In particular, the link describes one technique for programmatic network configuration, but that is not a "[a] standard and accepted method" for network configuration.
By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one.
But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs--in other words, eliminating unnecessary/redundant nodes (more on this below).
So every NN has three types of layers: input, hidden, and output.
---
Creating the NN architecture, therefore, means coming up with values for the number of layers of each type and the number of nodes in each of these layers.
The Input Layer
Simple--every NN has exactly one of them--no exceptions that I'm aware of.
With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data. Some NN configurations add one additional node for a bias term.
---
The Output Layer
Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration.
Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine mode: returns a class label (e.g., "Premium Account"/"Basic Account"). Regression Mode returns a value (e.g., price).
If the NN is a regressor, then the output layer has a single node.
If the NN is a classifier, then it also has a single node unless softmax is used
in which case the output layer has one node per class label in your model.
The Hidden Layers
So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers.
How many hidden layers? Well, if your data is linearly separable (which you often know by the time you begin coding a NN), then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job.
Beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful [NN FAQ](http://www.faqs.org/faqs/ai-faq/neural-nets/part1/preamble.html) for an [excellent summary](http://www.faqs.org/faqs/ai-faq/neural-nets/part1/preamble.html) of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems.
So what about the size of the hidden layer(s)--how many neurons? There are some empirically derived rules of thumb; of these, the most commonly relied on is 'the optimal size of the hidden layer is usually between the size of the input and size of the output layers'. Jeff Heaton, the author of [Introduction to Neural Networks in Java](https://www.heatonresearch.com/book/), offers a few more.
In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) the number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers.
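As a rough illustration (my own example, not part of the original answer), here is how those two rules translate into layer sizes in R for a binary classifier on four feature columns:
```
n_inputs  <- 4                                    # one input neuron per feature column
n_outputs <- 1                                    # single output node (no softmax)
n_hidden  <- round(mean(c(n_inputs, n_outputs)))  # rule (ii): mean of input and output sizes
c(inputs = n_inputs, hidden = n_hidden, outputs = n_outputs)
# a package such as nnet could then be called with size = n_hidden
```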
---
Optimization of the Network Configuration
Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look at weights very close to zero--it's the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., 'prunable') nodes--in other words, when deciding on network architecture, err on the side of more neurons, if you add a pruning step.
Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single "up-front" (such as a genetic-algorithm-based algorithm), I don't know, though I do know that for now, this two-step optimization is more common.
| null | CC BY-SA 4.0 | null | 2010-08-02T02:20:30.080 | 2022-08-31T12:09:15.680 | 2022-08-31T12:09:15.680 | 78767 | 438 | null |
1098 | 2 | null | 1093 | 4 | null | The appropriate distribution is [Wallenius's noncentral hypergeometric distribution](http://en.wikipedia.org/wiki/Noncentral_hypergeometric_distributions). Using an urn analogy, the problem is equivalent to picking $n$ of $k$ balls without replacement, where each ball is a different color: the parameters $p$ are analogous to the weights of picking a particular color.
The problem: it's not very convenient to work with, though there is an [R package](http://cran.r-project.org/web/packages/BiasedUrn/BiasedUrn.pdf).
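For what it's worth, here is a sketch of how the density could be evaluated with that package; I believe the relevant function is `dMWNCHypergeo` (the multivariate Wallenius density), but treat the function name, argument order, and the weights below as assumptions to check against the package documentation:
```
library(BiasedUrn)
k <- 4                          # number of binary variables (balls/colours)
w <- c(0.1, 0.2, 0.3, 0.4)      # illustrative weights playing the role of the p_i
x <- c(1, 0, 1, 0)              # one candidate outcome with sum(x) == n
# urn with one ball of each colour, n = 2 balls drawn, weighted by w
dMWNCHypergeo(x, m = rep(1, k), n = 2, odds = w)
```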
| null | CC BY-SA 2.5 | null | 2010-08-02T04:04:13.047 | 2010-08-02T04:04:13.047 | null | null | 647 | null |
1099 | 1 | 1108 | null | 9 | 3233 | I am working on disease infection data, and I am puzzled about whether to handle the data as "categorical" or "continuous".
- "Infection Count"
the number of infection cases found in a specific period of time; the count
is generated from categorical data (i.e. no. of patients tagged as "infected")
- "Patient Bed Days"
sum of the total number of days stayed in the ward by all patients in that ward; again, the count is generated from categorical data (i.e. no. of patients tagged as "staying in that particular ward")
- "infection per patient bed days"
"infection count" / "patient bed days"
both were originally count data, but the ratio now becomes a rate
### Question:
- Can I use Chi-Square here to assess whether the difference in "infections per patient bed days" is statistically significant or not?
---
### Updates
I have found that I can compare the incidence rates (or call them infection rates) by computing something like the "incidence rate difference" (IRD) or "incidence rate ratio" (IRR). (I found it from [here](http://www.statsdirect.com/help/rates/incidence_rates.htm))
- What is the difference between IRD and t-test?
- Is there any statistical test complementary for IRR?
| How to handle count data (categorical data), when it has been converted to a rate? | CC BY-SA 2.5 | null | 2010-08-02T04:40:22.673 | 2020-03-06T23:20:10.997 | 2020-03-06T23:20:10.997 | 11887 | 588 | [
"categorical-data",
"count-data",
"incidence-rate-ratio"
] |
1100 | 2 | null | 886 | 20 | null | I will give an itemized answer. Can provide more citations on demand, although this is not really controversial.
- Statistics is not all about maximizing (log-)likelihood. That's anathema to principled Bayesians, who just update their posteriors or propagate their beliefs through an appropriate model.
- A lot of statistics is about loss minimization. And so is a lot of Machine Learning. Empirical loss minimization has a different meaning in ML. For a clear, narrative view, check out Vapnik's "The Nature of Statistical Learning Theory".
- Machine Learning is not all about loss minimization. First, because there are a lot of Bayesians in ML; second, because a number of applications in ML have to do with temporal learning and approximate DP. Sure, there is an objective function, but it has a very different meaning than in "statistical" learning.
I don't think there is a gap between the fields, just many different approaches, all overlapping to some degree. I don't feel the need to make them into systematic disciplines with well-defined differences and similarities, and given the speed at which they evolve, I think it's a doomed enterprise anyway.
| null | CC BY-SA 2.5 | null | 2010-08-02T05:16:40.183 | 2010-08-02T05:16:40.183 | null | null | 30 | null |
1101 | 2 | null | 1053 | 3 | null | Survival Analysis: A Self-Learning Text
by Kleinbaum and Klein
is pretty good. It depends on what you want. This is more of a non-technical introduction. It's focused on practical applications and minimizes the mathematics. Pedagogically, it's also intended for learning outside of the classroom.
| null | CC BY-SA 2.5 | null | 2010-08-02T05:49:40.147 | 2010-08-02T05:49:40.147 | null | null | 485 | null |
1102 | 2 | null | 890 | 0 | null | Thanks a lot for this very detailed answer. It does make perfect sense to me. Now, suppose I have a model, for example:
```
y = - 2.8 - 1.2 * urban - 3 * forest
```
with the reference category grassland, and I would like to use the model to predict for a new environment.
If I have a point with grassland, the probability of y would be like:
```
y = 1 / (1 + exp(2.8))
```
Is that right?
And let's say for a point in a forest
```
y = 1 / (1+ exp(-(-2.8 - 3*1)))
```
Thanks a lot again
Mike
| null | CC BY-SA 2.5 | null | 2010-08-02T08:14:36.877 | 2010-08-02T08:14:36.877 | null | null | null | null |
1103 | 2 | null | 1063 | 10 | null | To start with, what we're talking about here is the standard normal distribution: a normal distribution with a mean of 0 and a standard deviation of 1. The short-hand for a variable which is distributed as a standard normal distribution is Z.
Here are my answers to your questions.
(1) I think there are two key reasons why standard normal distributions are attractive. Firstly, any normally distributed variable can be converted or transformed to a standard normal by subtracting its mean from each observation before dividing each observation by the standard deviation. This is called the Z-transformation or the creation of Z-scores. This is very handy especially in the days before computers.
If you wanted to find out the probability of some event from your variable which is normally distributed with mean 65.6 and a standard deviation of 10.2, wouldn't that be a right pain in the backside without a computer? Let's say that this variable is the heights in inches of American women. And let's say that we're interested in finding out the probability that a woman randomly drawn from the population will be very tall - say over 75 inches tall. Well, this is a bit of a pain to find out without a computer, as I would have to carry around a table for every possible normal distribution with me. However, if I transform this to a Z-score I can use the one table to find out the probability, thus:
$$
\begin{aligned}
\frac{(x_i - \bar x)}{\sigma_x} &= Z \\
\frac{(75 - 65.6)}{10.2} &= 0.9215
\end{aligned}
$$
Using the Z table I find that the cumulative probability P(z < Z) = 0.8212, and therefore the probability of finding a woman as tall or taller than 75 inches is 17.88%. We can do this with any normally distributed variable, and so this standard normal distribution is very handy.
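For reference, the same calculation takes one line in R (using the numbers from the example above):
```
z <- (75 - 65.6) / 10.2                  # the Z-score, about 0.92
pnorm(z)                                 # the cumulative probability, about 0.82
1 - pnorm(75, mean = 65.6, sd = 10.2)    # P(height > 75) directly, about 0.18
```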
The second reason why the standard normal distribution is used frequently is due to the interpretation it provides in terms of Z-scores. Each "observation" in a Z-transformed variable is how many standard deviations the original untransformed observation was from the mean. This is particularly handy for standardized tests where the raw or absolute performance is less important than the relative performance.
(2) I don't follow you here. I think you may be confused as to what we mean by a cumulative distribution function. Note that the expected value of a standard normal distribution is 0, and this value corresponds to the value of .5 on the associated cumulative distribution function.
(3) Z-scores are the individual "observations" or datum in a variable which has been Z-transformed. Return to my example of the variable - height of American women in inches. One particular observation of which may be a tall woman of height 75 inches. The Z-score for this is the result of Z-transforming the variable as we did earlier:
$$
\begin{aligned}
\frac{(x_i - \bar x)}{\sigma_x} &= Z \\
\frac{(75 - 65.6)}{10.2} &= 0.9215
\end{aligned}
$$
The Z-score in this case is 0.9215. The interpretation of the Z-score is that this particular woman is 0.9215 standard deviations taller than the mean height. A person who was 55.4 inches tall would have a Z-score of -1 and would be 1 standard deviation below the mean height.
| null | CC BY-SA 3.0 | null | 2010-08-02T08:46:38.047 | 2012-08-20T17:02:53.953 | 2012-08-20T17:02:53.953 | 7290 | 215 | null |
1104 | 2 | null | 726 | 19 | null | >
While the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty. You can, for example, never foretell what any one man will be up to, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician.
Arthur Conan Doyle
| null | CC BY-SA 2.5 | null | 2010-08-02T08:58:27.830 | 2010-08-02T08:58:27.830 | null | null | 17 | null |
1106 | 2 | null | 1081 | 6 | null | A few opening remarks. In nMDS you have a matrix of dissimilarities $D_{ij}$ (not distances; for instance, this can be the percentage of people who said in some poll that i & j are not similar). What you want to obtain is a set of points ($E=[X_i]$) representing the objects in M-dimensional space; having it, you also have the matrix of distances $d_{ij}$ between objects in this space.
nMDS tries to guess an $E$ such that $d_{ij}$ has the same rank order as $D_{ij}$; it is like connecting each pair of objects with a spring that is stronger the less dissimilar the pair is, and then releasing the whole configuration -- after relaxation, the objects that were connected by stronger springs will be nearer.
Point 4 is something like a regression fit. You have some approximation of the objects' positions, $E^a$, and so also approximated distances $d^a_{ij}$. Now you can fit the regression $d^a_{ij} \sim f(D_{ij})$ and use it to compute the distances there would be if $D$ were represented perfectly, $d^r_{ij}=f(D_{ij})$.
Still, because you cannot directly compute $E$ from $d^r$ (this is a nonlinear optimization problem), you must somehow mutate $E$ so that the distances approach $d^r$. The standard method is to mimic the physical analogy with springs and move the objects connected by the most extended springs (those with the largest $|d^a_{ij}-d^r_{ij}|$) towards each other, so that the potential energy of the system (this is the STRESS) is reduced the most.
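To make the STRESS idea concrete, here is a minimal run of non-metric MDS in R (the data set and package choice are mine, not part of the original answer):
```
library(MASS)
D <- dist(scale(swiss))    # dissimilarities between Swiss provinces (illustrative data)
fit <- isoMDS(D, k = 2)    # iteratively moves the 2-D configuration to reduce STRESS
fit$stress                 # the final STRESS value being minimized
```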
| null | CC BY-SA 2.5 | null | 2010-08-02T09:58:34.953 | 2010-08-02T11:23:47.897 | 2010-08-02T11:23:47.897 | null | null | null |
1107 | 2 | null | 1099 | 1 | null | From a technical purist point of view, you cannot, as your ratio "infection per patient bed days" is not a continuous variable. For example, an irrational value will never appear in your dataset. However, you can ignore this technical issue and do whatever tests may be appropriate for your context. By way of analogy, income levels are discrete, but almost everyone treats them as continuous.
By the way, it is not entirely clear why you want to do a chi-square but I am assuming there is some background context why that makes sense for you.
| null | CC BY-SA 2.5 | null | 2010-08-02T10:03:30.803 | 2010-08-02T10:03:30.803 | null | null | null | null |
1108 | 2 | null | 1099 | 4 | null | For me it does not at all sound appropriate to use a chi-square test here.
I guess what you want to do is the following: You have different wards or treatments or some other kind of nominal variable (i.e., groups) that divides your data. For each of these groups you collected the Infection Count and the Patient Bed Days to calculate the infections per patient bed days. Now you want to check for differences between the groups, right?
If so, an analysis of variance (ANOVA, in the case of more than two groups) or a t-test (in the case of two groups) is probably appropriate, for the reasons given in Srikant Vadali's post (and if the assumptions of homogeneity of variances and comparable group sizes are also met), and the `beginner` tag should be added.
| null | CC BY-SA 2.5 | null | 2010-08-02T10:55:30.890 | 2010-08-02T10:55:30.890 | null | null | 442 | null |
1109 | 2 | null | 1099 | 7 | null | I'm not quite sure what your data look like, or what your precise problem is, but I assume you have a table with the following headings and type:
>
ward (categorical), infections (integer), patient-bed-days (integer or continuous).
and you want to tell if the infection rate is statistically different for different wards?
One way of doing this is to use a Poisson model:
>
Infections ~ Poisson (Patient bed days * ward infection rate)
This can be achieved by using a Poisson glm, with log link function and the log of patient-bed-days in the offset. In R, the code would look something like:
```
# assuming the patient-bed-days column is stored under a syntactically valid name such as patient_bed_days
glm(infections ~ ward + offset(log(patient_bed_days)), family = poisson())
```
| null | CC BY-SA 2.5 | null | 2010-08-02T11:24:35.680 | 2010-08-02T11:24:35.680 | null | null | 495 | null |
1111 | 2 | null | 1053 | 6 | null | For a very clear, succinct and applied approach, I highly recommend [Event History Modeling](http://rads.stackoverflow.com/amzn/click/0521546737) by Box-Steffenmeier and Jones
| null | CC BY-SA 2.5 | null | 2010-08-02T13:53:47.847 | 2010-08-02T13:53:47.847 | null | null | 302 | null |
1112 | 1 | 1113 | null | 34 | 53970 | I want to represent a variable as a number between 0 and 1. The variable is a non-negative integer with no inherent bound. I map 0 to 0 but what can I map to 1 or numbers between 0 and 1?
I could use the history of that variable to provide the limits. This would mean I have to restate old statistics if the maximum increases. Do I have to do this or are there other tricks I should know about?
| How to represent an unbounded variable as number between 0 and 1 | CC BY-SA 2.5 | null | 2010-08-02T14:38:55.070 | 2021-05-21T15:17:22.467 | 2010-09-17T20:29:37.823 | null | 652 | [
"normalization"
] |
1113 | 2 | null | 1112 | 40 | null | A very common trick to do so (e.g., in connectionist modeling) is to use the [hyperbolic tangent tanh](http://en.wikipedia.org/wiki/Tanh) as the "squashing function".
It automatically fits all numbers into the interval between -1 and 1, which in your case (non-negative input) restricts the range to between 0 and 1.
In `r` and `matlab` you get it via `tanh()`.
Another squashing function is the logistic function (thanks to Simon for the name), given by $ f(x) = 1 / (1 + e ^{-x} ) $, which maps everything into the range from 0 to 1 (with 0 mapped to .5). Since your non-negative values therefore end up between .5 and 1, you would have to multiply the result by 2 and subtract 1 to fit your data into the interval between 0 and 1.
Here is some simple R code which plots both functions (tanh in red, logistic in blue) so you can see how both squash:
```
x <- seq(0,20,0.001)
plot(x,tanh(x),pch=".", col="red", ylab="y")
points(x,(1 / (1 + exp(-x)))*2-1, pch=".",col="blue")
```
| null | CC BY-SA 2.5 | null | 2010-08-02T14:56:35.277 | 2010-08-03T00:57:51.577 | 2010-08-03T00:57:51.577 | 159 | 442 | null |
1114 | 2 | null | 1112 | 11 | null | Any sigmoid function will work:
- The top half of the logistic function (multiply by 2, subtract 1)
- The error function
- tanh, as suggested by Henrik (all three are compared in the sketch below).
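A quick visual comparison on non-negative input (a sketch with an arbitrary plotting range; base R has no `erf`, so it is computed via `pnorm`):
```
x <- seq(0, 5, 0.01)
logistic01 <- 2 / (1 + exp(-x)) - 1        # top half of the logistic, rescaled
erf01      <- 2 * pnorm(x * sqrt(2)) - 1   # the error function erf(x)
tanh01     <- tanh(x)
matplot(x, cbind(logistic01, erf01, tanh01), type = "l", lty = 1, ylab = "squashed value")
```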
| null | CC BY-SA 2.5 | null | 2010-08-02T15:20:11.690 | 2010-08-02T15:20:11.690 | null | null | 495 | null |
1115 | 1 | null | null | -1 | 4445 | I have computed percentage change from time1 to time2 for several variables.
Can I predict percentage change in earnings from percentage change in produced and percentage changed in price?
When I ran a model with actual data and dummy coded time (time1=1, time2=0), the dummy variable was not statistically significant. But there are large changes.
| Can I predict percentage change in earnings from percentage change in produced and percentage changed in price? | CC BY-SA 3.0 | null | 2010-08-02T16:28:37.277 | 2011-09-28T00:05:47.827 | 2011-09-28T00:05:47.827 | 183 | 474 | [
"regression"
] |
1116 | 2 | null | 1053 | 4 | null | "Survival analysis using SAS: a practical guide" by Paul D. Allison provides a good guide to the connection between the math and SAS code - how to think about your information, how to code, how to interpret results. Even if you are using R, there will be parallels that could prove useful.
| null | CC BY-SA 2.5 | null | 2010-08-02T16:29:37.387 | 2010-08-02T16:29:37.387 | null | null | null | null |
1117 | 2 | null | 1040 | 0 | null | The answer depends on the degree of misspecification and the sample size. In small and moderate samples, a simplified model will (in most cases) fit the data better than the true model.
In moderate and large samples, the residuals don't have to be normal, as due to the CLT the regression coefficients are approximately normal anyway.
| null | CC BY-SA 2.5 | null | 2010-08-02T16:43:44.403 | 2010-08-02T16:43:44.403 | null | null | 419 | null |
1118 | 2 | null | 1115 | 0 | null | You're likely to have problems with covariance - your model fails to meet the assumption of linear regression that the observations are independent, because a subject in your study will be correlated with itself between time 1 and time 2 (By the way, what are your observations? One for each product type?)
You might want to look into "repeated measures" methods.
| null | CC BY-SA 2.5 | null | 2010-08-02T16:43:49.333 | 2010-08-02T16:43:49.333 | null | null | null | null |
1119 | 2 | null | 1112 | 4 | null | In addition to the good suggestions by Henrik and Simon Byrne, you could use f(x) = x/(x+1). By way of comparison, the logistic function will exaggerate differences as x grows larger. That is, the difference between f(x) and f(x+1) will be larger with the logistic function than with f(x) = x/(x+1). You may or may not want that effect.
| null | CC BY-SA 2.5 | null | 2010-08-02T16:49:48.093 | 2010-08-02T16:49:48.093 | null | null | null | null |
1120 | 2 | null | 138 | 2 | null | There are some very good learning materials here: [http://scc.stat.ucla.edu/mini-courses/materials-from-past-mini-courses/spring-2009-mini-course-materials/](http://scc.stat.ucla.edu/mini-courses/materials-from-past-mini-courses/spring-2009-mini-course-materials/)
| null | CC BY-SA 2.5 | null | 2010-08-02T16:55:33.157 | 2010-08-02T16:55:33.157 | null | null | null | null |
1121 | 2 | null | 1112 | 1 | null | There are two ways to implement this that I use commonly. I am always working with realtime data, so this assumes continuous input. Here's some pseudo-code:
Using a trainable minmax:
```
define function peak:
// keeps the highest value it has received
define function trough:
// keeps the lowest value it has received
define function calibrate:
// toggles whether peak() and trough() are receiving values or not
define function scale:
// maps input range [trough.value() to peak.value()] to [0.0 to 1.0]
```
This function requires that you either perform an initial training phase (by using `calibrate()`) or that you re-train either at certain intervals or according to certain conditions. For instance, imagine a function like this:
```
define function outBounds (val, thresh):
if val > (thresh*peak.value()) || val < (trough.value() / thresh):
calibrate()
```
>
peak and trough are normally not receiving values, but if outBounds() receives a value that is more than 1.5 times the current peak or less than the current trough divided by 1.5, then calibrate() is called which allows the function to re-calibrate automatically.
Using an historical minmax:
```
var arrayLength = 1000
var histArray[arrayLength]
define historyArray(f):
histArray.pushFront(f) //adds f to the beginning of the array
define max(array):
// finds maximum element in histArray[]
return max
define min(array):
// finds minimum element in histArray[]
return min
define function scale:
// maps input range [min(histArray) to max(histArray)] to [0.0 to 1.0]
main()
historyArray(histArray)
scale(min(histArray), max(histArray), histArray[0])
// histArray[0] is the current element
```
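In R, the second ("historical minmax") idea could be sketched roughly as follows (the window length and names are my own, not taken from the pseudo-code above):
```
window_len <- 1000
hist_vals  <- numeric(0)

scale_value <- function(x) {
  hist_vals <<- head(c(x, hist_vals), window_len)  # keep the most recent values
  lo <- min(hist_vals); hi <- max(hist_vals)
  if (hi == lo) return(0.5)                        # no spread yet in the history
  (x - lo) / (hi - lo)                             # map the current value into [0, 1]
}

scale_value(3); scale_value(10); scale_value(7)    # 0.5, 1, ~0.57
```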
| null | CC BY-SA 4.0 | null | 2010-08-02T16:57:33.557 | 2019-03-20T23:06:17.843 | 2019-03-20T23:06:17.843 | 241755 | 162 | null |
1122 | 2 | null | 534 | 9 | null |
- Almost always in randomized trials
- Almost always in an observational study when someone measures all confounders (almost never)
- Sometimes when someone measures some confounders (the IC* algorithm of DAG discovery in Pearl's book Causality)
- In non-Gaussian linear models with two or more variables, but not using correlation as the measure of relationship (LiNGAM)
Most of the discovery algorithms are implemented in [Tetrad IV](http://www.phil.cmu.edu/projects/tetrad/)
| null | CC BY-SA 3.0 | null | 2010-08-02T17:05:49.053 | 2013-06-09T03:50:18.943 | 2013-06-09T03:50:18.943 | 7290 | 419 | null |
1123 | 1 | null | null | 2 | 452 | In many papers I see data representing a rate of success (i.e. a number between 0 and 1) modeled as a Gaussian. This is clearly a sin (the range of variation of the Gaussian is all of R),
but how bad is that sin? Under what assumptions would you say it is tolerable?
| Modeling success rate with gaussian distribution | CC BY-SA 2.5 | null | 2010-08-02T17:08:08.113 | 2010-08-03T19:26:55.673 | 2010-08-03T18:49:43.080 | null | null | [
"distributions",
"normality-assumption"
] |
1124 | 2 | null | 1123 | 1 | null | Could you quote from "many papers" so that we would get some context? Between "Gaussian" and "number between 0 and 1" I see slight conflict as the draws from a Gaussian are not bounded. Maybe you meant p-values?
| null | CC BY-SA 2.5 | null | 2010-08-02T17:15:04.730 | 2010-08-02T17:15:04.730 | null | null | 334 | null |
1125 | 2 | null | 1123 | 1 | null | It depends on the data. While the normal distribution does span the real line, do note that nearly 99% of the values are contained within 3 standard deviations of the mean. Thus, if the following conditions hold it may be a reasonable assumption:
(a) the data range is such that 99% of the data falls within $[\mu - 3\sigma, \mu+3\sigma]$
(b) the data is unimodal
(c) the data 'passes' other relevant tests for [normality](http://en.wikipedia.org/wiki/Normality_test)
Having said that, some decision needs to be taken in the event that a draw from this distribution falls below 0 or above 1. Two ideas in such a situation:
(a) If draw < 0 set the draw to 0 and if draw > 1 set the draw to 1 or
(b) Model the distribution as truncated normal with the cut off points at 0 and 1.
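A small sketch of both ideas in R (the mean and standard deviation here are made-up values, not from the answer):
```
mu <- 0.7; sigma <- 0.2
draws <- rnorm(1000, mu, sigma)
clipped <- pmin(pmax(draws, 0), 1)                          # idea (a): clip to [0, 1]
u <- runif(1000, pnorm(0, mu, sigma), pnorm(1, mu, sigma))
truncated <- qnorm(u, mu, sigma)                            # idea (b): truncated normal on [0, 1]
```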
| null | CC BY-SA 2.5 | null | 2010-08-02T17:15:14.387 | 2010-08-02T17:15:14.387 | null | null | null | null |
1126 | 1 | null | null | 11 | 584 | Joshua Epstein wrote a paper titled "Why Model?" available at [http://www.santafe.edu/media/workingpapers/08-09-040.pdf](http://www.santafe.edu/media/workingpapers/08-09-040.pdf) in which he gives 16 reasons:
- Explain (very distinct from predict)
- Guide data collection
- Illuminate core dynamics
- Suggest dynamical analogies
- Discover new questions
- Promote a scientific habit of mind
- Bound (bracket) outcomes to plausible ranges
- Illuminate core uncertainties.
- Offer crisis options in near-real time
- Demonstrate tradeoffs / suggest efficiencies
- Challenge the robustness of prevailing theory through perturbations
- Expose prevailing wisdom as incompatible with available data
- Train practitioners
- Discipline the policy dialogue
- Educate the general public
- Reveal the apparently simple (complex) to be complex (simple)
(Epstein elaborates on many of the reasons in more detail in his paper.)
I would like to ask the community:
- are there are additional reasons that Epstein did not list?
- is there a more elegant way to conceptualize (a different grouping perhaps) these reasons?
- are any of Epstein's reasons flawed or incomplete?
- are there clearer elaborations of these reasons?
| Reasons besides prediction to build models? | CC BY-SA 2.5 | null | 2010-08-02T17:29:12.087 | 2010-09-03T03:49:48.030 | null | null | 660 | [
"modeling"
] |
1127 | 2 | null | 1115 | 1 | null | You can do it, but I think that using percentages in a regression framework is likely to lead to a model that has little value. I would try to generalize the model so that the percentage change is a special case, but that more complex behaviour is possible.
| null | CC BY-SA 2.5 | null | 2010-08-02T17:55:23.633 | 2010-08-02T17:55:23.633 | null | null | 187 | null |
1128 | 2 | null | 1126 | 6 | null | >
Reason 17. Write a paper.
Sort of just kidding, but not really. There seems to be a bit of overlap between some of his points (e.g. 1, 5, 6, 12, 14).
| null | CC BY-SA 2.5 | null | 2010-08-02T17:57:15.127 | 2010-08-02T17:57:15.127 | null | null | 334 | null |
1129 | 2 | null | 1016 | 1 | null | Another vote for Rob's answer.
There are also some interesting ideas in the "relative importance" literature. This work develops methods that seek to determine how much importance is associated with each of a number of candidate predictors. There are Bayesian and Frequentist methods. Check the "relaimpo" package in R for citations and code.
| null | CC BY-SA 2.5 | null | 2010-08-02T18:00:10.207 | 2010-08-02T18:00:10.207 | null | null | 187 | null |
1130 | 1 | null | null | 4 | 2001 | I have data for about 1 year, 100 observations, multiple observations per subject; transactions occur on a weekly basis, with 6-12 subjects per week, and there is no order to this. There is a policy change in the latter half of the year, and I want to model the change in the dependent variable due to the policy change as a dummy variable: time1=0, time2=1.
- Is this a case for fixed effects estimation?
The number of weeks per subject varies a lot, and the number of weeks in time1 is greater than in time2. I computed means for time1 and time2 and the percent change (a large change in the dependent variable), and estimated the linear model:
pay = X1 + X2 + Time (dummy). The dummy variable is not statistically significant.
- Any suggestions as to how to model this?
Can I treat it as panel data?
| Regression-multiple observations per subject | CC BY-SA 3.0 | null | 2010-08-02T18:44:15.063 | 2011-10-10T08:43:36.230 | 2011-10-10T08:43:36.230 | 183 | 474 | [
"regression"
] |
1131 | 2 | null | 1126 | 5 | null | >
Save money
I build mathematical/statistical models of cellular mechanisms. For example, how a particular protein affects cellular ageing. The role of the model is mainly prediction, but also to save money. It's far cheaper to employ a single modeller than (say) a few wet-lab biologists with the associated equipment costs. Of course modelling doesn't fully replace the experiment, it just aids the process.
| null | CC BY-SA 2.5 | null | 2010-08-02T18:52:56.510 | 2010-08-02T18:52:56.510 | null | null | 8 | null |
1132 | 2 | null | 1126 | 5 | null | >
For fun!
I'm sure most statisticians/modellers do their job because they enjoy it. Getting paid to do something you enjoy is quite nice!
| null | CC BY-SA 2.5 | null | 2010-08-02T19:16:14.867 | 2010-08-02T19:16:14.867 | null | null | 8 | null |
1133 | 1 | 1210 | null | 14 | 20145 | I have cross classified data in a 2 x 2 x 6 table. Let's call the dimensions `response`, `A` and `B`. I fit a logistic regression to the data with the model `response ~ A * B`. An analysis of deviance of that model says that both terms and their interaction are significant.
However, looking at the proportions of the data, it looks like only 2 or so levels of `B` are responsible for these significant effects. I would like to test to see which levels are the culprits. Right now, my approach is to perform 6 chi-squared tests on 2 x 2 tables of `response ~ A`, and then to adjust the p-values from those tests for multiple comparisons (using the Holm adjustment).
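In R, that approach might look like the following sketch (the table here is random placeholder data; I'm assuming the array dimensions are ordered response, A, B):
```
tab <- array(rpois(24, 20), dim = c(2, 2, 6))              # placeholder 2 x 2 x 6 table
pvals <- apply(tab, 3, function(m) chisq.test(m)$p.value)  # one 2 x 2 test per level of B
p.adjust(pvals, method = "holm")                           # Holm-adjusted p-values
```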
My question is whether there is a better approach to this problem. Is there a more principled modeling approach, or multiple chi-squared test comparison approach?
| Multiple Chi-Squared Tests | CC BY-SA 2.5 | null | 2010-08-02T19:19:42.860 | 2016-02-04T08:01:37.070 | null | null | 287 | [
"categorical-data",
"logistic",
"multiple-comparisons",
"chi-squared-test"
] |
1134 | 2 | null | 1126 | 4 | null | >
dimension reduction
Sometimes there can be too much data, so forming an initial model allows for further analysis.
| null | CC BY-SA 2.5 | null | 2010-08-02T19:28:31.653 | 2010-08-02T19:28:31.653 | null | null | 5 | null |
1135 | 2 | null | 1126 | 4 | null | >
regulation
Government agencies require firms to provide reports using certain models. This provides for a degree of standardization in oversight. An example is the use of Value-at-Risk in the financial sector.
| null | CC BY-SA 2.5 | null | 2010-08-02T19:38:11.380 | 2010-08-02T19:38:11.380 | null | null | 5 | null |
1136 | 2 | null | 1126 | 2 | null | This is closely related to some of the others, but:
>
Eliminate human judgement
Human decision making is subject to many different forces and biases. That means that you not only get different answers to the same question, but you can also end up with really suboptimal outcomes. Examples would be the over-confidence bias or anchoring.
| null | CC BY-SA 2.5 | null | 2010-08-02T19:44:20.117 | 2010-08-02T19:44:20.117 | null | null | 5 | null |
1137 | 2 | null | 1115 | 0 | null | Regress the X1 value for time1 (and any other covariates you want) on the X1 variable for time 2 (your dependent variable). Your regression model will look something like this:
"x1 time 2" = "x1 time1" + x2 + x3 + x4 etc.
Your regression coefficients for x2....xn will be the effect of changes of that variable on "x1 time2" controlling for "x1 time 1" (and everything else in the model). Therefore you'll be able to see what is going on at time2 controlling for where things were at time 1.
| null | CC BY-SA 2.5 | null | 2010-08-02T20:17:03.580 | 2010-08-02T20:17:03.580 | null | null | null | null |
1138 | 2 | null | 1126 | 1 | null | >
Repetitive problems that involve some form of benefit / cost
In my field, we model the same set of variables in different locations, time frames, and magnitudes
| null | CC BY-SA 2.5 | null | 2010-08-02T20:20:52.697 | 2010-08-02T20:20:52.697 | null | null | 59 | null |
1140 | 2 | null | 103 | 4 | null | [EagerEyes ](http://eagereyes.org) by Robert Kosara (~5 posts a month). This blog includes tutorials and discussion articles plus it has a great home page with lots of links to useful information.
| null | CC BY-SA 3.0 | null | 2010-08-02T20:31:22.017 | 2012-10-24T14:53:17.483 | 2012-10-24T14:53:17.483 | 615 | 665 | null |
1141 | 2 | null | 103 | 4 | null | [https://web.archive.org/web/20120102041205/https://datavisualization.ch/](https://web.archive.org/web/20120102041205/https://datavisualization.ch/)
by Benjamin Wiederkehr and others (~15 links a month). If you want heaps of links you can subscribe to their twitter feed twitter slash datavis (~5 links a day)
ahhh... i'm a new member and so i can only post one link per post.
| null | CC BY-SA 4.0 | null | 2010-08-02T20:35:41.460 | 2022-11-29T16:30:03.407 | 2022-11-29T16:30:03.407 | 362671 | 665 | null |
1142 | 1 | null | null | 107 | 82823 | I am working with a large amount of time series. These time series are basically network measurements coming every 10 minutes, and some of them are periodic (i.e. the bandwidth), while some others aren't (i.e. the amount of routing traffic).
I would like a simple algorithm for doing an online "outlier detection". Basically, I want to keep in memory (or on disk) the whole historical data for each time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results?
I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation, MAD, ... against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more "accurate", ideally a black box like:
double outlier_detection(double* vector, double value);
where vector is the array of double containing the historical data, and the return value is the anomaly score for the new sample "value" .
| Simple algorithm for online outlier detection of a generic time series | CC BY-SA 2.5 | null | 2010-08-02T20:37:27.650 | 2018-08-07T10:50:54.290 | null | null | 667 | [
"time-series",
"outliers",
"mathematical-statistics",
"real-time"
] |
1144 | 2 | null | 1142 | 2 | null | You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average.
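A minimal sketch of this idea in R, shaped like the black box the question asks for (the window length is arbitrary):
```
outlier_score <- function(history, value, N = 50) {
  recent <- tail(history, N)
  abs(value - mean(recent)) / sd(recent)   # number of SDs from the moving average
}
outlier_score(rnorm(200), 4)               # a clear outlier scores far above the usual 2-3
```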
| null | CC BY-SA 2.5 | null | 2010-08-02T20:48:01.103 | 2010-08-02T20:48:01.103 | null | null | 666 | null |
1145 | 2 | null | 1123 | 1 | null | It's usually a small sin. In nature, most phenomena can't realistically receive any value in R, but we model them as if they could.
The greater sin is to assume that the rate of success is shaped like a normal distribution if it isn't.
| null | CC BY-SA 2.5 | null | 2010-08-02T20:53:20.623 | 2010-08-02T20:53:20.623 | null | null | 666 | null |
1146 | 2 | null | 1126 | 3 | null | >
Control
A major aspect of the dynamic modelling literature is associated with control. This kind of work spans a lot of disciplines, from politics/economics (see, e.g., Stafford Beer) and biology (see, e.g., N. Wiener's 1948 work on Cybernetics) through to contemporary state space control theory (see Ljung 1999 for an intro).
Control is kind of related to Epstein's 9 and 10, and Shane's answers about human judgement / regulation, but I thought it made sense to be explicit. Indeed, at the end of my engineering undergraduate career I would have given you a very concise response to the uses of modelling: control, inference and prediction. I guess inference, by which I mean filtering/smoothing/dimension-reduction etc, is maybe similar to Epstein's points 3 and 8.
Of course in my later years I wouldn't be so bold as to limit the purposes of modelling to control, inference and prediction. Maybe a fourth, covering many of Epsteins's points, should be "coercion" - the only way you should "educate the public" is to encourage us to make our own models...
| null | CC BY-SA 2.5 | null | 2010-08-02T20:54:01.713 | 2010-08-02T20:54:01.713 | null | null | 668 | null |
1147 | 2 | null | 1142 | 6 | null | I am guessing a sophisticated time series model will not work for you because of the time it takes to detect outliers using this methodology. Therefore, here is a workaround:
- First establish baseline 'normal' traffic patterns for a year, based on a manual analysis of historical data which accounts for time of day, weekday vs. weekend, month of the year, etc.
- Use this baseline along with some simple mechanism (e.g., moving average suggested by Carlos) to detect outliers.
You may also want to review the statistical [process control literature](http://en.wikipedia.org/wiki/Statistical_process_control) for some ideas.
| null | CC BY-SA 2.5 | null | 2010-08-02T21:23:37.547 | 2010-08-02T21:23:37.547 | null | null | null | null |
1148 | 2 | null | 1142 | 6 | null | Seasonally adjust the data such that a normal day looks closer to flat. You could take today's 5:00pm sample and subtract or divide out the average of the previous 30 days at 5:00pm. Then look past N standard deviations (measured using pre-adjusted data) for outliers.
This could be done separately for weekly and daily "seasons."
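For the daily case, a rough sketch in R (assuming `samples` holds one 5:00pm reading per day, oldest first; the 3-SD threshold is arbitrary):
```
is_outlier_today <- function(samples, n_sd = 3) {
  baseline <- tail(head(samples, -1), 30)   # the previous 30 days at this time of day
  adjusted <- samples - mean(baseline)      # subtract the seasonal average
  abs(tail(adjusted, 1)) > n_sd * sd(head(adjusted, -1))
}
```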
| null | CC BY-SA 2.5 | null | 2010-08-02T21:50:27.147 | 2010-08-02T21:50:27.147 | null | null | 33 | null |
1149 | 1 | 1150 | null | 106 | 51609 | The wiki discusses the problems that arise when [multicollinearity](http://en.wikipedia.org/wiki/Multicollinearity) is an issue in linear regression. The basic problem is that multicollinearity results in unstable parameter estimates, which makes it very difficult to assess the effect of independent variables on dependent variables.
I understand the technical reasons behind the problems (may not be able to invert $X' X$, ill-conditioned $X' X$ etc) but I am searching for a more intuitive (perhaps geometric?) explanation for this issue.
Is there a geometric or perhaps some other form of easily understandable explanation as to why multicollinearity is problematic in the context of linear regression?
| Is there an intuitive explanation why multicollinearity is a problem in linear regression? | CC BY-SA 2.5 | null | 2010-08-02T22:42:32.947 | 2021-05-26T12:01:47.603 | 2021-05-22T15:32:35.887 | 11887 | null | [
"regression",
"multicollinearity",
"intuition",
"faq"
] |
1150 | 2 | null | 1149 | 112 | null | Consider the simplest case where $Y$ is regressed against $X$ and $Z$ and where $X$ and $Z$ are highly positively correlated. Then the effect of $X$ on $Y$ is hard to distinguish from the effect of $Z$ on $Y$ because any increase in $X$ tends to be associated with an increase in $Z$.
Another way to look at this is to consider the equation. If we write $Y = b_0 + b_1X + b_2Z + e$, then the coefficient $b_1$ is the increase in $Y$ for every unit increase in $X$ while holding $Z$ constant. But in practice, it is often impossible to hold $Z$ constant, and the positive correlation between $X$ and $Z$ means that a unit increase in $X$ is usually accompanied by some increase in $Z$ at the same time.
A similar but more complicated explanation holds for other forms of multicollinearity.
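A small simulation makes this concrete (all numbers below are illustrative, not from the answer):
```
set.seed(1)
x <- rnorm(100)
z <- x + rnorm(100, sd = 0.01)   # z is almost a copy of x
y <- 1 + 2 * x + rnorm(100)
summary(lm(y ~ x + z))           # note the huge standard errors on both x and z
summary(lm(y ~ x))               # x alone is estimated precisely
```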
| null | CC BY-SA 2.5 | null | 2010-08-02T22:59:09.380 | 2010-08-10T06:07:48.497 | 2010-08-10T06:07:48.497 | 159 | 159 | null |
1151 | 2 | null | 1149 | 22 | null | The geometric approach is to consider the least squares projection of $Y$ onto the subspace spanned by $X$.
Say you have a model:
$E[Y | X] = \beta_{1} X_{1} + \beta_{2} X_{2}$
Our estimation space is the plane determined by the vectors $X_{1}$ and $X_{2}$ and the problem is to find coordinates corresponding to $(\beta_{1}, \beta_{2})$ which will describe the vector $\hat{Y}$, a least squares projection of $Y$ on to that plane.
Now suppose $X_{1} = 2 X_{2}$, i.e. they're collinear. Then, the subspace determined by $X_{1}$ and $X_{2}$ is just a line and we have only one degree of freedom. So we can't determine two values $\beta_{1}$ and $\beta_{2}$ as we were asked.
| null | CC BY-SA 3.0 | null | 2010-08-02T23:26:02.567 | 2013-09-21T22:18:13.130 | 2013-09-21T22:18:13.130 | 17230 | 251 | null |
1152 | 2 | null | 1130 | 1 | null | If you estimate the policy change as a fixed effects estimation in the context of an OLS regression, you'll over-estimate your degrees of freedom because of the repeated measures by subject. If you do not think there is an overall trend of time (beyond the policy shift), then there is no reason to keep all of the observations; you could simply aggregate by subject for "before policy change" and "after policy change" and do a paired-samples t-test. Failing that - if you intend to add more predictors, like perhaps a linear effect of time - you might think about linear mixed effects regression, e.g.
```
library(lme4)  # lmer() is provided by the lme4 package
lmer(pay ~ time * policy + (1 + time | SubjID))
```
... might be a good place to start.
| null | CC BY-SA 2.5 | null | 2010-08-02T23:31:46.130 | 2010-08-02T23:31:46.130 | null | null | 196 | null |
1153 | 2 | null | 1142 | 97 | null | Here is a simple R function that will find time series outliers (and optionally show them in a plot). It will handle seasonal and non-seasonal time series. The basic idea is to find robust estimates of the trend and seasonal components and subtract them. Then find outliers in the residuals. The test for residual outliers is the same as for the standard boxplot -- points greater than 1.5IQR above or below the upper and lower quartiles are assumed outliers. The number of IQRs above/below these thresholds is returned as an outlier "score". So the score can be any positive number, and will be zero for non-outliers.
I realise you are not implementing this in R, but I often find an R function a good place to start. Then the task is to translate this into whatever language is required.
```
tsoutliers <- function(x, plot=FALSE)
{
  x <- as.ts(x)
  # Seasonal series: take the remainder from a robust STL decomposition;
  # non-seasonal series: take the residuals from a loess fit against time.
  if(frequency(x)>1)
    resid <- stl(x, s.window="periodic", robust=TRUE)$time.series[,3]
  else
  {
    tt <- 1:length(x)
    resid <- residuals(loess(x ~ tt))
  }
  # Boxplot rule on the residuals: flag points more than 1.5 IQR outside the quartiles
  resid.q <- quantile(resid, prob=c(0.25,0.75))
  iqr <- diff(resid.q)
  limits <- resid.q + 1.5*iqr*c(-1,1)
  # Score = number of IQRs beyond the limits (zero for non-outliers)
  score <- abs(pmin((resid-limits[1])/iqr,0) + pmax((resid - limits[2])/iqr,0))
  if(plot)
  {
    plot(x)
    x2 <- ts(rep(NA,length(x)))
    x2[score>0] <- x[score>0]
    tsp(x2) <- tsp(x)
    points(x2, pch=19, col="red")
    return(invisible(score))
  }
  else
    return(score)
}
```
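For example, on a built-in seasonal series the call might look like this (just to illustrate the interface):
```
score <- tsoutliers(AirPassengers, plot=TRUE)
which(score > 0)   # indices of any points flagged as outliers
```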
| null | CC BY-SA 3.0 | null | 2010-08-03T00:54:56.310 | 2012-02-17T11:27:27.007 | 2012-02-17T11:27:27.007 | 159 | 159 | null |
1154 | 2 | null | 1142 | 16 | null | If you're worried about the assumptions of any one method, one approach is to train a number of learners on different signals, then use [ensemble methods](http://en.wikipedia.org/wiki/Ensembles_of_classifiers) and aggregate over the "votes" from your learners to make the outlier classification.
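As a rough sketch of the voting step, suppose you already have outlier scores from three different detectors on the same series (score1, score2 and score3 are placeholders for whatever methods you train, each scaled so that a value above 0 means "flagged"):
```
votes    <- (score1 > 0) + (score2 > 0) + (score3 > 0)
outliers <- which(votes >= 2)   # majority vote: flagged by at least 2 of the 3 detectors
```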
BTW, this might be worth reading or skimming since it references a few approaches to the problem.
- Online outlier detection over data streams
| null | CC BY-SA 3.0 | null | 2010-08-03T00:56:33.157 | 2012-02-11T08:29:21.253 | 2012-02-11T08:29:21.253 | 2921 | 251 | null |
1155 | 2 | null | 1149 | 2 | null | If two regressors are perfectly correlated, their coefficients will be impossible to calculate; it's helpful to consider why they would be difficult to interpret if we could calculate them. In fact, this explains why it's difficult to interpret variables that are not perfectly correlated but that are also not truly independent.
Suppose that our dependent variable is the daily supply of fish in New York, and our independent variables include one for whether it rains on that day and one for the amount of bait purchased on that day. What we don't realize when we collect our data is that every time it rains, fishermen purchase no bait, and every time it doesn't, they purchase a constant amount of bait. So Bait and Rain are perfectly correlated, and when we run our regression, we can't calculate their coefficients. In reality, Bait and Rain are probably not perfectly correlated, but we wouldn't want to include them both as regressors without somehow cleaning them of their endogeneity.
| null | CC BY-SA 2.5 | null | 2010-08-03T02:20:32.477 | 2010-08-03T02:20:32.477 | null | null | 672 | null |
1156 | 2 | null | 1149 | 4 | null | My (very) layman intuition for this is that the OLS model needs a certain level of "signal" in the X variable in order to detect that it gives a "good" prediction of Y. If the same "signal" is spread over many X's (because they are correlated), then none of the correlated X's can give enough of a "proof" (statistical significance) that it is a real predictor.
The previous (wonderful) answers do a great job of explaining why that is the case.
| null | CC BY-SA 2.5 | null | 2010-08-03T02:28:37.437 | 2010-08-03T02:28:37.437 | null | null | 253 | null |
1157 | 2 | null | 103 | 4 | null | [Chart Porn](http://chartporn.org/)
I find the blog name pretty humorous. Great dataviz.
| null | CC BY-SA 3.0 | null | 2010-08-03T02:45:53.883 | 2012-10-24T14:53:45.140 | 2012-10-24T14:53:45.140 | 615 | 11 | null |
1158 | 2 | null | 103 | 7 | null | It's not a blog, but Edward Tufte has an [interesting forum](http://www.edwardtufte.com/bboard/q-and-a?topic_id=1) on information design including data visualization.
| null | CC BY-SA 2.5 | null | 2010-08-03T02:49:32.377 | 2010-08-03T02:49:32.377 | null | null | 159 | null |
1159 | 2 | null | 1133 | 1 | null | The unprincipled approach is to discard the disproportionate data, refit the model and see if the logit/conditional odds ratios for response and A are very different (controlling for B). This might tell you if there's cause for concern. Pooling the levels of B is another approach. On more principled lines, if you're worried about relative proportions inducing Simpson's paradox, then you can look into the conditional and marginal odds ratios for response/A and see if they reverse.
For avoiding multiple comparisons in particular, the only thing that occurs to me is to use a hierarchical model which accounts for random effects across levels.
| null | CC BY-SA 2.5 | null | 2010-08-03T02:54:30.290 | 2010-08-03T02:54:30.290 | null | null | 251 | null |
1160 | 1 | null | null | 2 | 467 | R allows us to run code at the beginning/end of a session.
What code would you suggest putting there?
I know of three interesting examples (although I don't have "how to do them" under my fingers here):
- Saving the session history when closing R.
- Running a fortune() at the beginning of an R session.
- I was thinking of automatically saving the workspace. But I haven't settled on how to manage the disk space (so that only a fixed amount of space is ever used for that backup)
Any more ideas? (or how you implement the above ideas)
p.s: I am not sure whether to put this here or on stackoverflow, but I feel the people here are the right ones to ask.
| What code would you put before/after your R session? | CC BY-SA 2.5 | null | 2010-08-03T03:02:57.650 | 2010-09-07T20:59:46.453 | 2010-08-03T07:28:24.693 | null | 253 | [
"r"
] |
1161 | 2 | null | 1160 | 5 | null | Some information about how to implement this is provided at `help(.First)` and `help(.Last)`.
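A minimal `.Rprofile` covering the examples in the question might look like this (assuming the fortunes package is installed; adjust the history path to taste):
```
.First <- function() {
  if (interactive() && requireNamespace("fortunes", quietly=TRUE))
    print(fortunes::fortune())      # print a fortune at start-up
}
.Last <- function() {
  if (interactive())
    try(savehistory("~/.Rhistory")) # save the session history on exit
}
```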
| null | CC BY-SA 2.5 | null | 2010-08-03T03:49:10.593 | 2010-08-03T03:49:10.593 | null | null | 159 | null |
1162 | 2 | null | 1126 | 2 | null | >
To take (useful) action.
I'm paraphrasing someone else here, but suppose we built a system of public health around the model that infectious diseases are due to malevolent spirits that spread through contact. The science of microbes may be an infinitely better model, but you could prevent a good number of contagions nonetheless. (I think this was on reading a history of cybernetics, but I can't remember who made the point.)
The point is that, along the lines of "all models bad, some useful", we need to formulate models and refine them in order to undertake any useful actions with lasting consequences. Otherwise, we might as well flip coins.
| null | CC BY-SA 2.5 | null | 2010-08-03T06:26:47.963 | 2010-08-03T06:26:47.963 | null | null | 251 | null |
1164 | 1 | 1170 | null | 88 | 6844 | When solving business problems using data, it's common that at least one key assumption that under-pins classical statistics is invalid. Most of the time, no one bothers to check those assumptions so you never actually know.
For instance, that so many of the common web metrics are "long-tailed" (relative to the normal distribution) is, by now, so well documented that we take it for granted. Another example is online communities--even in communities with thousands of members, it's well-documented that by far the largest share of contribution to/participation in many of these communities is attributable to a minuscule group of 'super-contributors.' (E.g., a few months ago, just after the SO API was made available in beta, a StackOverflow member published a brief analysis from data he collected through the API; his conclusion--less than one percent of the SO members account for most of the activity on SO (presumably asking questions, and answering them), another 1-2% accounted for the rest, and the overwhelming majority of the members do nothing).
Distributions of that sort--again more often the rule rather than the exception--are often best modeled with a power law density function. For these type of distributions, even the central limit theorem is problematic to apply.
So given the abundance of populations like this of interest to analysts, and given that classical models perform demonstrably poorly on these data, and given that robust and resistant methods have been around for a while (at least 20 years, I believe)--why are they not used more often? (I am also wondering why I don't use them more often, but that's not really a question for CrossValidated.)
Yes I know that there are textbook chapters devoted entirely to robust statistics and I know there are (a few) R Packages (robustbase is the one I am familiar with and use), etc.
And yet given the obvious advantages of these techniques, they are often clearly the better tools for the job--why are they not used much more often? Shouldn't we expect to see robust (and resistant) statistics used far more often (perhaps even presumptively) compared with the classical analogs?
The only substantive (i.e., technical) explanation I have heard is that robust techniques (likewise for resistant methods) lack the power/sensitivity of classical techniques. I don't know if this is indeed true in some cases, but I do know it is not true in many cases.
A final word of preemption: yes I know this question does not have a single demonstrably correct answer; very few questions on this Site do. Moreover, this question is a genuine inquiry; it's not a pretext to advance a point of view--I don't have a point of view here, just a question for which I am hoping for some insightful answers.
| Why haven't robust (and resistant) statistics replaced classical techniques? | CC BY-SA 4.0 | null | 2010-08-03T07:49:34.003 | 2022-12-21T10:12:20.553 | 2022-12-21T10:12:20.553 | 110833 | 438 | [
"model-selection",
"nonparametric",
"outliers",
"robust",
"philosophical"
] |
1165 | 2 | null | 1160 | 2 | null | On open, I set R options, load environment variables (eg. global variables, API keys) and open database connections, and then close those connections when exiting. With some of these things, I prefer to do them onLoad of my packages rather than per session.
Regarding how to save your session, use the save command.
| null | CC BY-SA 2.5 | null | 2010-08-03T08:27:13.243 | 2010-08-03T08:27:13.243 | null | null | 5 | null |
1166 | 2 | null | 1164 | 12 | null | I give an answer in two directions:
- things that are robust are not necessarily labeled robust. If you believe robustness against everything exists, then you are naive.
- statistical approaches that set the problem of robustness aside are sometimes not adapted to the real world, but are often more valuable (as a concept) than an ad hoc, kitchen-sink algorithm.
Development
First, I think there are a lot of good approaches in statistics (you will find them in R packages, not necessarily with "robust" mentioned somewhere) which are naturally robust and tested on real data, and the fact that you don't find an algorithm with "robust" mentioned somewhere does not mean it is not robust. Anyway, if you think being robust means being universal then you'll never find any robust procedure (no free lunch); you need some knowledge/expertise about the data you analyse in order to use an adapted tool or to create an adapted model.
On the other hand, some approaches in statistics are not robust because they are dedicated to one single type of model. I think it is sometimes good to work in a laboratory to try to understand things. It is also good to treat problems separately, to understand what problem our solution actually addresses... this is how mathematicians work. The example of the Gaussian model is eloquent: it is so much criticised because the Gaussian assumption is never fulfilled, and yet it has brought 75% of the ideas used practically in statistics today. Do you really think all this is about writing papers to follow the publish-or-perish rule (which I don't like, I agree)?
| null | CC BY-SA 2.5 | null | 2010-08-03T09:05:56.737 | 2010-08-03T16:51:26.903 | 2010-08-03T16:51:26.903 | 223 | 223 | null |
1169 | 1 | 1171 | null | 5 | 906 | I'm looking to check my logic here.
Say you measure a quantity in group A, and find the mean is 2 and your 95% confidence interval ranges from 1 to 3. Then you measure the same quantity in group B and find a mean of 4 with a 95% confidence interval that ranges from 3.5 to 4.5. Assuming that A & B are independent, what is the 95% confidence interval for the difference between the groups? Presumably you can compute this using standard t-statistics, but I'd like to know if it's also possible to compute an estimate based on the CI's alone.
I reason that the lower bound of the CI of the difference should be the minimum credible difference between A & B; that is, the lower bound of the interval for B (3.5) minus the upper bound of the interval for A (3), which yields a lower bound for the difference of 0.5. Similarly, the upper bound of the CI of the difference should be the maximum credible difference between A & B; that is, the upper bound of the interval for B (4.5) minus the lower bound of the interval for A (1), which yields an upper bound for the difference of 3.5. This reasoning thus yields a confidence interval for the difference that ranges from 0.5 to 3.5.
Does that make sense, or is this a case where logic and statistics diverge?
| CI for a difference based on independent CIs | CC BY-SA 2.5 | null | 2010-08-03T12:20:09.890 | 2010-10-30T12:50:27.890 | 2010-08-13T00:56:41.777 | 364 | 364 | [
"confidence-interval"
] |
1170 | 2 | null | 1164 | 76 | null | Researchers want small p-values, and you can get smaller p-values if you use methods that make stronger distributional assumptions. In other words, non-robust methods let you publish more papers. Of course more of these papers may be false positives, but a publication is a publication. That's a cynical explanation, but it's sometimes valid.
| null | CC BY-SA 2.5 | null | 2010-08-03T12:22:58.247 | 2010-08-03T12:22:58.247 | null | null | 319 | null |
1171 | 2 | null | 1169 | 10 | null | No, you can't compute a CI for the difference that way I'm afraid, for the same reason you can't use whether the CIs overlap to judge the statistical significance of the difference. See, for example,
"On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals"
Nathaniel Schenker, Jane F Gentleman. The American Statistician. August 1, 2001, 55(3): 182-186. doi:10.1198/000313001317097960.
[http://pubs.amstat.org/doi/abs/10.1198/000313001317097960](http://pubs.amstat.org/doi/abs/10.1198/000313001317097960)
or:
Overlapping confidence intervals or standard error intervals: What do they mean in terms of statistical significance?
Mark E. Payton, Matthew H. Greenstone, and Nathaniel Schenker. Journal of Insect Science 2003; 3: 34. [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524673/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524673/)
The correct procedure requires you also know the sample sizes of both groups. You can then back-compute the two standard deviations from the CIs and use those to conduct a standard two-sample t-test, or to calculate a standard error of the difference and hence a CI for the difference.
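As a sketch with the numbers from the question, pretending both intervals are normal-based (with small samples you would back out the appropriate t multipliers instead, which is why the sample sizes are needed):
```
# Group A: mean 2, 95% CI (1, 3); Group B: mean 4, 95% CI (3.5, 4.5)
seA <- (3 - 1)/(2*1.96)             # CI half-width divided by 1.96
seB <- (4.5 - 3.5)/(2*1.96)
se.diff <- sqrt(seA^2 + seB^2)
(4 - 2) + c(-1, 1)*1.96*se.diff     # roughly (0.88, 3.12), not (0.5, 3.5)
```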
| null | CC BY-SA 2.5 | null | 2010-08-03T12:30:00.740 | 2010-08-03T12:30:00.740 | null | null | 449 | null |
1172 | 2 | null | 563 | 18 | null | As a medical statistician with no previous knowledge of econom(etr)ics, I struggled to get to grips with instrumental variables, as I often found the examples hard to follow and didn't understand the rather different terminology (e.g. 'endogeneity', 'reduced form', 'structural equation', 'omitted variables'). Here are a few references I found useful (the first should be freely available, but I'm afraid the others probably require a subscription):
- Staiger D. Instrumental Variables. AcademyHealth Cyber Seminar in Health
Services Research Methods, March 2002.
http://www.dartmouth.edu/~dstaiger/wpapers-Econ.htm
- Newhouse JP, McClellan M. Econometrics in Outcomes Research: The Use
of Instrumental Variables. Annual Review of Public Health 1998;19:17-34.
http://dx.doi.org/10.1146/annurev.publhealth.19.1.17
- Greenland S. An introduction to instrumental variables for epidemiologists. International Journal of Epidemiology 2000;29:722-729. http://dx.doi.org/10.1093/ije/29.4.722
- Zohoori N, Savitz DA. Econometric approaches to epidemiologic data: Relating endogeneity and unobserved heterogeneity to confounding. Annals of Epidemiology 1997;7:251-257. http://dx.doi.org/10.1016/S1047-2797(97)00023-9
I'd also recommend chapter 4 of:
- Angrist JD, Pischke JS. Mostly harmless econometrics: an empiricist's companion. Princeton, N.J: Princeton University Press, 2009. http://www.mostlyharmlesseconometrics.com/
| null | CC BY-SA 2.5 | null | 2010-08-03T13:12:04.477 | 2010-08-03T13:12:04.477 | null | null | 449 | null |
1173 | 1 | 1251 | null | 15 | 3729 | In my area of research, a popular way of displaying data is to use a combination of a bar chart with "handle-bars". For example,

The "handle-bars" alternate between standard errors and standard deviations depending on the author. Typically, the sample sizes for each "bar" are fairly small - around six.
These plots seem to be particularly popular in biological sciences - see the first few papers of [BMC Biology, vol 3](http://www.biomedcentral.com/bmcbiol/3) for examples.
So how would you present this data?
Why I dislike these plots
Personally I don't like these plots.
- When the sample size is small, why not just display the individual data points?
- Is it the sd or the se that is being displayed? No-one agrees which to use.
- Why use bars at all? The data don't (usually) start at 0, but a first glance at the graph suggests they do.
- The graphs don't give any idea of the range or sample size of the data.
R script
This is the R code I used to generate the plot. That way you can (if you want) use the same data.
```
#Generate the data
set.seed(1)
names = c("A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3")
prevs = c(38, 37, 31, 31, 29, 26, 40, 32, 39)
n=6; se = numeric(length(prevs))
for(i in 1:length(prevs))
  se[i] = sd(rnorm(n, prevs[i], 15))/sqrt(n)  # standard error = sd/sqrt(n)
#Basic plot
par(fin=c(6,6), pin=c(6,6), mai=c(0.8,1.0,0.0,0.125), cex.axis=0.8)
barplot(prevs,space=c(0,0,0,3,0,0, 3,0,0), names.arg=NULL, horiz=FALSE,
axes=FALSE, ylab="Percent", col=c(2,3,4), width=5, ylim=range(0,50))
#Add in the CIs
xx = c(2.5, 7.5, 12.5, 32.5, 37.5, 42.5, 62.5, 67.5, 72.5)
for (i in 1:length(prevs)) {
lines(rep(xx[i], 2), c(prevs[i], prevs[i]+se[i]))
lines(c(xx[i]+1/2, xx[i]-1/2), rep(prevs[i]+se[i], 2))
}
#Add the axis
axis(2, tick=TRUE, xaxp=c(0, 50, 5))
axis(1, at=xx+0.1, labels=names, font=1,
tck=0, tcl=0, las=1, padj=0, col=0, cex=0.1)
```
| Alternative graphics to "handle bar" plots | CC BY-SA 3.0 | null | 2010-08-03T13:36:38.303 | 2022-11-30T05:28:30.260 | 2013-12-02T22:54:25.163 | 11633 | 8 | [
"data-visualization"
] |
1174 | 1 | 1212 | null | 50 | 69008 | I know of normality tests, but how do I test for "Poisson-ness"?
I have a sample of ~1000 non-negative integers, which I suspect are taken from a Poisson distribution, and I would like to test that.
| How can I test if given samples are taken from a Poisson distribution? | CC BY-SA 3.0 | null | 2010-08-03T13:54:19.897 | 2015-09-03T22:17:07.177 | 2013-05-15T04:10:56.497 | 805 | 634 | [
"hypothesis-testing",
"distributions",
"poisson-distribution",
"goodness-of-fit"
] |
1175 | 2 | null | 1173 | 2 | null | If the data are rates: that is number of successes divided by number of trials, then a very elegant method is a funnel plot. For example, see [this](https://web.archive.org/web/20200928130315/https://qualitysafety.bmj.com/content/11/4/390.2.full) (apologies if the link requires a subscription--let me know and I'll find another).
It may be possible to adapt it to other types of data, but I haven't seen any examples.
UPDATE:
Here's a link to an example which doesn't require a subscription (and has a good explanation for how they might be used):
[http://understandinguncertainty.org/fertility](http://understandinguncertainty.org/fertility)
They can be used for non-rate data, by simply plotting mean against standard error, however they may lose some of their simplicity.
The wikipedia article is not great, as it only discusses their use in meta-analyses. I'd argue they could be useful in many other contexts.
| null | CC BY-SA 4.0 | null | 2010-08-03T13:55:22.063 | 2022-11-30T05:28:30.260 | 2022-11-30T05:28:30.260 | 362671 | 495 | null |
1176 | 2 | null | 1173 | 10 | null | Frank Harrell's (most excellent) keynote entitled "Information Allergy" at useR! last month showed alternatives to these: rather than hiding the raw data via the aggregation the bars provide, the raw data is also shown as dots (or points). "Why hide the data?" was Frank's comment.
Given alpha blending, that strikes me as a most sensible suggestion (and the whole talk was full of good, and important, nuggets).
| null | CC BY-SA 2.5 | null | 2010-08-03T13:59:03.973 | 2010-08-03T13:59:03.973 | null | null | 334 | null |
1177 | 2 | null | 1174 | 8 | null | I suppose the easiest way is just to do a chi-squared [Goodness of fit](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test) test.
In fact, here's a nice [java applet](http://home.ubalt.edu/ntsbarsh/Business-stat/otherapplets/PoissonTest.htm) that will do just that!
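If you'd rather do it by hand in R, a rough version looks like this (assuming your counts are in x; in practice you would pool sparse cells and drop one further degree of freedom for the estimated mean):
```
lambda <- mean(x)
obs <- table(factor(x, levels=0:max(x)))         # observed frequencies for 0, 1, ..., max(x)
p   <- dpois(0:max(x), lambda)
p   <- c(p[-length(p)], 1 - sum(p[-length(p)]))  # lump the upper tail into the last cell
chisq.test(obs, p=p)
```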
| null | CC BY-SA 2.5 | null | 2010-08-03T14:14:54.700 | 2010-08-03T14:14:54.700 | null | null | 8 | null |
1178 | 2 | null | 1174 | 9 | null | You can use the dispersion (ratio of variance to the mean) as a test statistic, since the Poisson should give a dispersion of 1. [Here is a link](http://www.stats.uwo.ca/faculty/aim/2004/04-259/notes/DispersionTests.pdf) to how to use it as a model test.
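A minimal version of that test in R (the index-of-dispersion test; under the Poisson null the statistic is approximately chi-squared with n-1 degrees of freedom -- here assuming your sample is in x):
```
n <- length(x)
D <- (n - 1)*var(x)/mean(x)            # index of dispersion
pchisq(D, df=n-1, lower.tail=FALSE)    # one-sided p-value for over-dispersion
```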
| null | CC BY-SA 2.5 | null | 2010-08-03T14:21:43.390 | 2010-08-03T14:21:43.390 | null | null | 378 | null |
1179 | 2 | null | 1173 | 2 | null | I'm curious as to why you don't like these plots. I use them all the time. Without wanting to state the blooming obvious, they allow you to compare the means of different groups and see if their 95% CIs overlap (i.e., whether the true means are likely to be different).
It's important to get a balance of simplicity and information for different purposes, I guess. But when I use these plots I am saying: "these two groups are different from each other in some important way" [or not].
Seems pretty great to me, but I'd be interested to hear counter-examples. I suppose implicit in the use of the plot is that the data do not have a bizarre distribution which renders the mean invalid or misleading.
| null | CC BY-SA 2.5 | null | 2010-08-03T14:26:33.223 | 2010-08-03T14:26:33.223 | null | null | 199 | null |
1180 | 2 | null | 1174 | 12 | null | For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned here is a formalization of that notion.
If your variance is much larger than your mean, as is commonly the case, you might want to try a negative binomial distribution next.
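A quick check along these lines, assuming your sample is in x:
```
c(mean=mean(x), var=var(x))       # for Poisson data these should be roughly equal
library(MASS)
fitdistr(x, "Poisson")
fitdistr(x, "negative binomial")  # compare the fits, e.g. via logLik()
```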
| null | CC BY-SA 2.5 | null | 2010-08-03T14:39:16.187 | 2010-08-03T14:39:16.187 | null | null | 319 | null |
1181 | 2 | null | 1174 | 15 | null | Here is a sequence of R commands that may be helpful. Feel free to comment or edit if you spot any mistakes.
```
set.seed(1)
x.poi<-rpois(n=200,lambda=2.5) # a vector of random variables from the Poisson distr.
hist(x.poi,main="Poisson distribution")
lambda.est <- mean(x.poi) ## estimate of parameter lambda
(tab.os<-table(x.poi)) ## table with empirical frequencies
freq.os<-vector()
for(i in 1: length(tab.os)) freq.os[i]<-tab.os[[i]] ## vector of emprical frequencies
freq.ex<-(dpois(0:max(x.poi),lambda=lambda.est)*200) ## vector of fitted (expected) frequencies
acc <- mean(abs(freq.os-trunc(freq.ex))) ## absolute goodness of fit index acc
acc/mean(freq.os)*100 ## relative (percent) goodness of fit index
h <- hist(x.poi ,breaks=length(tab.os))
xhist <- c(min(h$breaks),h$breaks)
yhist <- c(0,h$density,0)
xfit <- min(x.poi):max(x.poi)
yfit <- dpois(xfit,lambda=lambda.est)
plot(xhist,yhist,type="s",ylim=c(0,max(yhist,yfit)), main="Poison density and histogram")
lines(xfit,yfit, col="red")
#Perform the chi-square goodness of fit test
#In case of count data we can use goodfit() included in vcd package
library(vcd) ## loading vcd package
gf <- goodfit(x.poi,type= "poisson",method= "MinChisq")
summary(gf)
plot(gf,main="Count data vs Poisson distribution")
```
| null | CC BY-SA 2.5 | null | 2010-08-03T14:52:44.720 | 2010-08-03T14:52:44.720 | null | null | 339 | null |
1182 | 2 | null | 1173 | 7 | null | From a psychological perspective, I advocate plotting the data plus your uncertainty about the data. Thus, in a plot like you show, I would never bother with extending the bars all the way to zero, which only serves to minimize the eye's ability to distinguish differences in the range of the data.
Additionally, I'm frankly anti-bargraph; bar graphs map two variables to the same aesthetic attribute (x-axis location), which can cause confusion. A better approach is to avoid redundant aesthetic mapping by mapping one variable to the x-axis and another variable to another aesthetic attribute (eg. point shape or color or both).
Finally, in your plot above, you only include error bars above the value, which hinders one's ability to compare the intervals of uncertainty relative to bars above and below the value.
Here's how I would plot the data (via the ggplot2 package). Note that I add lines connecting points in the same series; some argue that this is only appropriate when the series across which the lines are connected are numeric (as seems to be in this case), however as long as there is any reasonable ordinal relationship among the levels of the x-axis variable, I think connecting lines are useful for helping the eye associate points across the x-axis. This can become particularly useful for detecting interactions, which really stand out with lines.
```
library(ggplot2)
a = data.frame(names,prevs,se)
a$let = substr(a$names,1,1)
a$num = substr(a$names,2,2)
ggplot(data = a)+
layer(
geom = 'point'
, mapping = aes(
x = num
, y = prevs
, colour = let
, shape = let
)
)+
layer(
geom = 'line'
, mapping = aes(
x = num
, y = prevs
, colour = let
, linetype = let
, group = let
)
)+
layer(
geom = 'errorbar'
, mapping = aes(
x = num
, ymin = prevs-se
, ymax = prevs+se
, colour = let
)
, alpha = .5
, width = .5
)
```
[](https://i.stack.imgur.com/9buX9.png)
| null | CC BY-SA 4.0 | null | 2010-08-03T15:08:16.867 | 2019-08-21T18:48:29.680 | 2019-08-21T18:48:29.680 | 162986 | 364 | null |
1183 | 2 | null | 1173 | 2 | null | I would use boxplots here; clean, meaningful, nonparametric... Or [vioplot](http://cran.r-project.org/web/packages/vioplot/index.html) if the distribution is more interesting.
| null | CC BY-SA 2.5 | null | 2010-08-03T15:19:15.750 | 2010-08-03T15:19:15.750 | null | null | null | null |
1184 | 1 | 1187 | null | 5 | 3085 | Could you recommend an introductory reference to index decomposition analysis, including
- different methods (e.g. methods linked to the Laspeyre index and methods linked to the Divisa index)
- properties of decomposition methods which can be used to compare the different methods
- implementations of methods, e.g. in R
? Any hint appreciated.
(could not tag as index-decomposition due to missing reputation)
| Introduction to index decomposition analysis | CC BY-SA 2.5 | null | 2010-08-03T16:26:48.983 | 2011-07-09T16:47:49.890 | 2011-04-13T08:20:20.310 | null | 573 | [
"index-decomposition"
] |
1185 | 2 | null | 1164 | 30 | null | I would suggest that it's a lag in teaching. Most people learn statistics at college or university. If statistics is not your first degree and you instead did a mathematics or computer science degree, then you probably only covered the fundamental statistics modules:
- Probability
- Hypothesis testing
- Regression
This means that when faced with a problem you try and use what you know to solve the problem.
- Data isn't Normal - take logs.
- Data has annoying outliers - remove them.
Unless you stumble across something else, then it's difficult to do something better. It's really hard using Google to find something if you don't know what it's called!
I think with all techniques it will take a while before the newer techniques filter down. How long did it take standard hypothesis tests to be part of a standard statistics curriculum?
BTW, with a statistics degree there will still be a lag in teaching - just a shorter one!
| null | CC BY-SA 3.0 | null | 2010-08-03T17:03:58.947 | 2018-04-19T10:55:00.923 | 2018-04-19T10:55:00.923 | 22047 | 8 | null |
1186 | 2 | null | 1174 | 3 | null | You can draw a single figure in which the observed and expected frequencies are drawn side by side. If the distributions are very different and you also have a variance-mean ratio bigger than one, then a good candidate is the negative binomial. Read the section [Frequency Distributions](http://books.google.com/books?id=8D4HVx0apZQC&lpg=PP1&dq=the%20r%20book&hl=el&pg=PA536#v=onepage&q&f=false) from `The R Book`. It deals with a very similar problem.
| null | CC BY-SA 2.5 | null | 2010-08-03T17:17:31.300 | 2010-08-03T17:17:31.300 | null | null | 632 | null |
1187 | 2 | null | 1184 | 4 | null | This thesis may be a starting place: [A Comparative Analysis of Index Decomposition Methods](https://scholarbank.nus.edu.sg/bitstream/handle/10635/14229/GranelF.pdf?sequence=1), by Frédéric Granel. It should serve as a basic introduction to IDA and the Laspeyre index, but it does not include the Divisa index or any code implementing the methods.
For more detail, including the Divisa index, you might try [Properties and linkages of some index decomposition analysis methods](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V2W-4WR2C4X-3&_user=10&_coverDate=11%2F30%2F2009&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1420130047&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=5872e15ee97331e2e332f10a8cefb041). As for implementations in R, there seems to be no package for IDA specifically, but [micEcon](http://cran.r-project.org/web/packages/micEcon/micEcon.pdf) and [micEconAids](http://cran.r-project.org/web/packages/micEconAids/micEconAids.pdf.) have parts of it, by way of demand analysis.
Best of luck.
| null | CC BY-SA 2.5 | null | 2010-08-03T17:24:56.913 | 2010-08-03T17:30:07.610 | 2010-08-03T17:30:07.610 | 39 | 39 | null |
1188 | 2 | null | 1123 | 1 | null | Are you entirely sure that they're using the normal distribution directly? It's very common to use transformed responses to model success rates, but this involves passing through a link function to move from a Gaussian random variable to a value in [0,1]. A commonly used transform is the probit one, which is just the inverse Gaussian CDF. (So you'd end up with something like $\Phi^{-1}(p) = X\beta + \sigma$, where $\Phi$ is the Gaussian CDF).
If you're actually using a normal distribution directly to model a result in [0,1], then it strikes me that the variance would have to be so small -- especially for p near 0 or 1 -- that you'd nearly always overfit the model.
| null | CC BY-SA 2.5 | null | 2010-08-03T19:26:55.673 | 2010-08-03T19:26:55.673 | null | null | 61 | null |
1189 | 2 | null | 881 | 4 | null | You can also use Edgeworth series, if your random variable has a finite mean and variance, which expands the CDF of your random variable in terms of the Gaussian CDF. At first glance it's not quite as tidy conceptually as using a mixture model, but the derivation is quite pretty and it gives you a closed form with very fast decay in the tail terms.
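To give a flavour: writing $z = (x-\mu)/\sigma$ and $\gamma_1$ for the skewness, the leading correction has the form $F(x) \approx \Phi(z) - \phi(z)\,\frac{\gamma_1}{6}(z^2 - 1)$, with higher-order terms built from higher cumulants and Hermite polynomials.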
| null | CC BY-SA 2.5 | null | 2010-08-03T19:37:30.837 | 2010-08-03T19:37:30.837 | null | null | 61 | null |
1190 | 2 | null | 1099 | 1 | null | Chi-square tests do not seem appropriate. As others said, provided there are a reasonable number of different rates, you could treat the data as continuous and do regression or ANOVA. You would then want to look at the distribution of the residuals.
| null | CC BY-SA 2.5 | null | 2010-08-03T19:51:33.140 | 2010-08-03T19:51:33.140 | null | null | 686 | null |
1191 | 2 | null | 1016 | 1 | null | I also like Rob's answer. And, if you happen to use SAS rather than R, you can use PROC GLMSELECT for models that would be done with PROC GLM, although it works well for some other models, as well. See
Flom and Cassell "Stopping Stepwise: Why Stepwise Selection Methods are Bad and What you Should Use" presented at various groups, most recently, NESUG 2009
| null | CC BY-SA 2.5 | null | 2010-08-03T19:57:01.447 | 2010-08-03T19:57:01.447 | null | null | 686 | null |
1192 | 2 | null | 1174 | 1 | null | Yet another way to test this is with a quantile-quantile plot. In R, qqplot will plot your sample against theoretical quantiles -- here, quantiles of a Poisson distribution with the same mean as your data.
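For instance, assuming your sample is in x:
```
lambda <- mean(x)
qqplot(qpois(ppoints(length(x)), lambda), x,
       xlab="Theoretical Poisson quantiles", ylab="Sample quantiles")
abline(0, 1)   # points should lie near this line if the Poisson is a reasonable fit
```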
| null | CC BY-SA 3.0 | null | 2010-08-03T20:00:30.287 | 2013-05-15T12:38:18.147 | 2013-05-15T12:38:18.147 | 686 | 686 | null |
1193 | 2 | null | 726 | 3 | null | A variation on the Fisher quotation given [here](https://stats.stackexchange.com/questions/726/famous-statistician-quotes/739#739) is
>
Hiring a statistician after the data have been collected is like hiring a physician when your patient is in the morgue. He may be able to tell you what went wrong, but he is unlikely to be able to fix it.
But I heard this attributed to Box, not Fisher.
| null | CC BY-SA 2.5 | null | 2010-08-03T20:07:57.437 | 2010-08-07T14:44:30.670 | 2017-04-13T12:44:37.420 | -1 | 686 | null |
1194 | 1 | null | null | 77 | 40283 | Back in April, I attended a talk at the UMD Math Department Statistics group seminar series called "To Explain or To Predict?". The talk was given by [Prof. Galit Shmueli](http://www.rhsmith.umd.edu/faculty/gshmueli/web/html/) who teaches at UMD's Smith Business School. Her talk was based on research she did for a paper titled ["Predictive vs. Explanatory Modeling in IS Research"](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1112893), and a follow up working paper titled ["To Explain or To Predict?"](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1351252).
Dr. Shmueli's argument is that the terms predictive and explanatory in a statistical modeling context have become conflated, and that statistical literature lacks a thorough discussion of the differences. In the paper, she contrasts both and talks about their practical implications. I encourage you to read the papers.
The questions I'd like to pose to the practitioner community are:
- How do you define a predictive exercise vs an explanatory/descriptive
one? It would be useful if you could talk about the specific
application.
- Have you ever fallen into the trap of using one when meaning to use the other? I certainly have. How do you know which one to use?
| Practical thoughts on explanatory vs. predictive modeling | CC BY-SA 4.0 | null | 2010-08-03T20:19:57.303 | 2020-01-10T11:53:15.637 | 2018-11-13T20:37:42.233 | 226655 | 11 | [
"predictive-models"
] |