Schema (16 fields per record, in this order):
Id: string, length 1-6
PostTypeId: string, 7 classes
AcceptedAnswerId: string, length 1-6
ParentId: string, length 1-6
Score: string, length 1-4
ViewCount: string, length 1-7
Body: string, length 0-38.7k
Title: string, length 15-150
ContentLicense: string, 3 classes
FavoriteCount: string, 3 classes
CreationDate: string, length 23
LastActivityDate: string, length 23
LastEditDate: string, length 23
LastEditorUserId: string, length 1-6
OwnerUserId: string, length 1-6
Tags: list
3601
2
null
3595
4
null
It is hard to give a good answer without knowing what kind of models you have in mind. For linear regression, I have successfully used the biglm package in R.
null
CC BY-SA 2.5
null
2010-10-14T13:44:36.413
2010-10-14T13:44:36.413
null
null
439
null
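For concreteness, here is a minimal sketch of the chunked-fitting pattern the answer above refers to, assuming the biglm package is available; the data set, formula, and chunking are stand-ins rather than anything from the original post.

```r
library(biglm)

# stand-in for a data set that arrives (or is read) in chunks
chunks <- split(mtcars, rep(1:4, length.out = nrow(mtcars)))

fit <- biglm(mpg ~ wt + hp, data = chunks[[1]])   # fit on the first chunk
for (ch in chunks[-1]) fit <- update(fit, ch)     # fold in the remaining chunks

summary(fit)
```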
3602
2
null
3589
0
null
Another way to solve such a problem is to impute the missing data for the shorter series using a time series model which may or may not make sense in a particular context. In your context, imputing the stock prices into the past would mean that you are asking the following counter-factual question: What would be the stock price for company X had it gone public n years in the past instead of when it actually went public? Such a data imputation could potentially be done by taking into account stock prices of related companies, general market trends etc. But, such an analysis may not make sense or may not be needed given the goals of your project.
null
CC BY-SA 2.5
null
2010-10-14T13:49:41.147
2010-10-14T13:49:41.147
null
null
null
null
3603
2
null
3599
1
null
Your problem is one of multi-collinear regressors (since B and C are correlated). I would suggest that you look at the answers to the question: [Dealing with correlated regressors](https://stats.stackexchange.com/q/3561/28). The following paper may also be relevant in your context: [Using principal components for estimating logistic regression with high-dimensional multicollinear data](http://dx.doi.org/10.1016/j.csda.2005.03.011)
null
CC BY-SA 2.5
null
2010-10-14T13:57:04.500
2010-10-14T13:57:04.500
2017-04-13T12:44:24.947
-1
null
null
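A rough sketch of the principal-components idea behind the paper linked above: replace the correlated regressors with one or two leading principal-component scores before fitting the logistic model. The data, coefficients, and the choice to keep a single component are my own illustrative assumptions.

```r
set.seed(1)
B <- rnorm(200)
C <- 0.9 * B + rnorm(200, sd = 0.3)             # B and C are strongly correlated
y <- rbinom(200, 1, plogis(0.5 * B + 0.5 * C))  # binary outcome

pc  <- prcomp(cbind(B, C), scale. = TRUE)       # principal components of the regressors
fit <- glm(y ~ pc$x[, 1], family = binomial)    # keep only the first component here
summary(fit)
```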
3604
2
null
3592
4
null
I'll concentrate on your example question: does class 1 of the old classification scheme have a better or worse survival than class 1 of the updated classification scheme? We can form four mutually exclusive groups of patients: (a) Patients who weren't in class 1 under either scheme. Clearly, they don't help us answer the question. (b) Patients who were in class 1 under both schemes. Clearly, they don't help us answer the question either. (c) Patients who were in class 1 under the old scheme, but aren't in class 1 under the new scheme. (d) Patients who weren't in class 1 under the old scheme, but are in class 1 under the new scheme. Compare survival in groups (c) and (d). If survival is better in (c), then class 1 of the old scheme has better survival than class 1 of the new scheme. If survival is better in (d), then class 1 of the new scheme has better survival than class 1 of the old scheme.
null
CC BY-SA 2.5
null
2010-10-14T13:58:37.590
2010-10-14T13:58:37.590
null
null
449
null
3605
2
null
3589
10
null
No amount of imputation, time series analysis, GARCH models, interpolation, extrapolation, or other fancy algorithms will do anything to create information where it does not exist (although they can create that illusion ;-). The history of Y's price before X went public is useless for assessing their subsequent correlation. Sometimes (often preparatory to an IPO) analysts use internal accounting information (or records of private stock transactions) to retrospectively reconstruct hypothetical prices for X's stock before it went public. Conceivably such information could be used to enhance estimates of correlation, but given the extremely tentative nature of such backcasts, I doubt the effort would be of any help except initially when there are only a few days or weeks of prices for X available.
null
CC BY-SA 2.5
null
2010-10-14T14:07:20.877
2010-10-14T14:07:20.877
null
null
919
null
3606
2
null
3599
2
null
You have clearly stated a part of your model: > C depends on B, in that values of B above a threshold will change C. The change in C will furthermore reduce B in the next measurement. By "next measurement" I understand you mean next in time. Let's index time as $t = 0, 1, 2, \ldots$. Then the dependence of C on B sounds like a contemporaneous one. If we adopt a simple linear model (which can readily be expanded to incorporate covariates and variable but predetermined thresholds) and let $u$ be a constant threshold, $$C(t) = \beta_1 I_{B(t) \gt u} + \epsilon$$ with random deviations $\epsilon$ (which I won't bother to index; you know the drill). Here, $I$ is the indicator function. Also, $$B(t+1) = B(t) - \beta_2 (C(t) - C(t-1)) + \delta$$ and again $\delta$ represents random (independent) deviations. I'm stuck here because you haven't specified more precisely just how B changes in response to a change in C; I have merely provided one possible interpretation. Finally, $$A(t) = \beta_3 B(t) + \beta_4 C(t) + \beta_5 + \gamma$$ with independent random deviations $\gamma$. The presence of that indicator function in the first formula is problematic: it makes this a nonlinear problem. However, this seems to be an essential feature of the situation; I would be loth to ignore it in the name of simplicity or ease of calculation (although both are important considerations). The lags $B(t+1) - B(t)$ and $C(t) - C(t-1)$ also point towards autoregressive models, another complication. Because of these issues, the most tractable approach might be with Bayesian techniques: parameterize the distributions of $\epsilon$, $\delta$, and $\gamma$, provide priors for those parameters and the $\beta$s, and let the machinery (e.g., WinBUGS or RBUGS) roll.
null
CC BY-SA 2.5
null
2010-10-14T14:21:48.683
2010-10-14T14:21:48.683
null
null
919
null
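To make the equations in the answer above concrete, here is one way to simulate a single realization of that system; the coefficient values, noise scales, and threshold are arbitrary, and the update order reflects just one possible reading of the model.

```r
set.seed(1)
n_t <- 100; u <- 0.5                           # series length and threshold
b1 <- 2; b2 <- 0.8; b3 <- 1; b4 <- -0.5; b5 <- 0.2

B <- C <- numeric(n_t)
B[1:2] <- rnorm(2)                                        # arbitrary start-up values
C[1:2] <- b1 * (B[1:2] > u) + rnorm(2, sd = 0.2)          # epsilon
for (t in 3:n_t) {
  B[t] <- B[t - 1] - b2 * (C[t - 1] - C[t - 2]) + rnorm(1, sd = 0.2)  # delta
  C[t] <- b1 * (B[t] > u) + rnorm(1, sd = 0.2)                        # epsilon
}
A <- b3 * B + b4 * C + b5 + rnorm(n_t, sd = 0.2)                      # gamma
```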
3607
2
null
3599
2
null
Not easy at all. This starts to sound like the sort of thing that [Jamie Robins](http://en.wikipedia.org/wiki/James_Robins) and colleagues have done a lot of work on. To quote the start of the abstract of one of their papers: "In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment." (Robins JM, Hernán MA, Brumback B. Marginal Structural Models and Causal Inference in Epidemiology. Epidemiology 2000;11:550-560. [http://www.jstor.org/stable/3703997](http://www.jstor.org/stable/3703997)) In their example, low CD4 count means you're more likely to get anti-HIV drugs, which (hopefully) increase your CD4 count. Sounds very much like your setting. There's been a lot of work in this area recently (i.e. more recently than that paper), and it's not easy to get to grips with from the journal papers. [Hernán & Robins are writing a book](http://www.hsph.harvard.edu/faculty/miguel-hernan/causal-inference-book/), which should help a lot -- it's not finished yet, but there's a draft of the first 10 chapters available.
null
CC BY-SA 2.5
null
2010-10-14T14:23:57.400
2010-10-14T14:23:57.400
null
null
449
null
3609
1
null
null
4
2054
It seems to me that, rather than using non-robust estimation methods with robust standard errors, it would be better to use robust estimation from the outset. I wonder what other people think.
Robust standard errors for panel data vs robust estimation for panel data
CC BY-SA 2.5
null
2010-10-14T17:57:07.050
2010-10-15T04:27:41.783
2010-10-14T17:59:07.143
930
1585
[ "robust", "panel-data" ]
3610
2
null
3595
3
null
We trained on 3.5M observations with 44 features using 64-bit R on an EC2 instance with 32 GB of RAM and 4 cores. We used random forests, and it worked well. Note that we had to preprocess/manipulate the data before training.
null
CC BY-SA 2.5
null
2010-10-14T18:00:23.600
2010-10-14T18:00:23.600
null
null
616
null
3611
1
3612
null
9
5206
Does the X (hazard) variable in Cox proportional hazards regression analysis always have to be time? If not, could you provide an example, please? Can the age of a cancer patient be a hazard variable? If so, can it be interpreted as the risk of getting cancer at a certain age? Would Cox regression be a legitimate analysis to study the association between gene expression and age?
Cox regression and time scale
CC BY-SA 3.0
null
2010-10-14T18:40:10.550
2014-05-30T12:56:22.647
2014-05-30T12:56:22.647
28740
1586
[ "regression", "survival", "hazard" ]
3612
2
null
3611
8
null
Usually, age at baseline is used as a covariate (because it is often associated to disease/death), but it can be used as your time scale as well (I think it is used in some longitudinal studies, because you need to have enough people at risk along the time scale, but I can't remember actually -- just found these slides about [Analysing cohort studies assuming a continuous time scale](http://staff.pubhealth.ku.dk/~bxc/SPE.2002/Slides/survival.pdf) which talk about cohort studies). In the interpretation, you should replace event time by age, and you might include age at diagnosis as a covariate. This would make sense when you study age-specific mortality of a particular disease (as illustrated in these [slides](http://www.dina.dk/~torbenm/DINA/survival/L4.pdf)). Maybe this article is interesting since it contrasts the two approaches, time-on-study vs. chronological age: [Time Scales in Cox Model: Effect of Variability Among Entry Ages on Coefficient Estimates](http://stat.fsu.edu/techreports/M996.pdf). Here is another paper: Cheung, YB, Gao, F, and Khoo, KS (2003). [Age at diagnosis and the choice of survival analysis methods in cancer epidemiology](http://www.jclinepi.com/article/S0895-4356%2802%2900536-X/abstract). Journal of Clinical Epidemiology, 56(1), 38-43. But there are certainly better papers.
null
CC BY-SA 2.5
null
2010-10-14T19:05:25.133
2010-10-14T19:05:25.133
null
null
930
null
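A small sketch contrasting the two time scales with the survival package; the tiny data frame is invented purely to show the syntax (delayed entry / left truncation puts the model on the age scale).

```r
library(survival)

d <- data.frame(
  age_entry = c(50, 55, 60, 48, 52, 61),   # age at study entry
  age_exit  = c(58, 60, 67, 55, 64, 70),   # age at event or censoring
  event     = c(1, 0, 1, 1, 0, 1),
  exposed   = c(1, 0, 1, 0, 1, 0)
)

# chronological age as the time scale (delayed entry / left truncation)
fit_age <- coxph(Surv(age_entry, age_exit, event) ~ exposed, data = d)

# time-on-study as the time scale, with baseline age as a covariate
d$followup <- d$age_exit - d$age_entry
fit_tos <- coxph(Surv(followup, event) ~ exposed + age_entry, data = d)
```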
3613
2
null
3611
7
null
No, it doesn't always have to be time. Many censored responses can be modeled with survival analysis techniques. In his book Nondetects and Data Analysis, Dennis Helsel advocates using the negative of a concentration in place of time (in order to cope with nondetects, which when negated become right-censored values). A [synopsis](http://www.practicalstats.com/nada/nada/downloads_files/NADAforR_Examples.pdf) is available on the Web (pdf format) and an R package, [NADA](http://water.usgs.gov/software/NADA_for_R/), implements this.
null
CC BY-SA 2.5
null
2010-10-14T19:10:51.673
2010-10-14T19:10:51.673
null
null
919
null
3614
1
null
null
27
48355
I want to calculate the probability distribution for the total of a combination of dice. I remember that the probability of a given total is the number of combinations that sum to that total divided by the total number of combinations (assuming the dice have a uniform distribution). What are the formulas for: the total number of combinations, and the number of combinations that sum to a certain total?
How to easily determine the results distribution for multiple dice?
CC BY-SA 2.5
null
2010-10-14T19:23:39.887
2019-10-29T03:58:46.750
2010-10-14T22:40:19.860
null
1456
[ "probability", "dice" ]
3615
2
null
3614
5
null
Approximate Solution I explained the exact solution earlier (see below). I will now offer an approximate solution which may suit your needs better. Let: $X_i$ be the outcome of a roll of an $s$-faced die, where $i=1, \ldots, n$. $S$ be the total of all $n$ dice. $\bar{X}$ be the sample average. By definition, we have: $\bar{X} = \frac{\sum_iX_i}{n}$ In other words, $\bar{X} = \frac{S}{n}$ The idea now is to visualize the process of observing the $X_i$ as the outcome of throwing the same die $n$ times instead of as the outcome of throwing $n$ dice. Thus, we can invoke the central limit theorem (ignoring technicalities associated with going from a discrete distribution to a continuous one): as $n \rightarrow \infty$, $\bar{X} \sim N(\mu, \sigma^2/n)$ where $\mu = (s+1)/2$ is the mean of the roll of a single die and $\sigma^2 = (s^2-1)/12$ is the associated variance. The above is obviously an approximation, as the underlying distribution of the $X_i$ has discrete support. But $S = n \bar{X}$. Thus, we have: $S \sim N(n \mu, n \sigma^2)$. Exact Solution [Wikipedia](http://en.wikipedia.org/w/index.php?title=Dice&oldid=393538650#Probability) has a brief explanation of how to calculate the required probabilities. I will elaborate a bit more as to why the explanation there makes sense. To the extent possible I have used notation similar to the Wikipedia article. Suppose that you have $n$ dice each with $s$ faces and you want to compute the probability that on a single roll of all $n$ dice the total adds up to $k$. The approach is as follows: Define: $F_{s,n}(k)$: Probability that you get a total of $k$ on a single roll of $n$ dice with $s$ faces. By definition, we have: $F_{s,1}(k) = \frac{1}{s}$ The above states that if you just have one die with $s$ faces, the probability of obtaining a total $k$ between $1$ and $s$ is the familiar $\frac{1}{s}$. Consider the situation when you roll two dice: you can obtain a sum of $k$ as follows: the first roll is between $1$ and $k-1$ and the corresponding roll for the second one is between $k-1$ and $1$. Thus, we have: $F_{s,2}(k) = \sum_{i=1}^{i=k-1}{F_{s,1}(i) F_{s,1}(k-i)}$ Now consider a roll of three dice: you can get a sum of $k$ if you roll $1$ to $k-2$ on the first die and the sum on the remaining two dice is between $k-1$ and $2$. Thus, $F_{s,3}(k) = \sum_{i=1}^{i=k-2}{F_{s,1}(i) F_{s,2}(k-i)}$ Continuing the above logic, we get the recursion equation: $F_{s,n}(k) = \sum_{i=1}^{i=k-n+1}{F_{s,1}(i) F_{s,n-1}(k-i)}$ See the Wikipedia link for more details.
null
CC BY-SA 3.0
null
2010-10-14T19:58:44.760
2017-02-28T01:24:51.983
2017-02-28T01:24:51.983
805
null
null
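The recursion and the normal approximation above are easy to check numerically; the following is my own small R implementation, not code from the answer.

```r
# exact: direct translation of the recursion F_{s,n}(k)
F_sn <- function(s, n, k) {
  if (k < n || k > n * s) return(0)
  if (n == 1) return(1 / s)
  sum(sapply(1:min(s, k - n + 1), function(i) (1 / s) * F_sn(s, n - 1, k - i)))
}
F_sn(6, 3, 14)                    # 15/216 = 0.0694...

# approximate: CLT with a continuity correction
n <- 3; s <- 6
mu <- n * (s + 1) / 2; sdev <- sqrt(n * (s^2 - 1) / 12)
pnorm(14.5, mu, sdev) - pnorm(13.5, mu, sdev)
```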
3616
1
3617
null
16
44924
I have a simple time series with 5-10 data points per data set at regular intervals. I am wondering what is the best way to determine whether two data sets are different. Should i try t-tests on each data point, or look at the area under the curves or is there some kind of multivariate model that would work better?
What is the best statistical test for a time series?
CC BY-SA 2.5
null
2010-10-14T20:17:42.653
2010-11-05T14:54:01.720
2010-10-15T02:50:22.197
159
1327
[ "time-series", "statistical-significance" ]
3617
2
null
3616
12
null
You will need to specify precisely what you mean by "different". You will also need to specify what assumptions you are willing to make about the serial correlation structure within each time series. With t-tests, you are comparing the mean of each group and you are assuming that the groups consist of independent observations with equal variances (the latter is sometimes relaxed). When testing time series, the assumption of independence is usually not reasonable, but then you need to replace it with a specified correlation structure -- e.g., you might assume that the time series follow AR(1) processes with equal autocorrelation. Consequently, even comparing the means of two or more time series is considerably more difficult than with independent data. I would carefully specify what assumptions I was willing to make about each time series, and what I was wishing to compare, and then use a parametric bootstrap (based on the assumed model) to carry out the test.
null
CC BY-SA 2.5
null
2010-10-14T21:14:33.157
2010-10-14T21:14:33.157
null
null
159
null
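One possible shape of the parametric bootstrap suggested above, under the illustrative assumption that both series are AR(1) with a common autoregressive coefficient and innovation variance; fitting the null model to the concatenated series is a crude simplification, and with 5-10 points per series everything here is approximate.

```r
set.seed(1)
x <- arima.sim(list(ar = 0.5), n = 10) + 2     # toy "observed" series
y <- arima.sim(list(ar = 0.5), n = 10) + 3
obs <- mean(x) - mean(y)                       # statistic: difference in means

# fit a common AR(1) under the null of equal means (crudely, on the pooled series)
fit <- arima(c(x, y) - mean(c(x, y)), order = c(1, 0, 0), include.mean = FALSE)
phi <- coef(fit)["ar1"]; s2 <- fit$sigma2; mu0 <- mean(c(x, y))

boot <- replicate(2000, {
  xb <- arima.sim(list(ar = phi), n = length(x), sd = sqrt(s2)) + mu0
  yb <- arima.sim(list(ar = phi), n = length(y), sd = sqrt(s2)) + mu0
  mean(xb) - mean(yb)
})
mean(abs(boot) >= abs(obs))                    # two-sided bootstrap p-value
```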
3618
2
null
3614
19
null
Exact solutions The number of combinations in $n$ throws is of course $6^n$. These calculations are most readily done using the probability generating function for one die, $$p(x) = x + x^2 + x^3 + x^4 + x^5 + x^6 = x \frac{1-x^6}{1-x}.$$ (Actually this is $6$ times the pgf--I'll take care of the factor of $6$ at the end.) The pgf for $n$ rolls is $p(x)^n$. We can calculate this fairly directly--it's not a closed form but it's a useful one--using the Binomial Theorem: $$p(x)^n = x^n (1 - x^6)^n (1 - x)^{-n}$$ $$= x^n \left( \sum_{k=0}^{n} {n \choose k} (-1)^k x^{6k} \right) \left( \sum_{j=0}^{\infty} {-n \choose j} (-1)^j x^j\right).$$ The number of ways to obtain a sum equal to $m$ on the dice is the coefficient of $x^m$ in this product, which we can isolate as $$\sum_{6k + j = m - n} {n \choose k}{-n \choose j}(-1)^{k+j}.$$ The sum is over all nonnegative $k$ and $j$ for which $6k + j = m - n$; it therefore is finite and has only about $(m-n)/6$ terms. For example, the number of ways to total $m = 14$ in $n = 3$ throws is a sum of just two terms, because $11 = 14-3$ can be written only as $6 \cdot 0 + 11$ and $6 \cdot 1 + 5$: $$-{3 \choose 0} {-3 \choose 11} + {3 \choose 1}{-3 \choose 5}$$ $$= 1 \frac{(-3)(-4)\cdots(-13)}{11!} + 3 \frac{(-3)(-4)\cdots(-7)}{5!}$$ $$= \frac{1}{2} 12 \cdot 13 - \frac{3}{2} 6 \cdot 7 = 15.$$ (You can also be clever and note that the answer will be the same for $m = 7$ by the symmetry 1 <--> 6, 2 <--> 5, and 3 <--> 4 and there's only one way to expand $7 - 3$ as $6 k + j$; namely, with $k = 0$ and $j = 4$, giving $$ {3 \choose 0}{-3 \choose 4} = 15 \text{.}$$) The probability therefore equals $15/6^3 = 5/72$, about 7%. By the time this gets painful, the Central Limit Theorem provides good approximations (at least to the central terms where $m$ is between $\frac{7 n}{2} - 3 \sqrt{n}$ and $\frac{7 n}{2} + 3 \sqrt{n}$: on a relative basis, the approximations it affords for the tail values get worse and worse as $n$ grows large). I see that this formula is given in the Wikipedia article Srikant references but no justification is supplied nor are examples given. If perchance this approach looks too abstract, fire up your favorite computer algebra system and ask it to expand the $n^{\text{th}}$ power of $x + x^2 + \cdots + x^6$: you can read the whole set of values right off. E.g., a Mathematica one-liner is ``` With[{n=3}, CoefficientList[Expand[(x + x^2 + x^3 + x^4 + x^5 + x^6)^n], x]] ```
null
CC BY-SA 4.0
null
2010-10-14T22:29:58.243
2019-10-29T03:58:46.750
2019-10-29T03:58:46.750
7250
919
null
3619
2
null
3595
7
null
There is the [RHIPE](http://www.rhipe.org/) package (R-Hadoop integration). It can make it very easy (with exceptions) to analyze large amounts of data in R.
null
CC BY-SA 3.0
null
2010-10-15T01:03:14.220
2012-03-12T16:11:13.280
2012-03-12T16:11:13.280
6432
1307
null
3620
2
null
3609
2
null
I don't know if I understand you correctly, but still I will give it a shot. I think robust estimation from the outset would be better in most cases. Reason: if your estimation is not robust, outliers might severely affect your estimate, and in general you will be far from the true value. This can also be seen as the confidence interval not containing the true value. In picture form (this can happen: outliers push the non-robust estimate very far from the truth, whereas an estimate found by robust methods would have been closer to the true value): ----------||------------------------[------------------Not Robust Est---------------------]- ........True Value..................<--------CI found by Robust Method-------------->
null
CC BY-SA 2.5
null
2010-10-15T01:22:06.443
2010-10-15T01:22:06.443
null
null
1307
null
3621
2
null
3611
4
null
On the age-scale vs. time-scale issue, chl has some good references and captures the essentials -- in particular, the requirement that the at-risk set contain sufficient subjects from all ages as would arise in a longitudinal study. I would only note that there is no general consensus around this yet, but there is some literature to suggest that age should be preferred as the time scale in certain cases. In particular, if you have a situation where time doesn't accumulate in the same way for all subjects, for example due to exposure to some toxic material, then age may be more appropriate. On the other hand, you can handle that specific example on a time-scale Cox PH model by using age as a time varying covariate -- rather than a fixed covariate at start time. You need to think about the mechanism behind your object of study to figure out which time scale is more appropriate. Sometimes it's worth fitting both models to existing data to see if discrepancies arise and how they might be explained before designing your new study. Finally, the obvious difference in analyzing the two is that on an age-scale, the interpretation of survival is with respect to an absolute scale (age), whereas on a time-scale, it's relative to the start/entry date of the study.
null
CC BY-SA 2.5
null
2010-10-15T02:06:12.473
2010-10-15T02:06:12.473
null
null
251
null
3622
2
null
3609
8
null
They're robust with respect to different things. If you use robust regression to obtain an estimate of fixed effect in panel data, then you're computing an estimate that's resistant to outliers. If you use robust standard errors for your OLS estimator, it's because you suspect that the assumption behind your error model is violated. For example, in panel data, your errors may be autocorrelated and not iid, and your robust standard error offers protection against such phenomena. But, robust standard errors don't guard against outliers, and robust regression doesn't necessarily account for autocorrelation. Though I think there are techniques to provide robust standard errors for robust estimators, you would only use these when both conditions are true: your data have outliers and your error model assumption is violated.
null
CC BY-SA 2.5
null
2010-10-15T02:40:18.180
2010-10-15T04:27:41.783
2010-10-15T04:27:41.783
251
251
null
3623
1
3626
null
3
421
I have a logistic regression (in SAS, for reference) with continuous and categorical predictors (with reference coding), and an interaction term between one of each type (assume for now that the categorical variable in question has three response levels, reference coded to $c_1$ and $c_2$): $logit(p) = a + (continuous terms) + (categorical terms) + b_1 (c_1 x) + b_2 (c_2 x)$ where $b_1$ and $b_2$ are the estimated coefficients from my code. From this I can obviously get an expression for the probability $p$ of my outcome. I want to estimate the average p by plugging in the means of the continuous terms and the proportions of the categorical terms. But what do I do with the interaction term(s)? Do I set $(c_i x) = mean(x)$? Or do I set it to $proportion(c_i)mean(x)$?
Plugging in mean values/proportions to a logistic regression with continuous-discrete interaction
CC BY-SA 2.5
null
2010-10-15T04:44:29.807
2010-10-15T08:27:19.977
2010-10-15T08:27:19.977
null
1144
[ "logistic", "categorical-data", "fitting" ]
3625
2
null
3614
7
null
There's a very neat way of computing the combinations or probabilities in a spreadsheet (such as Excel) that computes the convolutions directly. I'll do it in terms of probabilities and illustrate it for six-sided dice, but you can do it for dice with any number of sides (including adding different ones). (Btw, it's also easy in something like R or MATLAB, which will do convolutions directly; a short R sketch follows this record.) Start with a clean sheet, in a few columns, and move down a bunch of rows from the top (more than 6). - put the value 1 in a cell. That's the probabilities associated with 0 dice. put a 0 to its left; that's the value column - continue down from there with 1,2,3 down as far as you need. - move one column to the right and down a row from the '1'. enter the formula "=sum(" then left-arrow up-arrow (to highlight the cell with 1 in it), hit ":" (to start entering a range) and then up-arrow 5 times, followed by ")/6" and press Enter - so you end up with a formula like =sum(c4:c9)/6 (where here C9 is the cell with the 1 in it). Then copy the formula and paste it to the 5 cells below it. They should each contain 0.16667 (ish). Don't type anything into the empty cells these formulas refer to! - move down 1 and to the right 1 from the top of that column of values and paste ... ... a total of another 11 values. These will be the probabilities for two dice. It doesn't matter if you paste a few too many, you'll just get zeroes. - repeat step 3 for the next column for three dice, and again for four, five, etc. dice. We see here that the probability of rolling $12$ on 4d6 is 0.096451 (if you multiply by $6^4$ you'll be able to write it as an exact fraction). If you're adept with Excel - things like copying a formula from a cell and pasting into many cells in a column - you can generate all tables up to say 10d6 in about a minute or so (possibly faster if you've done it a few times). --- If you want combination counts instead of probabilities, don't divide by 6. If you want dice with different numbers of faces, you can sum $k$ (rather than 6) cells and then divide by $k$. You can mix dice across columns (e.g. do a column for d6 and one for d8 to get the probability function for d6+d8): ![enter image description here](https://i.stack.imgur.com/mnJlr.png)
null
CC BY-SA 3.0
null
2010-10-15T06:19:31.640
2014-07-28T02:19:47.983
2014-07-28T02:19:47.983
805
805
null
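The convolution route the answer mentions ("also easy in something like R") looks like this as a sketch:

```r
conv <- function(p, q) convolve(p, rev(q), type = "open")   # pmf of the sum of two dice

d6 <- rep(1 / 6, 6)                        # one d6: totals 1..6
d8 <- rep(1 / 8, 8)                        # one d8: totals 1..8

p4d6 <- Reduce(conv, list(d6, d6, d6, d6))           # 4d6: totals 4..24
names(p4d6) <- 4:24
round(p4d6["12"], 6)                                 # ~0.096451, as in the example above

p_d6_d8 <- conv(d6, d8)                              # d6 + d8: totals 2..14
names(p_d6_d8) <- 2:14
```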
3626
2
null
3623
3
null
Regardless of the interaction term, this procedure isn't going to estimate the average p, because logit is a nonlinear function so the mean of the logit isn't the same as the logit of the mean. If you want to calculate the expected proportion of positive outcomes in a sample, the easiest way is to calculated the predicted p for each individual and average them.
null
CC BY-SA 2.5
null
2010-10-15T06:39:37.757
2010-10-15T06:39:37.757
null
null
449
null
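A quick sketch of the "predict for each individual, then average" recipe on simulated data; the last line shows why plugging means into the logit gives a different number.

```r
set.seed(42)
dat <- data.frame(x = rnorm(200), g = factor(sample(c("a", "b", "c"), 200, TRUE)))
dat$y <- rbinom(200, 1, plogis(-0.5 + 0.8 * dat$x + (dat$g == "b")))

fit <- glm(y ~ x * g, family = binomial, data = dat)

p_hat <- predict(fit, type = "response")  # predicted probability for each individual
mean(p_hat)                               # expected proportion of positive outcomes
plogis(mean(predict(fit)))                # inverse logit of the mean linear predictor: differs
```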
3627
2
null
3623
4
null
The predicted grand average (see onestop's answer) may not be all that informative - after all, you are fitting a model to understand systematic deviations from it. You can predict your p for any setting of your predictors. Given that you are interested in the effects of the predictors, it would make sense to look at what happens when your continuous x takes values at the first, second and third quartile of the values in your sample, while the categorical predictor varies over the categories you have. Then again, you sound like you have more than one continuous and more than one categorical variable, in which case an enumeration as I proposed in the previous paragraph becomes unintelligible. Better look at and report p for "interesting" or "relevant" predictor settings.
null
CC BY-SA 2.5
null
2010-10-15T07:35:14.613
2010-10-15T07:35:14.613
null
null
1352
null
3628
1
3629
null
9
38461
From the output of a logistic regression in JMP, I read about two binary variables: ``` Var1 estimate -0.1007384 Var2 estimate 0.21528927 ``` and then ``` Odds ratio for Var1 lev1/lev2 1.2232078 reciprocal 0.8175225 Odds ratio for Var2 lev1/lev2 0.6501329 reciprocal 1.5381471 ``` Now I obtain `1.2232078` as `exp(2*0.1007384)`, and similarly for the other odds ratio. So, my question is: why do I have to multiply by two? Isn't the relation between a coefficient c and its odds ratio r simply r = exp(c)?
Relation between logistic regression coefficient and odds ratio in JMP
CC BY-SA 2.5
null
2010-10-15T10:48:41.760
2017-06-16T19:24:59.167
2010-10-15T14:42:37.287
930
1219
[ "logistic", "odds-ratio", "jmp" ]
3629
2
null
3628
15
null
OK, I'll drop a quick response. Your idea is correct in that the regression coefficient is the log of the OR. More precisely, if $b$ is your regression coefficient, $\exp(b)$ is the odds ratio corresponding to a one-unit change in your variable. So, to get back to the adjusted odds, you need to know what the internal coding conventions for your factor levels are. Usually, for a binary variable it is 0/1 or 1/2. But if it happens that your levels are represented as -1/+1 (which I suspect here), then you have to multiply the regression coefficient by 2 when exponentiating. The same would apply if you were working with a continuous variable, like age, and wanted to express the odds for 5 years ($\exp(5b)$) instead of 1 year ($\exp(b)$). Update: I just found this about [JMP coding for nominal variables](http://www.jmp.com/support/faq/jmp2079.shtml) (version < 7).
null
CC BY-SA 2.5
null
2010-10-15T10:54:40.603
2010-10-15T11:08:31.683
2010-10-15T11:08:31.683
930
930
null
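Numerically, the factor of two is just the width of the -1/+1 coding: switching levels moves the coded variable by two units. Using the coefficient reported in the question:

```r
b <- -0.1007384          # reported coefficient for Var1
exp(b)                   # OR per one unit of the internal -1/+1 code
exp(2 * b)               # OR for switching level (a move of two units): ~0.8175
1 / exp(2 * b)           # its reciprocal: ~1.2232, matching the JMP output
```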
3630
1
null
null
5
435
I'm trying to evaluate a path dependent function, $f(r_t)$, on a [Cox-Ingersoll-Ross](https://en.wikipedia.org/wiki/Cox%E2%80%93Ingersoll%E2%80%93Ross_model) process: $$dr_t = \theta (\mu - r_t)dt + \sigma \sqrt r_t dW_t$$ by Monte Carlo simulation. Could anyone suggest and explain any effective variance reduction techniques that could be used on this process? E.g. importance sampling, stratified sampling etc. --- Original Question: Is anybody aware of advanced variance reduction techniques (i.e. importance sampling or stratified sampling) for the simulation of [Cox-Ingersoll-Ross process](http://en.wikipedia.org/wiki/CIR_process)? Ideally I would like to see a concrete implementation if available... Regards
CIR Process-Variance reduction
CC BY-SA 3.0
null
2010-10-15T11:00:45.113
2016-12-09T08:31:15.880
2016-12-09T08:31:15.880
113090
1443
[ "r", "references", "markov-chain-montecarlo", "stochastic-processes", "methodology" ]
3631
2
null
3611
3
null
Per the OP's request, here's another application. I have seen survival analysis used in a spatial context (although obviously different than measuring environmental substances, [mentioned](https://stats.stackexchange.com/questions/3611/cox-regression-and-time-scale/3613#3613) by whuber) to model the distance between events in space. Here's one example in [criminology](http://goo.gl/mtk8) and here is one in [epidemiology](http://dx.doi.org/10.1016/S0277-9536(99)00349-4). The reasoning behind using survival analysis to measure the distance between events is not per se an issue of censoring (although censoring can definitely occur in a spatial context); it is more because of the similar distributional characteristics of time-to-event and distance-between-events data (i.e., they both have similar types of error structures (frequently distance decay) that violate OLS, so the non-parametric solutions are ideal for both). --- Because of my poor citation practices I had to spend an hour finding the correct link/reference to the link above. For the example in criminology: Kikuchi, George, Mamoru Amemiya, Tomonori Saito, Takahito Shimada & Yutaka Harada. 2010. [A Spatio-Temporal analysis of near repeat victimization in Japan](http://www.ucl.ac.uk/jdi/events/mapping-conf/conf-2010/conf-pres2010/pres20). 8th National Crime Mapping Conference. Jill Dando Institute of Crime Science. PDF currently available at the referenced webpage. In epidemiology: Reader, Steven. 2000. Using survival analysis to study spatial point patterns in geographical epidemiology. [Social Science & Medicine 50(7-8): 985-1000.](http://dx.doi.org/10.1016/S0277-9536(99)00349-4)
null
CC BY-SA 3.0
null
2010-10-15T12:01:27.760
2011-10-20T19:51:54.067
2017-04-13T12:44:33.237
-1
1036
null
3632
1
3665
null
2
260
Suppose we need to take an action on a population with income (x) greater than $5,000. Income is not observed directly. Should we use regression to estimate x, or should we use logistic regression to estimate the probability of x > 5000 directly? (What are the drawbacks/advantages of each method?) Edit: Yes - by logistic I meant logistic regression. The other variables I have are financial history, demographic, and credit bureau variables. For example, the balance, utilization, gender, # cars, house owned or not, external card balance, external # bad debts, etc. Thanks
Should we regress x or use logistic regression on x>5000
CC BY-SA 2.5
null
2010-10-15T14:14:33.220
2010-10-17T04:02:18.357
2010-10-16T04:50:04.427
994
994
[ "regression", "logistic" ]
3633
2
null
3616
6
null
Maybe repeated-measures ANOVA is what you want. It allows you to compare the subjects (inter-subject factors) while taking into account the correlated structure of the "time series" per subject (intra-subject factor). It is an easy but dated method and can be found in the context of "general linear models"; it requires some additional assumptions (e.g., sphericity). Another way could be mixed linear models, which allow for more general correlation structures (even AR(1), as Rob suggested) and unbalanced data.
null
CC BY-SA 2.5
null
2010-10-15T15:08:03.417
2010-10-15T15:08:03.417
null
null
1573
null
3634
1
null
null
9
6300
I have to analyze a factorial design with five factors (one of them nested in another) and numeric responses. I would like to perform a nonparametric ANOVA, but of course I can use neither the Kruskal-Wallis nor the Friedman test (I have replicated measures). Is there a command or code in R that could help me? Thank you! Stefania
Multi-way nonparametric anova
CC BY-SA 2.5
null
2010-10-15T15:42:56.067
2011-03-31T16:14:14.503
2010-10-15T16:41:33.940
null
null
[ "r", "anova", "nonparametric" ]
3635
2
null
3419
4
null
I might be misunderstanding your goals here, but to me it sounds like a [multi-dimensional scaling](http://en.wikipedia.org/wiki/Multidimensional_scaling) (MDS) problem. I've never used MDS myself, but my sense is that it should allow you to derive a global measure of similarity as well as dimensional measures of similarity. My memory is that it is able to handle both continuous items (e.g. pulse rate) and nominal items (e.g. Gender) which seems like it would be an important consideration for what you are trying to do.
null
CC BY-SA 3.0
null
2010-10-15T15:50:12.507
2017-11-16T13:21:24.017
2017-11-16T13:21:24.017
196
196
null
3636
1
null
null
1
2763
I have a sample set of values that were taken over a period of time. However, the delta time between each sample is different. Do I need to account for the different time deltas in the std-dev? Is std-dev even appropriate for this kind of data? --- More info... The data are temperature samples. The time range is from 1 hour to multiple days.
How to calculate the standard deviation on a sample set with irregular time periods
CC BY-SA 2.5
0
2010-10-15T15:53:01.893
2019-07-21T11:38:26.037
2019-07-21T11:38:26.037
11887
1595
[ "standard-deviation", "unevenly-spaced-time-series" ]
3637
2
null
3636
2
null
Yes, you do need to account for the irregularity of the time series because volatility scales with time. Depending upon the distribution and independence assumptions, [sometimes a "square root of time" rule can be appropriate](http://ideas.repec.org/p/fmg/fmgdps/dp439.html). Is this data sampled irregularly intraday or across a longer time period? What kind of data is it? For dealing with high-frequency financial data, you can apply a [realized volatility](http://www.ssc.upenn.edu/~fdiebold/papers/paper29/temp.pdf) measure, which is available in R in the [realized package](http://cran.r-project.org/web/packages/realized/index.html).
null
CC BY-SA 2.5
null
2010-10-15T16:02:52.387
2010-10-15T16:02:52.387
null
null
5
null
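A toy illustration of the square-root-of-time scaling mentioned above, assuming independent increments (a strong assumption that may not hold for temperature data):

```r
set.seed(1)
dt <- runif(300, 0.1, 5)                 # irregular gaps between observations (hours)
dx <- rnorm(300, sd = 2 * sqrt(dt))      # simulated changes, volatility 2 per sqrt(hour)

sd(dx)                                   # naive SD mixes the different time gaps together
sd(dx / sqrt(dt))                        # after sqrt-of-time scaling: close to 2
```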
3638
2
null
3634
4
null
Tukey's Median Polish is implemented in R as medpolish {stats}. See Chapter 6 in [Venables and Ripley](http://www.stats.ox.ac.uk/pub/MASS4/VR4stat.pdf)
null
CC BY-SA 2.5
null
2010-10-15T16:27:19.637
2011-03-31T16:14:14.503
2011-03-31T16:14:14.503
919
919
null
3639
2
null
3383
2
null
The one-way ANOVA approach you mention sounds fine to me. Sure the individual change scores aren't going to be the "true change" by any means, but they are better than nothing. If anything the resulting variance in the model should be over estimated as a consequence of this procedure. In R the easiest way to do ANOVA in simple designs (IMO) is to use ezANOVA (package ez). E.g. ezANOVA(data,dv=.(WeightChange),sid=.(PseudoSubjID),between=.(Time)) I can't quite say anything about instantiation, but another approach might be to find the optimal set of paired scores such that the difference between scores is minimized and then treat that pairing as if it is the true pairing. I want to say that approach should be conservative, minimizing the differences between t0 and t1, but your mileage may vary.
null
CC BY-SA 2.5
null
2010-10-15T16:34:25.457
2010-10-15T16:34:25.457
null
null
196
null
3640
1
3644
null
12
8016
Problem: I am parameterizing distributions for use as priors and data in a Bayesian meta-analysis. The data are provided in the literature as summary statistics, almost exclusively assumed to be normally distributed (although none of the variables can be < 0, some are ratios, some are mass, etc.). I have come across two cases for which I have no solution. Sometimes the parameter of interest is the inverse of the data or the ratio of two variables. Examples: - the ratio of two normally distributed variables: data: mean and sd for percent nitrogen and percent carbon parameter: ratio of carbon to nitrogen. - the inverse of a normally distributed variable: data: mass/area parameter: area/mass My current approach is to use simulation: e.g. for a set of percent carbon and nitrogen data with means: xbar.n,c, variance: se.n,c, and sample size: n.n, n.c: ``` set.seed(1) perc.c <- rnorm(100000, xbar.c, se.c*n.c) # percent C perc.n <- rnorm(100000, xbar.n, se.n*n.n) # percent N ``` I want to parameterize ratio.cn = perc.c/perc.n ``` # parameter of interest ratio.cn <- perc.c / perc.n ``` Then choose the best-fit distribution with range $0 \rightarrow \infty$ for my prior ``` library(MASS) dist.fit <- list() for(dist.i in c('gamma', 'lognormal', 'weibull')) { dist.fit[[dist.i]] <- fitdistr(ratio.cn, dist.i) } ``` Question: Is this a valid approach? Are there other / better approaches? Thanks in advance! Update: the Cauchy distribution, which is defined as the ratio of two normals with $\mu=0$, has limited utility since I would like to estimate variance. Perhaps I could calculate the variance of a simulation of n draws from a Cauchy? I did find the following closed-form approximations but I haven't tested to see if they give the same results... [Hayya et al, 1975](https://netfiles.uiuc.edu/dlebauer/www/hayya1975nrt.pdf) $$\hat{\mu}_{y:x} = \mu_y/\mu_x + \sigma^2_x \mu_y / \mu_x^3 + \operatorname{cov}(x,y)\, \sigma^2_x \sigma^2_y / \mu_x^2$$ $$\hat{\sigma}^2_{y:x} = \sigma^2_x\,\mu_y / \mu_x^4 + \sigma^2_y / \mu_x^2 - 2 \operatorname{cov}(x,y)\, \sigma^2_x \sigma^2_y / \mu_x^3$$ Hayya, J., Armstrong, D., and Gressis, N., 1975. A note on the ratio of two normally distributed variables. Management Science 21: 1338--1341
How to parameterize the ratio of two normally distributed variables, or the inverse of one?
CC BY-SA 2.5
null
2010-10-15T16:45:49.320
2010-10-15T19:45:31.940
2010-10-15T19:28:25.393
1381
1381
[ "distributions", "bayesian", "variance", "random-variable", "meta-analysis" ]
3641
1
null
null
4
216
I would like to project the data in this graph for at least 4 or 5 periods. Unfortunately, that won't be possible with a moving average. A regression will result in negative values after the 3rd period. What are my forecasting options? Basically, what i'm trying to do, is predict where the boomer hump is gonna be based soley on the population and age per year data EDIT AT work we have lots of image sites blocked, [here's the image](http://97.107.136.148/age_dist.jpg) in case you can't see it. EDIT 2 Updated Image ![State Wide Age Distribution](https://i.stack.imgur.com/Ad80M.jpg)
Forecasting Age distribution
CC BY-SA 2.5
null
2010-10-15T16:50:38.817
2010-10-15T19:43:31.437
2010-10-15T17:25:32.207
59
59
[ "time-series", "forecasting" ]
3642
2
null
3634
1
null
You might check out the ezBoot() function in the ez package for bootstrapping confidence intervals on your effects of interest.
null
CC BY-SA 2.5
null
2010-10-15T16:52:13.963
2010-10-15T16:52:13.963
null
null
364
null
3643
2
null
3640
0
null
Could you not assume that $y^{-1} \sim N(.,.)$ for the inverse of a normal random variable and do the necessary Bayesian computation after identifying the appropriate parameters for the normal distribution? My suggestion below to use the Cauchy does not work, as pointed out in the comments by ars and John. The ratio of two zero-mean normally distributed random variables follows the [Cauchy](http://en.wikipedia.org/wiki/Cauchy_distribution) distribution. You may want to use this idea to identify the parameters of the Cauchy that most closely fits the data you have.
null
CC BY-SA 2.5
null
2010-10-15T17:05:22.430
2010-10-15T18:42:03.223
2010-10-15T18:42:03.223
null
null
null
3644
2
null
3640
6
null
You might want to look at some of the references under the Wikipedia article on [Ratio Distribution](http://en.wikipedia.org/wiki/Ratio_distribution). It's possible you'll find better approximations or distributions to use. Otherwise, your approach seems sound. Update I think a better reference might be: - Ratios of Normal Variables and Ratios of Sums of Uniform Variables (Marsaglia, 1965) See formulas 2-4 on page 195. Update 2 On your updated question regarding variance from a Cauchy -- as John Cook pointed out in the comments, the variance doesn't exist. So, taking a sample variance simply won't work as an "estimator". In fact, you'll find that your sample variance does not converge at all and fluctuates wildly as you keep taking samples.
null
CC BY-SA 2.5
null
2010-10-15T17:17:18.427
2010-10-15T19:45:31.940
2010-10-15T19:45:31.940
251
251
null
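The non-convergence in the last paragraph is easy to see empirically: the sample variance of Cauchy draws never settles down as the sample grows.

```r
set.seed(1)
x <- rcauchy(1e5)
sapply(c(1e2, 1e3, 1e4, 1e5), function(n) var(x[1:n]))  # jumps around by orders of magnitude
```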
3645
2
null
3634
4
null
The [vegan](http://cc.oulu.fi/~jarioksa/softhelp/vegan.html) package implements permutation testing for distance based ANOVA, which should work with multi-way, repeated measures data.
null
CC BY-SA 2.5
null
2010-10-15T17:43:27.083
2010-10-15T17:43:27.083
null
null
251
null
3646
1
3648
null
2
536
I have a univariate data set that's approximately normally distributed. I am happy to assume that the population is normally distributed, and I'd like to estimate the mean and variance of the population. My textbook suggests (as I understand it) that since my sample size is large (1000s of data points), it is reasonable to take the sample mean and sample variance as my estimates for the population mean/variance. However, I'm also vaguely aware that regression can be used to fit a model to data. So in the case of my problem, which is a more reasonable approach (and why?): - Just use the sample mean/variance values as estimates for the population mean/variance - Fit the data to a normal model and use the calculated mean/variance.
Simple Estimates vs Model for calculating mean and variance of population
CC BY-SA 2.5
null
2010-10-15T19:27:02.353
2010-10-15T20:20:29.600
null
null
1598
[ "normal-distribution" ]
3647
2
null
3641
6
null
For demographic forecasting of any quality whatsoever you need to account for birth and death rates and, if possible, migration, breaking them down by gender (at a minimum) and, if possible, by race. These rates have all changed substantially during your time period and are likely to continue changing in the future. This information is available on the US Census Bureau's site. Many states have state-specific mortality tables available. I have performed such projections at state and even census tract levels with the same aim in mind (to support market research analysis for assisted living facilities) and found that a careful projection will differ substantially from purely statistical (demographically ignorant) procedures like moving averages or regression.
null
CC BY-SA 2.5
null
2010-10-15T19:43:31.437
2010-10-15T19:43:31.437
null
null
919
null
3648
2
null
3646
7
null
If I understand your question, and you mean using a least squares model of the form $Y=\beta + \epsilon$ where $\epsilon\sim N(0,\sigma^2)$ these two approaches are equivalent. A simple R example will demonstrate this: ``` #generate pseudo-data set.seed(0) n <- 1000 x <- rnorm(n) # approach 1: calculation sum(x)/n #mean mse <- sum((x-mean(x))^2)/n #mse se <- sqrt(mse/n) #std error # approach 2: model model <- lm(x~1) model$coefficients[1] #mean sqrt(sum(model$residuals^2)/n)/sqrt(n) #standard error ```
null
CC BY-SA 2.5
null
2010-10-15T19:48:34.417
2010-10-15T20:20:29.600
2010-10-15T20:20:29.600
1381
1381
null
3649
2
null
3646
3
null
I was about to make the same point as David, except illustrating using Stata rather than R: . summarize length Variable | Obs Mean Std. Dev. Min Max -------------+-------------------------------------------------------- length | 74 187.9324 22.26634 142 233 . regress length Source | SS df MS Number of obs = 74 -------------+------------------------------ F( 0, 73) = 0.00 Model | 0 0 . Prob > F = . Residual | 36192.6622 73 495.789893 R-squared = 0.0000 -------------+------------------------------ Adj R-squared = 0.0000 Total | 36192.6622 73 495.789893 Root MSE = 22.266 ------------------------------------------------------------------------------ length | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- _cons | 187.9324 2.588409 72.61 0.000 182.7737 193.0911 ------------------------------------------------------------------------------ I've added the bolding to highlight that the mean and standard deviation are the same as the estimate of the constant and root mean square error from a linear regression with no covariates.
null
CC BY-SA 2.5
null
2010-10-15T19:58:01.210
2010-10-15T19:58:01.210
null
null
449
null
3650
1
3658
null
5
802
I have a two part question; First Part: I have an urn with 20 balls, 2 of those balls are purple, and I pull out 6 balls at random. I witness 100 realizations of this process. Given the observed frequency at which I drew purple balls, how do I determine if I am really pulling balls out at random? Also, given that there are 2 purple balls, I have a hunch that if purple balls are pulled out disproportionately, both purple balls would be pulled out (i.e. I'm more interested in seeing if 2 purple balls are pulled out disproportionately than I am if 1 purple ball is pulled out more frequently than expected). Second Part: I have an urn with a variable number of balls, a variable number of purple balls within that urn, and a variable number of draws. I witness 100 realizations of this process, and I observe in each realization how many balls there were, how many of those balls were purple, and how many balls I drew from the urn. Same questions; Given the observed frequency at which I drew purple balls, how do I determine if I am really pulling balls out at random? Again I'm more interested to see if higher frequencies of purple balls are disproportionately drawn than I am to see if 1 purple ball being drawn happens more than expected by chance. (I'm open to suggestions for the title of the question and tags) Edit: Srikant suggested I may need to make distributional assumptions about my variables, which I am willing to do. Let's say the number of balls in the urn is uniform between 20 and 30, the number of purple balls is uniform between 0 and 4, and the number of draws is uniform between 6 and 12. See my [answer](https://stats.stackexchange.com/questions/3650/expected-distribution-of-random-draws/3873#3873) that describes my motivation for asking this question.
Expected distribution of random draws
CC BY-SA 2.5
null
2010-10-15T20:17:13.240
2010-10-22T16:38:05.127
2017-04-13T12:44:33.310
-1
1036
[ "distributions", "probability" ]
3651
2
null
3650
2
null
First Part: The draws from the urn follow a [hypergeometric distribution](http://en.wikipedia.org/wiki/Hypergeometric_distribution) assuming random draws. Any deviation of the observed frequencies from the theoretical probabilities can be evaluated using chi-square tests. Second Part: Let: $n \sim U(20,30)$ be the total number of balls in the urn, $p \sim U(0,4)$ be the number of purple balls, and $d \sim U(6,12)$ be the number of draws. Let $n_{pi}$ be the number of purple balls we get in the $i^{\text{th}}$ draw. Thus, $$P(n_{pi}|n,p,d) = \frac{\binom{p}{n_{pi}} \binom{n-p}{d-n_{pi}}}{\binom{n}{d}}$$ You can integrate the above probabilities using the priors for $n$, $p$ and $d$, which will give you the expected frequencies of observing purple balls provided you draw them at random. You can then compare the expected frequencies with the observed frequencies to assess if the process is truly random.
null
CC BY-SA 2.5
null
2010-10-15T20:23:22.837
2010-10-16T00:13:23.233
2010-10-16T00:13:23.233
null
null
null
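A sketch for the first part (20 balls, 2 purple, 6 drawn, 100 realizations); the observed counts are made up purely to show the mechanics.

```r
p_ref <- dhyper(0:2, m = 2, n = 18, k = 6)   # P(0), P(1), P(2) purple balls in 6 draws
observed <- c(55, 38, 7)                     # hypothetical counts over 100 realizations
chisq.test(observed, p = p_ref)
```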
3652
1
null
null
20
21738
Suppose one has two independent samples from the same population, and different methods were used on the two samples to derive a point estimate and confidence intervals. In trivial cases a sensible person would just pool the two samples and use one method to do the analysis, but let's suppose for the moment that a different method has to be used due to limitations of one of the samples, such as missing data. These two separate analyses would generate independent, equally valid estimates for the population attribute of interest. Intuitively I think there should be a way to properly combine these two estimates, both in terms of the point estimate and the confidence interval, resulting in a better estimation procedure. My question is what would be the best way to do it? I can imagine a weighted mean of some sort according to the information/sample size in each sample, but what about the confidence intervals?
Combining two confidence intervals/point estimates
CC BY-SA 2.5
null
2010-10-15T20:24:37.687
2015-01-09T15:52:22.550
2010-10-15T23:18:35.950
449
1600
[ "confidence-interval", "meta-analysis" ]
3653
1
3657
null
15
9699
I have a multivariate regression, which includes interactions. For example, to get the estimate of the treatment effect for the poorest quintile I need to add the coefficients from the treatment regressor to the coefficient from the interaction variable (which interacts treatment and quintile 1). When adding two coefficients from a regression, how does one obtain standard errors? Is it possible to add the standard errors from the two coefficients? What about the t-stats? Is it possible to add these as well? I'm guessing not but I can't find any guidance on this. Thanks so much in advance for your help!
Adding coefficients to obtain interaction effects - what to do with SEs?
CC BY-SA 2.5
null
2010-10-15T20:30:05.690
2012-01-24T17:10:44.107
2010-10-15T20:31:52.317
930
834
[ "regression", "standard-deviation", "standard-error" ]
3654
2
null
3652
1
null
This is not unlike a stratified sample. So, pooling the samples for a point estimate and standard error seems like a reasonable approach. The two samples would be weighted by sample proportion.
null
CC BY-SA 2.5
null
2010-10-15T20:32:35.200
2010-10-15T20:32:35.200
null
null
485
null
3655
2
null
3652
8
null
You could construct a pooled estimate as follows and then use it to generate a combined confidence interval. Specifically, let: $\bar{x_1} \sim N(\mu,\frac{\sigma^2}{n_1})$ $\bar{x_2} \sim N(\mu,\frac{\sigma^2}{n_2})$ Using the confidence intervals for the two cases, you can reconstruct the sampling variances of the estimates (denoted $SE_1$ and $SE_2$ below, so that $SE_i = \sigma^2/n_i$) and replace the above with: $\bar{x_1} \sim N(\mu,SE_1)$ $\bar{x_2} \sim N(\mu,SE_2)$ A pooled estimate would be: $\bar{x} = \frac{n_1 \bar{x_1} + n_2 \bar{x_2}}{n_1 + n_2}$ Thus, $\bar{x} \sim N(\mu,\frac{{n_1}^2 SE_1 + {n_2}^2 SE_2}{(n_1+n_2)^2})=N(\mu,\frac{\sigma^2}{n_1+n_2})$
null
CC BY-SA 3.0
null
2010-10-15T20:33:13.343
2015-01-09T15:52:22.550
2015-01-09T15:52:22.550
-1
null
null
3657
2
null
3653
10
null
I think this the expression for $SE_{b_{new}}$: $$\sqrt{SE_1^2 + SE_2^2+2Cov(b_1,b_2)}$$ You can work with this new standard error to find your new test statistic for testing $H_o: \beta=0$
null
CC BY-SA 2.5
null
2010-10-15T20:44:47.467
2010-10-15T22:09:51.290
2010-10-15T22:09:51.290
1307
1307
null
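In practice the covariance term comes straight from the fitted model's variance-covariance matrix. A sketch with simulated data and invented variable names:

```r
set.seed(1)
d <- data.frame(treat = rbinom(200, 1, 0.5), q1 = rbinom(200, 1, 0.2))
d$y <- 1 + 0.5 * d$treat + 0.3 * d$treat * d$q1 + rnorm(200)
fit <- lm(y ~ treat * q1, data = d)

V   <- vcov(fit)
est <- coef(fit)["treat"] + coef(fit)["treat:q1"]          # effect in the poorest quintile
se  <- sqrt(V["treat", "treat"] + V["treat:q1", "treat:q1"] + 2 * V["treat", "treat:q1"])
est / se                                                   # t statistic for H0: sum = 0
```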
3658
2
null
3650
3
null
The expected frequency of observing $k$ purple balls in $d$ draws (without replacement) from an urn of $p$ purple balls and $n-p$ other balls is obtained by counting and equals $$\frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$ Test a sample (of say $100$) such experiments with a chi-squared statistic using these probabilities as the reference. In the second case, integrate over the prior distributions. There is no nice formula for that, but the integration (actually a sum for these discrete variables) can be carried out exactly if you wish. In the example given in the edited section -- independent uniform distributions of $n$ from $20$ to $30$ (thus having a one in 11 chance of being any value between $20$ and $30$ inclusive), of $p$ from $0$ to $4$, and of $d$ from $6$ to $12$ -- the result is a probability distribution on the possible numbers of purples ($0, 1, 2, 3, 4$) with values $0: 69728476151/142333251060 = 0.489896$ $1: 8092734193/24540215700 = 0.329774$ $2: 36854/258825 = 0.14239$ $3: 169436/4917675 = 0.0344545$ $4: 17141/4917675 = 0.00348559$. Use a chi-squared test for this situation, too. As usual when conducting chi-squared tests, you will want to lump the last two or three categories into one because their expectations are less than $5$ (for $100$ repetitions). There is no problem with zero values. --- Edit (in response to a followup question) The integrations are performed as multiple sums. In this case, there is some prior distribution for $n$, a prior distribution for $p$, and a prior distribution for $d$. For each possible ordered triple of outcomes $(n,p,d)$ together they give a probability $\Pr(n,p,d)$. (With uniform distributions as above this probability is a constant equal to $1/((30-20+1)(4-0+1)(12-6+1))$.) One forms the sum over all possible values of $(n,p,d)$ (a triple sum in this case) of $$\Pr(n,p,d) \frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}.$$
null
CC BY-SA 2.5
null
2010-10-15T22:00:06.187
2010-10-18T18:21:25.497
2010-10-18T18:21:25.497
919
919
null
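The triple sum over the priors is small enough to do by brute force; this sketch should reproduce the probabilities quoted above.

```r
probs <- numeric(5)                                   # P(k purple balls), k = 0..4
w <- 1 / (11 * 5 * 7)                                 # independent uniform priors on n, p, d
for (N in 20:30) for (P in 0:4) for (D in 6:12) {
  probs <- probs + w * dhyper(0:4, m = P, n = N - P, k = D)
}
round(probs, 4)   # approximately 0.4899, 0.3298, 0.1424, 0.0345, 0.0035 (as above)
```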
3659
2
null
3652
4
null
Sounds a lot like [meta-analysis](http://en.wikipedia.org/wiki/Meta-analysis) to me. Your assumption that the samples are from the same population means you can use fixed-effect meta-analysis (rather than random-effects meta-analysis). The generic inverse-variance method takes a set of independent estimates and their variances as input, so doesn't require the full data and works even if different estimators have been used for different samples. The combined estimate is then a weighted average of the separate estimates, weighting each estimate by the inverse of its variance. The variance of the combined estimate is the inverse of the sum of the weights (the inverses of the variances). You want to work on a scale where the sampling distribution of the estimate is approximately normal, or at least a scale on which the confidence intervals are approximately symmetric, so a log transformed scale is usual for ratio estimates (risk ratios, odds ratios, rate ratios...). In other cases a [variance-stabilising transformation](http://en.wikipedia.org/wiki/Variance-stabilizing_transformation) would be useful, e.g. a square-root transformation for Poisson data, an arcsin-square-root transformation for binomial data, etc.
null
CC BY-SA 2.5
null
2010-10-15T23:18:18.057
2010-10-15T23:18:18.057
null
null
449
null
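The generic inverse-variance method is only a few lines; the two estimates and standard errors below are made-up numbers.

```r
est <- c(0.42, 0.55)                   # independent point estimates from the two analyses
se  <- c(0.10, 0.15)                   # their standard errors

w         <- 1 / se^2                  # inverse-variance weights
pooled    <- sum(w * est) / sum(w)     # fixed-effect combined estimate
pooled_se <- sqrt(1 / sum(w))
pooled + c(-1, 1) * 1.96 * pooled_se   # approximate 95% confidence interval
```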
3660
2
null
3595
8
null
Most of the algorithms on [Apache Mahout](http://mahout.apache.org/) scale way beyond 20M records, even with high-dimensional data. If you only need to build a prediction model, there are specific tools like Vowpal Wabbit (http://hunch.net/~vw/) that can easily scale to billions of records on a single machine.
null
CC BY-SA 2.5
null
2010-10-16T01:09:31.757
2010-10-16T01:09:31.757
null
null
635
null
3661
1
null
null
6
2904
I have 3 observers that each take 2 measurements (length and weight) on 100 individuals; these procedures are repeated once (i.e., the same measurements are taken on the same 100 individuals by the same 3 observers), so that the data set is duplicated (i.e., early reading and late reading). - What is the best way to figure out how each individual observer's measurements vary between the late and early trial measurements? - How can I best compare how close or different the measurements of length (or weight) differ among the 3 observers?
Repeatability and measurement error from and between observers
CC BY-SA 2.5
null
2010-10-16T01:44:55.913
2010-10-18T07:17:51.227
2010-10-16T08:43:26.653
null
1603
[ "anova", "error", "measurement", "reliability", "agreement-statistics" ]
3663
2
null
3661
8
null
What you describe is a reliability study where each subject is going to be assessed by the same three raters on two occasions. Analysis can be done separately on the two outcomes (length and weight, though I assume they will be highly correlated and you're not interested in how this correlation is reflected in raters' assessments). Estimating measurement reliability can be done in two ways: - The original approach (as described in Fleiss, 1987) relies on the analysis of variance components through an ANOVA table, where we assume no subject by rater interaction (the corresponding SS is constrained to 0) -- of course, you won't look at $p$-values, but at the MSs corresponding to relevant effects; - A mixed-effects model allows to derive variance estimates, considering time as a fixed effect and subject and/or rater as random-effect(s) (the latter distinction depends on whether you consider that your three observers were taken or sampled from a pool of potential raters or not -- if the rater effect is small, the two analyses will yield quite the same estimate for outcome reliability). In both cases, you will be able to derive a single intraclass correlation coefficient, which is a measure of reliability of the assessments (under the Generalizability Theory, we would call them generalizability coefficients), which would answer your second question. The first question deals with a potential effect of time (considered as a fixed effect), which I discussed here, [Reliability in Elicitation Exercise](https://stats.stackexchange.com/questions/1015/reliability-in-elicitation-exercise). More details can be found in Dunn (1989) or Brennan (2001). I have an [R example script](http://gist.github.com/631183) on Github which illustrates both approaches. I think it would not be too difficult to incorporate rater effects in the model. References - Fleiss, J.L. (1987). The design and analysis of clinical experiments. New York: Wiley. - Dunn, G. (1989). Design and analysis of reliability studies. Oxford - Brennan, R.L. (2001). Generalizability Theory. Springer
null
CC BY-SA 2.5
null
2010-10-16T08:43:54.767
2010-10-18T07:17:51.227
2017-04-13T12:44:52.277
-1
930
null
3664
2
null
3661
3
null
You need to repeat the same process separately for length and weight, as these are completely separate outcomes with different units and methods of measurement. I'd start, as so often, by plotting some exploratory graphs. In this case a set of [Bland–Altman](http://en.wikipedia.org/wiki/Bland-Altman_plot) (difference vs. average) plots, one for each observer. If the plots for each observer look similar, I'd do a combined plot too. I'd look for any patterns in these plots, e.g. does the variability in the difference stay reasonably constant with the mean? (if not, I might consider some variance-stabilizing transformation). For each observer I'd then calculate the mean difference between early and late readings, to quantify whether there's a systematic difference, and the standard deviation of the difference as a way of quantifying how much each observer's measurements vary between late and early readings. I might then conduct a formal statistical test for the equality of the variances of the differences, such as the [Brown–Forsythe test](http://en.wikipedia.org/wiki/Brown%E2%80%93Forsythe_test). If there's no strong evidence that the variances differ substantially between observers, I'd move on to ANOVA, which I see has just been described by chl.
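As a concrete starting point, here is a minimal base-R sketch of one such Bland–Altman plot; `early` and `late` are assumed vectors holding one observer's two length readings on the same individuals.
```
# Minimal Bland-Altman sketch for one observer; `early` and `late` are
# assumed vectors of that observer's first and second readings.
d   <- late - early
avg <- (early + late) / 2

plot(avg, d, xlab = "Average of early and late", ylab = "Late minus early")
abline(h = mean(d))                                    # mean difference (bias)
abline(h = mean(d) + c(-1.96, 1.96) * sd(d), lty = 2)  # 95% limits of agreement
```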
null
CC BY-SA 2.5
null
2010-10-16T08:50:42.457
2010-10-16T08:50:42.457
null
null
449
null
3665
2
null
3632
4
null
If your only interest is whether their income is over $\$$5,000 or not, and it doesn't make a difference how far from that threshold their income actually is, then I would recommend using a classification technique (not necessarily logistic regression; try a range of methods and use whatever gives the best out-of-sample performance) rather than regression. Vladimir Vapnik (inventor of the support vector machine) says you should always aim to solve the problem at hand directly, rather than use a more general method and post-process the result. That is a reasonable argument; if you are not interested in how far above $\$$5,000 someone's income is, why expend resources modelling the regression function a long way above that threshold? So if you have a classification problem, use a classifier, rather than threshold a regression. Note, however, that false-positive and false-negative costs are likely to differ, so factor that into your classifier design. HTH
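As an illustration of folding asymmetric costs into the decision rule, here is a hedged R sketch: fit any probabilistic classifier (a plain logistic regression here), then threshold the predicted probability at $c_{FP}/(c_{FP}+c_{FN})$ instead of 0.5. The data frame `d`, its columns, and the cost values are all invented for the example.
```
# Sketch only -- `d` is an assumed data frame with a 0/1 outcome `over5k`
# and predictors x1, x2; the costs below are invented for illustration.
fit <- glm(over5k ~ x1 + x2, family = binomial, data = d)
p   <- predict(fit, type = "response")

c_fp <- 1   # cost of wrongly flagging someone as over $5,000
c_fn <- 5   # cost of missing someone who really is over $5,000

# Expected-cost-minimising cutoff, instead of the default 0.5
cutoff <- c_fp / (c_fp + c_fn)
table(observed = d$over5k, predicted = as.integer(p > cutoff))
```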
null
CC BY-SA 2.5
null
2010-10-16T14:58:21.727
2010-10-16T15:08:36.650
2010-10-16T15:08:36.650
919
887
null
3666
2
null
2730
10
null
[Applied Linear Statistical Models](http://rads.stackoverflow.com/amzn/click/0256117365) by Neter, Kutner, Wasserman, and Nachtsheim has a very exhaustive (and exhausting!) treatment of ANOVA and ANCOVA. It also covers power analysis, linear regression, multiple linear regression, and introduces some MANOVA. It's a very long text, but does a very thorough job. I've linked you to the fourth edition. I doubt there's a huge difference over the fifth edition, and it's substantially cheaper.
null
CC BY-SA 2.5
null
2010-10-16T15:47:52.163
2010-10-16T15:47:52.163
null
null
1118
null
3667
2
null
1015
5
null
Maybe I misunderstood the question, but what you are describing sounds like a test-retest reliability study on your Q scores. You have a series of experts each going to assess a number of items or questions, on two occasions (presumably fixed in time). So, basically you can assess the temporal stability of the judgments by computing an intraclass correlation coefficient (ICC), which will give you an idea of the variance attributable to subjects in the variability of observed scores (or, in other words, of the closeness of the observations on the same subject relative to the closeness of observations on different subjects). The ICC may easily be obtained from a mixed-effect model describing the measurement $y_{ij}$ of subject $i$ on occasion $j$ as $$ y_{ij}=\mu+u_i+\varepsilon_{ij},\quad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2) $$ where $u_i$ is the difference between the overall mean and subject $i$'s mean measurement, and $\varepsilon_{ij}$ is the measurement error for subject $i$ on occasion $j$. Here, this is a random-effect model. Unlike a standard ANOVA with subjects as a factor, we consider the $u_i$ as random (i.i.d.) effects, $u_i\sim\mathcal{N}(0,\tau^2)$, independent of the error terms. Each measurement differs from the overall mean $\mu$ by the sum of the two error terms, of which $u_i$ is shared between occasions on the same subject. The total variance is then $\tau^2+\sigma^2$ and the proportion of the total variance that is accounted for by the subjects is $$ \rho=\frac{\tau^2}{\tau^2+\sigma^2} $$ which is the ICC, or the reliability index from a psychometric point of view. Note that this reliability is sample-dependent (as it depends on the between-subject variance). Instead of the mixed-effects model, we could derive the same results from a two-way ANOVA (subjects + time, as factors) and the corresponding Mean Squares. You will find additional references in those related questions: [Repeatability and measurement error from and between observers](https://stats.stackexchange.com/questions/3661/repeatability-and-measurement-error-from-and-between-observers/3663#3663), and [Inter-rater reliability for ordinal or interval data](https://stats.stackexchange.com/questions/3539/inter-rater-reliability-for-ordinal-or-interval-data/3546). In R, you can use the `icc()` function from the [psy](http://cran.r-project.org/web/packages/psy/index.html) package; the random intercept model described above corresponds to the "agreement" ICC, while incorporating the time effect as a fixed factor would yield the "consistency" ICC. You can also use the `lmer()` function from the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, or the `lme()` function from the [nlme](http://cran.r-project.org/web/packages/nlme/index.html) package. The latter has the advantage that you can easily obtain 95% CIs for the variance components (using the `intervals()` function). Dave Garson provided a nice overview (with SPSS illustrations) in [Reliability Analysis](http://faculty.chass.ncsu.edu/garson/PA765/reliab.htm), and [Estimating Multilevel Models using SPSS, Stata, SAS, and R](http://www.indiana.edu/~statmath/stat/all/hlm/hlm.pdf) constitutes a useful tutorial, with applications in educational assessment. But the definitive reference is Shrout and Fleiss (1979), [Intraclass Correlations: Uses in Assessing Rater Reliability](http://www3.uta.edu/faculty/ricard/COED/Shrout%20&%20Fleiss%20%281979%29%20Intraclass%20correlations.pdf), Psychological Bulletin, 86(2), 420-428. 
I have also added an [example R script](http://gist.github.com/631183) on Github that includes the ANOVA and mixed-effect approaches. Also, should you add a constant value to all of the values taken at the second occasion, the Pearson correlation would remain identical (because it is based on deviations of the 1st and 2nd measurements from their respective means), whereas the reliability as computed through the random intercept model (or the agreement ICC) would decrease. BTW, Cronbach's alpha is not very helpful in this case because it is merely a measure of the internal consistency (yet another form of "reliability") of a unidimensional scale; it would have no meaning should it be computed on items underlying different constructs. Even if your questions survey a single domain, it's hard to imagine mixing the two series of measurements, and Cronbach's alpha should be computed on each set separately. Its associated 95% confidence interval (computed by bootstrap) should give an indication about the stability of the internal structure between the two test occasions. As an example of applied work with ICC, I would suggest > Johnson, SR, Tomlinson, GA, Hawker, GA, Granton, JT, Grosbein, HA, and Feldman, BM (2010). A valid and reliable belief elicitation method for Bayesian priors. Journal of Clinical Epidemiology, 63(4), 370-383.
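A tiny simulation, offered only as an illustration of the point about adding a constant to the second occasion (all numbers invented):
```
# Illustration only: a constant shift at occasion 2 leaves the Pearson
# correlation unchanged but lowers the agreement (random-intercept) ICC.
library(lme4)
set.seed(1)

n  <- 50
u  <- rnorm(n, 0, 2)               # subject effects
y1 <- 10 + u + rnorm(n)            # occasion 1
y2 <- 10 + u + rnorm(n)            # occasion 2
y2_shift <- y2 + 3                 # same scores plus a constant

cor(y1, y2); cor(y1, y2_shift)     # identical Pearson correlations

icc <- function(a, b) {
  d <- data.frame(y = c(a, b), id = factor(rep(seq_len(n), 2)))
  v <- as.data.frame(VarCorr(lmer(y ~ 1 + (1 | id), data = d)))$vcov
  v[1] / sum(v)                    # tau^2 / (tau^2 + sigma^2)
}
icc(y1, y2); icc(y1, y2_shift)     # the shifted version has lower agreement
```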
null
CC BY-SA 2.5
null
2010-10-16T18:41:49.013
2010-11-01T16:32:27.973
2017-04-13T12:44:37.420
-1
930
null
3668
2
null
3632
3
null
What are you trying to predict? Is your outcome just an indicator of whether income is above 5000 dollars or not? If so, that is the best that you can predict; that is, you can't predict anyone's income, only whether he has high (above 5000) income or not. If this is your outcome and what you'd like to predict, the question becomes why we would favor logit versus a linear probability model (standard OLS) for a binary outcome. Logit forces your prediction to be between 0 and 1, while the LPM can predict any value for the probability of having high income (which is also the expected value of an indicator for having high income). This distinction is especially important if you have multiple predictors in your model or are trying to extrapolate to values of your predictors outside those of your sample. The downside to logit is that the coefficients are difficult to interpret directly; instead, you typically report and consider marginal effects. Marginal effects depend upon the levels of your covariates, which can be confusing. Logit is also less robust to deviations from the normality and identical distribution assumptions about the error terms, as I recall.
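A small hedged sketch of the contrast (the data frame `d` and its columns `high`, `age`, `educ` are assumed names):
```
# Sketch only -- `d` is an assumed data frame with a 0/1 outcome `high`
# and covariates age and educ.
lpm   <- lm(high ~ age + educ, data = d)                     # linear probability model
logit <- glm(high ~ age + educ, family = binomial, data = d)

range(fitted(lpm))     # can stray outside [0, 1]
range(fitted(logit))   # always strictly between 0 and 1

# A rough average marginal effect of educ under the logit:
mean(dlogis(predict(logit)) * coef(logit)["educ"])
```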
null
CC BY-SA 2.5
null
2010-10-17T04:02:18.357
2010-10-17T04:02:18.357
null
null
401
null
3669
2
null
3653
2
null
To be more general, if you create a (row) vector $R$ for the estimate that you care about, such that your estimator is equal to $R\beta$, then the variance of that estimator is $R\hat{V}R^\prime$, where $\hat{V}$ is the estimated variance-covariance matrix of your regression. Your estimate is distributed Normal or t, depending upon the assumption that you are making (a central limit theorem argument vs. assuming normal errors in your regression model). Alternatively, you can test multiple estimates if you let $R$ be a matrix. This is known as a Wald test. The distribution in this case is a $\chi^2_r$, where $r$ is the number of rows in your matrix (assuming that the rows are linearly independent).
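A short R sketch of the computation; the model, the data frame `d`, and the particular contrast are all hypothetical.
```
# Hypothetical example: the variance of R %*% beta is R V R'.
fit <- lm(y ~ x1 + x2, data = d)           # `d` is an assumed data frame
R   <- matrix(c(0, 1, 1), nrow = 1)        # picks out beta_x1 + beta_x2

est <- drop(R %*% coef(fit))
se  <- sqrt(drop(R %*% vcov(fit) %*% t(R)))
est / se                                   # t (or z) statistic for H0: R beta = 0

# With several rows in R, the Wald statistic is chi-squared with nrow(R) df:
b    <- coef(fit)
wald <- drop(t(R %*% b) %*% solve(R %*% vcov(fit) %*% t(R)) %*% (R %*% b))
```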
null
CC BY-SA 2.5
null
2010-10-17T04:09:30.837
2010-10-17T04:24:19.630
2010-10-17T04:24:19.630
401
401
null
3670
2
null
3542
11
null
You might check out the documentation for the LaTeX package [booktabs](http://www.ctan.org/tex-archive/macros/latex/contrib/booktabs/booktabs.pdf); it gives general guidance and implements its design suggestions in LaTeX tables.
null
CC BY-SA 2.5
null
2010-10-17T04:16:04.893
2010-10-17T04:16:04.893
null
null
401
null
3671
2
null
3542
6
null
I hope this answer is not too off topic, but a couple of days ago I have seen this link on visualizing tables at StackExchange: [Visual Representation of Tabular Information – How to Fix the Uncommunicative Table](http://flowingdata.com/2009/04/21/visual-representation-of-tabular-information-how-to-fix-the-uncommunicative-table/)
null
CC BY-SA 2.5
null
2010-10-17T07:36:38.073
2010-10-17T07:36:38.073
null
null
1607
null
3672
1
3674
null
7
591
Could anyone suggest some statistical measures to describe the distribution of a dendrogram? If I have two dendrograms, how can I quantify their structural differences?
A measure to describe the distribution of a dendrogram
CC BY-SA 2.5
null
2010-10-17T11:24:28.837
2010-10-18T08:07:08.010
2010-10-18T08:07:08.010
449
1250
[ "distributions", "time-series", "clustering", "dendrogram" ]
3673
2
null
3419
7
null
You asked a difficult question, but I'm a little bit surprised that the various clues that were suggested to you received so little attention. I upvoted all of them because I think they basically are useful responses, though in their actual form they call for further bibliographic work. Disclaimer: I never had to deal with such a problem, but I regularly have to expose statistical results that may differ from physicians' a priori beliefs and I learn a lot from unraveling their lines of reasoning. Also, I have some background in teaching human decision/knowledge from the perspective of Artificial Intelligence and Cognitive Science, and I think what you asked is not so far from how experts actually decide that two objects are similar or not, based on their attributes and a common understanding of their relationships. From your question, I noticed two interesting assertions. The first one relates to how an expert assesses the similarity or difference between two sets of measurements: > I don't particularly care if there is some relation between attribute X and Y. What I care about is if a doctor thinks there is a relation between X and Y. The second one, > How can I predict what they think the similarity is? Do they look at certain attributes? looks like it is somewhat subsumed by the former, but it seems more closely related to what are the most salient attributes that allow one to draw a clear separation between the objects of interest. To the first question, I would answer: Well, if there is no characteristic or objective relationship between any two subjects, what would be the rationale for making up a hypothetical one? Rather, I think the question should be: If I only have limited resources (knowledge, time, data) to take a decision, how do I optimize my choice? To the second question, my answer is: Although it seems to partly contradict your former assertion (if there is no relationship at all, it implies that the available attributes are not discriminative or useless), I think that most of the time it is a combination of attributes that makes sense, and not only how a given individual scores on a single attribute. Let me dwell on these two points. Human beings have a limited or [bounded rationality](http://en.wikipedia.org/wiki/Bounded_rationality), and can take a decision (often the right one) without examining all possible solutions. There is also a close connection with [abductive reasoning](http://en.wikipedia.org/wiki/Abduction_(logic)). It is well known that there is some variability between individual judgments, and even between judgments from the same expert on two occasions. This is what we are interested in in reliability studies. But you want to know how these experts elaborate their judgments. There is a huge number of papers about that in cognitive psychology, especially on the fact that relative judgments are easier and more reliable than absolute ones. Doctors' decisions are interesting in this respect because they are able to take a "good" decision with a limited amount of information, but at the same time they benefit from an ever-growing internal knowledge base from which they can draw expected relationships (extrapolation). In other words, they have a built-in inference (assumed to be hypothetico-deductive) machinery and accumulate positive evidence or counterfactuals from their experience or practice. 
Reproducing this inferential ability and the use of declarative knowledge was the aim of several expert or [production rule](http://en.wikipedia.org/wiki/Production_system) systems in the 70s, the most famous one being [MYCIN](http://en.wikipedia.org/wiki/Mycin), and more generally of Artificial Intelligence earlier in 1946 (Can we reproduce on an artificial system the intelligent behavior observed in man?). Automatic treatment of speech, problem solving, and visual shape recognition are still active projects nowadays and they all have to do with identifying salient features and their relationships to make an appropriate decision (i.e., how far should two patterns be to be judged as the emanation of two distinct generating processes?). In sum, our doctors are able to draw an optimal inference from a limited amount of data, compensating for noise that arises simply as a byproduct of individual variability (at the level of the patients). Thus, there is a clear connection with statistics and probability theory, and the question is what conscious or subconscious methodology helps doctors form their judgments. [Semantic networks](http://en.wikipedia.org/wiki/Semantic_network) (SN), [belief networks](http://en.wikipedia.org/wiki/Bayesian_network), and [decision trees](http://en.wikipedia.org/wiki/Decision_tree) are all relevant to the question you asked. The paper you cited is about using an [ontology](http://en.wikipedia.org/wiki/Ontology_(information_science)) as a basis of formal judgments, but it is no more than an extension of SNs, and many projects were initiated in this direction (I can think of the [Gene Ontology](http://www.geneontology.org/) for genomic studies, but many others exist in different domains). Now, look at the following hierarchical classification of diagnostic categories (it is roughly taken from Dunn 1989, p. 25): ![alt text](https://i.stack.imgur.com/y3t62.png) And now take a look at the [ICD classification](http://ICD9Data.com/); I think it is not too far from this schematic classification. Mental disorders are organized into distinct categories, some of them being closer to each other. What renders them similar is the closeness of their expression (phenotype) in any patient, and the fact that they share some similarities in their somatic/psychological etiology. Assessing whether two doctors would make the same diagnosis is a typical example of an inter-rater agreement study, where two psychiatrists are asked to place each of several patients in mutually exclusive categories. The hierarchical structure should be reflected in the disagreement between each doctor, that is they may not agree on the finer distinction between diagnostic classes (the leaves), but if they were to disagree between insomnia and schizophrenia, well, it would be a little bit disconcerting... How these two doctors decide on which class a given patient belongs to is no more than a clustering problem: How likely are two individuals, given a set of observed values on different attributes, to be similar enough so that I decide they share the same class membership? Now, some attributes are more influential than others, and this is exactly what is reflected in the weight attributed to a given attribute in Latent Class Analysis (which can be thought of as a [probabilistic extension](http://www.statisticalinnovations.com/articles/kmeans2a.htm) of clustering methods like k-means), or the [variable importance](http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm) in Random Forests. 
We need to put things into boxes, because at first sight it's simpler. The problem is that often things overlap to some extent, so we need to consider different levels of categorization. In fact, cluster analysis is at the heart of the actual DSM categories, and many papers actually turn around assigning one patient to a specific syndromic category, based on the profile of his response to a battery of neuropsychological assessments. This merely looks like a subtyping approach; each time, we seek to refine a preliminary well-established diagnostic category, by adding exception rules or an additional relevant symptom or impairment. A related topic is decision trees, which are by far the statistical technique best understood by physicians. Most of the time, they describe a nested series of boolean assertions (Do you have a sore throat? If yes, do you have a temperature? etc.; but look at an example of a public [influenza diagnostic tree](http://www.crengland.com/content/managementservices/flu/flu-decision-tree.pdf)) according to which we can form a decision regarding patients' proximity (i.e. how similar patients are wrt. attributes considered for building the tree -- the closer they are the more likely they are to end up in the same leaf). [Association rules](http://en.wikipedia.org/wiki/Association_rule_learning) and the [C4.5 algorithm](http://en.wikipedia.org/wiki/C4.5_algorithm) rely on much the same idea. On a related topic, there's the [patient rule-induction method](http://www.ijmijournal.com/article/S1386-5056(07)00122-0/abstract) (PRIM). Now clearly, we must make a distinction between all those methods, which make efficient use of a large body of data and incorporate bagging or boosting to compensate for model fragility or overfitting issues, and doctors who cannot process huge amounts of data in an automatic and algorithmic manner. But, for a small to moderate number of descriptors, I think they perform quite well after all. The yes-or-no approach is not the panacea, though. In behavioral genetics and psychiatry, it is commonly argued that the classification approach is probably not the best way to go, and that common diseases (learning disorders, depression, personality disorders, etc.) reflect a continuum rather than classes of opposite valence. Nobody's perfect! In conclusion, I think doctors actually hold a kind of internalized inference engine that allows them to assign patients into distinctive classes that are characterized by a weighted combination of available evidence; in other words, they are able to organize their knowledge in an efficient manner, and these internal representations and the relationships they share may be augmented throughout experience. [Case-based reasoning](http://en.wikipedia.org/wiki/Case-based_reasoning) probably comes into play at some point too. All of this may be subjected to (a) revision with newly available data (we are not simply acting as definitive binary classifiers, and are able to incorporate new data in our decision making), and (b) subjective biases arising from past experience or wrong self-made association rules. However, they are prone to errors, as is every decision system... All statistical techniques reflecting these steps -- decision trees, bagging/boosting, cluster analysis, latent cluster analysis -- seem relevant to your questions, although they may be hard to instantiate in a single decision rule. 
Here are a couple of references that might be helpful, as a first start to how doctors make their decisions: - A clinical decision support system for clinicians for determining appropriate radiologic imaging examination - Grzymala-Busse, JW. Selected Algorithms of Machine Learning from Examples. Fundamenta Informaticae 18 (1993), 193–207 - Santiago Medina, L, Kuntz, KM, and Pomeroy, S. Children With Headache Suspected of Having a Brain Tumor: A Cost-Effectiveness Analysis of Diagnostic Strategies. Pediatrics 108 (2001), 255-263 - Building Better Algorithms for the Diagnosis of Nontraumatic Headache - Jenkins, J, Shields, M, Patterson, C, and Kee, F. Decision making in asthma exacerbation: a clinical judgement analysis. Arch Dis Child 92 (2007), 672–677 - Croskerry, P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 9(11) (2002), 1184-204. - Cahan, A, Gilon, D, Manor, O, and Paltiel. Probabilistic reasoning and clinical decision-making: do doctors overestimate diagnostic probabilities? QJM 96(10) (2003), 763-769 - Wegwarth, O, Gaissmaier, W, and Gigerenzer, G. Smart strategies for doctors and doctors-in-training: heuristics in medicine. Medical Education 43 (2009), 721–728
null
CC BY-SA 2.5
null
2010-10-17T12:06:38.520
2010-10-17T19:06:12.167
2010-10-17T19:06:12.167
930
930
null
3674
2
null
3672
5
null
See this SO question: [https://stackoverflow.com/questions/2218395/how-do-you-compare-the-similarity-between-two-dendrograms-in-r](https://stackoverflow.com/questions/2218395/how-do-you-compare-the-similarity-between-two-dendrograms-in-r)
null
CC BY-SA 2.5
null
2010-10-17T14:37:56.007
2010-10-17T14:37:56.007
2017-05-23T12:39:26.150
-1
null
null
3675
2
null
3307
3
null
I agree with @ars that you are unlikely to get one answer for this (you may also have more success on [http://mathoverflow.net](http://mathoverflow.net), since our community tends to be more applied, while this technique would have very little real-world usage). The Abrahao/Barbosa paper is a good reference. Just to provide some additional sources: This paper looks at the correlation between fractal dimensions, which seems like a reasonable approach to the problem. - "The correlation of fractal structures in the photospheric and the coronal magnetic field" (Dimitropoulou, Georgoulis, Isliker, Vlahos, Anastasiadis, Strintzi, Moussas 2009) This paper uses the multi-fractal spectra to estimate correlation: - "Continuous wavelet transform based time-scale and multi-fractal analysis of the nonlinear oscillations in a hollow cathode glow discharge plasma" (Nurujjaman, Narayanan, Iyengar 2009) Regarding the "Correlation Fractal Dimension", this paper provides a fast algorithm: - "Faster estimation of the correlation fractal dimension using box-counting" (Attikos, Doumpos 2009)
null
CC BY-SA 2.5
null
2010-10-17T15:45:28.087
2010-10-17T15:59:56.937
2010-10-17T15:59:56.937
5
5
null
3676
1
null
null
5
896
Background: I'm a junior researcher at an institute dealing with regional issues, particularly involving drug policy. Almost two years ago, one of our senior researchers began collecting arrest data about a nearby large city. He had been transcribing newspaper police blotter by hand until I joined a year and change ago and convinced him to switch to automated data collection. We're now ready to start analyzing this dataset. It contains about 20k arrest records with virtually no missing or invalid values (I'd estimate that there are fewer than 100 such records, and some of those are just parser errors). These records include full name and home address of the arrested person, arresting officers, exact charges, etc. (we've even found SSNs in the data). At this point, we're just exploring the data, but we expect to find differences in number and severity of charges according to race, arrest location, and home location. Question The dataset has two variables that, to my understanding, are categorical multiple-response questions. "Arresting Officer" is two columns, one of which always has a value and one of which may have a value (for the second officer present at the arrest). "Charges" is five columns, each containing a single charge and in no particular order. To complicate the structure further, a person arrested on more than five charges will be issued multiple records. We messed around with the above dataset for a few hours so far and have gotten as far as being able to obtain useful counts and percentages in SPSS by using the Multiple Response Set feature and plugging that into Custom Tables. This is a fine start, but we'd like to move on to more detailed analysis at some point. Unfortunately, neither of us is aware of the recommended (or anti-recommended) methods for analyzing a categorical multiple-response question. This also applies to our desire to eventually group specific charges (e.g., "possession of controlled substances < 4 oz.") into broader categories such as 'violent crimes' or 'drug crimes'. Note that I'm not looking for just a snap answer here, as we don't have a deadline. I would be glad to do readings, so feel free to point me in the direction of tutorials, textbooks, and so on. I'm also not particularly attached to SPSS - it's just what my coworker is used to. If there are clear disadvantages to using it for this type of problem, I don't mind learning something new.
"Multiple response" analysis of arrest records
CC BY-SA 2.5
null
2010-10-17T16:13:28.657
2017-02-01T23:42:59.623
null
null
1609
[ "dataset", "spss" ]
3677
2
null
633
4
null
[Ingo Ruczinski](http://biostat.jhsph.edu/~iruczins/) has helped promote the use of [Logic regression](http://kooperberg.fhcrc.org/logic/documents/intro.html) for data sets consisting of binary variables, with an emphasis on higher-order interaction terms. The main advantage compared to usual or penalized GLMs is that it is more parsimonious in terms of degrees of freedom. The outcome may be continuous or categorical, and continuous covariates can be added to the model (or the outcome can first be residualized on them if the binary predictors are what is of interest). The original paper > Ruczinski I, Kooperberg C, LeBlanc ML (2003). Logic Regression. Journal of Computational and Graphical Statistics, 12(3), 475-511. includes several applications in biomedical studies, and a comparison of LR with [CART](http://en.wikipedia.org/wiki/Classification_and_regression_tree) and [MARS](http://en.wikipedia.org/wiki/Multivariate_adaptive_regression_splines). Although it has mainly been applied in large-scale genetic studies (e.g. genome-wide association studies), it should work with any binary variables whose combinations of interest can be expressed with a set of logical operators. The [LogicReg](http://cran.r-project.org/web/packages/LogicReg/index.html) R package implements this technique; see also the related packages on CRAN and Bioconductor, esp. [LogicForest](http://cran.r-project.org/web/packages/LogicForest/index.html) which shares some ideas with Random Forests.
null
CC BY-SA 2.5
null
2010-10-17T18:21:12.987
2010-11-02T18:47:07.033
2010-11-02T18:47:07.033
930
930
null
3678
2
null
3676
4
null
I can't particularly comment on how to handle multiple response categories, but you need to further refine your question for people on this forum to be able to give useful advice. You mention various interests, such as some sort of drug policy intervention, and differential charges according to race, arrest location, and home location. For the differential charges there is a huge body of criminological literature assessing various aspects of this. Are you interested in discretionary behavior of particular officers (or racially prejudiced treatment)? Are you interested in aspects of disproportionate minority contact with the criminal justice system? There is such a wide variety of potential questions I cannot give any advice. Of what nature is the drug policy intervention? Are you interested in criminal histories and the effect of some policy? The nature of your data is pretty typical. Some recent arrest data I worked with had an average of around 3 charges per arrest (I remember 1 case having 20 charges in a single arrest). You will typically have some charges that tend to come together (and sometimes functionally redundant charges). Oftentimes drug possession charges are not alone because the offender did something else to attract an officer's attention (most often another crime), and upon arrest they were searched and drugs were found. You will undoubtedly have a core of prolific offenders in your data, and for any analysis you will want to know their histories, and likely take them into account for your analysis (do you have unique identifiers for individuals or do you have to match based on names, DOBs, and/or SSNs?) You may also have co-offending behavior that may be of interest. Most projects I have been involved in (including my own work) have dealt with the multiple charges in two ways. One is to include only the "top" charge according to some ranking criteria; the other is to analyze only a particular subset of charges. This is hardly universal advice though, and without knowing the question you are addressing it is probably not advisable to do either of these at the outset. If you clump any charges together (e.g. treat possession of weed the same as possession of cocaine) I would suggest you do it on theoretical grounds as opposed to using some sort of statistical methods (although again depending on the question some type of stat clustering method may be useful). The more specific questions you have the better this community will be able to give advice. The nature of your data may seem complicated but many people on this forum will have had experience with similar data structures (at least in various aspects).
null
CC BY-SA 2.5
null
2010-10-17T18:37:01.210
2010-10-17T19:27:17.977
2010-10-17T19:27:17.977
1036
1036
null
3679
2
null
3676
2
null
It is not clear what questions you are trying to answer but here are several ways to deal with the multiple-response data: - Arresting Officer Convert the two columns into a single count variable (1 or 2) which indicates the number of arresting officers. You will lose the arresting officers' identities but perhaps that is not of interest to you per se? - Charges Again convert to a count variable or perhaps a weighted count with the weights being the severity of the crime. The count variable represents the number of charges against the defendant. The above strategies to deal with multiple-response variables have drawbacks as you lose information (e.g., identity of officers, details of specific crimes charged, etc.) but perhaps that is acceptable to you given the research questions you want to pursue. A better answer can potentially be given but that will require some knowledge of what you are trying to accomplish with the data.
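A tiny R sketch of both conversions; the data frame `arrests` and its column names are assumptions, with unused slots coded as `NA`.
```
# Assumed layout: columns officer1, officer2 and charge1 ... charge5,
# with NA where there is no second officer or no further charge.
arrests$n_officers <- 1 + !is.na(arrests$officer2)

charge_cols       <- paste0("charge", 1:5)
arrests$n_charges <- rowSums(!is.na(arrests[, charge_cols]))
```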
null
CC BY-SA 2.5
null
2010-10-17T18:41:41.327
2010-10-17T18:41:41.327
2020-06-11T14:32:37.003
-1
null
null
3680
2
null
1621
5
null
There was a read paper last week at the Royal Statistical Society on MCMC techniques over Riemann manifolds, primarily using the Fisher information metric: [http://www.rss.org.uk/main.asp?page=1836#Oct_13_2010_Meeting](http://www.rss.org.uk/main.asp?page=1836#Oct_13_2010_Meeting) The results seem promising, though as the authors point out, in many models of interest (such as mixture models) the Fisher information has no analytic form.
null
CC BY-SA 2.5
null
2010-10-17T20:07:00.620
2010-10-17T20:12:38.687
2010-10-17T20:12:38.687
495
495
null
3681
2
null
3676
2
null
I've examined associations between multiple response categorical variables in the past basically following the log-linear approach for marginal data outlined in the following: - Strategies for Modeling Two Categorical Variables with Multiple Category Choices (Bilder, Loughlin, 2003) Your case may be more complicated since you're looking at more than just Officer by Charges. But the Bilder paper and references within may be a good start for exploring your modeling choices. The nice thing is that I was able to fit this in R without much trouble. The problem you're likely to run into is ending up with a sparse contingency table which can lead to convergence problems when fitting your log-linear model. In this respect, I think Andy's and Srikant's advice will serve you well -- you'll have to make some assumptions or simplifications suitable for your domain. Figure out what question you're interested in and see if you can reduce the dimensions in some way.
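For what it's worth, here is only a generic starting point in R: an ordinary Poisson log-linear fit on a single officer-by-charge cross-classification. This is not the marginal-table machinery of the paper above, and `tab` (with dimnames `officer` and `charge`) is an assumed contingency table.
```
# Generic log-linear starting point only -- not the full marginal-model
# approach of the paper above.  `tab` is an assumed officer-by-charge table
# with dimnames "officer" and "charge".
dat <- as.data.frame(tab)                       # columns: officer, charge, Freq
fit_indep <- glm(Freq ~ officer + charge, family = poisson, data = dat)
fit_sat   <- glm(Freq ~ officer * charge, family = poisson, data = dat)
anova(fit_indep, fit_sat, test = "Chisq")       # crude test of association
```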
null
CC BY-SA 3.0
null
2010-10-17T20:10:30.190
2017-02-01T23:42:59.623
2017-02-01T23:42:59.623
11887
251
null
3682
1
3683
null
10
505
What is the meaning of $\|a\|_p=\left(\sum _{i=1}^n \left|a_i(t)\right|{}^p\right){}^{\frac{1}{p}}$? This formula is called out on the fifth page of An Improved Data Stream Summary: The Count-Min Sketch and its Applications (which can be found [here](http://www.madalgo.au.dk/img/SumSchoo2007_Lecture%20slides/Bibliography/p14_Cormode_JAl_05.pdf)). I'm implementing the Count-Min Sketch and can understand the basic concepts just fine, but some of the finer points are explained in terms of this equation and some other terminology that I'm unacquainted with.
What is the meaning of $\|a\|_p=\left(\sum _{i=1}^n \left|a_i(t)\right|{}^p\right){}^{\frac{1}{p}}$?
CC BY-SA 2.5
null
2010-10-17T20:29:40.313
2020-04-05T00:05:45.660
2020-04-05T00:05:45.660
11887
1515
[ "descriptive-statistics", "notation" ]
3683
2
null
3682
11
null
It's the $L^p$ norm. See for example the Wikipedia articles: - $L^p$ space - Minkowski distance If you use $p = 2$, you'll find it resolves to the more familiar Euclidean norm -- i.e. the most familiar measure used as the length of the vector $a$. Other values of p give other ways of measuring length as outlined in the article -- see the sections on Euclidean norm, Taxicab norm, etc.
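A one-line R illustration of the definition, purely for intuition:
```
# L^p norm of a vector, straight from the definition
lp_norm <- function(a, p) sum(abs(a)^p)^(1 / p)

a <- c(3, -4)
lp_norm(a, 1)   # 7 : L1 ("taxicab") norm
lp_norm(a, 2)   # 5 : the usual Euclidean length
max(abs(a))     # 4 : the limiting L-infinity norm
```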
null
CC BY-SA 2.5
null
2010-10-17T20:36:56.287
2010-10-17T20:42:02.297
2010-10-17T20:42:02.297
251
251
null
3684
2
null
3614
5
null
[Characteristic functions](http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29) can make computations involving the sums and differences of [random variables](http://en.wikipedia.org/wiki/Random_variable) really easy. [Mathematica](http://www.wolfram.com/) has lots of functions to work with statistical distributions, including a builtin to transform a distribution into its characteristic function. I'd like to illustrate this with two concrete examples: (1) Suppose you wanted to determine the results of rolling a collection of dice with differing numbers of sides, e.g., roll two six-sided dice plus one eight-sided die (i.e., 2d6+d8)? Or (2) suppose you wanted to find the difference of two dice rolls (e.g., d6-d6)? An easy way to do this would be to use the characteristic functions of the underlying discrete uniform distributions. If a random variable $X$ has a [probability mass function](http://en.wikipedia.org/wiki/Probability_mass_function) $f$, then its characteristic function $\varphi_X(t)$ is just the discrete [Fourier Transform](http://en.wikipedia.org/wiki/Fourier_transform) of $f$, i.e., $\varphi_X(t) = \mathcal{F}\{f\}(t) = E[e^{i t X}]$. A theorem tells us: If the independent random variables $X$ and $Y$ have corresponding probability mass functions $f$ and $g$, then the pmf $h$ of the sum $X + Y$ of these RVs is the [convolution](http://en.wikipedia.org/wiki/Convolution) of their pmfs $h(n) = (f \ast g)(n) = \sum_{m=-\infty}^\infty f(m) g(n-m)$. We can use the convolution property of Fourier Transforms to restate this more simply in terms of characteristic functions: The characteristic function $\varphi_{X+Y}(t)$ of the sum of independent random variables $X$ and $Y$ equals the product of their characteristic functions $\varphi_{X}(t) \varphi_{Y}(t)$. This Mathematica function will make the characteristic function for an s-sided die: MakeCf[s_] := Module[{Cf}, Cf := CharacteristicFunction[DiscreteUniformDistribution[{1, s}], t]; Cf] The pmf of a distribution can be recovered from its characteristic function, because Fourier Transforms are invertible. Here is the Mathematica code to do it: RecoverPmf[Cf_] := Module[{F}, F[y_] := SeriesCoefficient[Cf /. t -> -I*Log[x], {x, 0, y}]; F] Continuing our example, let F be the pmf that results from 2d6+d8. ` F := RecoverPmf[MakeCf[6]^2 MakeCf[8]] ` There are $6^2 \cdot 8 = 288$ outcomes. The domain of support of F is $S=\{3,\ldots,20\}$. Three is the min because you're rolling three dice. And twenty is the max because $20 = 2 \cdot 6 + 8$. If you want to see the image of F, compute In:= F /@ Range[3, 20] Out= {1/288, 1/96, 1/48, 5/144, 5/96, 7/96, 13/144, 5/48, 1/9, 1/9, \ 5/48, 13/144, 7/96, 5/96, 5/144, 1/48, 1/96, 1/288} If you want to know the number of outcomes that sum to 10, compute In:= 6^2 8 F[10] Out= 30 If the independent random variables $X$ and $Y$ have corresponding probability mass functions $f$ and $g$, then the pmf $h$ of the difference $X - Y$ of these RVs is the [cross-correlation](http://en.wikipedia.org/wiki/Cross-correlation) of their pmfs $h(n) = (f \star g)(n) = \sum_{m=-\infty}^\infty f(m) g(n+m)$. We can use the cross-correlation property of Fourier Transforms to restate this more simply in terms of characteristic functions: The characteristic function $\varphi_{X-Y}(t)$ of the difference of two independent random variables ${X,Y}$ equals the product of the characteristic function $\varphi_{X}(t)$ and $\varphi_{Y}(-t)$ (N.B. 
the negative sign in front of the variable t in the second characteristic function). So, using Mathematica to find the pmf G of d6-d6: ` G := RecoverPmf[MakeCf[6] (MakeCf[6] /. t -> -t)] ` There are $6^2 = 36$ outcomes. The domain of support of G is $S=\{-5,\ldots,5\}$. -5 is the min because $-5=1-6$. And 5 is the max because $6-1=5$. If you want to see the image of G, compute In:= G /@ Range[-5, 5] Out= {1/36, 1/18, 1/12, 1/9, 5/36, 1/6, 5/36, 1/9, 1/12, 1/18, 1/36}
null
CC BY-SA 2.5
null
2010-10-17T20:44:19.483
2010-10-17T20:44:19.483
null
null
null
null
3685
1
3692
null
76
97745
Hierarchical clustering can be represented by a dendrogram. Cutting a dendrogram at a certain level gives a set of clusters. Cutting at another level gives another set of clusters. How would you pick where to cut the dendrogram? Is there something we could consider an optimal point? If I look at a dendrogram across time as it changes, should I cut at the same point?
Where to cut a dendrogram?
CC BY-SA 2.5
null
2010-10-17T21:57:55.460
2020-03-28T07:00:29.263
2010-10-17T22:01:28.690
930
1250
[ "clustering", "dendrogram" ]
3686
2
null
3685
6
null
Perhaps one of the simplest methods would be a graphical representation in which the x-axis is the number of groups and the y-axis is some evaluation metric such as the distance or the similarity. In that plot you can usually observe two differentiated regions, with the x-axis value at the 'knee' of the line being the 'optimal' number of clusters. There are also some statistics that could help with this task: Hubert's gamma, the pseudo-t², the pseudo-F or the cubic clustering criterion (CCC), among others.
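A minimal R sketch of such a plot, using the total within-cluster sum of squares from k-means on an assumed numeric matrix `x`:
```
# Elbow/"knee" plot sketch; `x` is an assumed numeric data matrix.
k   <- 1:10
wss <- sapply(k, function(i) kmeans(x, centers = i, nstart = 25)$tot.withinss)
plot(k, wss, type = "b",
     xlab = "Number of clusters", ylab = "Total within-cluster SS")
```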
null
CC BY-SA 2.5
null
2010-10-17T22:30:44.930
2010-10-18T06:13:33.683
2010-10-18T06:13:33.683
221
221
null
3687
2
null
3682
6
null
This paper does not appear to use $L^p$ norms in any essential way--every one of the results references the $L^1$ norm explicitly. The problem itself determines which norm to use. In this case interest focuses on the cardinality of multisets. A multiset is represented as a vector of counts of its elements, whence its cardinality happens to be the same as its $L^1$ norm. Often results proven for one norm may hold without any change needed in the proof for a wide range of $p$ (typically $1 \le p \le \infty$). The opportunity for greater generality at no cost will lead many papers like this to talk about $L^p$ norms. $L^p$ norms come into their own in discussions of duality in Hilbert and Banach space theory. Advanced, but introductory (it's not a contradiction!) [books on analysis](http://rads.stackoverflow.com/amzn/click/0070542341) usually cover this material thoroughly. For an introduction to some of the relationships among these norms, read about the [Holder Inequality](http://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality) and the [Minkowski Inequality](http://en.wikipedia.org/wiki/Minkowski_inequality).
null
CC BY-SA 2.5
null
2010-10-18T00:16:28.227
2010-10-18T00:16:28.227
null
null
919
null
3688
2
null
3595
3
null
SAS Enterprise Miner version 6.2 would have no problem handling 20 million observations, and it offers a variety of models which can be adapted to your situation. The issue with SAS is usually the cost, however. Here's a summary of what SAS EM can do: [SAS EM 6.2: What's New ](http://support.sas.com/documentation/cdl/en/whatsnew/62580/HTML/default/viewer.htm#emwhatsnew62.htm)
null
CC BY-SA 2.5
null
2010-10-18T03:05:26.233
2010-10-18T03:12:18.987
2010-10-18T03:12:18.987
null
null
null
3689
1
3690
null
6
268
I'm interested in exploring statistical models (or modifications thereof) designed to handle a specific type of problem. Due to my ignorance of statistical terminology, I can only describe this type of problem by (contrived) examples: Suppose we're interested in estimating the likelihood that a given cell phone customer will drop his/her service (churn) over the course of the next month (October). We are given cell phone user data from the last three months (July, August, September). In estimating churn probability for October, we'll want to make use of a model that emphasizes inputs that are consistently predictive of churn each month, suppressing the importance of those whose coefficients vary heavily on a month-by-month basis. For example, "market" may be consistently predictive of churn, but an externality specific to one of the training datasets (e.g., marketing of new phone by a competitor in a specific market in August) may affect a coefficient in a way that won't necessarily apply to October. The market coefficient should be corrected/compressed/averaged when predicting October's churn likelihood. Here's a less contrived example. In estimating the likelihood that a voter will vote in the 2012 election, we may use voter-level data from the 2000, 2004, and 2008 elections to train a logistic regression on turnout. If "party" is an input, 2000 and 2004 coefficients may differ significantly from 2008 party coefficient due to exogenous circumstances (a unique political environment). In estimating 2012 vote likelihood for a given voter, we'll want to trim/compress/average the party coefficient to "smooth out" election-year-specific differences that can't be easily quantified. Can anyone point me in the right direction? Any guidance on how to better ask this question? What terminology should I be using? Thanks in advance for your help.
Coefficient / model averaging to control for exogenous circumstances in prediction
CC BY-SA 3.0
null
2010-10-18T03:08:15.450
2011-08-08T02:26:47.610
2011-08-08T02:26:47.610
919
1611
[ "time-series", "logistic", "classification", "forecasting" ]
3690
2
null
3689
5
null
I am not sure you need any special tricks as long as the relevant factors are captured by the model. To keep things simple I will discuss the issue in the context of linear regression. The same intuition carries over to the time series setting. Suppose that you want to predict the monthly sales of cell phones for brand X, and to illustrate the ideas suppose that you use linear regression to do so. The issue you have is that there are some factors that impact sales every month (e.g., weather of the month to take a silly example) and some factors that are idiosyncratic to a month (e.g., launch of a new cell phone). You wish to account for both factors consistently. Denote the factors that impact sales every month by $C_m$ (where 'C' stands for common and 'm' for month) and idiosyncratic factors by $I_m$ (where I stands for idiosyncratic and m for month). Then, your model would be: $S_m = \beta_c C_m + \beta_i I_m + \epsilon$ In the months where there are no idiosyncratic factors, $I_m$ would be 0 and hence the impact on sales would be captured by $\beta_c$, and in the months when you do have idiosyncratic factors the impact on sales would be $\beta_c + \beta_i$, with $\beta_i$ presumably being negative if the idiosyncratic factor dampens sales for brand X. Thus, the coefficients 'automatically' adjust themselves provided all the necessary factors are present in the model. Thus, it seems to me you do not have to worry about the issue you raise if you do account for all factors, whether they are idiosyncratic to a time period or common across time periods. Edit in response to comment Your example of a concert and increased cell phone usage by young consumers is an example of an interaction effect. To put it in different words, your example is saying that the effect of the concert on cell phone use is higher for younger consumers than for older consumers. Basically, your model is saying the following: $\text{Minutes} = \beta_1\, \text{age} + \beta_2\, \text{concert} + \beta_3\, (\text{age} \times \text{concert}) + \text{error}$ Thus, the $\beta_3$ parameter will capture the differential impact, i.e., how younger and older consumers respond differently to the concert.
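A hedged R sketch of the interaction model in the edit; the data frame `d` and its columns are invented names.
```
# Sketch only -- `d` is an assumed data frame with columns minutes, age,
# and a 0/1 indicator concert.
fit <- lm(minutes ~ age * concert, data = d)  # expands to age + concert + age:concert
summary(fit)  # the age:concert coefficient plays the role of beta3 above
```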
null
CC BY-SA 2.5
null
2010-10-18T03:53:26.107
2010-10-18T14:21:03.917
2010-10-18T14:21:03.917
null
null
null
3691
2
null
3685
12
null
There isn't really an answer. It's somewhere between 1 and N. However, you can think about it from a profit perspective. For example, in marketing one uses segmentation, which is much like clustering. A message (an advertisement or letter, say) that is tailored for each individual will have the highest response rate. A generic message tailored to the average will have the lowest response rate. Having say three messages tailored to three segments will be somewhere in between. This is the revenue side. A message that is tailored to each individual will have the highest cost. A generic message tailored to the average will have the lowest cost. Three messages tailored to three segments will be somewhere in between. Say paying a writer to write a custom message costs 1000, two cost 2000 and so on. Say by using one message, your revenue will be 5000. If you segmented your customers into 2 segments, and wrote tailored messages to each segment, your response rate will be higher. Say revenues are now 7500. With three segments, a slightly higher response rate, and your revenues are 9000. One more segment, and you're at 9500. To maximize profit, keep segmenting until the marginal revenue from segmenting equals the marginal cost of segmenting. In this example, you would use three segments to maximize profit.
```
Segments  Revenue  Cost  Profit
       1     5000  1000    4000
       2     7500  2000    5500
       3     9000  3000    6000
       4     9500  4000    5500
```
null
CC BY-SA 3.0
null
2010-10-18T04:49:46.067
2011-12-24T20:01:26.503
2011-12-24T20:01:26.503
74
74
null
3692
2
null
3685
58
null
There is no definitive answer since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent and often several solutions are equally good from a theoretical point of view. Several clues were given in a related question, [What stop-criteria for agglomerative hierarchical clustering are used in practice?](https://stats.stackexchange.com/questions/2597/what-stop-criteria-for-agglomerative-hierarchical-clustering-are-used-in-practice) I generally use visual criteria, e.g. silhouette plots, and some kind of numerical criteria, like Dunn’s validity index, Hubert's gamma, the G2/G3 coefficient, or the corrected Rand index. Basically, we want to know how well the original distance matrix is approximated in the cluster space, so a measure of the [cophenetic correlation](http://en.wikipedia.org/wiki/Cophenetic_correlation) is also useful. I also use k-means, with several starting values, and the [gap statistic](http://gremlin1.gdcb.iastate.edu/MIP/gene/MicroarrayData/gapstatistics.pdf) ([mirror](https://web.archive.org/web/20110124070213/http://gremlin1.gdcb.iastate.edu/MIP/gene/MicroarrayData/gapstatistics.pdf)) to determine the number of clusters that minimizes the within-SS. The concordance with Ward hierarchical clustering gives an idea of the stability of the cluster solution (you can use `matchClasses()` in the [e1071](http://cran.r-project.org/web/packages/e1071/index.html) package for that). You will find useful resources in the CRAN Task View [Cluster](http://cran.r-project.org/web/views/Cluster.html), including [pvclust](http://cran.r-project.org/web/packages/pvclust/index.html), [fpc](http://cran.r-project.org/web/packages/fpc/index.html), [clv](http://cran.r-project.org/web/packages/clValid/index.html), among others. Also worth trying is the [clValid](http://cran.r-project.org/web/packages/clValid/index.html) package ([described](http://www.jstatsoft.org/v25/i04/paper) in the Journal of Statistical Software). Now, if your clusters change over time, this is a bit more tricky; why choose the first cluster solution rather than another? Do you expect that some individuals move from one cluster to another as a result of an underlying process evolving with time? There are some measures that try to match clusters that have a maximum absolute or relative overlap, as was suggested to you in your preceding question. Look at [Comparing Clusterings - An Overview](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.164.6189&rep=rep1&type=pdf) from Wagner and Wagner.
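As a small illustration of the cophenetic correlation mentioned above (the data matrix `x` is assumed; the Ward method name differs across R versions):
```
# Sketch: how faithfully does the dendrogram preserve the original distances?
# `x` is an assumed numeric data matrix.
d  <- dist(x)
hc <- hclust(d, method = "ward.D2")   # "ward" in older versions of R
cor(d, cophenetic(hc))                # cophenetic correlation

cl <- cutree(hc, k = 3)               # and one candidate 3-cluster cut
```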
null
CC BY-SA 3.0
null
2010-10-18T05:56:44.473
2013-12-05T19:21:11.280
2017-04-13T12:44:28.813
-1
930
null
3694
2
null
3614
4
null
Here's another way to calculate the probability distribution of the sum of two dice by hand using convolutions. To keep the example really simple, we're going to calculate the probability distribution of the sum of a three-sided die (d3) whose random variable we will call X and a two-sided die (d2) whose random variable we'll call Y. You're going to make a table. Across the top row, write the probability distribution of X (outcomes of rolling a fair d3). Down the left column, write the probability distribution of Y (outcomes of rolling a fair d2). You're going to construct the [outer product](http://en.wikipedia.org/wiki/Outer_product) of the top row of probabilities with the left column of probabilities. For example, the lower-right cell will be the product of Pr[X=3]=1/3 times Pr[Y=2]=1/2 as shown in the accompanying figure. In our simplistic example, all the cells equal 1/6. Next, you're going to sum along the oblique lines of the outer-product matrix as shown in the accompanying diagram. Each oblique line passes through one or more cells which I've colored the same: The top line passes through one blue cell, the next line passes through two red cells, and so on. ![alt text](https://i.stack.imgur.com/WQWgE.png) Each of the sums along the obliques represents a probability in the resulting distribution. For example, the sum of the red cells equals the probability of the two dice summing to 3. These probabilities are shown down the right side of the accompanying diagram. This technique can be used with any two discrete distributions with finite support. And you can apply it iteratively. For example, if you want to know the distribution of three six-sided dice (3d6), you can first calculate 2d6=d6+d6; then 3d6=d6+2d6. There is a free (but closed-license) programming language called [J](http://www.jsoftware.com/). It's an array-based language with its roots in APL. It has built-in operators to perform outer products and sums along the obliques in matrices, making the technique I illustrated quite simple to implement. In the following J code, I define two verbs. First the verb `d` constructs an array representing the pmf of an s-sided die. For example, `d 6` is the pmf of a 6-sided die. Second, the verb `conv` finds the outer product of two arrays and sums along the oblique lines. So `conv~ d 6` prints out the pmf of 2d6:
```
d=:$%
conv=:+//.@(*/)
|:(2+i.11),:conv~d 6
 2 0.0277778
 3 0.0555556
 4 0.0833333
 5  0.111111
 6  0.138889
 7  0.166667
 8  0.138889
 9  0.111111
10 0.0833333
11 0.0555556
12 0.0277778
```
As you can see, J is cryptic, but terse.
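For readers who prefer R, here is the same d3 + d2 construction, mirroring the outer-product-and-obliques recipe above:
```
# Same construction in R: outer product of the two pmfs, then sum the
# cells whose faces add to the same total ("the obliques").
px <- rep(1/3, 3)                  # fair d3, faces 1:3
py <- rep(1/2, 2)                  # fair d2, faces 1:2

op    <- outer(py, px)             # outer product of probabilities
total <- outer(1:2, 1:3, "+")      # X + Y in each cell
tapply(op, total, sum)             # pmf of the sum over 2, 3, 4, 5
```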
null
CC BY-SA 2.5
null
2010-10-18T07:19:52.593
2010-10-18T07:19:52.593
null
null
null
null
3695
1
3697
null
15
25795
When plotting a boxplot with Python matplotlib, the line halfway up each box is the median of the distribution. Is there a possibility to instead have the line at the average? Or to plot it next to the median in a different style? Also, because it is common for the line to be the median, will it really confuse my readers if I make it the average (of course I will add a note explaining what the middle line is)?
Show average instead of median in boxplot
CC BY-SA 2.5
null
2010-10-18T07:51:12.160
2010-10-18T12:02:05.757
2010-10-18T12:02:05.757
8
190
[ "data-visualization", "python", "matplotlib", "boxplot" ]
3696
2
null
3695
20
null
To answer your second question: Yes, I think it will be confusing to put the line at the mean instead of the median. The precise rules controlling the length of the 'whiskers' (if any) and treatment of outliers vary, but everyone keeps to Tukey's use of the box as displaying the median and lower and upper quartiles. For highly skewed distributions, the mean could be outside the box, which would look very odd. Common usage is that the median goes with the interquartile range, while the mean goes with the standard deviation (or standard error of the mean if you're interested in inference rather than data description). If you want to show the mean visually, I'd use a different symbol to display it to avoid confusion.
null
CC BY-SA 2.5
null
2010-10-18T08:05:49.140
2010-10-18T08:28:14.397
2010-10-18T08:28:14.397
449
449
null
3697
2
null
3695
26
null
This code makes the boxplots, then places a circle marking the mean for each box. You can use a different symbol by specifying the [marker](http://matplotlib.sourceforge.net/api/pyplot_api.html?highlight=scatter#matplotlib.pyplot.scatter) argument in the call to `scatter`.
```
import numpy as np
import pylab

# 3 boxes, each built from a vector of 100 random values
data = [np.random.rand(100) for i in range(3)]
pylab.boxplot(data)

# mark the mean of each box with a circle
means = [np.mean(x) for x in data]
pylab.scatter([1, 2, 3], means)
```
![alt text](https://i.stack.imgur.com/nTpJH.png)
null
CC BY-SA 2.5
null
2010-10-18T08:40:10.110
2010-10-18T08:40:10.110
null
null
251
null
3698
1
3770
null
2
1002
I am trying to understand a published analysis. This is the data of interest:
```
        D1>0   D1<0   total
D2>0       7      2       9
D2<0       9     15      24
total     16     17      33
```
The author notes that 17/33 is 51.5% and states: > "we expect about 50% of the D1's to be negative, and that is what we actually observe here (z=.08, p=n.s)". Now I assume that the z comparison is 17/33 vs 16.5/33 (chance?) and that the statistic employed is the z-ratio for the significance of the difference between two proportions. However my own calculation for this comes out as z=.123. Can anyone help with the discrepancy? UPDATE: The author has responded and admits that the published z is an error. He gives the corrected answer as z=0.17. I am still not sure how he worked it out though: > We are not considering the difference between two proportions, since they are not independent 16/33 = 17/33 -1. We are interested whether the proportion of D1<0 significantly deviated from the expected value of 0.5 The nearest I can get is the R prop.test function. Is this directly comparable?
```
prop.test(c(17, 16.5), c(33, 33))

        2-sample test for equality of proportions with continuity correction

data:  c(17, 16.5) out of c(33, 33)
X-squared = 0, df = 1, p-value = 1
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.2411995  0.2715025
sample estimates:
   prop 1    prop 2
0.5151515 0.5000000
```
Understanding published data: z-ratio for proportions
CC BY-SA 2.5
null
2010-10-18T09:08:51.310
2010-10-28T10:40:02.167
2010-10-28T10:40:02.167
null
1614
[ "contingency-tables" ]
3700
2
null
3636
0
null
Based on Shane's answer I entered some data into Excel. What I came up with is that I need to multiply the square of the difference from the mean by the number of seconds until the next sample. This assumes the reading held steady until the next reading was taken.
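In case it helps anyone reading later, here is a minimal R sketch of that weighting (illustrative only; `times` and `readings` are made-up names and values, and the mean here is itself duration-weighted, which may or may not match the original spreadsheet):

```
# irregularly spaced readings and their timestamps in seconds (made-up data)
times    <- c(0, 10, 25, 26, 60)
readings <- c(5.0, 5.2, 4.9, 5.1, 5.0)

# each reading is assumed to hold until the next sample arrives
durations <- diff(times)
x <- readings[-length(readings)]   # the last reading has no following interval

wmean <- sum(x * durations) / sum(durations)              # duration-weighted mean
wvar  <- sum(durations * (x - wmean)^2) / sum(durations)  # duration-weighted variance
```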
null
CC BY-SA 2.5
null
2010-10-18T12:24:04.177
2010-10-18T12:24:04.177
null
null
1595
null
3701
2
null
3698
2
null
I'm not sure the discrepancy is worth worrying about. The exact p-value is clearly 1 for a two-sided test of p = 0.5 given 17 positive responses out of 33, as there's no integer closer to 33/2 than 17. With small or moderate N, as here, there's no good reason not to do the exact test (even without a PC, since the cdf of the binomial is commonly included in statistical tables). With a modern PC and decent software it's no problem even for large N, although for large N the resulting p-value will be very similar to that from a z-test anyway.
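If it helps, the exact test could be run in R along these lines (a sketch, using `binom.test` from base R with the counts given in the question):

```
# exact binomial test of 17 "negative D1" cases out of 33 against p = 0.5
binom.test(17, 33, p = 0.5)
# the two-sided p-value is 1, as expected, since 17 is the integer closest to 33/2
```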
null
CC BY-SA 2.5
null
2010-10-18T12:27:47.530
2010-10-18T12:27:47.530
null
null
449
null
3702
1
null
null
1
1132
I am a newbie and I am trying out functional data analysis. I have an 8x11 data matrix; how can I input it into R as an object in this form:

```
$hgtm
      boy01  boy02  boy03  boy04  boy05
1      81.3   76.2   76.8   74.1   74.2
1.25   84.2   80.4   79.8   78.4   76.3
1.5    86.4   83.2   82.6   82.6   78.3
1.75   88.9   85.4   84.7   85.4   80.3
2      91.4   87.6   86.7   88.1   82.2
3     101.1   97.0   94.2   98.6   89.4
4     109.5  104.6  100.4  104.4   96.9
5     115.8  112.3  107.1  111.0  104.1

$hgtf
      girl01 girl02 girl03 girl04 girl05
1       76.2   74.6   78.2   77.7     76
1.25    80.4   78.0   81.8   80.5     80
1.5     83.3   82.0   85.4   83.3     83
1.75    85.7   86.9   87.9   87.0     86
2       87.7   90.0   89.6   90.3     89
3       96.0   94.9   97.1   98.6     96
4      103.8  102.1  109.2  106.3    103
5      110.7  109.2  116.2  113.9    110

$age
[1] 1.00 1.25 1.50 1.75 2.00 3.00 4.00 5.00
```

Very much appreciated.
Data manipulation in R for functional data analysis
CC BY-SA 2.5
null
2010-10-18T13:48:33.180
2019-12-03T03:17:04.480
2019-12-03T03:17:04.480
11887
1615
[ "r", "panel-data", "functional-data-analysis" ]
3703
2
null
3702
3
null
It looks like you want to use a named list, since each object is of different dimensions above. Here is an example with some dummy data:

```
hgtm <- matrix(1:100, ncol=5)
hgtf <- matrix(1:100, ncol=5)
age  <- 1:10
namvan <- list(hgtm=hgtm, hgtf=hgtf, age=age)
```

Now you can reference each object by name:

```
> str(namvan)
List of 3
 $ hgtm: int [1:20, 1:5] 1 2 3 4 5 6 7 8 9 10 ...
 $ hgtf: int [1:20, 1:5] 1 2 3 4 5 6 7 8 9 10 ...
 $ age : int [1:10] 1 2 3 4 5 6 7 8 9 10
> namvan$age
 [1]  1  2  3  4  5  6  7  8  9 10
```
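To get the printed form shown in the question, with ages as row names and subject IDs as column names, the `dimnames` could be set when the matrices are built (a sketch with made-up heights; the `boy01`-style labels simply mirror the question):

```
age  <- c(1, 1.25, 1.5, 1.75, 2, 3, 4, 5)

# 8 ages x 5 boys of made-up heights, with named rows and columns
hgtm <- matrix(rnorm(40, mean = 90, sd = 10), nrow = 8, ncol = 5,
               dimnames = list(as.character(age),
                               c("boy01", "boy02", "boy03", "boy04", "boy05")))

namvan <- list(hgtm = hgtm, age = age)
namvan$hgtm["1.5", "boy03"]   # look up one boy's height at age 1.5
```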
null
CC BY-SA 2.5
null
2010-10-18T13:56:05.490
2010-10-18T13:56:05.490
null
null
5
null
3704
1
3712
null
3
5393
I posted this question earlier and am rewriting it in hopes of getting some guidance. I am using a weighted regression (after propensity score matching) to obtain estimates of the effects of a treatment (treat) on an outcome for different income quintiles. I include interactions of the quintiles and the treatment in the equation, as shown below:

```
regress outcome treat quintile2 quin3 quin4 quin5 treatXquin2 treatXquin3 treatXquin4 treatXquin5
```

To obtain the effect of treatment on quintile2 I add the coefficient on treat to the coefficient on treatXquin2. However, what do I do with the standard errors? Can I simply add the standard errors on the two coefficients together? Can I add the t-stats together? Any advice would be much appreciated. I'm not a stats expert but have a basic understanding of econometrics.
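For reference, the variance of a sum of two estimated coefficients involves their covariance as well as their variances, Var(b1 + b2) = Var(b1) + Var(b2) + 2 Cov(b1, b2), so simply adding the two standard errors is not the same thing. Below is a minimal R sketch of that calculation on made-up data (the variable names only mimic the Stata command above, and this is illustrative rather than a definitive recipe); Stata's `lincom` postestimation command (e.g. `lincom treat + treatXquin2`) performs an equivalent computation directly.

```
# made-up data mimicking the setup above
set.seed(1)
n <- 500
treat <- rbinom(n, 1, 0.5)
quin2 <- rbinom(n, 1, 0.2)
outcome <- 1 + 0.5 * treat + 0.3 * quin2 + 0.2 * treat * quin2 + rnorm(n)

fit <- lm(outcome ~ treat * quin2)
b <- coef(fit)
V <- vcov(fit)                                 # covariance matrix of the coefficients

est <- unname(b["treat"] + b["treat:quin2"])   # treatment effect in quintile 2
se  <- sqrt(V["treat", "treat"] + V["treat:quin2", "treat:quin2"] +
            2 * V["treat", "treat:quin2"])
c(estimate = est, se = se, t = est / se)
```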
Adding coefficients to obtain interaction effects - can I add standard errors?
CC BY-SA 2.5
null
2010-10-18T14:21:06.687
2010-10-18T22:08:54.650
2010-10-18T17:48:59.000
449
834
[ "regression", "standard-error", "multivariate-analysis", "stata" ]
3705
2
null
3702
4
null
As to how you would get the data into R and into one of these formats, we'd need to know more about what your input file looks like and the format that it is in. However, here are some general tips on formatting the type of data that you have for analysis in any system.

Singer (Applied Longitudinal Data Analysis) suggests two generally useful layouts for the statistical analysis of longitudinal data: the person-level (multivariate) structure or the person-period (univariate) structure. The latter is generally preferable for a number of reasons.

The person-level data structure (or the multivariate format) contains one row of data for each observational unit (such as persons) and a variable for each measurement period. Age would not be included in the data set and would be implicit in the levels of your time factor (e.g., in a repeated measures ANOVA). This structure can lead naturally to summaries that aren't very meaningful, is less efficient than it could be, and cannot account for your unequally spaced observations (differing age intervals between observations) or time-varying covariates. That data setup might look something like this...

```
Multivariate

ID  Gender  height1  height2  height3  height4  height5
1   Boy        76.2     74.6     78.2     77.7       76
2   Boy        80.4     78.0     81.8     80.5       80
3   Boy        83.3     82.0     85.4     83.3       83
4   Girl       96.0     94.9     97.1     98.6       96
5   Girl       87.7     90.0     89.6     90.3       89
6   Girl       85.7     86.9     87.9     87.0       86
```

A preferable layout is often the person-period layout (or the univariate format), where each individual has a record for each time at which they were observed. The person-period dataset has a number of advantages. First, it leads to more natural summaries of the data; e.g., getting an average by group, by time, or by group and time is now straightforward. Second, the dataset will accommodate unequal intervals in the time dimension, such as you have here. In addition, if you have them, you could add columns for any other demographic covariates, and these could differ over time. Also, data in this format is ready for modern analytical techniques such as multilevel modeling. Finally, the univariate data structure is consistent with good practice in database design and normalization, increasing efficiency and making it appropriate for the typical query structure. The univariate layout would look something like this...

```
Univariate

ID  Age   Gender  Height
1   1     Boy       76.2
1   1.5   Boy       74.6
1   3     Boy       78.2
1   5     Boy       77.7
2   1     Girl      80.4
2   1.5   Girl      81.8
2   3     Girl      80.5
2   5     Girl      80
3   1     Boy      115.8
3   1.5   Boy      112.3
3   3     Boy      111.0
3   5     Boy      104.1
```
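Converting between the two layouts can be done in base R with `reshape()`; here is a minimal sketch, using a small made-up wide data frame rather than the questioner's actual file:

```
# small made-up wide (person-level) data set
wide <- data.frame(ID = 1:3,
                   Gender  = c("Boy", "Boy", "Girl"),
                   height1 = c(76.2, 80.4, 83.3),
                   height2 = c(74.6, 78.0, 82.0),
                   height3 = c(78.2, 81.8, 85.4))

# reshape to the long (person-period) layout: one row per person per occasion
long <- reshape(wide,
                varying   = c("height1", "height2", "height3"),
                v.names   = "Height",
                timevar   = "Occasion",
                idvar     = "ID",
                direction = "long")

# replace the occasion index with the actual ages at measurement
long$Age <- c(1, 1.25, 1.5)[long$Occasion]
```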
null
CC BY-SA 2.5
null
2010-10-18T14:59:44.807
2010-10-18T18:24:25.100
2010-10-18T18:24:25.100
485
485
null
3706
2
null
3704
-1
null
I am quite sure it's fine to just add the standard errors on the two coefficients together. I can then obtain the t-stats by dividing the sum of the coefficients by the sum of the standard errors. If anyone disagrees please let me know.
null
CC BY-SA 2.5
null
2010-10-18T15:09:48.413
2010-10-18T15:09:48.413
null
null
834
null