Columns (each record below lists these fields in this order): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
3281
1
3282
null
12
744
I am looking for books / textbooks on statistics based on fractal mathematics. I know it is not a very well known area and it is rather difficult to find good literature. Any suggestions are welcome (books, textbooks, online materials).
Statistics based on fractal mathematics
CC BY-SA 2.5
null
2010-10-03T12:58:13.803
2015-11-04T11:20:18.663
2015-11-04T11:20:18.663
22468
1250
[ "references", "fractal" ]
3282
2
null
3281
10
null
- Tutorials in Contemporary Nonlinear Methods for the Behavioral Sciences (web book). It has several chapters discussing fractals.
null
CC BY-SA 2.5
null
2010-10-03T13:15:14.493
2010-10-03T13:15:14.493
null
null
183
null
3283
2
null
3275
7
null
Let $p_W$ be the probability of a win, $p_T$ that of a tie, and $p_L$ of a loss. Because the probabilities sum to 1, so does their tenth power. Expanding this out expresses the tenth power as a linear combination of the monomials $p_W^W p_T^T p_L^L$ where each term picks out the probability of $W$ wins, $T$ ties, and $L$ losses. In more detail, $$(p_W + p_T + p_L)^{10} = \sum_{W+T+L=10} {W+T+L \choose {W \quad T \quad L}}p_W^W p_T^T p_L^L$$ where the multinomial coefficient is computed as $${W+T+L \choose {W \quad T \quad L}} = \frac{(W+T+L)!}{W! T! L!}\text{.}$$ For example, the probability of 7 wins, 2 ties, and 1 loss is given by $$\Pr[W=7, T=2, L=1] = {7+2+1 \choose {7 \quad 2 \quad 1}}p_W^7 p_T^2 p_L^1$$ $$=\frac{10!}{7! 2! 1!}p_W^7 p_T^2 p_L^1 = 360p_W^7 p_T^2 p_L^1\text{.}$$ This is called the [Multinomial Distribution](http://en.wikipedia.org/wiki/Multinomial_distribution). When the probabilities vary from one game to the other, there is nothing simpler than enumerating all the possible sequences that produce a given number of wins, ties, and losses. The multinomial coefficients count the numbers of such sequences. The value of 360 in the example indicates there are 360 distinct ways you can arrive at 7 wins, 2 ties, and a loss. Each one of those ways contributes its own distinct sequence of probabilities as given by each of the ten games. To obtain a complete table of the 66 possible answers requires computing all $(1+1+1)^{10} = 59049$ products of ten probabilities.
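A quick numerical check of the multinomial calculation above in R; the per-game probabilities are hypothetical, and the pmf is available directly as `dmultinom`:

```r
# Hypothetical per-game probabilities, assumed identical across the 10 games
p <- c(W = 0.5, T = 0.2, L = 0.3)

# Multinomial probability of 7 wins, 2 ties, 1 loss
dmultinom(c(7, 2, 1), size = 10, prob = p)

# The same value built by hand from the multinomial coefficient
factorial(10) / (factorial(7) * factorial(2) * factorial(1)) *
  p["W"]^7 * p["T"]^2 * p["L"]^1
```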
null
CC BY-SA 2.5
null
2010-10-03T13:58:51.183
2010-10-03T14:08:45.343
2010-10-03T14:08:45.343
919
919
null
3284
2
null
3271
3
null
It seems you just want to do a fairly standard analysis, so I am not the best person to answer your question; still, I would suggest you dive deeper into Bioconductor. It has a lot of useful stuff, although finding what you want can be painful. For instance, the [Mfuzz](http://www.bioconductor.org/packages/release/bioc/html/Mfuzz.html) package looks promising.
null
CC BY-SA 2.5
null
2010-10-03T16:49:21.400
2010-10-03T16:49:21.400
null
null
null
null
3285
2
null
3271
5
null
As a complement to @mbq's response (`Mfuzz` looks fine), I'll just put some references (PDFs) about clustering of time-course gene expression data: - Futschik, ME and Carlisle, B (2005). Noise robust clustering of gene expression time-course data. Journal of Bioinformatics and Computational Biology, 3(4), 965-988. - Luan, Y and Li, H (2003). Clustering of time-course gene expression data using a mixed-effects model with B-splines. Bioinformatics, 19(4), 474-482. - Tai, YC and Speed, TP (2006). A multivariate empirical Bayes statistic for replicated microarray time course data. The Annals of Statistics, 34, 2387–2412. - Schliep, A, Steinhoff, C, and Schönhuth, A (2004). Robust inference of groups in gene expression time-courses using mixtures of HMMs. Bioinformatics, 20(1), i283-i228. - Costa, IG, de Carvalho, F, and de Souto, MCP (2004). Comparative analysis of clustering methods for gene expression time course data. Genetics and Molecular Biology, 27(4), 623-631. - Inoue, LYT, Neira, M, Nelson, C, Gleave, M, and Etzioni, R (2006). Cluster-based network model for time-course gene expression data. Biostatistics, 8(3), 507-525. - Phang, TL, Neville, MC, Rudolph, M, and Hunter, L (2003). Trajectory Clustering: A Non-Parametric Method for Grouping Gene Expression Time Courses with Applications to Mammary Development. Pacific Symposium on Biocomputing, 8, 351-362. Did you try the `timecourse` package (as suggested by @csgillespie in his [handout](http://www.mas.ncl.ac.uk/~ncsg3/microarray/Spaper.pdf))?
null
CC BY-SA 2.5
null
2010-10-03T20:34:43.010
2010-10-03T22:59:40.033
2010-10-03T22:59:40.033
930
930
null
3286
1
3288
null
7
5331
I'm not a statistician, but I sometimes need to play around with data. I have two data sets, lists of values in the unit interval. I've plotted them as histograms, so I have an intuitive idea of how "far apart" they are. But I want to do something a little more formal. My first thought was to just sum the differences of the values in the bins, but this isn't that satisfactory. Then I thought of taking a three-bin average and sum differences over these. (Apologies if I'm mangling statistics terminology) But I was thinking I'm probably reinventing the wheel, so I came here. Similar questions seem to point to "Kolmogorov Smirnov tests" or something like that. So my question is: is this the right method to calculate how far these data sets are apart? And is there an easy way to do this in R? Ideally just `KStest(data1,data2)` or something? Edit To emphasize, I'm particularly interested in ways to measure how far the data are apart directly rather than fitting a distribution to each and then measuring the distance between distributions. [Does that even make sense? I guess numerical calculations in R will be done by sampling from a distribution anyway.]
Distance between empirically generated distributions (in R)
CC BY-SA 4.0
null
2010-10-03T21:10:14.903
2021-09-16T13:39:41.300
2021-09-16T13:39:41.300
11887
5141
[ "r", "distributions", "distance" ]
3287
1
3417
null
6
1119
I'd like to ensure that I understand the process correctly. This is a follow-up question to [Interpreting 2D correspondence analysis plots](https://stats.stackexchange.com/questions/3270/interpreting-2d-correspondence-analysis-plots) ``` library(reshape) library(ca) df <- read.csv(file="http://www.bertelsen.ca/R/smokers.csv") colnames(df)[7] <- "value" ## make reshape smart df <- cast(df, SMOKER ~ GEO) ## reshape data row.names(df) <- df$SMOKER ## rename rows df <- df[2:ncol(df)] ## reset df df <- df[-4,] ## Let's only look at people who have smoked df <- df[c("AB","BC","ON","QC")] ## and only the biggest 4 provinces (KISS) plot(ca(df)) summary(ca(df)) ``` Output ``` Principal inertias (eigenvalues): dim value % cum% scree plot 1 0.002523 99.9 99.9 ************************* 2 3e-06000 0.1 100.0 3 00000000 0.0 100.0 -------- ----- Total: 0.002526 100.0 Rows: name mass qlt inr k=1 cor ctr k=2 cor ctr 1 | Crrn | 265 1000 191 | -43 1000 191 | 1 0 43 | 2 | Dlys | 201 1000 351 | -66 1000 351 | -1 0 70 | 3 | Frmr | 470 1000 432 | 48 1000 432 | -1 0 98 | 4 | Occs | 65 1000 26 | 31 964 25 | 6 36 789 | Columns: name mass qlt inr k=1 cor ctr k=2 cor ctr 1 | AB | 116 1000 146 | -56 1000 146 | -1 0 34 | 2 | BC | 142 1000 775 | 118 1000 776 | -1 0 41 | 3 | ON | 434 1000 7 | -6 909 6 | 2 91 540 | 4 | QC | 308 1000 72 | -24 994 72 | -2 6 385 | ``` Looking at `summary(ca(df))` I see that nearly 100% of the inertia is described by the row profile for both modalities (Type of smoker and Province, respectively). ![CA of Smoker Types in ON, QC, AB, and BC](https://i.stack.imgur.com/ygmU7.jpg) What (I think) should be immediate takeaways are: - You are more likely to be a daily smoker if you live in AB, QC, or ON - You are more likely to be a former smoker if you live in BC - You are least likely to be a daily smoker if you live in BC (this fits with the Canada-wide understanding of BC's "active lifestyle" culture) What could we say about occasional smokers? What would your analysis tell us through this correspondence plot and its associated summary? Data Source: Statistics Canada, Canadian Community Health Survey (CCHS 3.1), 2005. The CANSIM table 105-0427 was an update of CANSIM table [105-0227](http://cansim2.statcan.gc.ca/cgi-win/cnsmcgi.pgm?Lang=E&RootDir=CII/&Array_Pick=1&ArrayId=105-0227&C2DB=PRD&ResultTemplate=CII%2FCII___). More current data are in CANSIM tables [105-0501](http://cansim2.statcan.gc.ca/cgi-win/cnsmcgi.pgm?Lang=E&RootDir=CII/&Array_Pick=1&ArrayId=105-0501&C2DB=PRD&ResultTemplate=CII%2FCII___) and [105-0502](http://cansim2.statcan.gc.ca/cgi-win/cnsmcgi.pgm?Lang=E&RootDir=CII/&Array_Pick=1&ArrayId=105-0502&C2DB=PRD&ResultTemplate=CII%2FCII___).
Interpreting 2D correspondence analysis plots (Part II)
CC BY-SA 3.0
null
2010-10-03T21:24:06.223
2011-06-15T20:09:27.880
2017-04-13T12:44:26.710
-1
776
[ "r", "biplot", "correspondence-analysis" ]
3288
2
null
3286
7
null
You can do a Kolmogorov-Smirnov test using the `ks.test` function. See `?ks.test`. In general, when you are looking for a function in R (and you don't know its name) try using `??`. For instance, `??"Kolmogorov Smirnov"`. If nothing comes up `RSiteSearch("whatever you're looking for")` should help :)
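A minimal sketch of the call for the question's two samples; the data below are invented stand-ins:

```r
set.seed(1)
data1 <- runif(100)        # stand-in for the first list of values in [0, 1]
data2 <- rbeta(100, 2, 2)  # stand-in for the second

# Two-sample Kolmogorov-Smirnov test: the statistic D is the largest vertical
# distance between the two empirical CDFs, so it doubles as a distance measure
ks.test(data1, data2)
```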
null
CC BY-SA 2.5
null
2010-10-03T21:59:50.240
2010-10-03T21:59:50.240
null
null
582
null
3289
1
null
null
3
195
a recent posting on [math.stackexchange](https://math.stackexchange.com/questions/5922/probability-of-card-decks-being-in-the-same-order-for-n-shufflers-over-x-amount-o) reminds me of a somewhat less ambitious$^1$ question i have been meaning to ask. suppose 3 people each have a deck of $M$ cards numbered from 1 to $M$, in random order. together, they turn over one card at a time. on any such trial, a match occurs if at least 2 cards show the same number. let $X$ be the number of matches that occur in the $M$ trials. what is the probability distribution of $X$? [for starters, what is ${\rm P}(X=0)$?] an historical note: the solution for 2 players is well-known; cf. [Feller, vol.1,3$^{rd}$ Ed., p 107](http://rads.stackoverflow.com/amzn/click/0471257087): $\quad {\rm P}(X=0) := p_{0} = 1 - 1 + \frac{1}{2!} - \frac{1}{3!} + \cdots \pm \frac{1}{M!}$, which, for $M$ not too small, is about 1/e. [for $M$ = 10, the two numbers already agree, on rounding, to 5 decimal places.] more generally, $\quad {\rm P}(X=k) := p_{k} = \frac{1}{k!} \left\{ 1 - 1 + \frac{1}{2!} - \frac{1}{3!} + \cdots \pm \frac{1}{(M-k)!}\right\} \sim \frac{e^{-1}}{k!}.$ [so as $M \to\infty, X$ tends in law to a Poisson variable with mean 1.] for $k=M-1$ this gives: $p_{M-1} = 0$ [which is obvious]. also, $p_M= \frac{1}{M!}$. Feller obtains these formulae using the inclusion-exclusion principle. --- $^1$ in the sense that it involves only 3 people, rather than everyone in the world, and they shuffle their decks only once, rather than repeatedly since the beginning of time.
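A brute-force Monte Carlo check of the question is straightforward in R; the deck size and number of simulated games below are arbitrary choices:

```r
M <- 10      # cards per deck
R <- 20000   # simulated games

X <- replicate(R, {
  decks <- replicate(3, sample(M))  # three independently shuffled decks (one per column)
  # a trial is a match if at least two of the three cards shown are equal
  sum(apply(decks, 1, function(trial) any(duplicated(trial))))
})

mean(X == 0)   # simulated P(X = 0)
table(X) / R   # simulated distribution of X
```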
A matching problem for 3 decks of cards
CC BY-SA 2.5
null
2010-10-03T22:28:59.097
2010-10-04T09:07:35.723
2017-04-13T12:19:38.447
-1
1112
[ "probability", "games" ]
3290
2
null
3268
5
null
What you describe in your example is not only a network of relationships, but a network of "flows" between all groups. Like you suggested in a) (and as Jeromy said as well) your graphic will likely be a visualization of one group (or node) linked to other groups. Most of my knowledge of this subject is visualizing flows between geographic spaces, but many of the same issues still apply. I think this paper does a good job summarizing visualization techniques in regards to mapping flows. [From spatial interaction data to spatial interaction information? Geovisualisation and spatial structures of migration from the 2001 UK census](http://dx.doi.org/10.1016/j.compenvurbsys.2009.01.007) by: Alasdair Rae Computers, Environment and Urban Systems, Vol. 33, No. 3. (May 2009), pp. 161-178. (PDF [here](http://mediamapping.wikischolars.columbia.edu/file/view/Rae+-+2009+-+From+spatial+interaction+data+to+spatial+interacti.pdf)) Typically visualizing flows in geographic space has three main problems. One is that it is difficult to distinguish between in-flows and out-flows. The second is that long lines tend to dominate the graphic. The third is that over-lapping or too many flows tend to make the graphic look very noisy. The second problem may be solved by however you organize the nodes on your graphic (like Jeromy suggested, cluster nodes together with strong relationships). It may also be easier to use small multiple graphs to distinguish between in-flows, out-flows, and reciprocal flows (i.e. map your nodes to a specific space and then have separate graphics displaying in-flows, out-flows). I have not seen any examples of flows in networks like you describe, so I do not know if the self-organizing graphics have the problem of over-lapping lines. If you have experience programming in Python you may want to check out the [NetworkX](http://networkx.lanl.gov/) package. (The Gephi package Ars linked to looks pretty awesome as well). This is similar in nature to questions brought up on the GIS stackexchange forum, and [here](https://gis.stackexchange.com/questions/778/representation-of-network-flows) is a question with answers you may be interested in. Good Luck
null
CC BY-SA 2.5
null
2010-10-04T02:22:18.003
2010-10-04T03:08:41.037
2017-04-13T12:33:47.693
-1
1036
null
3291
1
3312
null
11
523
Suppose you had $200 US to build a (very) small library of statistics books. What would your choices be? You may assume free shipping from Amazon, and any freely available texts from the internet are fair game, but assume a 5 cent charge per page to print. (I was inspired by a mailing from Dover books, but most of their offerings seem a bit out of date).
Statistics library with knapsack constraint
CC BY-SA 3.0
null
2010-10-04T04:31:35.903
2015-11-04T01:34:18.227
2015-11-03T22:34:24.613
22468
795
[ "references", "puzzle" ]
3292
2
null
3291
1
null
A tad pricey, but [Bruning and Kintz's Computational Handbook of Statistics](http://rads.stackoverflow.com/amzn/click/0673990850) ($95.80) would certainly fit in your knapsack.
null
CC BY-SA 2.5
null
2010-10-04T04:48:07.010
2010-10-04T20:29:00.157
2010-10-04T20:29:00.157
795
830
null
3293
2
null
3291
5
null
- Harrell, FE. Regression Modeling Strategies (Springer, 2010, 2nd ed.) - Izenman, AJ. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning (Springer, 2008) You should have money left to print part of The [Handbook of Computational Statistics](http://rads.stackoverflow.com/amzn/click/3540404643) (Gentle et al., Springer 2004) and [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (Hastie et al., Springer 2009 2nd ed.) that are circulating on the web. As the latter mostly covers the same topics as Izenman's book (as pointed out by [@kwak](https://stats.stackexchange.com/users/603/kwak)), either may be replaced by one of the [Handbook of Statistics](http://www.elsevier.com/wps/find/simple_search.cws_home?boost=true&needs_keyword=true&adv=false&action=simple_search&default=default&keywords=handbook+of+statistics&submitTopNav=Search) volumes published by North-Holland, depending on your field of interests.
null
CC BY-SA 2.5
null
2010-10-04T07:15:14.340
2010-10-04T15:49:02.887
2017-04-13T12:44:48.803
-1
930
null
3294
1
null
null
59
14461
I am looking for resources (tutorials, textbooks, webcasts, etc.) to learn about Markov chains and HMMs. My background is as a biologist, and I'm currently involved in a bioinformatics-related project. Also, what mathematical background do I need to have a sufficient understanding of Markov models & HMMs? I've been looking around using Google, but so far I have yet to find a good introductory tutorial. I'm sure somebody here knows better.
Resources for learning Markov chain and hidden Markov models
CC BY-SA 3.0
null
2010-10-04T08:33:05.990
2021-04-14T14:31:30.577
2015-11-03T10:25:08.487
22468
1495
[ "references", "markov-process", "hidden-markov-model", "bioinformatics" ]
3295
2
null
3271
3
null
Just to add to the other answers (which look like they should solve your problem), did you try using standard clustering algorithms for your data when constructing your dendrogram? For example, ``` heatmap.2(dataset, <standard args>, hclustfun = function(c){hclust(c, method= 'average')} ) ``` Instead of using the average distance for clustering, you can also use "ward", "single", "median", ... See `?hclust` for a full list. To extract clusters, use the `hclust` command directly and then use the `cutree` command, specifying either the number of clusters `k` or a cut height `h`. For example, ``` hc <- hclust(dist(dataset)) cutree(hc, k = 3) ``` More details can be found at my [webpage](http://www.mas.ncl.ac.uk/~ncsg3/microarray/).
null
CC BY-SA 2.5
null
2010-10-04T09:04:29.530
2010-10-04T21:13:21.687
2010-10-04T21:13:21.687
8
8
null
3296
1
3308
null
8
391
I have a pretty large data set (~300 cases with ~40 continuous attributes, binary labeled) which I used to create several alternative predictive models. To do this, the set was divided into training and validation subsets (~60:40%, respectively). I have noticed that there are several samples (both in the training and the validation subsets) that are being misclassified by all or most of the alternative models that I test. I suspect that there is something special about these "trouble making" samples. What are the general guidelines for discovering the possible reasons behind the misbehavior of the models on specific samples? Update 1. I'm using logistic regression for this task. The parameter selection is done by exhaustively searching combinations of up to 4 predictors with 10-fold cross validation. It is worth mentioning that the p-values that are calculated by the model for the misclassified samples are usually very different from the default classification threshold of 0.5. In other words, not only is the model wrong about those cases, it is also very confident about itself. Update 2 - what I have already done. I agree that insights from the study domain are crucial, but to date we have failed to discover anything significant. Also, I tried to remove the "bad" samples from the training set, while keeping the validation set and the parameter selection algorithm untouched. This led to better performance on the training set (naturally), but also significantly improved the performance on the validation set. Is this an indication that the "bad" samples were actually "bad"?
Dealing with "trouble maker" samples
CC BY-SA 3.0
null
2010-10-04T09:36:25.393
2016-08-10T13:56:49.377
2016-08-10T13:56:49.377
22468
1496
[ "hypothesis-testing", "logistic", "modeling", "outliers", "large-data" ]
3297
2
null
3286
4
null
A standard way to compare distributions is to use the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence). As usual, there's an R package that does this for you! From the `?KLdiv` help page in the [flexmix](http://cran.r-project.org/web/packages/flexmix/) package, we get the following bit of code: ``` ## Gaussian and Student t are much closer to each other than ## to the uniform: > library(flexmix) > x = seq(-3, 3, length=200) > y = cbind(u=dunif(x), n=dnorm(x), t=dt(x, df=10)) > matplot(x, y, type="l") > round(KLdiv(y),3) u n t u 0.000 1.082 1.108 n 4.661 0.000 0.004 t 4.686 0.005 0.000 ``` Notice that the comparison isn't symmetric: so uniform vs Normal is different from Normal vs Uniform. You didn't explain why you wanted to compare distributions. Giving a use-case may get you more specific answers.
null
CC BY-SA 2.5
null
2010-10-04T09:49:33.380
2010-10-04T09:49:33.380
null
null
8
null
3298
2
null
3294
19
null
Here are some tutorials (available as PDFs): - Dugad and Desai, A tutorial on hidden markov models - Valeria De Fonzo, Filippo Aluffi-Pentini and Valerio Parisi (2007). Hidden Markov Models in Bioinformatics. Current Bioinformatics, 2, 49-61. - Smith, K. Hidden Markov Models in Bioinformatics with Application to Gene Finding in Human DNA Also take a look at [Bioconductor](http://www.bioconductor.org) tutorials. I assume you want free resources; otherwise, [Bioinformatics](http://rads.stackoverflow.com/amzn/click/3540241663) from Polanski and Kimmel (Springer, 2007) provides a nice overview (§2.8-2.9) and applications (Part II).
null
CC BY-SA 3.0
null
2010-10-04T10:16:23.660
2012-10-11T01:28:00.303
2012-10-11T01:28:00.303
12318
930
null
3299
2
null
2956
3
null
Although the reasoning about calculating the LR from the SS values is quite fair, a least-squares method is equivalent but not the same as a likelihood estimate. (The difference can be illustrated e.g. in the calculation of the se, which is divided by (n-1) in a least-squares approach and divided by n in a maximum-likelihood approach. The maximum likelihood estimate is thus consistent, but slightly biased). This has some implications: you can calculate the LR as the likelihood is proportional to $\frac{1}{s}$, but that doesn't give you the likelihood of your anova model itself. It just tells you something about the ratio. As the AIC is classically defined in terms of the likelihood, I'm not sure if you can use the AIC as you intend. I've looked at the spreadsheet, but the values for the "uncorrected LR within" (I'm also not completely following what exactly you're trying to calculate there) seem highly unlikely to me. On a side note, the power of LR testing is that you can just contrast the models you want, you don't have to do that for all of them (which lowers the multitesting error). If you do this for every term, your LR is completely equivalent to an F test, and in the case of least squares, as far as I know even numerically about the same. Your mileage may vary, but I've never been confident mixing concepts of two different frameworks (i.e. least squares versus maximum likelihood). Personally, I'd report the F statistics and implement the LR in a function that allows you to compare models (e.g. the anova function for lme models, which does exactly that). My 2 cents. PS: I did look at your code, but couldn't really figure out all the variables. If you would annotate your code using comments, that would make life a bit easier again. The EXCEL sheet is also not the easiest to figure out. I'll check again later to see if I can make something from it.
null
CC BY-SA 2.5
null
2010-10-04T12:45:23.117
2010-10-04T13:04:25.893
2010-10-04T13:04:25.893
1124
1124
null
3300
2
null
3262
7
null
I teach undergraduate biology students, and The Fear is rife among them. I generally start by telling them three things: 1) Statistics is not maths, it's logic. And if you're doing a science degree at a respected university, you evidently don't have any problems with using logic to solve problems. 2) If you can add, subtract, multiply, divide and tell whether one number is bigger than another, you can do all the maths necessary for an undergrad stats course. 3) People learn differently, so if you don't understand one lecturer/textbook/explanation, ask or find another one. (I try to give 2-3 types of explanation for a given idea where I can and tell them to remember the one that makes sense to them). Finally, I err on the side of visual explanations as opposed to purely verbal or mathematical ones, as this seems to work for the majority of students.
null
CC BY-SA 2.5
null
2010-10-04T13:02:30.140
2010-10-04T13:02:30.140
null
null
266
null
3301
2
null
3291
1
null
What topics are you interested in? I learned from [KNNL](http://rads.stackoverflow.com/amzn/click/007310874X) ($157.50), but oh gosh I couldn't imagine carrying this thing around -- you'd be asking for a reading list on scoliosis correction. "General Statistics" is certainly an area of interest, but I'm curious if you're more interested in depth, breadth, or some mix of both.
null
CC BY-SA 2.5
null
2010-10-04T14:08:55.490
2010-10-04T20:26:32.840
2010-10-04T20:26:32.840
795
1499
null
3302
2
null
3291
3
null
As a social scientist I would have to vouch for the [Sage Green Books](http://www.sagepub.com/booksSeries.nav?forceIgnore=True&sortBy=bookPublishDate&prodTypes=books&sizeBooks=&reverseFlag=true&series=Series486&&start=1&_requestid=381729). If you are a bargain shopper you would be able to get between 10 to 20 books for 200 dollars (assuming no shipping). For those not familiar these are all introductions to various methodologies aimed at people with no more than an introduction to statistical inference in most circumstances.
null
CC BY-SA 2.5
null
2010-10-04T15:04:32.177
2010-10-04T15:04:32.177
null
null
1036
null
3303
2
null
3262
10
null
[Frederick Mosteller said:](http://www.edwardtufte.com/tufte/mosteller_p1) > When I think of teaching a class, I think of five main components, not all ordinarily used in one lecture. They are: - Large-scale application - Physical demonstration - Small-scale application (specific) - Statistical or probabilistic principle - Proof or plausibility argument Tufte also mentioned (I don't have the source here but I think it was from Mosteller as well) the PGP framework: - Particular - General - Particular The idea is that you should start with an example (it helps if the example is relevant to the students), then develop the general solution, then close with another example.
null
CC BY-SA 2.5
null
2010-10-04T15:40:44.507
2010-10-04T15:40:44.507
null
null
666
null
3306
2
null
3286
4
null
First thing: define "distance". That sounds like a stupid question, but what do you mean by distance? Is the data paired? Then - and only then - does it make sense to look at the sum of (squared) differences to decide about the distance between two datasets. If not, you have to resort to other means. The next question is: is the data distributed in the same manner? If so, you can see the difference between the means as the "location shift" of your data (or the distance between both datasets). But if neither of both is true, how do you define the distance between datasets then? Do you take the shape of the distribution into account, for example? You really have to think about those issues before trying to calculate a distance. This said: One (naive) possibility is to use the mean of the differences between all possible x-y combinations. Formalized this is: $$Dist=\sqrt{\frac{1}{n_1 n_2}\sum_{i=1}^{n_1} \sum_{j=1}^{n_2}(X_i - Y_j)^2}$$ In R: ``` x <- rnorm(10) y <- rnorm(10,2) sqrt(mean(outer(x,y,"-")^2)) ``` If you allow for negative distances, you can drop the sqrt and the ^2: ``` mean(outer(x,y,"-")) ``` A simulation easily shows that this will indeed give the difference between the means in the example, as both distributions are equal in this case. But be warned that negative distances are not allowed in many applications. In the first scenario, the number will always be a bit larger than the difference between the means. In any case, if you're interested in the difference between the center of your datasets, define the center and calculate the difference between those centers. That might very well be what you're after. Contrary to the other suggestions, this approach does not make any assumptions about the distribution of your data. Which makes it applicable in all situations, but also difficult to interpret.
null
CC BY-SA 4.0
null
2010-10-04T16:33:09.597
2021-09-16T12:57:18.510
2021-09-16T12:57:18.510
265676
1124
null
3307
1
null
null
7
647
I am looking for a fractal-based statistical measure which could be used as an alternative to correlation between two variables (I know that the Hurst exponent can be used for auto-correlation). Is anyone aware of such measures?
Fractal alternative to correlation
CC BY-SA 2.5
null
2010-10-04T16:44:53.327
2010-10-17T15:59:56.937
2010-10-07T22:52:00.950
1499
1250
[ "correlation", "fractal" ]
3308
2
null
3296
9
null
I think this will require domain expertise. If I were you, I would spend time examining these samples and their provenance, to figure out what (if anything) is wrong with them. If the samples were collected by a colleague working in some application domain, they may be able to help you with this. Sometimes, samples can indeed be 'bad'. For example, they could be mislabelled, collected under different circumstances from the rest, collected using off-calibration equipment, or there may be many other reasons why they're outliers. However, you shouldn't just say "these are probably bad" and delete them; much better to identify what's wrong with them so that you can verify that they're bad and justify their deletion. One reason for caution is that they might not actually be bad, just drawn from part of your sample space that's not well represented in the data. In that case, you shouldn't toss them out, you should (if possible) collect more like them. Another reason is that the samples at the extremities of a concept are the ones that may be hardest to classify correctly, but if they are not actually bad and you remove them, you just end up with new samples at the extremities. To take an artificial example, suppose you are classifying samples as Hot/NotHot, and everything above 50 degrees should be hot. Samples at 49.9 degrees and 50.1 degrees are quite similar even though they're different sides of your decision boundary, so they're just hard to classify and they're not outliers that should be tossed. Also, if you remove them, you may find that two new samples (49.8 and 50.2 degrees) that were previously being classified correctly are now getting misclassified. One final point: when you say that samples in the training set are generally being misclassified, do you mean under a cross-validation scheme or literally that when you test on the training data that they are misclassified? If the latter, it could be that the classification methods you are using are not able to capture the data variance sufficiently well. Hope this helps a little ...
null
CC BY-SA 2.5
null
2010-10-04T17:39:16.307
2010-10-04T17:39:16.307
null
null
1436
null
3309
1
3310
null
5
764
For example: ``` k <- 100 R <- 10000 max.g <- numeric(R) for(i in 1:R) max.g[i] <- max(rnorm(k)) hist(max.g) # We can see it's right tailed... ``` I remember once encountering that there is a name for this type of distribution, but the name eludes me.
Is there an analytical expression for the distribution of the max of a normal k sample?
CC BY-SA 2.5
null
2010-10-04T17:43:27.613
2010-10-04T20:03:08.383
2010-10-04T20:03:08.383
null
253
[ "distributions", "extreme-value", "normal-distribution" ]
3310
2
null
3309
6
null
Properly normalized, it's closely approximated by a [Gumbel distribution](http://en.wikipedia.org/wiki/Gumbel_distribution) as shown by [Extreme value theory](http://en.wikipedia.org/wiki/Extreme_value_theory). Alternative names are provided in the links.
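For the simulation in the question, the exact distribution is also available in closed form: the CDF of the maximum of $k$ iid standard normals is $\Phi(x)^k$, so the density is $k\,\varphi(x)\,\Phi(x)^{k-1}$. A short R check against the simulated histogram:

```r
k <- 100
R <- 10000
max.g <- replicate(R, max(rnorm(k)))

hist(max.g, freq = FALSE, breaks = 40)
# exact density of the maximum: F(x) = pnorm(x)^k  =>  f(x) = k * dnorm(x) * pnorm(x)^(k - 1)
curve(k * dnorm(x) * pnorm(x)^(k - 1), add = TRUE, col = "red", lwd = 2)
```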
null
CC BY-SA 2.5
null
2010-10-04T17:47:51.937
2010-10-04T17:47:51.937
null
null
919
null
3311
2
null
3252
14
null
I just drop some references about data dredging and clinical studies for the interested reader. This is intended to extend [@onestop](https://stats.stackexchange.com/users/449/onestop)'s fine answer. I tried to avoid articles focusing only on multiple comparisons or design issues, although studies with multiple endpoints continue to present challenging and controversial discussions (long after Rothman's claims about [useless adjustments](http://www.ncbi.nlm.nih.gov/pubmed/2081237?dopt=AbstractPlus&holding=f1000,f1000m,isrctn), Epidemiology 1990, 1: 43-46; or see Feise's review in [BMC Medical Research Methodology](http://www.biomedcentral.com/1471-2288/2/8/) 2002, 2:8). My understanding is that, although I talked about exploratory data analysis, my question more generally addresses the use of data mining, with its potential pitfalls, in parallel to hypothesis-driven testing. - Koh, HC and Tan, G (2005). Data Mining Applications in Healthcare. Journal of Healthcare Information Management, 19(2), 64-72. - Ioannidis, JPA (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. - Anderson, DR, Link, WA, Johnson, DH, and Burnham, KP (2001). Suggestions for Presenting the Results of Data Analysis. The Journal of Wildlife Management, 65(3), 373-378. -- this echoes @onestop's comment about the fact that we have to acknowledge the data-driven exploration/modeling beyond the initial set of hypotheses - Michels, KB and Rosner, BA (1996). Data trawling: to fish or not to fish. Lancet, 348, 1152-1153. - Lord, SJ, Gebski, VJ, and Keech, AC (2004). Multiple analyses in clinical trials: sound science or data dredging?. The Medical Journal of Australia, 181(8), 452-454. - Smith, GD and Ebrahim, S (2002). Data dredging, bias, or confounding. BMJ, 325, 1437-1438. - Afshartous, D and Wolf, M (2007). Avoiding ‘data snooping’ in multilevel and mixed effects models. Journal of the Royal Statistical Society A, 170(4), 1035–1059 - Anderson, DR, Burnham, KP, Gould, WR, and Cherry, S (2001). Concerns about finding effects that are actually spurious. Widlife Society Bulletin, 29(1), 311-316.
null
CC BY-SA 2.5
null
2010-10-04T17:56:18.403
2010-11-01T10:11:29.257
2017-04-13T12:44:42.893
-1
930
null
3312
2
null
3291
11
null
[All of Statistics: A Concise Course in Statistical Inference](http://rads.stackoverflow.com/amzn/click/1441923225) - US$ 79.11 [Statistical Models: Theory and Practice](http://rads.stackoverflow.com/amzn/click/0521743850) - US$ 40.00 [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://rads.stackoverflow.com/amzn/click/052168689X) - US$ 47.99 Grand total of US$ 167.11 As chl suggested, you still have money left to print nearly all of [Hastie](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (~746 pages, or $37).
null
CC BY-SA 2.5
null
2010-10-04T18:24:36.603
2010-10-05T01:56:29.007
2010-10-05T01:56:29.007
666
666
null
3313
1
3319
null
5
745
I have an S-Plus [library](https://home.zhaw.ch/~brw/index.html?tools) which I'd like to convert to R. I am a programmer, but I don't know anything about S-Plus or R. From my research it seems that they are highly compatible. Is that true? The code I want to convert only uses core S-Plus libraries. I have attached a picture of the library as seen in the S-Plus 8.0 Object Explorer. Besides function source files, there are a few entries which I'm not sure how to transpose to R. For example, the last 5 (oneDay, ...), which seem to be some sort of global variables, and have specific values assigned to them. What would be the equivalent of them in R? ![alt text](https://i.stack.imgur.com/102FK.png)
How hard is to convert a library from S-PLUS 8.0 to R?
CC BY-SA 2.5
null
2010-10-04T19:47:23.197
2010-12-15T08:30:42.657
2010-10-04T21:15:49.663
8
749
[ "r", "splus" ]
3314
2
null
3309
7
null
You will find exact expressions for the full pdf of the $n^{th}$ order statistics (as a function of $n$, the sample size) in the following paper: Shanti S. Gupta (1961). Percentage Points and Modes of Order Statistics from the Normal Distribution. Ann. Math. Statist., Volume 32, Number 3, 888-893. It also includes exact expressions for the medians and means of the $n^{th}$ order statistics as a function of $n$ (I could type a few here, but the paper is un-gated). Some of these expressions are surprisingly simple. H/T to John D Cook for the pointer.
null
CC BY-SA 2.5
null
2010-10-04T19:52:45.160
2010-10-04T19:52:45.160
null
null
603
null
3315
2
null
3313
2
null
Your S+ functions should work in R, except if they have dependencies on some libraries that are not currently supported in R (most of the time, it is the reverse, though). About your "global" variables, the easiest way is to use single assignment in your script file (although you can explicitly write to the global environment from within a function, see `help("<<-")`, but this is not recommended), unless you plan to make an R package. In this case, you should look at the R documentation, [Writing R Extensions](http://cran.r-project.org/doc/manuals/R-exts.html). Maybe you can also take a look at [Using R and S-Plus together: The Best of Both Worlds](http://www.omegahat.org/RinS/Docs/overview.pdf)
null
CC BY-SA 2.5
null
2010-10-04T20:09:29.433
2010-10-04T20:09:29.433
null
null
930
null
3316
1
3325
null
15
2414
Supposing one has a time series from which one can take various measurements such as period, maximum, minimum, average etc. and then use these to create a model sine wave with the same attributes, are there any statistical approaches one can use that could quantify how closely the actual data fit the assumed model? The number of data points in the series would range between 10 and 50 points. A very simplistic first thought of mine was to ascribe a value to the directional movement of the sine wave, i.e. +1 +1 +1 +1 -1 -1 -1 -1 -1 -1 -1 -1 +1 +1 +1 +1, do the same to the actual data, and then somehow quantify the degree of similarity of directional movement. Edit: Having given more thought to what I really want to do with my data, and in light of responses to my original question, what I need is a decision-making algorithm to choose between competing assumptions: namely that my data is basically linear (or trending) with noise that could possibly have cyclic elements; my data is basically cyclic with no directional trend to speak of; the data is essentially just noise; or it is transitioning between any of these states. My thoughts now are to maybe combine some form of Bayesian analysis and a Euclidean/LMS metric. The steps in this approach would be: - Create the assumed sine wave from data measurements - Fit a LMS straight line to the data - Derive a Euclidean or LMS metric for departures from the original data for each of the above - Create a Bayesian prior for each based on this metric, i.e. 60% of the combined departures attach to one, 40% to the other, hence favour the 40% - Slide a window one data point along the data and repeat the above to obtain new % metrics for this slightly changed data set - this is the new evidence - then do the Bayesian analysis to create a posterior and change the probabilities that favour each assumption - Repeat along the whole data set (3000+ data points) with this sliding window (window length 10-50 data points) The hope/intent is to identify the predominant/favoured assumption at any point in the data set and how this changes with time. Any comments on this potential methodology would be welcome, particularly on how I could actually implement the Bayesian analysis part.
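For the Bayesian part of the proposed scheme, here is a minimal sketch of the update step. It assumes each window yields a residual sum of squares (RSS) for each competing fit and converts that to a likelihood under a Gaussian-error assumption; the noise standard deviation, the RSS values, and the model names are all hypothetical.

```r
# Update model probabilities from one window's residual sums of squares,
# assuming Gaussian errors with a plugged-in standard deviation 'sigma'
update_posterior <- function(prior, rss, n, sigma = 1) {
  loglik <- -rss / (2 * sigma^2) - n * log(sigma)  # log-likelihood up to a shared constant
  post <- prior * exp(loglik - max(loglik))        # subtract the max for numerical stability
  post / sum(post)
}

prior <- c(sine = 1/3, linear = 1/3, noise = 1/3)
rss   <- c(sine = 4.2, linear = 6.8, noise = 9.1)  # hypothetical within-window fits
post  <- update_posterior(prior, rss, n = 30)
post   # becomes the prior for the next window position
```

Note that this compares goodness of fit only and ignores differences in model complexity; a fuller treatment would penalise the more flexible model (e.g. via BIC) or integrate over parameter priors.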
Statistical similarity of time series
CC BY-SA 2.5
null
2010-10-04T20:21:16.290
2013-02-09T06:17:31.850
2010-10-07T01:12:03.433
226
226
[ "time-series", "classification" ]
3317
2
null
3200
20
null
I think this is a very good question; it gets to the heart of the contentious multiple testing "problem" that plagues fields ranging from epidemiology to econometrics. After all, how can we know if the significance we find is spurious or not? How true is our multivariable model? In terms of technical approaches to offset the likelihood of publishing noise variables, I would heartily agree with 'whuber' that using part of your sample as training data and the rest as test data is a good idea. This is an approach that gets discussed in the technical literature, so if you take the time you can probably find out some good guidelines for when and how to use it. But to strike more directly at the philosophy of multiple testing, I suggest you read the articles I reference below, some of which support the position that adjustment for multiple testing is often harmful (costs power), unnecessary, and may even be a logical fallacy. I for one do not automatically accept the claim that our ability to investigate one potential predictor is inexorably reduced by the investigation of another. The family-wise Type 1 error rate may increase as we include more predictors in a given model, but so long as we do not go beyond the limits of our sample size, the probability of Type 1 error for each individual predictor is constant; and controlling for family-wise error does not illuminate which specific variable is noise and which is not. Of course, there are cogent counter-arguments as well. So, as long as you limit your list of potential variables to those which are plausible (ie, would have known pathways to the outcome) then the risk of spuriousness is already handled fairly well. However, I would add that a predictive model is not as concerned with the "truth-value" of its predictors as a causal model; there may be a great deal of confounding in the model, but so long as we explain a large degree of the variance then we aren't too concerned. This makes the job easier, at least in one sense. Cheers, Brenden, Biostatistical Consultant PS: you may want to do a zero-inflated Poisson regression for the data you describe, instead of two separate regressions. - Perneger, T.V. What's wrong with Bonferroni adjustments. BMJ 1998; 316 : 1236 - Cook, R.J. & Farewell, V.T. Multiplicity considerations in the design and analysis of clinical trials. Journal of the Royal Statistical Society, Series A 1996; Vol. 159, No. 1 : 93-110 - Rothman, K.J. No adjustments are needed for multiple comparisons. Epidemiology 1990; Vol. 1, No. 1 : 43-46 - Marshall, J.R. Data dredging and noteworthiness. Epidemiology 1990; Vol. 1, No. 1 : 5-7 - Greenland, S. & Robins, J.M. Empirical-Bayes adjustments for multiple comparisons are sometimes useful. Epidemiology 1991; Vol. 2, No. 4 : 244-251
null
CC BY-SA 3.0
null
2010-10-04T20:40:30.743
2012-06-05T19:04:53.177
2012-06-05T19:04:53.177
7290
1501
null
3318
2
null
175
13
null
Sharpie, Taking your question literally, I would argue that there are no statistical tests or rules of thumb that can be used as a basis for excluding outliers in linear regression analysis (as opposed to determining whether or not a given observation is an outlier). This must come from subject-area knowledge. I think the best way to start is to ask whether the outliers even make sense, especially given the other variables you've collected. For example, is it really reasonable that you have a 600 pound woman in your study, which recruited from various sports injury clinics? Or, isn't it strange that a person is listing 55 years of professional experience when they're only 60 years old? And so forth. Hopefully, you then have a reasonable basis for either throwing them out or getting the data compilers to double-check the records for you. I would also suggest robust regression methods and the transparent reporting of dropped observations, as suggested by Rob and Chris respectively. Hope this helps, Brenden
null
CC BY-SA 2.5
null
2010-10-04T21:29:50.943
2010-10-04T21:29:50.943
null
null
1501
null
3319
2
null
3313
4
null
I would say that your functions will mostly work in R, but that is far from being universally true. I went through the same conversion myself before, and it was actually quite painful, especially given all the functions that are actually written in a foreign language (like C). As an example: it looks like you have some S-Plus `timeSeries` objects in there. It can be painful to convert functions that use or rely on the timeSeries object in S-Plus, and you will have a slightly rude awakening because the R time series libraries simply aren't as standardized or as user friendly. This is not to criticize the R libraries, but simply to point out that S-Plus was created by one company and there is only one `timeSeries` object and everything uses it consistently.
null
CC BY-SA 2.5
null
2010-10-04T21:31:19.277
2010-10-04T21:31:19.277
null
null
5
null
3321
1
null
null
52
71031
I am looking to rank data so that, in some cases, the largest value has a rank of 1. I am relatively new to R, but I don't see how to adjust this setting in the rank function. ``` x <- c(23,45,12,67,34,89) rank(x) ``` generates: ``` [1] 2 4 1 5 3 6 ``` when I want it to be: ``` [1] 5 3 6 2 4 1 ``` I assume this is very basic, but any help you can provide will be very much appreciated.
Rank in R - descending order
CC BY-SA 2.5
null
2010-10-04T21:57:45.903
2010-10-04T22:43:34.287
null
null
569
[ "r" ]
3322
2
null
3321
98
null
You could negate `x`: ``` > rank(-x) [1] 5 3 6 2 4 1 ```
null
CC BY-SA 2.5
null
2010-10-04T22:43:23.163
2010-10-04T22:43:23.163
null
null
1390
null
3324
1
null
null
12
18705
I have inherited some data analysis code that, not being an econometrician, I am struggling to understand. One model runs an instrumental variables regression with the following Stata command ``` ivreg my_dv var1 var2 var3 (L.my_dv = D2.my_dv D3.my_dv D4.my_dv) ``` This dataset is a panel with multiple sequential observations for this set of variables. Why is this code using the lagged values of the DV as instruments? As I understand it (from digging into an old textbook), IV estimation is used when there is a problem because of a regressor being correlated with the error term. However, nothing is mentioned of choosing lags of the DV as instruments. A comment on this line of the code mentions "causality". Any help in figuring out what was the goal here would be most welcome.
Why use a lagged DV as an instrumental variable?
CC BY-SA 3.0
null
2010-10-04T22:44:57.123
2013-07-20T22:59:43.043
2013-07-20T22:59:43.043
22047
1007
[ "regression", "stata", "instrumental-variables" ]
3325
2
null
3316
7
null
The Euclidean distance is a common metric in machine learning. The following slides provide a good overview of this area along with references: - Making Time-series Classification More Accurate Using Learned Constraints - Introduction to Machine Learning Research on Time Series Also see the references on Keogh's benchmarks page for time series classification: - UCR Time Series Classification/Clustering Page
null
CC BY-SA 2.5
null
2010-10-05T00:45:41.003
2010-10-05T00:45:41.003
null
null
251
null
3326
2
null
3109
2
null
I believe that your use of the word "tracking" may be leading your web queries off topic. Are you interested in time-series? Could you provide a simple example of "tracks clusters of datapoints"? Using the word "tracks" do you mean over time, through space, or perhaps in reference to a chemical or physical process? Further, with "clusters of datapoints", are you talking about objects (image recognition), are you talking about survey data & searching for trends, or something more akin to tracking the correlated movements of stock prices? Answer: You might consider googling "Spatial Temporal Cluster Algorithms" or perhaps "Agent based modeling" or "Spatio-temporal point processes". But again, the suggestions can be refined with a bit of insight on your desired application or context.
null
CC BY-SA 2.5
null
2010-10-05T03:20:46.547
2010-10-05T03:20:46.547
null
null
1499
null
3327
2
null
3316
5
null
If you have a specific model you wish to compare against: I would recommend least-squares as a metric to minimize and score possible parameter values against a specific dataset. All you basically have to do is plug in your parameter estimates, use those to generate predicted values, and compute the average squared deviation from the true values. However, you might consider turning your question around slightly: "Which model would best fit my data?" In which case I would suggest making an assumption of a normally distributed error term ~ something one could argue is akin to the least squares assumption. Then, depending on your choice of model, you could make an assumption about how you think the other model parameters are distributed (assigning a Bayesian prior) and then use something like the MCMC package from R to sample from the distribution of the parameters. Then you could look at posterior means & variances to get an idea of which model has the best fit.
null
CC BY-SA 2.5
null
2010-10-05T03:34:51.777
2010-10-05T03:34:51.777
null
null
1499
null
3328
1
3334
null
11
1695
Question: With a 10-dimensional MCMC chain, let's say I'm prepared to hand you a matrix of the draws: 100,000 iterations (rows) by 10 parameters (columns), how best can I identify the posterior modes? I'm especially concerned with multiple modes. Background: I consider myself a computationally savvy statistician, but when a colleague asked me this question, I was ashamed that I couldn't come up with a reasonable answer. The primary concern is that multiple modes may appear, but only if at least eight or so of the ten dimensions are considered. My first thought would be to use a kernel density estimate, but a search through R revealed nothing promising for problems of greater than three dimensions. The colleague has proposed an ad-hoc binning strategy in ten dimensions and searching for a maximum, but my concern is that bandwidth might either lead to significant sparsity problems or to a lack of resolution to discern multiple modes. That said, I'd happily accept suggestions for automated bandwidth selection, links to a 10-dimensional kernel density estimator, or anything else which you know about. Concerns: - We believe that the distribution may be quite skewed; hence, we wish to identify the posterior mode(s) and not the posterior means. - We are concerned that there may be several posterior modes. - If possible, we'd prefer an R based suggestion. But any algorithm will do as long as it isn't incredibly difficult to implement. I guess I'd prefer not to implement an N-d kernel density estimator with automated bandwidth selection from scratch.
Given a 10D MCMC chain, how can I determine its posterior mode(s) in R?
CC BY-SA 2.5
null
2010-10-05T04:00:11.767
2011-04-18T19:47:30.297
2011-01-17T23:59:07.313
449
1499
[ "r", "bayesian", "markov-chain-montecarlo", "k-nearest-neighbour", "mode" ]
3329
2
null
2635
12
null
A Gamma distribution is not a conjugate prior for a Gamma distribution. There is a conjugate prior for the Gamma distribution, developed by Miller (1980), whose details you can find on [Wikipedia](http://en.wikipedia.org/wiki/Conjugate_prior) and also in the pdf linked in footnote 6. Check out section 3.2 on page 25 of [this paper](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.157.5540); there is a prior with four parameters: p, q, r, & s.
null
CC BY-SA 3.0
null
2010-10-05T04:15:12.117
2013-08-02T02:39:59.817
2013-08-02T02:39:59.817
1499
1499
null
3330
2
null
3324
5
null
I don't know Stata, so I can't comment on the specific model. But the use of lagged variables is a fairly common approach when dealing with simultaneity bias in general and creating instrumental variables in particular. Say you have a feedback between two variables in your model: the independent variable (such as price) and the dependent variable (such as quantity). Then both are endogenous (their causes arise from within the model) and perturbations to the error term will affect both variables. To solve this, you want to make the independent variable (price) exogenous so that perturbations in the error affect only the dependent variable (quantity). This is accomplished by creating new exogenous variables by regressing the other exogenous variables in your model on price. These new exogenous variables are your instrumental variables (IVs). The IVs are derived from exogenous terms and thus not correlated with the error. But to do this, you need to figure out which variables are exogenous so they can be used to derive the IVs. We can note that lagged variables "occurred" in the past and thus can't be correlated with the error in the present. Lagged variables are thus exogenous and become convenient candidates for deriving IVs. (However, note that the preceding argument fails when the errors are autocorrelated.) A good introduction and reference to this is [Introductory econometrics: a modern approach](http://books.google.com/books?id=64vt5TDBNLwC) by Wooldridge.
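To make the lagged-instrument idea concrete, here is a hedged sketch with simulated data, using `ivreg` from the `AER` package for the 2SLS step. The data-generating process and variable names are invented for illustration, and (as noted above) the construction relies on the errors not being autocorrelated.

```r
library(AER)   # provides ivreg() for two-stage least squares
set.seed(42)

n <- 200
shock <- rnorm(n)   # common shock hitting both equations -> simultaneity
price <- as.numeric(stats::filter(rnorm(n), 0.7, method = "recursive")) + 0.5 * shock
quantity <- 2 - 1.5 * price + shock + rnorm(n)

d <- data.frame(quantity = quantity[-1],
                price    = price[-1],
                l.price  = head(price, -1))   # lagged price as the instrument

summary(ivreg(quantity ~ price | l.price, data = d))   # compare with lm(quantity ~ price, d)
```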
null
CC BY-SA 2.5
null
2010-10-05T05:28:39.143
2010-10-05T05:39:47.353
2010-10-05T05:39:47.353
159
251
null
3331
1
3363
null
56
27955
I have sales data for a series of outlets, and want to categorise them based on the shape of their curves over time. The data looks roughly like this (but obviously isn't random, and has some missing data): ``` n.quarters <- 100 n.stores <- 20 if (exists("test.data")){ rm(test.data) } for (i in 1:n.stores){ interval <- runif(1, 1, 200) new.df <- data.frame( var0 = interval + c(0, cumsum(runif(n.quarters - 1, -5, 5))), date = seq.Date(as.Date("1990-03-30"), by="3 month", length.out=n.quarters), store = rep(paste("Store", i, sep=""), n.quarters)) if (exists("test.data")){ test.data <- rbind(test.data, new.df) } else { test.data <- new.df } } test.data$store <- factor(test.data$store) ``` I would like to know how I can cluster based on the shape of the curves in R. I had considered the following approach: - Create a new column by linearly transforming each store's var0 to a value between 0.0 and 1.0 for the entire time series. - Cluster these transformed curves using the kml package in R. I have two questions: - Is this a reasonable exploratory approach? - How can I transform my data into the longitudinal data format that kml will understand? Any R snippets would be much appreciated!
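Not the `kml` route asked about, but one common shape-based alternative is to cluster on a correlation-based distance, which discards level and scale and keeps only the shape of each curve. A hedged sketch using the simulated `test.data` above (the linkage method and the number of clusters are arbitrary choices):

```r
# date x store matrix of var0 (tapply does the long-to-wide reshaping)
m <- with(test.data, tapply(var0, list(date, store), mean))

# 1 - correlation between stores as a 'shape' distance; pairwise handles missing data
d <- as.dist(1 - cor(m, use = "pairwise.complete.obs"))

hc <- hclust(d, method = "average")
plot(hc)            # dendrogram of stores grouped by curve shape
cutree(hc, k = 4)   # e.g. cut into 4 shape clusters
```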
Is it possible to do time-series clustering based on curve shape?
CC BY-SA 2.5
null
2010-10-05T07:45:19.550
2022-07-04T14:33:43.710
2010-10-05T23:05:43.947
179
179
[ "r", "time-series", "clustering" ]
3332
2
null
3328
2
null
Have you considered 'PRIM / bump hunting'? (See e.g. Section 9.3 of 'The Elements of Statistical Learning' by Tibshirani et al., or ask your favourite search engine.) Not sure whether that's implemented in R though. [As far as I understood, you are trying to find the mode of the probability density from which your 100'000 rows are drawn. So your problem would be partially solved by finding an appropriate `density estimation` method.]
null
CC BY-SA 2.5
null
2010-10-05T07:54:19.357
2010-10-05T07:54:19.357
null
null
961
null
3333
1
null
null
2
8270
I asked a question earlier on [here](https://stats.stackexchange.com/questions/2483/how-to-model-and-test-a-decision-support-system-e-g-a-terrorist-warning-system). Essentially, I am trying to evaluate a warning system that consists of a light bulb (of a specific color) being switched on, to indicate a predicted warning threat level. Srikant came up with a simple solution which involved calculating the posterior probabilities of the warning system, using Bayes' theorem. I am now contemplating implementing this in R. However, I am not quite sure how to proceed, since I am an R newbie. The data looks like this: ``` warning light, event level occurred? red, yes none, yes green, no etc ... ``` I would like to write a simple R script that will help me to calculate: a). the posterior probability of an event occurring, for a given state of the light bulb b). a CI to attach to the posterior probability obtained in (a) above Since I am new to R, I would be grateful for the steps (and commands) required to do the above. My data will be in a simple csv file in the format described, so I can simply scan() it into R. How to proceed from there is what I need help with.
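A minimal sketch of the kind of commands involved, assuming the CSV has two columns named `light` and `event` holding the values shown; the file name, column names, and level labels ("red", "yes") are hypothetical. It estimates P(event | light) directly as a conditional relative frequency, which is what Bayes' theorem gives when applied to the empirical joint table.

```r
d <- read.csv("warnings.csv", strip.white = TRUE)

# (a) P(event = yes | light state): row-wise conditional relative frequencies
tab <- table(d$light, d$event)
prop.table(tab, margin = 1)

# (b) a simple binomial confidence interval for one light state, e.g. "red"
binom.test(tab["red", "yes"], sum(tab["red", ]))$conf.int
```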
Calculating posterior probabilities in R?
CC BY-SA 2.5
null
2010-10-05T07:54:48.897
2010-10-05T19:41:51.290
2017-04-13T12:44:24.947
-1
1216
[ "r", "bayesian" ]
3334
2
null
3328
9
null
Have you considered using a nearest neighbour approach? E.g. building a list of the `k` nearest neighbours for each of the 100'000 points and then considering the data point with the smallest distance to its `kth` neighbour to be a mode. In other words: find the point with the 'smallest bubble' containing `k` other points around it. I'm not sure how robust this is, and the choice of `k` obviously influences the results.
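A minimal sketch of this 'smallest bubble' idea using brute-force distances on a subsample; the stand-in draws, the subsample size, and `k` are arbitrary choices:

```r
set.seed(1)
draws <- matrix(rnorm(2000 * 10), ncol = 10)  # stand-in for (a subsample of) the MCMC draws
k <- 50

D <- as.matrix(dist(draws))                        # all pairwise Euclidean distances
radius <- apply(D, 1, function(d) sort(d)[k + 1])  # distance to each point's k-th neighbour
draws[which.min(radius), ]                         # the point with the tightest bubble
```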
null
CC BY-SA 2.5
null
2010-10-05T07:58:19.880
2010-10-05T07:58:19.880
null
null
961
null
3335
2
null
3328
4
null
This is only a partial answer. I recently used [figtree](http://www.umiacs.umd.edu/~morariu/figtree/) for multidimensional kernel density estimates. It's a C package and I got it to work fairly easily. However, I only used it to estimate the density at particular points, not calculate summary statistics.
null
CC BY-SA 2.5
null
2010-10-05T09:39:19.830
2010-10-05T09:39:19.830
null
null
8
null
3336
2
null
3324
7
null
Edit: Given the clarification on the Stata code provided by Andy W below, I changed my answer to better address the question. You will find the old version of my answer below the current one. It seems your code is a clumsy attempt at DIY'ing the Arellano-Bond estimator (assuming ivreg estimates with 2SLS). You can find more details on the use and logic of the A/B estimator [in this nice review paper](http://www.cemmap.ac.uk/wps/cwp0209.pdf) as well as in [this](http://www.nyu.edu/its/pubs/connect/fall03/pdfs/yaffee_primer.pdf) broader introduction. In a nutshell and within 3 lines: although the A/B estimator is indeed a (generalized) IV estimator, it is not used to address any issue of causality. The IVs in this context are used to provide efficient estimation of the AR coefficient in the context of panel data. I would recommend against re-inventing the wheel here, and instead using a ready-made toolbox to perform such estimations. For Stata, you can use the [XTABOND2](http://ideas.repec.org/c/boc/bocode/s435901.html) package (or XTABOND if you are running Stata 11). --- old response: A simple example will help you here. Suppose you have two variables $x_t$ and $y_t$ sampled over time such that the correlation between $x_t$ and $y_t$ is very high. You would like to make a claim about $x_t$ causing $y_t$ but unfortunately there is a very good competing and credible theory under which $y_t$ causes $x_t$. To disentangle the two competing models, you regress $y_t$ on $x_{t-1}$ (instead of $x_t$). Often, you will lose in precision (i.e. the correlation between variables sampled at different times is usually lower than the correlation between variables sampled simultaneously). The way the two competing models - $y_t\leftarrow x_{t-1}$ and $x_{t-1} \leftarrow y_{t}$ - are now disentangled is that, presumably, there is not a good theory under which an $x$ from one period ago can be caused by a current $y$ ('the past cannot be caused by the future'), excluding the second sense of causality. Note that the use of this trick is only valid if both variables ($y_t$ and $x_{t-1}$) are stationary $I(0)$.
null
CC BY-SA 2.5
null
2010-10-05T10:31:48.563
2010-10-07T08:45:27.100
2010-10-07T08:45:27.100
603
603
null
3337
1
247768
null
15
4110
After reading [one of the "Research tips"](http://robjhyndman.com/researchtips/crossvalidation) of R.J. Hyndman about cross-validation and time series, I came back to an old question of mine that I'll try to formulate here. The idea is that in classification or regression problems, the ordering of the data is not important, and hence k-fold cross-validation can be used. On the other hand, in time series, the ordering of the data is obviously of a great importance. However, when using a machine learning model to forecast time series, a common strategy is to reshape the series $\{y_1, ..., y_T\}$ into a set of "input-output vectors" which, for a time $t$, have the form $(y_{t-n+1}, ..., y_{t-1}, y_{t}; y_{t+1})$. Now, once this reshaping has been done, can we consider that the resulting set of "input-output vectors" do not need to be ordered? If we use, for example, a feed-forward neural network with n inputs to "learn" these data, we would arrive at the same results no matter the order in which we show the vectors to the model. And therefore, could we use k-fold cross-validation the standard way, without the need to re-fit the model each time?
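As an illustration of the reshaping described above, here is a small sketch using base R's `embed()`; the helper name `make_io` is just for this example.

```r
# Each row of the result is one "input-output vector"
# (y_{t-n+1}, ..., y_t ; y_{t+1}); once built, the rows can be shuffled
# and split into folds like ordinary tabular data.
make_io <- function(y, n) {
  E <- embed(y, n + 1)                    # row: most recent value first, then its n lags
  inputs <- E[, (n + 1):2, drop = FALSE]  # reorder lags to chronological order
  colnames(inputs) <- paste0("lag", n:1)
  data.frame(inputs, target = E[, 1])     # target = the value following the inputs
}

y <- as.numeric(arima.sim(list(ar = 0.7), n = 100))
head(make_io(y, n = 3))
```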
Ordering of time series for machine learning
CC BY-SA 3.0
null
2010-10-05T10:47:46.470
2016-11-25T06:17:43.577
2016-05-09T10:40:48.363
22228
1504
[ "time-series", "machine-learning", "cross-validation" ]
3338
1
null
null
4
124
I'm using a genetic algorithm to generate a string that produces certain results I map into a line/bar graph. I'm trying to rate how closely the results produced by the genetic string compare to a graph I have produced by hand. Currently I am comparing the graphs by finding the area of difference between the two and getting a percentage representing where it sits between 0 difference and a worst-case difference where they don't overlap at all. However, I don't feel this rating really represents how closely one shape matches the other, which is what concerns me most: two different results could have the same difference in terms of area, but one may be a closer shape than the other, and I would like the one with the closer shape to have a higher rating! My question is, is there a standard way of rating graphs where I can get a percentage of how closely one matches the other, and can you provide the way of going about this?
Rating how closely one graph models another
CC BY-SA 2.5
null
2010-10-05T11:28:09.010
2010-10-05T11:51:35.757
null
null
1505
[ "data-visualization", "model-comparison", "genetic-algorithms" ]
3339
2
null
3338
3
null
Your question is very closely related to this [question](https://stats.stackexchange.com/questions/3286/distance-between-empirically-generated-distributions-in-r) about differences between distributions. Have a look at the three answers. That should give you some R code to get started. BTW, I don't think your question is a duplicate. It's just a question that shares answers.
null
CC BY-SA 2.5
null
2010-10-05T11:51:35.757
2010-10-05T11:51:35.757
2017-04-13T12:44:52.277
-1
8
null
3340
2
null
2348
1
null
I use a slice sampler -- originally proposed by Neal (2003) -- which I tune through heuristic optimization.
null
CC BY-SA 2.5
null
2010-10-05T12:11:53.590
2010-10-05T12:11:53.590
null
null
1499
null
3341
2
null
725
4
null
Not an R package, but D. A. Landgrebe from Purdue (author of *Signal Theory Methods in Multispectral Remote Sensing*) has sponsored the [MultiSpec](https://engineering.purdue.edu/~biehl/MultiSpec/) freeware. It's a rather clunky GUI, but it gets the job done for most of the common hyperspectral algorithms.
null
CC BY-SA 2.5
null
2010-10-05T12:18:20.193
2010-10-05T12:18:20.193
null
null
179
null
3342
1
3343
null
7
6869
I have some data that I've built histograms out of in R. Now I want to play around with the data, but first I want to summarise it in the same way the histogram does. That is, I want to take my data vector, and count how many points fall in each bin interval in the same way that R's `hist` function does. I was about to do this from scratch, but then I thought: R already knows how to do this, I just need to find out how to take the first half of the `hist` function and just run that. So how do I do this?
Recreating R's hist function's bin counting
CC BY-SA 2.5
null
2010-10-05T12:32:27.393
2010-10-06T14:31:44.757
2010-10-06T14:31:44.757
8
5141
[ "r", "histogram" ]
3343
2
null
3342
10
null
Perhaps I've misunderstood what you want, but `hist()` returns all the details required to produce the histogram that is plotted. But you don't need to plot it and you can capture the returned object for subsequent use. So if the histogram contains the relevant summary you are after, this should be all you need. Here's an example: ``` > h <- hist(islands, plot = FALSE) > str(h) List of 7 $ breaks : num [1:10] 0 2000 4000 6000 8000 10000 12000 14000 16000 18000 $ counts : int [1:9] 41 2 1 1 1 1 0 0 1 $ intensities: num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ... $ density : num [1:9] 4.27e-04 2.08e-05 1.04e-05 1.04e-05 1.04e-05 ... $ mids : num [1:9] 1000 3000 5000 7000 9000 11000 13000 15000 17000 $ xname : chr "islands" $ equidist : logi TRUE - attr(*, "class")= chr "histogram" ``` Note the use of `plot = FALSE` to avoid the superfluous plot side-effect.
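As a cross-check, the same counts can be reproduced by hand from the returned breaks, assuming `hist()`'s default right-closed intervals with the lowest value included:

```r
h <- hist(islands, plot = FALSE)
counts.by.hand <- table(cut(islands, breaks = h$breaks,
                            include.lowest = TRUE, right = TRUE))
all(counts.by.hand == h$counts)   # should be TRUE
```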
null
CC BY-SA 2.5
null
2010-10-05T12:38:30.410
2010-10-05T12:38:30.410
null
null
1390
null
3345
2
null
3333
1
null
By the tone of your question, it may be handy to read a little bit about generating draws from a posterior. Two introductory Springer books (full of R code) come to mind (in order of preference): a) Bayesian Computation with R. Springer, 2007. J. Albert. b) Introducing Monte Carlo Methods with R. Springer, 2010. C. P. Robert, G. Casella. You will find more detailed explanations and ways around important pitfalls in these than you could possibly get from a 15-line answer on here.
null
CC BY-SA 2.5
null
2010-10-05T13:21:50.150
2010-10-05T13:21:50.150
null
null
603
null
3346
2
null
125
7
null
I simply must include [MCMC in Practice](http://rads.stackoverflow.com/amzn/click/0412055511). It provides an excellent introduction to MCMC, perhaps not as general as the other books mentioned, but excellent for gaining insight and intuition. I would recommend reading it after (or in parallel with) [Bayesian Computation with R](http://bayes.bgsu.edu/bcwr/).
null
CC BY-SA 2.5
null
2010-10-05T13:54:13.333
2010-10-05T13:54:13.333
null
null
1499
null
3347
2
null
1904
1
null
Though it hasn't been around for decades, I'm currently looking forward to [MCMSki](http://madison.byu.edu/mcmski/). One could also mention the [Valencia Meetings](http://www.uv.es/valenciameeting), but those only happen every four years (and you've already missed the 2010 meeting).
null
CC BY-SA 2.5
null
2010-10-05T15:08:21.867
2010-10-05T15:08:21.867
null
null
1499
null
3348
2
null
3199
5
null
You could also consider [Moran's I](http://en.wikipedia.org/wiki/Moran%27s_I), which is available in the R package "ape", and then simply use a weighting based on distance: ``` nRows <- 30 nCols <- 15 nPixels <- nRows * nCols # Create a Random Image image <- matrix(sample.int(256, nPixels, replace=TRUE), nrow=nRows, ncol=nCols) - 1L # 1D to 2D Index Function (as.vector() is column-major, so there are nRows elements per column) reverseIndex <- function ( vectorIdx, nRows, nCols ) { J <- floor((vectorIdx - 1L) / nRows) I <- (vectorIdx - 1L) - nRows*J # Return: c(I+1L, J+1L) } # Distance Function distFunc <- function(I, J) { idx1 <- reverseIndex(I, nRows, nCols) idx2 <- reverseIndex(J, nRows, nCols) idDiff <- idx1 - idx2 # Return: sqrt(idDiff %*% idDiff) } # Create Distance Matrix distMat <- matrix(mapply(distFunc, rep(seq_len(nPixels), nPixels), rep(seq_len(nPixels), each=nPixels)), nrow=nPixels, ncol=nPixels) # Invert Distance for Moran's I invDist <- 1 / distMat diag(invDist) <- 0 # Compute Moran's I using the inverse-distance weights: ape::Moran.I(as.vector(image), invDist) ``` Note that this will simply provide a measure & test of association; it will not identify where that association is in your matrix.
null
CC BY-SA 4.0
null
2010-10-05T17:06:35.777
2019-07-30T10:42:07.273
2019-07-30T10:42:07.273
189336
1499
null
3349
2
null
3064
2
null
This is not a complete answer but I hope it gives you some ideas as to how to model the situation in a coherent manner. Assumptions - The values at the lower end of the scale follow a normal distribution truncated from below. - The values at the upper end of the scale follow a normal distribution truncated from above. (Note: I know that you said that the data is not normal but I am assuming that you are referring to the distribution of all the values whereas the above assumptions pertain to the values at the lower and the upper end of the scale.) - A person's underlying state (whether they have TB or not) follows a first-order Markov chain. Model Let: - $D_i(t)$ be 1 if at time $t$ the $i^\mbox{th}$ person has TB and 0 otherwise, - $RTB_i(t)$ be the test response to the TB test at time $t$ of the $i^\mbox{th}$ person, - $RN_i(t)$ be the test response to the NILL test at time $t$ of the $i^\mbox{th}$ person, - $f(RN_i(t) | D_i(t)=0) \sim N(\mu_l,\sigma_l^2) I(RN_i(t) > R_l)$ - $f(RN_i(t) | D_i(t)=1) \sim N(\mu_l,\sigma_l^2) I(RN_i(t) > R_l)$ Points 4 and 5 capture the idea that a person's response to the NILL test is not dependent on disease status. - $f(RTB_i(t) | D_i(t)=0) \sim N(\mu_l,\sigma_l^2) I(RTB_i(t) > R_l)$ - $f(RTB_i(t) | D_i(t)=1) \sim N(\mu_u,\sigma_u^2) I(RTB_i(t) < R_u)$ - $\mu_u > \mu_l$ Points 6, 7 and 8 capture the idea that a person's response to the TB test is dependent on disease status. - $p(t)$ be the probability that a person catches TB during the 6 months preceding time $t$ given that they were disease free during the previous test period. Thus, the state transition matrix would look like the one below: $\begin{bmatrix} 1-p(t) & p(t) \\ 0 & 1 \end{bmatrix}$ In other words, $Prob(D_i(t)=1 | D_i(t-1) = 0) = p(t)$ $Prob(D_i(t)=0 | D_i(t-1) = 0) = 1-p(t)$ $Prob(D_i(t)=1 | D_i(t-1) = 1) = 1$ $Prob(D_i(t)=0 | D_i(t-1) = 1) = 0$ Your test criterion states that: $\hat{D}_i(t) = \begin{cases} 1, & RTB_i(t) - RN_i(t) \ge 0.35 \\ 0, & otherwise \end{cases}$ However, as you see from the structure of the model, you can actually parameterize the cut-offs and change the whole problem to that of what your cut-offs should be to accurately diagnose patients. Thus, the wobbler problem seems to be more an issue with your choice of cut-offs than anything else. In order to choose the 'right' cut-offs, you can take historical data about patients definitively identified as having TB and estimate the resulting parameters of the above setup. You could use some criterion such as the number of patients correctly classified as having TB or not as a metric to identify the 'best' model. For simplicity, you could assume $p(t)$ to be a time-invariant parameter, which seems reasonable in the absence of epidemics etc. Hope that is useful.
null
CC BY-SA 2.5
null
2010-10-05T17:08:52.073
2010-10-06T23:46:47.417
2010-10-06T23:46:47.417
null
null
null
3350
2
null
2611
3
null
Disclaimer: This idea might be foolish & I'm not going to pretend to understand the theoretical implications of what I'm proposing. "Suggestion": Why don't you simply impute 100 (I know you normally do 5) datasets, run lme4 or nlme, get the confidence intervals (you have 100 of them) and then: Using a small interval width (say range / 1000 or something), test over the range of possible values of each parameter and include only those small intervals which appear in at least 95 of the 100 CIs. You would then have a Monte Carlo "average" of your confidence intervals. I'm sure there are issues (or perhaps theoretical problems) with this approach. For instance, you could end up with a set of disjoint intervals. This may or may not be a bad thing depending on your field. Note that this is only possible if you have at least two completely non-overlapping confidence intervals which are separated by a region with less than 95% coverage. You might also consider something closer to the Bayesian treatment of missing data to get a posterior credible region, which would certainly be better formed & more theoretically supported than my ad-hoc suggestion.
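A rough sketch of the grid scan this suggestion amounts to; the `ci` matrix below is a fabricated placeholder standing in for the 100 (lower, upper) endpoints you would collect from the imputed fits.

```r
set.seed(1)
ci <- cbind(runif(100, 0.8, 1.0),          # fake lower bounds, for illustration only
            runif(100, 1.4, 1.8))          # fake upper bounds

grid     <- seq(min(ci), max(ci), length.out = 1000)
coverage <- sapply(grid, function(v) sum(ci[, 1] <= v & v <= ci[, 2]))
keep     <- grid[coverage >= 95]           # grid points inside at least 95 of the 100 CIs

range(keep)   # the "Monte Carlo averaged" region (could be disjoint in general)
```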
null
CC BY-SA 2.5
null
2010-10-05T17:34:51.493
2010-10-05T17:34:51.493
null
null
1499
null
3351
2
null
3333
2
null
This should actually be a fairly simple operation to do in R. ``` bulbs <- read.csv("bulbs.csv") bulbs <- table(bulbs) ``` Now just sum along the rows or columns to get the marginal values. ``` sum(bulbs[,1]) # counts of no event occurring sum(bulbs[1,]) # counts of green light ``` Now just take these frequencies and plug them in to get any posterior you want. ``` bulbs[1,1]/sum(bulbs[,1]) # probability of green light given no event occurred ```
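As a possible follow-up (not part of the answer above): part (a) of the question asked for the posterior of the event given the light state, and part (b) for an interval around it. Assuming the CSV reads in cleanly so that the table's dimnames are the light colours and "yes"/"no", one sketch is:

```r
k <- bulbs["red", "yes"]   # occasions where the light was red and the event occurred
n <- sum(bulbs["red", ])   # all occasions where the light was red

k / n                                      # point estimate of P(event | red light)
binom.test(k, n)$conf.int                  # exact binomial 95% CI (frequentist)
qbeta(c(0.025, 0.975), k + 1, n - k + 1)   # 95% credible interval under a uniform Beta(1,1) prior
```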
null
CC BY-SA 2.5
null
2010-10-05T19:41:51.290
2010-10-05T19:41:51.290
null
null
1392
null
3352
1
3353
null
5
540
If $X$ is a categorical variable, and I am interested in the posterior distributions of $\beta_1$, where $\beta_1$ is a vector of coefficients, one for each level of $X$, are these equivalent models? Model 1: $$Y \sim ( \beta_0 + \beta_1X_1,\sigma^2)$$ $$\beta_1 \sim N(0,\tau)$$ Model 2: $$Y \sim(\beta_1X_1,\sigma^2)$$ $$\beta_1 \sim N(\beta_0,\tau)$$ In both models, $\beta_0$ has an informed prior, $\tau$ has a weak prior: $$\beta_0\sim \text{Gamma}(6,3)$$ $$\tau \sim \text{Gamma}(0.1,0.1)$$ If these are not equivalent, why not? I ask because I thought these would be the same, but when implemented in JAGS, the MCMC chain for $\beta_0$ from the first model has much higher autocorrelation than the second. --- I (originally) put my R code here [SO.test.tgz](https://netfiles.uiuc.edu/dlebauer/www/SO.test.tgz), but it is no longer available. I apologize ...
Are these equivalent representations of the same hierarchical Bayesian model?
CC BY-SA 3.0
null
2010-10-05T20:18:07.143
2015-04-15T14:19:02.867
2015-04-15T14:19:02.867
1381
1381
[ "modeling", "bayesian", "multilevel-analysis" ]
3353
2
null
3352
6
null
Updated Response: You still don't have a full specification for model #2. However, I can sort of guess what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X$ & $Y = \beta_0 + \beta_1 X$ are not probabilistic. [ Aside: In a mathematical sense, you're defining a set of linear equations. If you have only one datapoint then you can solve $Y = \beta_1 X$ for the only value of $\beta_1$ which satisfies this equation. If you have more than one value of $Y$ and $X_1$ then this represents an overconstrained system -- without a solution. ] My hunch is that you mean the following: $$Y \sim N( \beta_1 X_1, \sigma^2)$$ for some value of $\sigma^2$ which is assumed to be known -- (or perhaps you also want to estimate it). This places your random effects model within the context of a standard linear mixed effects model. If we then assume that $Y$ is distributed as I have stated, then: Answer: No, they are not equal. In the first model, $\beta_0$ is essentially an intercept (Imagine multiplying $\beta_0$ by $X_0$ where $X_0$ is always equal to 1). In the second model, $\beta_0$ represents a random offset from $\beta_1$. Note that this model (#2) wouldn't make sense to someone who doesn't do Bayes... It would be identical to running a linear regression where you have two predictors that are perfectly multi-collinear, but since you've made distributional assumptions within a Bayesian model, you can estimate it. That said, I'm not sure you should. Note: You can run the first model without the added distributional assumption (which I included above more for model #2). However, I've never seen this sort of specification. I think it would be identical to stating that $(Y - \beta_1 X_1) \sim Gamma(6,3)$. In other words, an error term which is distributed as a gamma random variable. My hunch is that you instead want a random intercept model (with normally distributed errors). If that be the case, use model #1, don't use model #2. Yes $\beta_0$ and $\beta_1$ will exhibit autocorrelation until you standardize the levels of your categorical variable -- make sure that the summation across all individuals is equal to zero (which you can do by subtracting the mean of $X_1$ from every single observation of $X_1$).
null
CC BY-SA 2.5
null
2010-10-05T20:37:30.390
2010-10-06T03:49:19.403
2010-10-06T03:49:19.403
1499
1499
null
3354
2
null
3337
6
null
Interesting question! The approach you describe is certainly very widely used by people using standard ML methods that require fixed-length feature vectors of attributes to analyse time series data. In the post that you link to, Hyndman points out that there are correlations between the reshaped data vectors (samples). This could be problematic, as k-CV (or other evaluation methods that divide data at random into training and testing sets) assumes that all samples are independent. However, I don't think this concern is relevant for the case of standard ML methods that treat attributes separately. For explanation, let me simplify your notation by assuming $n=3$, so that the first few data vectors (labelled alphabetically) will be: \begin{align} A&: (y_1, y_2, y_3; y_4) \\ B&: (y_2, y_3, y_4; y_5) \\ C&: (y_3, y_4, y_5; y_6) \\ \end{align} Clearly, A and B have terms such as $y_2$ in common. But, for A, this is the value of its second attribute whereas for B this is the value of its first attribute.
null
CC BY-SA 3.0
null
2010-10-05T21:16:56.510
2016-05-09T13:18:46.970
2016-05-09T13:18:46.970
109618
1436
null
3355
2
null
3352
4
null
The only similarity between the two models is the general type of model they belong to; otherwise they are not similar in general, as pointed out by M. Tibbits. Both these models belong to the class of hierarchical models with varying slope (cf. [Gelman and Hill 2006](http://rads.stackoverflow.com/amzn/click/052168689X) for a detailed treatment). The answers to "why not" are many, and one of them is pointed out by M. Tibbits. A few more are: - In model 2 the slope is on average 6/3, whereas it is 0 for model 1. (This description can be made much better if we had data, but it is "almost" accurate given the limited description.) - You would expect the data in model 1 to be distributed randomly along an approximately horizontal line, whereas in model 2 you would expect the data to be randomly distributed along a line of slope approximately $\beta_0$ and zero intercept. These answers can best be verified if you simulate the data based on your hierarchical setting and look at the results of the analysis. Thanks, S.
null
CC BY-SA 2.5
null
2010-10-05T21:56:23.063
2010-10-05T21:56:23.063
null
null
1307
null
3356
2
null
3324
5
null
For those not familiar with Stata, the code snippet the OP provided, `ivreg my_dv var1 var2 var3 (L.my_dv = D2.my_dv D3.my_dv D4.my_dv)`, can be read as $Y_t = \alpha + \beta_1 (Var1) + \beta_2 (Var2) + \beta_3 (Var3) + \beta_4 (\tilde{Y}_{t-1})$ where $\tilde{Y}_{t-1}$ is estimated by $\tilde{Y}_{t-1} = \alpha + Z_1(\Delta^{2}Y_t) + Z_2(\Delta^{3}Y_t) + Z_3(\Delta^{4}Y_t)$ (i.e. the first stage of the IV equation is within the parentheses in the Stata code). The deltas represent second, third, and fourth order differences, and they are used as excluded instruments to estimate the lag of the dependent variable. In Stata code, the `L.` indicates lagging that variable by $t-1$, and `D.` signifies first order differences of that variable, and hence `D2.` signifies second order differencing. Initially I could not think of any logical reasoning why someone would do this. But Kwak pointed out (referencing this [paper](http://www.cemmap.ac.uk/wps/cwp0209.pdf)) that the Arellano-Bond method uses the differences as instruments to estimate the auto-regressive component of the model. (Also, initially I had assumed that the differences would only have an effect if the series is non-stationary; Bond states in that linked paper, on pg. 21, that the differences will only be weak instruments in the case the series is a random walk.) As for further reading material introducing instrumental variables, another poster in this [response](https://stats.stackexchange.com/questions/563/what-is-an-instrumental-variable/709#709) (Charlie) linked to some [slides](http://cgibbons.berkeley.edu/Courses/ECON141_S10/IV.pdf) he prepared that I like and would suggest are worth looking into for an intro to instrumental variables. I would also suggest this [powerpoint](http://csda.albany.edu/colloquium/IVEstimation050509_CSDA.ppt) a professor of mine prepared for a workshop as an introduction as well. As a last suggestion for anyone interested in learning more about instrumental variables, you should look up the work of Joshua Angrist. Here is my initial answer --- While I agree with everything that Kwak and ars have stated, I still cannot think of any reason why someone would use the differences of the dependent variable as instruments to estimate the lag of the dependent variable (if people do not know Stata code, the `L.` indicates lagging that variable by $t-1$, and `D.` signifies first order differences of that variable, and hence `D2.` signifies second order differencing). In all applications I have seen, people use the lag of independent variables as instruments to estimate the lag of the dependent variable (for reasons ars talks about). But this is based on the assumption that the lagged independent variables are exogenous to the error term in the time period they are being applied. I do not know of any reasoning in which the differences of the dependent variable would be considered exogenous. As far as I'm aware it is not accepted practice to difference only one side of the equation, and doing so would produce rather illogical results ([here](http://www.griffith.edu.au/__data/assets/pdf_file/0017/88100/Greenberg-2001.pdf) is a paper that critiques someone about the reverse situation, in which they included a variable's level as a predictor of a differenced series). If you rearrange the terms in the IV equation it actually looks similar to an augmented Dickey-Fuller test.
While the simplest answer would be to ask the person who wrote the code, can anybody give an example in which this procedure would be acceptable, or any situation in which this procedure would return some meaningful results? As it stands, I cannot think of any logical reasoning why the differences would have an effect on the levels except in the case the series is non-stationary.
null
CC BY-SA 2.5
null
2010-10-06T02:22:05.297
2010-10-07T03:14:40.720
2017-04-13T12:44:49.837
-1
1036
null
3357
2
null
3275
-1
null
I've figured out how to do this in O(n^3) time (where n is the number of games): Use a 4-dimensional array: probabilityOfOutcome[gamesLookedAt][wins][losses][ties] Initially, we have "looked at" zero games. After looking at zero games, there is a 100% probability of being 0-0-0, so set: probabilityOfOutcome[0][0][0][0] = 1; "Look at" one additional game at a time, basing each outcome probability on the last step: for every outcome in probabilityOfOutcome[g-1]: - consider the probability of winning game g - consider the probability of losing game g - consider the probability of tying game g For g games, there will be (g^2 + 3g + 2)/2 possible outcomes, and you will have to iterate over g games... therefore O(g^3) I can go into more detail with the for-loop if people still need clarification (it is actually a triple nested for-loop in practice). :-)
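A compact R transcription of this dynamic program (my own sketch, not the poster's code); the per-game probabilities are assumed to arrive as an n x 3 matrix with columns win/tie/loss.

```r
outcome_dist <- function(p) {
  n    <- nrow(p)
  prob <- matrix(0, n + 1, n + 1)   # prob[w + 1, t + 1] = P(w wins, t ties) after the games seen so far
  prob[1, 1] <- 1
  for (g in seq_len(n)) {
    new <- matrix(0, n + 1, n + 1)
    for (w in 0:(g - 1)) for (t in 0:(g - 1 - w)) {
      pr <- prob[w + 1, t + 1]
      if (pr == 0) next
      new[w + 2, t + 1] <- new[w + 2, t + 1] + pr * p[g, 1]   # win game g
      new[w + 1, t + 2] <- new[w + 1, t + 2] + pr * p[g, 2]   # tie game g
      new[w + 1, t + 1] <- new[w + 1, t + 1] + pr * p[g, 3]   # lose game g
    }
    prob <- new
  }
  prob   # losses for entry (w, t) are n - w - t
}

p <- matrix(1/3, nrow = 10, ncol = 3)   # equal chances in every game
d <- outcome_dist(p)
d[7 + 1, 2 + 1]                         # P(7 wins, 2 ties, 1 loss) = 360 / 3^10
```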
null
CC BY-SA 2.5
null
2010-10-06T02:23:39.773
2010-10-06T02:23:39.773
null
null
null
null
3358
2
null
2611
7
null
Repeated comment from above: I'm not sure that a proper analytical solution to this problem even exists. I've looked at some additional literature, but this problem is elegantly overlooked everywhere. I've also noticed that Yucel & Demirtas (in the article I mentioned, page 798) write: > These multiply imputed datasets were used to estimate the model […] using the R package lme4 leading to 10 sets of (beta, se(beta)), (sigma_b, se(sigma_b)) which were then combined using the MI combining rules defined by Rubin. It seems they used some kind of shortcut to estimate the SE of the variance component (which is, of course, inappropriate, since the CI is asymmetrical) and then applied the classic formula.
null
CC BY-SA 2.5
null
2010-10-06T09:35:15.300
2010-10-06T14:31:47.217
2010-10-06T14:31:47.217
930
1266
null
3359
1
3360
null
9
6171
I am looking at trigger efficiencies, meaning that I have some device that fires on $k$ out of $n$ events. In the end I am interested in some estimate of the efficiency $\epsilon$ which is the probability to fire on a randomly given event. Using a Bayesian approach with a uniform prior over $[0,1]$ I can model the probability distribution for $\epsilon$ as a Beta distribution $\beta(\epsilon; k+1, n-k+1)$. Now comes the question: I calculate the efficiency using "bootstrapping" which means that the final trigger efficiency is the product of two trigger efficiencies, both of which can be modelled as Beta distributions. How can I calculate this product of the two Beta PDFs for large values of $k_{1,2}$ and $n_{1,2}$ efficiently? Is there a closed form of the product (AFAIK not)? At the moment I am doing this numerically, but this is rather slow. [This question](https://stats.stackexchange.com/questions/2066/how-can-i-numerically-approximate-values-for-a-beta-distribution-with-large-alp) has the answer how to evaluate integrals of the Beta distribution for large argument values, but this does not help here. I hope my question is clear and not completely stupid...
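Not an exact answer, but one fast alternative to numerical convolution is to moment-match the product with another Beta distribution: the first two moments of the product are exact under independence (moments of independent Betas multiply), and only the Beta shape of the product is an approximation, which tends to be good for large $k$ and $n$. A sketch:

```r
# Approximate X1 * X2, with X1 ~ Beta(a1, b1) and X2 ~ Beta(a2, b2)
# independent, by a Beta with the same mean and variance.
prod_beta_approx <- function(a1, b1, a2, b2) {
  m1 <- (a1 / (a1 + b1)) * (a2 / (a2 + b2))                # E[X1 X2]
  m2 <- (a1 * (a1 + 1) / ((a1 + b1) * (a1 + b1 + 1))) *
        (a2 * (a2 + 1) / ((a2 + b2) * (a2 + b2 + 1)))      # E[(X1 X2)^2]
  v  <- m2 - m1^2
  s  <- m1 * (1 - m1) / v - 1
  c(alpha = m1 * s, beta = (1 - m1) * s)
}

# e.g. k1 = 950 of n1 = 1000 and k2 = 480 of n2 = 500, with uniform priors:
prod_beta_approx(951, 51, 481, 21)
```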
Product of beta distributions
CC BY-SA 2.5
null
2010-10-06T11:52:36.327
2022-05-06T22:37:45.897
2017-04-13T12:44:25.243
-1
1512
[ "distributions", "beta-distribution" ]
3360
2
null
3359
9
null
According to the abstract of [this paper](http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=SMJMAP000018000004000721000001&idtype=cvips&gifs=yes&ref=no), > The density function of products of random beta variables is a Meijer $G$-function which is expressible in closed form when the parameters are integers. However, I imagine the closed form requires a great deal of combinatorial calculation and hence would not be practically useful. The slow numerical algorithm you mentioned is probably faster. This paper may be more useful since it does not require integer parameters. [The distribution of product of independent beta random variables with application to multivariate analysis](https://www.semanticscholar.org/paper/The-distribution-of-product-of-independent-beta-to-Bhargava-Khatri/86c3cc7928b24e4ad8a927ed1e20567272f9486f) I haven't read the paper, but the abstract sounds promising.
null
CC BY-SA 4.0
null
2010-10-06T12:19:12.380
2022-05-06T22:37:45.897
2022-05-06T22:37:45.897
79696
319
null
3361
2
null
3291
8
null
You might want to spend $1.20 printing out Matthias Vallentin's [probability and statistics cheat sheet](http://statistics.zone/).
null
CC BY-SA 3.0
null
2010-10-06T12:22:32.077
2015-11-04T01:34:18.227
2015-11-04T01:34:18.227
805
319
null
3362
1
3375
null
6
3474
I'd like to assess the impact of an upcoming policy implementation, as measured by changes in questionnaire response to a Likert-scale question. I understand I could use a difference-in-difference approach. However, in my situation there is no single obvious comparison, non-treated population. I think I'd like to use the "Synthetic Control Method for Comparative Case Studies" as described by Abadie et al and implemented as Synth in R. - Alberto Abadie, Alexis Diamond, Jens Hainmueller. Journal of the American Statistical Association. June 1, 2010, 105(490): 493-505. doi:10.1198/jasa.2009.ap08746. full text As summarized in the R help for synth: > synth estimates the effect of an intervention of interest by comparing the evolution of an aggregate outcome for a unit affected by the intervention to the evolution of the same aggregate outcome for a synthetic control group. synth constructs this synthetic control group by searching for a weighted combination of control units chosen to approximate the unit affected by the intervention in terms of the outcome predictors. The evolution of the outcome for the resulting synthetic control group is an estimate of the counterfactual of what would have been observed for the affected unit in the absence of the intervention. [..] the synth function routinely searches for the set of weights that generate the best fitting convex combination of the control units. In other words, the predictor weight matrix V is chosen among all positive definite diagonal matrices such that MSPE is minimized for the pre-intervention period. See also the useful summary by Srikant Vadali in answers below. Is this method appropriate for survey/sampled data? Is there anything I need to do differently, or just use my Likert-response mean as the dependent variable? Any suggestions about how I'd power such a beast? Thank you!
Can I use Synthetic Control Method for Comparative Case Studies with survey data?
CC BY-SA 2.5
null
2010-10-06T13:27:08.537
2010-10-07T12:59:02.717
2010-10-07T12:59:02.717
644
644
[ "econometrics", "survey", "causality", "panel-data" ]
3363
2
null
3331
32
null
Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you read them carefully, especially those on functional data analysis. Try googling for "Functional Clustering of Longitudinal Data", or the PACE Matlab toolbox which is specifically concerned with model-based clustering of irregularly sampled trajectories (Peng and Müller, [Distance-based clustering of sparsely observed stochastic processes, with applications to online auctions](http://anson.ucdavis.edu/~mueller/distance_AOAS.pdf), Annals of Applied Statistics 2008 2: 1056). I can imagine that there may be a good statistical framework for financial time series, but I don't know about that. The `kml` package basically relies on k-means, working (by default) on euclidean distances between the $t$ measurements observed on $n$ individuals. What is called a trajectory is just the series of observed values for individual $i$, $y_i=(y_{i1},y_{i2},\dots,y_{it})$, and $d(y_i,y_j)=\sqrt{t^{-1}\sum_{k=1}^t(y_{ik}-y_{jk})^2}$. Missing data are handled through a slight modification of the preceding distance measure (Gower adjustment) associated to a nearest neighbor-like imputation scheme (for computing Calinski criterion). As I cannot picture what your real data would look like, I cannot say if it will work. At least, it works with longitudinal growth curves, "polynomial" shape, but I doubt it will allow you to detect very specific patterns (like local minima/maxima at specific time-points with time-points differing between clusters, by a translation for example). If you are interested in clustering possibly misaligned curves, then you definitely have to look at other solutions; [Functional clustering and alignment](http://homes.stat.unipd.it/mgri/SIS2010/Program/contributedpaper/658-960-1-RV.pdf), from Sangalli et al., and references therein may provide a good starting point. Below, I show you some code that may help you experiment with it (my seed is generally set at 101, if you want to reproduce the results). Basically, for using `kml` you just have to construct a `clusterizLongData` object (an `id` number for the first column, and the $t$ measurements in the next columns). ``` library(lattice) xyplot(var0 ~ date, data=test.data, groups=store, type=c("l","g")) tw <- reshape(test.data, timevar="date", idvar="store", direction="wide") parallel(tw[,-1], horizontal.axis=F, scales=list(x=list(rot=45, at=seq(1,ncol(tw)-1,by=2), labels=substr(names(tw[,-1])[seq(1,ncol(tw)-1,by=2)],6,100), cex=.5))) library(kml) names(tw) <- c("id", paste("t", 1:(ncol(tw)-1))) tw.cld <- as.cld(tw) cld.res <- kml(tw.cld,nbRedrawing=5) plot(tw.cld) ``` The next two figures are the raw simulated data and the five-cluster solution (according to the Calinski criterion, also used in the [fpc](http://cran.r-project.org/web/packages/fpc/index.html) package). I don't show the [scaled version](https://i.stack.imgur.com/vySFu.png). ![alt text](https://i.stack.imgur.com/x5MR1.png) ![alt text](https://i.stack.imgur.com/4W6Lo.png)
null
CC BY-SA 2.5
null
2010-10-06T14:00:52.570
2010-10-06T15:18:24.760
2010-10-06T15:18:24.760
930
930
null
3364
1
3365
null
4
1570
I have conducted an IRT analysis using package [ltm](http://cran.r-project.org/web/packages/ltm/index.html) in R 2.10. ``` fit1 <- grm(data) ``` When producing plots of item response characteristic curves (ICCs), ``` plot(fit1) ``` the text "Item response characteristic curves - Item:NAME" is printed as the main title of each plot. NAME is substituted with the actual name of the items in the dataset. A common practice is to present all the individual ICC plots on a single page, and then all that is needed in the title of each plot is the NAME (or at the most "Item: NAME"). I have not been able to achieve this, if I create a vector with the actual item names (e.g. v1, v2, v3...Vn) they are all used as the title in each of the plots. What I would like to achieve is to have the title of each plot simply read "Item: NAME" where NAME is replaced/pasted with the names of the actual variables in my data. The variable names themselves are available in the coefficients `fit1` object, and can be retrieved with `names(fit1$coefficients)`. Anyone have a suggestion for how to achieve this? Reproducible sample (thanks @Chi) ``` dat <- Science[c(1,3,4,7)] fit1 <- grm(dat) par(mfrow = c(2,2)) plot(fit1) ``` This plots each of the four ICCs on the same device, but all titles contain the abovementioned text that I wish to remove. It also contains the text "Item: NAME" (i.e. comfort, environment, work, benefit) which I would like to retain.
R package ltm: How to manipulate title on item response category characteristic curve plot
CC BY-SA 2.5
0
2010-10-06T15:15:25.997
2010-10-18T07:50:52.467
2010-10-18T07:50:52.467
449
913
[ "r", "data-visualization", "psychometrics" ]
3365
2
null
3364
3
null
Try using the argument `main=` when calling `plot()`, e.g. ``` dat <- Science[c(1,3,4,7)] fit1 <- grm(dat) plot(fit1, items=1, main=paste("Item", names(dat)[1], sep=": ")) ``` See `help(plot.grm)`. Also, you can embed all ICC curves in the same figure by using `par()`, e.g. ``` opar <- par(mfrow=c(2,2)) for (i in 1:4) plot(fit1, items=i, main=paste("Item", names(dat)[i], sep=": ")) par(opar) ```
null
CC BY-SA 2.5
null
2010-10-06T15:31:00.033
2010-10-06T15:31:00.033
null
null
930
null
3367
2
null
152
9
null
[Gilles Celeux](https://web.archive.org/web/20070219055605/http://www.math.u-psud.fr/select/people/celeux/Welcome.html) also worked on the problem of label switching, e.g. > G. Celeux, Bayesian inference for Mixture: the label switching problem. Proceedings Compstat 98, pp. 227-232, Physica-Verlag (1998). As a complement to @darrenjw's fine answer, here are two online papers that reviewed alternative strategies: - Jasra et al., Markov Chain Monte Carlo Methods and the Label Switching Problem in Bayesian Mixture Modelling - Sperrin et al., Probabilistic relabelling strategies for the label switching problem in Bayesian mixture models
null
CC BY-SA 4.0
null
2010-10-06T16:57:44.897
2022-11-25T10:01:05.537
2022-11-25T10:01:05.537
362671
930
null
3368
1
null
null
4
293
Suppose I have three time series with the same length and the same temporal interval. If I upsample each of the three time series through interpolation, that will cause aliasing. However, if I'm interested in the information embedded in the average of the three time series, is such an upsampling justified? In other words, does the above upsampling and then averaging across the three time series increase the Nyquist frequency by three times? Thanks!
Does averaging across multiple time series render higher Nyquist frequency?
CC BY-SA 2.5
null
2010-10-06T19:52:48.773
2010-10-06T21:44:56.557
null
null
1513
[ "time-series" ]
3369
1
null
null
0
841
> Possible Duplicate: What are the differences between Factor Analysis and Principal Component Analysis I am trying to understand the difference between PCA and FA. Through google research, I have come to understand that PCA accounts for all variance, while FA accounts for only common variance and ignores unique variance. However, I am having a difficult time wrapping my head around how exactly this occurs. I know PCA rotates the axis used to describe the data in order to eliminate all covariance. Does this step still occur in FA? If not, what differentiates FA from PCA? Thanks in advance.
Difference between FA and PCA
CC BY-SA 2.5
0
2010-10-06T20:05:29.200
2010-10-06T21:51:18.863
2017-04-13T12:44:29.013
-1
1514
[ "pca", "factor-analysis" ]
3370
2
null
1611
11
null
I think Bayesian statistics come into play in two different contexts. On the one hand, some researchers/statisticians are definitely convinced of the "Bayesian spirit" and, acknowledging the limit of the classical frequentist hypothesis framework, have decided to concentrate on Bayesian thinking. Studies in experimental psychology highlighting small effect sizes or borderline statistical significance are now increasingly relying on the Bayesian framework. In this respect, I like to cite some of the extensive work of Bruno Lecoutre (1-4) who contributed to developing the use of fiducial risk and Bayesian (M)ANOVA. I think the fact we can readily interpret a confidence interval in terms of probabilities applied on the parameter of interest (i.e. depending on the prior distribution) is a radical turn in statistical thinking. I can also imagine that everybody is actually aware of the ever growing work of Andrew Gelman in this domain, as pointed by @Skrikant, or of the incentive given by the [International Society for Bayesian Analysis](http://www.bayesian.org/) to use bayesian models. Frank Harrell also provides interesting outlines of [Bayesian Methods for Clinicians](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/ClinStat/bayes.pdf), as applied to [RCTs](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/ClinStat/essence.rct.html). On the other hand, the Bayesian approach has proved successful in diagnostic medicine (5), and is often used as an ultimate alternative where traditional statistics would fail, if applicable at all. I am thinking of a psychometrical paper (6) where authors were interested in assessing the agreement between radiologists about the severity of hip fractures from a very limited data set (12 doctors x 15 radiography) and use an item response model for polytomous items. Finally, a recent 45-pages paper published in Statistics in Medicine provides an interesting overview of the "penetrance" of bayesian modeling in biostatistics: > Ashby, D (2006). Bayesian statistics in medicine: a 25 year review. Statistics in Medicine, 25(21), 3589-631. References - Rouanet H., Lecoutre B. (1983). Specific inference in ANOVA: From significance tests to Bayesian procedures. British Journal of Mathematical and Statistical Psychology, 36, 252-268. - Lecoutre B., Lecoutre M.-P., Poitevineau J. (2001). Uses, abuses and misuses of significance tests in the scientific community: Won't the Bayesian choice be unavoidable? International Statistical Review, 69, 399-418. - Lecoutre B. (2006). Isn't everyone a Bayesian?. Indian Bayesian Society News Letter, III, 3-9. - Lecoutre B. (2006). And if you were a Bayesian without knowing it? In A. Mohammad-Djafari (Ed.): 26th Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. Melville : AIP Conference Proceedings Vol. 872, 15-22. - Broemeling, L.D. (2007). Bayesian Biostatistics and Diagnostic Medicine. Chapman and Hall/CRC. - Baldwin, P., Bernstein, J., and Wainer, H. (2009). Hip psychometrics. Statistics in Medicine, 28(17), 2277-92.
null
CC BY-SA 3.0
null
2010-10-06T20:10:59.000
2013-08-06T10:41:15.660
2013-08-06T10:41:15.660
6029
930
null
3371
2
null
1576
33
null
You are right about your first point, although in FA you generally work with both (uniqueness and communality). The choice between PCA and FA is a long-standing debate among psychometricians. I don't quite follow your points, though. Rotation of principal axes can be applied whatever method is used to construct the latent factors. In fact, most of the time it is the VARIMAX rotation (an orthogonal rotation, considering uncorrelated factors) that is used, for practical reasons (easiest interpretation, easiest scoring rules, or interpretation of factor scores, etc.), although oblique rotation (e.g. PROMAX) probably better reflects reality (latent constructs are often correlated with each other), at least in the tradition of FA, where you assume that a latent construct is really at the heart of the observed inter-correlations between your variables. The point is that PCA followed by VARIMAX rotation somewhat distorts the interpretation of the linear combinations of the original variables in the "data analysis" tradition (see the work of Michel Tenenhaus). From a psychometrical perspective, FA models are to be preferred since they explicitly account for measurement errors, while PCA doesn't care about that. Briefly stated, using PCA you are expressing each component (factor) as a linear combination of the variables, whereas in FA it is the variables that are expressed as linear combinations of the factors (including communalities and uniqueness components, as you said). I recommend you first read the following discussions about this topic: - What are the differences between Factor Analysis and Principal Component Analysis - On the use of oblique rotation after PCA -- see reference therein
null
CC BY-SA 3.0
null
2010-10-06T20:33:36.393
2013-06-25T00:17:50.460
2017-04-13T12:44:37.420
-1
930
null
3372
1
3376
null
25
8816
I must clarify immediately that I am a practicing software developer, not a statistician, and that my college stats class was a very long time ago… That said, I would like to know if there is a method for accumulating a set of descriptive statistics that could then be used to produce a boxplot, that does not entail storing a bunch of individual samples? What I am trying to do is produce a graphical summary of queue service times within a complex multi-queue process. I have in the past used a package called tnftools that allowed large samples to be accumulated and then post-processed into a nice graph of response times and outliers… But tnftools are not available for my current platform. Ideally I would like to be able to accumulate a set of descriptive statistics "on the fly" as the process runs, and then extract the data for analysis on demand. But I cannot simply have the process accumulate samples as the memory / IO involved in doing so would have an unacceptable impact on the performance of the system.
Is it possible to accumulate a set of statistics that describes a large number of samples such that I can then produce a boxplot?
CC BY-SA 2.5
null
2010-10-06T21:16:06.060
2012-12-17T21:54:45.480
2010-10-07T08:54:50.567
null
1515
[ "algorithms", "median", "quantiles" ]
3373
2
null
3368
3
null
Simple Answer: No. (At least I don't think so -- it's been 10 years since my EE classes). Suggested Reading: You might check out the Wikipedia page on [Oversampling](http://en.wikipedia.org/wiki/Oversampling) or also this page on [Reducing the effects of noise](http://zone.ni.com/devzone/cda/tut/p/id/3947). In general, multiple samples with independent white noise will, when averaged, raise the signal-to-noise ratio. But they will not increase the Nyquist frequency. Short Example: Construct three cell towers equidistant from a point source. Make a cell call in the center and record it independently on each of the three towers at a common sampling rate, say the 450 MHz GSM band. Then average each sample across the three towers and voilà, triple the S/N (assuming that the local noise characteristics of each tower are independent and all received the same signal at exactly the same strength). However, even though you upped the S/N by a factor of three, you didn't change the [Nyquist frequency](http://en.wikipedia.org/wiki/Nyquist_frequency). Reference included for those interested. Note: In my example, I'm assuming that the samples collected at the three towers are "in phase" -- equidistant -- if so, then what I've stated above is correct. If the samples you're collecting are out of phase, then this might in fact raise the Nyquist frequency. That said, you might want to ask this question somewhere else -- unfortunately the corresponding [Signal, Image, & Video Processing](http://area51.stackexchange.com/proposals/1691/signal-image-video-processing) and [Electronics & Electrical Engineering](http://area51.stackexchange.com/proposals/18854/electronics-electrical-engineering) boards have yet to really take off. You might post your question there, but I'm not sure if it will ever be answered.
null
CC BY-SA 2.5
null
2010-10-06T21:29:23.210
2010-10-06T21:44:56.557
2010-10-06T21:44:56.557
1499
1499
null
3374
2
null
1576
54
null
From my response here: [Is PCA followed by a rotation (such as varimax) still PCA?](https://stats.stackexchange.com/questions/612/is-psychprincipal-function-still-pca-when-using-rotation) Principal Component Analysis (PCA) and Common Factor Analysis (CFA) are distinct methods. Often, they produce similar results and PCA is used as the default extraction method in the SPSS Factor Analysis routines. This undoubtedly results in a lot of confusion about the distinction between the two. The bottom line is that these are two different models, conceptually. In PCA, the components are actual orthogonal linear combinations that maximize the total variance. In FA, the factors are linear combinations that maximize the shared portion of the variance--underlying "latent constructs". That's why FA is often called "common factor analysis". FA uses a variety of optimization routines and the result, unlike PCA, depends on the optimization routine used and starting points for those routines. Simply there is not a single unique solution. In R, the factanal() function provides CFA with a maximum likelihood extraction. So, you shouldn't expect it to reproduce an SPSS result which is based on a PCA extraction. It's simply not the same model or logic. I'm not sure if you would get the same result if you used SPSS's Maximum Likelihood extraction either as they may not use the same algorithm. For better or for worse in R, you can, however, reproduce the mixed up "factor analysis" that SPSS provides as its default. Here's the process in R. With this code, I'm able to reproduce the SPSS Principal Component "Factor Analysis" result using this dataset. (With the exception of the sign, which is indeterminate). That result could also then be rotated using any of R's available rotation methods. ``` data(attitude) # Compute eigenvalues and eigenvectors of the correlation matrix. pfa.eigen <- eigen(cor(attitude)) # Print and note that eigenvalues are those produced by SPSS. # Also note that SPSS will extract 2 components as eigenvalues > 1 = 2. pfa.eigen$values # Set a value for the number of factors (for clarity) kFactors <- 2 # Extract and transform two components. pfa.eigen$vectors[, seq_len(kFactors)] %*% diag(sqrt(pfa.eigen$values[seq_len(kFactors)]), kFactors, kFactors) ```
null
CC BY-SA 3.0
null
2010-10-06T21:51:18.863
2017-03-05T05:08:09.727
2017-04-13T12:44:51.060
-1
485
null
3375
2
null
3362
3
null
[Caveat: I have not read the paper so the below may be nonsense for all I know ...] Based on the summary of the R package, I would venture to guess that you could use the proposed methodology for the survey data provided the following conditions are met: - You have survey data from control groups during pre-intervention periods. These control groups need not be identical to the treatment groups. - The data you have is time series data. Provided points 1 and 2 are met, my best intuition as to how the method works is as follows: - First, construct a 'hypothetical' (synthetic in their words) control group that behaves as similarly as possible to the treatment group. The hypothetical group is constructed by taking a convex combination of the control group data you have. As an example, suppose that you want to measure student performance on math. Your control groups could be different sections whereas the treatment group is one specific section. You construct the hypothetical control group such that the weighted (with the weights summing to 1 and hence convex) average of the scores of the control group sections is as close as possible to the scores of the treatment group before the intervention (i.e., use the MSPE, the Mean Squared Prediction Error). - Second, extrapolate the hypothetical group's scores into the post-intervention period using the parameter estimates from step 1. Since the hypothetical group has been constructed to be identical to the treatment group pre-intervention, the post-intervention scores of the hypothetical group provide an appropriate counterfactual against which to assess the treatment group's post-intervention scores and hence the effectiveness of the intervention.
null
CC BY-SA 2.5
null
2010-10-07T00:02:32.620
2010-10-07T00:02:32.620
2020-06-11T14:32:37.003
-1
null
null
3376
2
null
3372
32
null
For an 'on the fly' boxplot, you will need 'on the fly' min/max (trivial) as well as 'on the fly' quartiles (0.25, 0.5=median and 0.75). A lot of work has been going on recently on the problem of online (or 'on the fly') algorithms for median computation. A recent development is binmedian. As a side benefit, it also enjoys better worst-case complexity than quickselect (which is neither online nor single-pass). You can find the associated paper as well as C and FORTRAN code online [here](http://www-stat.stanford.edu/~ryantibs/median/). You may have to check the licensing details with the authors. You will also need a single-pass algorithm for the quartiles, for which you can use the approach above and the following recursive characterization of the quartiles in terms of medians: $Q_{0.75}(x) \approx Q_{0.5}(x_i:x_i > Q_{0.5}(x))$ and $Q_{0.25}(x) \approx Q_{0.5}(x_i:x_i < Q_{0.5}(x))$ i.e. the 25 (75) percent quartile is very close to the median of those observations that are smaller (larger) than the median. Addendum: There exists a host of older multi-pass methods for computing quantiles. A popular approach is to maintain/update a deterministically sized reservoir of observations randomly selected from the stream and recursively compute quantiles (see [this](http://www.cs.ucsb.edu/~suri/psdir/ency.pdf) review) on this reservoir. This approach (and related ones) is superseded by the one proposed above.
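A direct in-memory illustration of this recursive characterization (in the streaming setting each `median()` call would be replaced by an online binmedian update):

```r
x   <- rexp(1e5)
med <- median(x)
q25 <- median(x[x < med])   # approximate first quartile
q75 <- median(x[x > med])   # approximate third quartile
c(q25, med, q75)
quantile(x, c(0.25, 0.5, 0.75))   # for comparison
```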
null
CC BY-SA 2.5
null
2010-10-07T00:33:55.410
2010-10-07T08:41:44.510
2010-10-07T08:41:44.510
603
603
null
3377
1
null
null
17
5029
I have a little problem that is making me freak out. I have to write a procedure for an online acquisition process of a multivariate time series. At every time interval (for example 1 second), I get a new sample, which is basically a floating point vector of size N. The operation I need to do is a little bit tricky: - For each new sample, I compute the percentuals for that sample (by normalizing the vector so that the elements will sum to 1). - I calculate the average percentuals vector in the same way, but using the past values. - For each past value, I compute the absolute deviation of the percentuals vector related to that sample with the global average percentuals vector computed at step 2. This way, the absolute deviation is always a number between 0 (when the vector is equal to the average vector) and 2 (when it is totally different). - Using the average of the deviations for all the previous samples, I compute the mean absolute deviation, which is again a number between 0 and 2. - I use the mean absolute deviation to detect if a new sample is compatible with the other samples (by comparing its absolute deviation with the mean absolute deviation of the whole set computed at step 4). Since every time a new sample is collected the global average changes (and so the mean absolute deviation changes as well), is there a way to compute this value without scanning the whole data set multiple times? (one time for computing the global average percentuals, and one time for collecting the absolute deviations). Ok, I know it's absolutely easy to calculate the global averages without scanning the whole set, since I just have to use a temporary vector for storing the sum of each dimension, but what about the mean absolute deviation? Its calculation includes the `abs()` operator, so I need access to all the past data! Thanks for your help.
Online algorithm for mean absolute deviation and large data set
CC BY-SA 2.5
null
2010-10-07T03:26:23.003
2010-11-09T05:36:27.843
2010-10-08T16:04:53.503
8
667
[ "algorithms", "quantiles", "online-algorithms", "large-data" ]
3378
2
null
3377
6
null
If you can accept some inaccuracy, this problem can be solved easily by binning counts. That is, pick some largeish number $M$ (say $M = 1000$), then initialize some integer bins $B_{i,j}$ for $i = 1\ldots M$ and $j = 1\ldots N$, where $N$ is the vector size, as zero. Then when you see the $k$th observation of a percentual vector, increment $B_{i,j}$ if the $j$th element of this vector is between $(i-1)/M$ and $i/M$, looping over the $N$ elements of the vector. (I am assuming your input vectors are non-negative, so that when you compute your 'percentuals', the vectors are in the range $[0,1]$. ) At any point in time, you can estimate the mean vector from the bins, and the mean absolute deviation. After observing $K$ such vectors, the $j$th element of the mean is estimated by $$\bar{X}_j = \frac{1}{K} \sum_i \frac{i - 1/2}{M} B_{i,j},$$ and the $j$th element of the mean absolute deviation is estimated by $$\frac{1}{K} \sum_i | \bar{X_j} - \frac{i - 1/2}{M} | B_{i,j}$$ edit: this is a specific case of a more general approach where you are building an empirical density estimate. This could be done with polynomials, splines, etc, but the binning approach is the easiest to describe and implement.
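A small R sketch of this binning scheme (illustrative only; a real implementation would sit inside the acquisition loop of the system described in the question):

```r
M <- 1000; N <- 5
B <- matrix(0L, nrow = M, ncol = N)   # B[i, j]: count of element j falling in bin i
K <- 0                                # observations seen so far
mid <- (seq_len(M) - 0.5) / M         # bin midpoints (i - 1/2) / M

update <- function(x) {               # x: one percentual vector in [0, 1]^N
  i <- pmin(pmax(ceiling(x * M), 1L), M)
  for (j in seq_len(N)) B[i[j], j] <<- B[i[j], j] + 1L
  K <<- K + 1
}

estimates <- function() {
  xbar <- colSums(mid * B) / K                          # estimated mean vector
  mad  <- colSums(abs(outer(mid, xbar, "-")) * B) / K   # estimated mean absolute deviations
  list(mean = xbar, mad = mad)
}

set.seed(1)
for (r in 1:500) { v <- runif(N); update(v / sum(v)) }  # feed some fake normalized vectors
estimates()
```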
null
CC BY-SA 2.5
null
2010-10-07T05:17:09.267
2010-10-07T16:44:47.327
2010-10-07T16:44:47.327
795
795
null
3379
2
null
3377
2
null
MAD(x) is just two concurrent median computations, each of which can be made online through the binmedian algorithm. You can find the associated paper as well as C and FORTRAN code online [here](http://www-stat.stanford.edu/~ryantibs/median/). (This is just a clever trick on top of Shabbychef's clever trick, to save on memory.) Addendum: There exists a host of older multi-pass methods for computing quantiles. A popular approach is to maintain/update a deterministically sized reservoir of observations randomly selected from the stream and recursively compute quantiles (see [this](http://www.cs.ucsb.edu/~suri/psdir/ency.pdf) review) on this reservoir. This approach (and related ones) is superseded by the one proposed above.
null
CC BY-SA 2.5
null
2010-10-07T08:40:07.437
2010-10-07T08:40:07.437
null
null
603
null
3380
1
null
null
2
77
I work with amino acid sequences and I want to use a self-made model to tell me something about them; let's call it $f(\text{seq})$. Now I want to know the contribution of every position in the sequence to the model, i.e. my question is: what is the importance/effect of amino acid $A$ occurring at position $I$ in the sequence with respect to the model? How do I visualize something like that? I want to use my model also on several sequences of differing lengths. Somehow this throws a monkeywrench into my plans of using a neural net... My question is simple yet I did not find anything about it. Pointers would be appreciated. Or any comment you might have. This whole idea of mine is pretty unfinished and I don't really know yet what I want. So feel free to criticize, I will update the question accordingly. Ah, and if this is the wrong place to put this, please tell me too (: Cheers and thanks. Note: this is a repost from th.stackexchange.
Analysis of variables of varying numbers
CC BY-SA 2.5
null
2010-10-07T10:50:24.480
2010-10-07T11:27:06.060
2010-10-07T11:10:56.880
930
1516
[ "machine-learning", "feature-selection", "bioinformatics" ]
3381
1
null
null
4
1718
Say I make histograms H1, H2, H4 ...of the same set of data with bins 1, 2, 4 ... wide. Then the bins containing a given $x$ have counts and averages ``` n1 av1 in H1, n2 av2 in H2, n4 av4 in H4 ... ``` How should one weight these to estimate data(x) ? One possibility would be $\Sigma w_j \text{av}_j / \Sigma w_j$ with $w_j = n_j / j$, for the first two non-zero $n_j$. The general goals are to fill holes where bins are empty, and to smooth bins with many data points. (H2 etc. can either be built together with H1, or on-the-fly from H1 as needed: a space-time tradeoff.) Added, trying to clarify: there are two related subquestions: - local vs global: If almost all the terms in $\Sigma K( x_j\, \text{near}\, x )$ are 0, there must be a better way. (For 1D it hardly matters, but in 2D or 3D, $N^2$, $N^3$ get big quickly.) One way of finding $x_j$ near $x$ (to sum $K$ at those only) is to grow shells in a regular grid -- poor for clumpy or hole-y data. Another is to find $N_{\text{near}}$ nearest neighbours in a Kd tree -- but what should $N_{\text{near}}$ be ? - adaptive histograms ? A method that adapts to clumpy data, as TINs (triangulations) adapt to terrains, would be nice. (Years later) the magic words are "multiresolution" or "multiscale"; see e.g. [scholar allintitle: (multiscale|multiresolution) (histogram | estimation)](http://scholar.google.com/scholar?q=allintitle%3a+%28multiscale|multiresolution%29+%28histogram+|+estimation%29&as_vis=1) . Does anyone know of comparisons with conventional KDE on real data, 2d and up ?
How to smooth histograms with bins 1, 2, 4, ... wide?
CC BY-SA 3.0
null
2010-10-07T10:59:51.483
2013-06-07T09:25:31.843
2013-06-07T09:25:31.843
557
557
[ "smoothing", "histogram" ]
3382
2
null
3380
1
null
This is a tricky topic in general; because of the size variability you mentioned, it rather boils down to motif finding. Yet I can propose four ideas: - Supposing that you can align the sequences (OK, you always can, but let's say the alignment yields a consensus that is not nonsense), you can reduce the problem to the conserved region and make attributes out of the aligned residues. Finding important residues then becomes a plain feature-selection problem. - Look for motifs using MEME or something fancier to build a PWM, and just hope it works. The sequence logo will then show you residue importance in terms of the predictive power it provides. - Make attributes out of N-gram spectra, i.e. counts of all possible N-residue words for a few values of N (before your computer runs out of memory); a small sketch of this idea follows below. Or use an SVM with a string kernel, which usually boils down to the same thing. - << Can't talk about it yet, research in progress ;-) >>
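To make the third idea concrete, here is a rough R sketch of an N-gram (k-mer) spectrum: each sequence, whatever its length, becomes a fixed-length vector of k-mer counts that any standard classifier or feature-selection method can consume. The toy sequences and k = 2 are illustrative assumptions.

```
## Rough sketch of the N-gram spectrum idea: map each sequence, regardless of
## length, to a fixed-length vector of k-mer counts.
kmer_counts <- function(seq, k = 2,
                        alphabet = strsplit("ACDEFGHIKLMNPQRSTVWY", "")[[1]]) {
  kmers  <- apply(expand.grid(rep(list(alphabet), k), stringsAsFactors = FALSE),
                  1, paste, collapse = "")
  counts <- setNames(integer(length(kmers)), kmers)
  chars  <- strsplit(seq, "")[[1]]
  if (length(chars) >= k) {
    obs <- sapply(seq_len(length(chars) - k + 1),
                  function(i) paste(chars[i:(i + k - 1)], collapse = ""))
    tab <- table(obs)
    counts[names(tab)] <- as.integer(tab)
  }
  counts
}

seqs <- c("MKTAYIAKQR", "MKTWWAYIAK")      # toy amino acid sequences
X <- t(sapply(seqs, kmer_counts, k = 2))   # one fixed-length row per sequence
dim(X)                                     # 2 sequences x 400 dipeptide features
```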
null
CC BY-SA 2.5
null
2010-10-07T11:27:06.060
2010-10-07T11:27:06.060
null
null
null
null
3383
1
3639
null
4
3375
This sounds like an easy setup, and I'm absolutely sure it's not too complicated to do, but somehow I can't figure out how to approach it. The setting is as follows: there are 6 treatments with 16 subjects in each treatment. Weight is measured for every subject before and after treatment. The hypothesis to be tested: is there a difference between treatments in effect (the difference in weight before and after treatment)? It is a very simple repeated-measures design, but the thing is that the subjects are not identified. The only thing you know is that they are the same subjects, but you can't fit a repeated-measures model due to the lack of an id variable. Up to now, I have constructed the mean difference as the difference between the means at t=0 and t=1 for every tank. Calculating the SE for each of these differences allows me to do a series of t-tests and apply a Dunn-Sidak correction afterwards. But somehow this is cumbersome and doesn't feel right. Alternatively, I can subtract the mean for each treatment at t=0 and do a classical one-way ANOVA. But I am not sure how I should correct the variances to account for the fact that I subtract not a fixed number, but a sample mean. What is the appropriate model to use here, and if possible, where do I find that function in R? It is NOT aov(weight~treatment*time), as this will make a comparison between e.g. t=0/treatment=1 and t=1/treatment=3. That's not what I want.
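For concreteness, here is a rough R sketch of the cumbersome approach described above: per-treatment mean differences, their standard errors, and pairwise comparisons with a Sidak-style adjustment. The column names, the simulated data, and the normal approximation in the pairwise tests are illustrative assumptions, not taken from the original post.

```
## Rough sketch: per-treatment mean differences, their SEs, and pairwise
## comparisons with a Dunn-Sidak style adjustment. Column names, toy data and
## the normal approximation are illustrative assumptions.
per_trt <- function(d) {
  do.call(rbind, lapply(split(d, d$treatment), function(g) {
    w0 <- g$weight[g$time == 0]
    w1 <- g$weight[g$time == 1]
    data.frame(diff = mean(w1) - mean(w0),
               se   = sqrt(var(w0) / length(w0) + var(w1) / length(w1)))
  }))
}

pairwise_diffs <- function(res) {
  k <- nrow(res); out <- NULL
  for (i in 1:(k - 1)) for (j in (i + 1):k) {
    z <- (res$diff[i] - res$diff[j]) / sqrt(res$se[i]^2 + res$se[j]^2)
    out <- rbind(out, data.frame(g1 = rownames(res)[i], g2 = rownames(res)[j],
                                 z = z, p_raw = 2 * pnorm(-abs(z))))
  }
  out$p_sidak <- 1 - (1 - out$p_raw)^nrow(out)   # Dunn-Sidak adjustment
  out
}

set.seed(5)
d <- expand.grid(id = 1:16, treatment = factor(1:6), time = 0:1)
d$weight <- 20 + 0.3 * d$time * as.numeric(d$treatment) + rnorm(nrow(d))
pairwise_diffs(per_trt(d))
```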
Compare treatments on mean difference between two times
CC BY-SA 2.5
null
2010-10-07T12:30:23.340
2010-10-15T16:34:25.457
null
null
1124
[ "anova", "mean", "repeated-measures" ]
3384
2
null
3316
3
null
Your "simplistic first thought" of qualitatively representing just the directional movement is similar in spirit to Keogh's SAX algorithm for comparing time series. I'd recommend you take a look at it: [Eamonn Keogh & Jessica Lin: SAX](http://www.cs.ucr.edu/~eamonn/SAX.htm). From your edit, it sounds like you're now thinking about tackling the problem differently, but you might find that SAX provides a piece of the puzzle.
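For illustration only, here is a rough R sketch of the SAX idea (z-normalise, compute piecewise aggregate averages, then map segment means to symbols via equiprobable Gaussian breakpoints). It is a simplification of my own, not Keogh's reference implementation, and the segment and alphabet sizes are arbitrary choices.

```
## Rough sketch of the SAX idea: z-normalise, reduce to `segments` piecewise
## averages (PAA), then map each average to one of `a` symbols using
## equiprobable Gaussian breakpoints. A simplification, not the reference code.
sax <- function(x, segments = 8, a = 4) {
  z   <- (x - mean(x)) / sd(x)
  paa <- tapply(z, cut(seq_along(z), segments, labels = FALSE), mean)
  brk <- qnorm(seq(0, 1, length.out = a + 1))       # -Inf, ..., Inf breakpoints
  letters[cut(paa, breaks = brk, labels = FALSE)]   # one symbol per segment
}

set.seed(3)
x <- cumsum(rnorm(128))          # toy time series
paste(sax(x), collapse = "")     # symbolic representation of the series
```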
null
CC BY-SA 2.5
null
2010-10-07T13:15:02.057
2010-10-07T13:15:02.057
null
null
1436
null
3385
2
null
3383
2
null
First, since you do not know which subjects are which, I think it makes more sense to treat the before-treatment and after-treatment measurements as separate observations (i.e. as if you had randomly assigned treatment to 96 out of 192 subjects). I know this seems obvious, but for heuristic purposes I think it helps clarify the question at hand. While this isn't optimal (regression to the mean), I would say you are still better off than many observational studies (assuming no self-selection into treatment). My initial thought was that if treatments are randomly assigned, you might as well treat all the pre-treatment measurements as one big control group. You could then use an OLS framework with dummy variables to estimate the treatment effects (with the pre-treatment observations as the reference category). If you know which pre-treatment measurements go with each treatment, you could run an ANOVA to see whether any non-negligible differences exist between the mean weights of the pre-treatment groups. If there are differences in the pre-treatment groups that are not ignorable, you should be able to use a multilevel framework. Since the fish share tanks, a multilevel framework may be appropriate anyway.
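For concreteness, here is a rough R sketch of the dummy-variable idea: pool all pre-treatment measurements as the reference group and regress weight on group indicators, plus the pre-treatment ANOVA check. The simulated data layout (columns weight, treatment, time) is an illustrative assumption, not taken from the original post.

```
## Rough sketch: pool all pre-treatment measurements as the reference group and
## regress weight on group indicators. The simulated layout is illustrative.
set.seed(4)
d <- expand.grid(fish = 1:16, treatment = 1:6, time = 0:1)
d$weight <- 10 + 0.5 * d$time * d$treatment + rnorm(nrow(d))

d$group <- relevel(factor(ifelse(d$time == 0, "pre", paste0("trt", d$treatment))),
                   ref = "pre")
fit <- lm(weight ~ group, data = d)
summary(fit)   # each coefficient: mean weight under that treatment minus the pooled pre-treatment mean

## ANOVA check that the pre-treatment groups have comparable mean weights
anova(lm(weight ~ factor(treatment), data = subset(d, time == 0)))
```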
null
CC BY-SA 2.5
null
2010-10-07T13:21:46.283
2010-10-07T13:21:46.283
null
null
1036
null
3386
1
null
null
15
151473
I have to do binary logistic regression with a lot of independent variables. Most of them are binary, but a few of the categorical variables have more than two levels. What is the best way to deal with such variables? For example, for a variable with three possible values, I suppose that two dummy variables have to be created. Then, in a stepwise regression procedure, is it better to test both dummy variables at the same time, or to test them separately? I will use SPSS, but I do not remember it very well, so: how does SPSS deal with this situation? Moreover, for an ordinal categorical variable, is it a good idea to use dummy variables that encode the ordinal scale? (For example, using three dummy variables for a 4-level ordinal variable, coding `0-0-0` for level $1$, `1-0-0` for level $2$, `1-1-0` for level $3$ and `1-1-1` for level $4$, instead of `0-0-0`, `1-0-0`, `0-1-0` and `0-0-1` for the 4 levels.)
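For what it's worth, the two coding schemes in the last paragraph can be written down explicitly; here is a small illustration in R (since the SPSS dialogs can't be shown here), with the toy variable and level labels as illustrative assumptions.

```
## Illustration of the two coding schemes described above for a 4-level
## ordinal variable (shown in R; the variable itself is an illustrative toy).
x  <- c(1, 2, 3, 4)        # the four ordinal levels
fx <- factor(x)

## Standard dummy (indicator) coding: 0-0-0, 1-0-0, 0-1-0, 0-0-1
model.matrix(~ fx)[, -1]

## "Cumulative" coding that mirrors the ordering: 0-0-0, 1-0-0, 1-1-0, 1-1-1
cbind(d2 = as.numeric(x >= 2),
      d3 = as.numeric(x >= 3),
      d4 = as.numeric(x >= 4))
```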
How to deal with non-binary categorical variables in logistic regression (SPSS)
CC BY-SA 3.0
null
2010-10-07T14:48:10.447
2017-07-12T12:07:47.197
2017-07-12T12:07:47.197
3277
1219
[ "logistic", "categorical-data", "spss", "ordinal-data", "categorical-encoding" ]