Columns (one value per line for each record below): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
3387
2
null
3386
10
null
The UCLA website has a bunch of great tutorials for every procedure broken down by the software type that you're familiar with. Check out [Annotated SPSS Output: Logistic Regression](http://www.ats.ucla.edu/stat/spss/output/logistic.htm) -- the SES variable they mention is categorical (and not binary). SPSS will automatically create the indicator variables for you. There's also a page dedicated to [Categorical Predictors in Regression with SPSS](http://www.ats.ucla.edu/stat/spss/webbooks/reg/chapter3/spssreg3.htm) which has specific information on how to change the default codings and a page specific to [Logistic Regression](http://www.ats.ucla.edu/stat/spss/topics/logistic_regression.htm).
null
CC BY-SA 2.5
null
2010-10-07T15:18:08.673
2010-10-07T15:18:08.673
null
null
1499
null
3388
2
null
3386
8
null
Logistic regression is a pretty flexible method. It can readily use categorical variables as independent variables, and most software that implements logistic regression will let you do so. As an example, say one of your categorical variables is temperature, defined in three categories: cold/mild/hot. As you suggest, you could encode that as three separate dummy variables, each with a value of 1 or 0. But the software should let you use a single categorical variable instead, with the text values cold/mild/hot, and the logistic regression would derive a coefficient (or constant) for each of the three temperature conditions. If one is not significant, the software or the user could readily take it out (after observing its t statistic and p value). The main benefit of grouping the categories into a single categorical variable is model efficiency: a single column in your model can handle as many categories as needed for a single categorical variable. If instead you use a dummy variable for each category of a categorical variable, your model can quickly grow to have numerous columns that are superfluous given the mentioned alternative.
null
CC BY-SA 2.5
null
2010-10-07T15:56:05.430
2010-10-07T15:56:05.430
null
null
1329
null
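A minimal R sketch of the dummy-coding point made in the answer above; the data, variable names, and category probabilities are made up for illustration, and `glm()` is used simply because it expands a factor into indicator columns automatically.

```
# Hypothetical data: a three-level categorical predictor and a binary outcome.
set.seed(42)
temperature <- factor(sample(c("cold", "mild", "hot"), 200, replace = TRUE))
p <- c(cold = 0.2, mild = 0.5, hot = 0.8)[as.character(temperature)]
outcome <- rbinom(200, 1, p)

# A single factor column; glm() creates the indicator (dummy) variables itself,
# using the first level (alphabetically, "cold") as the reference category.
fit <- glm(outcome ~ temperature, family = binomial)
summary(fit)$coefficients   # one coefficient per non-reference level
head(model.matrix(fit))     # the dummy columns generated behind the scenes
```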
3389
2
null
3377
3
null
There is also a parametric approach. Ignoring the vector nature of your data, and looking only at the marginals, it suffices to solve the problem: find an online algorithm to compute the mean absolute deviation of scalar $X$. If (and this is the big 'if' here) you thought that $X$ followed some probability distribution with unknown parameters, you could estimate the parameters using some online algorithm, then compute the mean absolute deviation based on that parametrized distribution. For example, if you thought that $X$ was (approximately) normally distributed, you could estimate its standard deviation, as $s$, and the mean absolute deviation would be estimated by $s \sqrt{2 / \pi}$ (see [Half Normal Distribution](http://en.wikipedia.org/wiki/Half-normal_distribution)).
null
CC BY-SA 2.5
null
2010-10-07T16:27:19.653
2010-10-07T16:27:19.653
null
null
795
null
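A rough R sketch of the parametric idea in the answer above: keep a running mean and variance with Welford's single-pass update, then estimate the mean absolute deviation as $s\sqrt{2/\pi}$. The normality assumption is the answer's caveat, not something the code checks, and the simulated data are invented for illustration.

```
# Online (single-pass) estimate of mean absolute deviation under a normal
# assumption: track mean/variance with Welford's update, then scale the
# standard deviation by sqrt(2/pi).
online_mad <- function(x) {
  n <- 0; mu <- 0; M2 <- 0
  for (xi in x) {            # one pass; only O(1) state is kept
    n  <- n + 1
    d  <- xi - mu
    mu <- mu + d / n
    M2 <- M2 + d * (xi - mu)
  }
  sqrt(M2 / (n - 1)) * sqrt(2 / pi)
}

set.seed(1)
x <- rnorm(1e4, sd = 3)
c(online_estimate = online_mad(x), direct = mean(abs(x - mean(x))))
```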
3390
1
4057
null
11
2726
The [Cornish-Fisher Expansion](http://www.riskglossary.com/link/cornish_fisher.htm) provides a way to estimate the quantiles of a distribution based on moments. (In this sense, I see it as a complement to the [Edgeworth Expansion](http://en.wikipedia.org/wiki/Edgeworth_expansion#Edgeworth_series), which gives an estimate of the cumulative distribution based on moments.) I would like to know in which situations one would prefer the Cornish-Fisher expansion over the sample quantile for empirical work, or vice versa. A few guesses: - Computationally, sample moments can be computed online, whereas online estimation of sample quantiles is difficult. In this case, the C-F 'wins'. - If one had the ability to forecast moments, the C-F would allow one to leverage these forecasts for quantile estimation. - The C-F Expansion can possibly give estimates of quantiles outside the range of observed values, whereas the sample quantile probably should not. - I am not aware of how to compute a confidence interval around the quantile estimates given by C-F. In this case, the sample quantile 'wins'. - It seems like the C-F Expansion requires one to estimate multiple higher moments of a distribution. The errors in these estimates probably compound in such a way that the C-F Expansion has a higher standard error than the sample quantile. Any others? Does anybody have experience using both of these methods?
Why Use the Cornish-Fisher Expansion Instead of Sample Quantile?
CC BY-SA 2.5
null
2010-10-07T17:00:40.833
2017-09-28T18:28:02.507
2017-09-28T18:28:02.507
60613
795
[ "distributions", "quantiles", "finance" ]
3391
2
null
3331
7
null
You could look at the work of [Eamonn Keogh](http://www.cs.ucr.edu/~eamonn/) (UC Riverside) on time series clustering. His website has a lot of resources. I think he provides Matlab code samples, so you'd have to translate this to R.
null
CC BY-SA 2.5
null
2010-10-07T17:42:05.903
2010-10-07T17:45:59.027
2010-10-07T17:45:59.027
930
1436
null
3392
1
3398
null
53
3656
It seems that lots of people (including me) like to do exploratory data analysis in Excel. Some limitations, such as the number of rows allowed in a spreadsheet, are a pain but in most cases don't make it impossible to use Excel to play around with data. [A paper by McCullough and Heiser](http://www.pages.drexel.edu/~bdm25/excel2007.pdf), however, practically screams that you will get your results all wrong -- and probably burn in hell as well -- if you try to use Excel. Is this paper correct or is it biased? The authors do sound like they hate Microsoft.
Excel as a statistics workbench
CC BY-SA 2.5
null
2010-10-07T17:44:32.840
2022-12-02T14:26:39.963
null
null
666
[ "software", "computational-statistics", "excel" ]
3393
2
null
3294
12
null
There is also a really good book by Olivier Cappé et al.: [Inference in Hidden Markov Models](http://rads.stackoverflow.com/amzn/click/0387402640). However, it is fairly theoretical and very light on applications. There is another book with examples in R, but I couldn't stand it -- [Hidden Markov Models for Time Series](http://rads.stackoverflow.com/amzn/click/1584885734). P.S. The speech recognition community also has a ton of literature on this subject.
null
CC BY-SA 2.5
null
2010-10-07T17:59:07.497
2010-10-07T17:59:07.497
null
null
1499
null
3394
2
null
3392
7
null
Incidentally, a question around the use of Google spreadsheets raised contrasting (hence, interesting) opinions about that, [Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others?](https://stats.stackexchange.com/questions/3244/do-some-of-you-use-google-docs-spreadsheet-to-conduct-and-share-your-statistical/3247#3247) I have in mind an older paper which didn't seem so pessimistic, but it is only marginally cited in the paper you mentioned: Keeling and Pavur, [A comparative study of the reliability of nine statistical software packages](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4JHMGWJ-1&_user=10&_coverDate=05%2F01%2F2007&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1488136142&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=2babab0b51c03746d5d7a74d31f1498c&searchtype=a) (CSDA 2007 51: 3811). But now, I found yours on my hard drive. There was also a special issue in 2008, see [Special section on Microsoft Excel 2007](https://web.archive.org/web/20100615022001/http://www.pages.drexel.edu/%7Ebdm25/excel-intro.pdf), and more recently in the Journal of Statistical Software: [On the Numerical Accuracy of Spreadsheets](http://www.jstatsoft.org/v34/i04/paper). I think it is a long-standing debate, and you will find varying papers/opinions about Excel reliability for statistical computing. I think there are different levels of discussion (what kind of analysis do you plan to do, do you rely on the internal solver, are there non-linear terms that enter a given model, etc.), and sources of numerical inaccuracy might arise as the result of proper computing errors or design-choice issues; this is well summarized in > M. Altman, J. Gill & M.P. McDonald, Numerical Issues in Statistical Computing for the Social Scientist, Wiley, 2004. Now, for exploratory data analysis, there are various alternatives that provide enhanced visualization capabilities, multivariate and dynamic graphics, e.g. [GGobi](http://www.ggobi.org/) -- but see related threads on this wiki. But clearly, the first point you made addresses another issue (IMO), namely that of using a spreadsheet to deal with large data sets: it is simply not possible to import a large csv file into Excel (I'm thinking of genomic data, but it applies to other kinds of high-dimensional data). It has not been built for that purpose.
null
CC BY-SA 4.0
null
2010-10-07T18:15:35.337
2022-12-02T14:26:39.963
2022-12-02T14:26:39.963
362671
930
null
3395
1
3396
null
4
339
I am currently working on a model which takes two parameters and produces a measurement statistic. Think of it as Z = f(X,Y). Z is a matrix of my statistics and I am creating a surface plot of it in matlab. Basically, I am looking for a mathematical/analytical way of determining if the surface is smooth, or if it is jagged. Do large values tend to be clustered together or are they dispersed throughout the matrix? - that is my question. Basically, how mixed up are the values of my matrix? I need to run the model over different parameter sets and I want to be able to analytically determine which one of my surfaces is the smoothest, has the greatest clustering of large values, and ideally, has no negative values. Any help will be greatly appreciated and please let me know if you need any further information. Cheers
Smoothness of a surface
CC BY-SA 2.5
null
2010-10-07T19:21:25.180
2010-10-08T02:43:53.970
2010-10-07T19:23:34.423
930
null
[ "clustering", "smoothing", "matlab", "spatial", "autocorrelation" ]
3396
2
null
3395
4
null
One model for this situation is to view $Z$ as a realization of a stationary 2D stochastic process. The limiting behavior at zero (distance) of its empirical [variogram](http://en.wikipedia.org/wiki/Variogram) or correlogram provides information about its smoothness: if the limiting correlation is less than one, the process is not even (mean square) continuous. Otherwise (Theorem) > A stationary stochastic process with correlation function $\rho(u)$ is $k$ times mean-square differentiable if and only if $\rho(u)$ is $2k$ times differentiable at $u=0$ (Diggle & Ribeiro, Model-based Geostatistics). Procedures variofit, likfit, and eyefit in the [geoR](http://cran.r-project.org/web/packages/geoR/index.html) package for R provide ways to estimate and visualize the variogram. None of these procedures require that you have all positive values, but they tend to work best when the values are not terrifically skewed. You must also remove any secular trend initially present in the surface; robust regression of $Z$ against $X$ and $Y$ is one way to do that and other ways (that simultaneously estimate the trend and the variogram of the residuals) are available in geoR.
null
CC BY-SA 2.5
null
2010-10-07T19:52:10.740
2010-10-08T02:43:53.970
2010-10-08T02:43:53.970
8
919
null
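A hedged sketch of the variogram workflow described in the answer above, assuming the geoR package and a hypothetical data frame `d` with columns `X`, `Y`, `Z`; the initial covariance parameters and maximum distance are placeholders to adapt, not values taken from the post.

```
# Empirical variogram of the detrended surface, then a parametric fit.
library(geoR)

trend <- lm(Z ~ X + Y, data = d)                  # remove a secular trend first
g <- as.geodata(cbind(d$X, d$Y, resid(trend)))    # coords in cols 1-2, data in col 3

v <- variog(g, max.dist = 0.5 * max(dist(cbind(d$X, d$Y))))
plot(v)                                           # inspect behaviour near distance zero

vf <- variofit(v, ini.cov.pars = c(var(resid(trend)), 1), cov.model = "matern")
lines(vf)                                         # overlay the fitted variogram model
```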
3397
2
null
3392
11
null
Well, the question whether the paper is correct or biased should be easy: you could just replicate some of their analyses and see whether you get the same answers. McCullough has been taking different versions of MS Excel apart for some years now, and apparently MS haven't seen fit to fix errors he pointed out years ago in previous versions. I don't see a problem with playing around with data in Excel. But to be honest, I would not do my "serious" analyses in Excel. My main problem would not be inaccuracies (which I guess will only very rarely be a problem) but the impossibility of tracking and replicating my analyses a year later when a reviewer or my boss asks why I didn't do X - you can save your work and your blind alleys in commented R code, but not in a meaningful way in Excel.
null
CC BY-SA 2.5
null
2010-10-07T19:57:40.057
2010-10-07T19:57:40.057
null
null
1352
null
3398
2
null
3392
47
null
Use the right tool for the right job and exploit the strengths of the tools you are familiar with. In Excel's case there are some salient issues: - Please don't use a spreadsheet to manage data, even if your data will fit into one. You're just asking for trouble, terrible trouble. There is virtually no protection against typographical errors, wholesale mixing up of data, truncating data values, etc., etc. - Many of the statistical functions indeed are broken. The t distribution is one of them. - The default graphics are awful. - It is missing some fundamental statistical graphics, especially boxplots and histograms. - The random number generator is a joke (but despite that is still effective for educational purposes). - Avoid the high-level functions and most of the add-ins; they're c**p. But this is just a general principle of safe computing: if you're not sure what a function is doing, don't use it. Stick to the low-level ones (which include arithmetic functions, ranking, exp, ln, trig functions, and--within limits--the normal distribution functions). Never use an add-in that produces a graphic: it's going to be terrible. (NB: it's dead easy to create your own probability plots from scratch. They'll be correct and highly customizable.) In its favor, though, are the following: - Its basic numerical calculations are as accurate as double precision floats can be. They include some useful ones, such as log gamma. - It's quite easy to wrap a control around input boxes in a spreadsheet, making it possible to create dynamic simulations easily. - If you need to share a calculation with non-statistical people, most will have some comfort with a spreadsheet and none at all with statistical software, no matter how cheap it may be. - It's easy to write effective numerical macros, including porting old Fortran code, which is quite close to VBA. Moreover, the execution of VBA is reasonably fast. (For example, I have code that accurately computes non-central t distributions from scratch and three different implementations of Fast Fourier Transforms.) - It supports some effective simulation and Monte-Carlo add-ons like Crystal Ball and @Risk. (They use their own RNGs, by the way--I checked.) - The immediacy of interacting directly with (a small set of) data is unparalleled: it's better than any stats package, Mathematica, etc. When used as a giant calculator with loads of storage, a spreadsheet really comes into its own. - Good EDA, using robust and resistant methods, is not easy, but after you have done it once, you can set it up again quickly. With Excel you can effectively reproduce all the calculations (although only some of the plots) in Tukey's EDA book, including median polish of n-way tables (although it's a bit cumbersome). In direct answer to the original question, there is a bias in that paper: it focuses on the material that Excel is weakest at and that a competent statistician is least likely to use. That's not a criticism of the paper, though, because warnings like this need to be broadcast.
null
CC BY-SA 3.0
null
2010-10-07T20:15:27.567
2012-04-03T11:18:05.973
2012-04-03T11:18:05.973
9007
919
null
3399
2
null
3392
7
null
The papers and other participants point to technical weaknesses. Whuber does a good job of outlining at least some of Excel's strengths. I personally do extensive statistical work in Excel (hypothesis testing, linear and multiple regressions) and love it. I use Excel 2003 with a capacity of 256 columns and 65,000 rows, which can handle just about 100% of the data sets I use. I understand Excel 2007 has extended that capacity by a huge amount (rows in the millions). As Whuber mentions, Excel also serves as a starting platform for a multitude of pretty outstanding add-in packages that are all pretty powerful and easy to use. I am thinking of Crystal Ball and @Risk for Monte Carlo simulation; XLStat for all-around powerful stats and data analysis; What's Best for optimization. And the list goes on. It's as if Excel were the equivalent of an iPod or iPad with a zillion pretty incredible apps. Granted, the Excel apps are not cheap. But for what they are capable of doing, they are typically pretty great bargains. As far as model documentation is concerned, it is so easy to insert a text box where you can literally write a book about your methodology, your sources, etc. You can also insert comments in any cell. So, if anything, Excel is really good for facilitating embedded documentation.
null
CC BY-SA 3.0
null
2010-10-07T21:36:51.820
2016-05-26T15:52:14.530
2016-05-26T15:52:14.530
1329
1329
null
3400
1
3411
null
27
6961
Question: From the standpoint of a statistician (or a practitioner), can one infer causality using [propensity scores](http://en.wikipedia.org/wiki/Propensity_score) with an observational study (not an experiment)? Please note, I do not want to start a flame war or a fanatical debate. Background: Within our stat PhD program, we've only touched on causal inference through working groups and a few topic sessions. However, there are some very prominent researchers in other departments (e.g. HDFS, Sociology) who are actively using them. I've already witnessed some pretty heated debate on this issue. It is not my intention to start one here. That said, what references have you encountered? What viewpoints do you have? For example, one argument I've heard against propensity scores as a causal inference technique is that one can never infer causality due to omitted variable bias -- if you leave out something important, you break the causal chain. Is this an unresolvable problem? Disclaimer: This question may not have a correct answer -- I'm completely cool with clicking CW, but I'm personally very interested in the responses and would be happy with a few good references which include real-world examples.
From a statistical perspective, can one infer causality using propensity scores with an observational study?
CC BY-SA 2.5
null
2010-10-07T23:27:47.727
2016-12-01T09:49:04.243
null
null
1499
[ "causality", "propensity-scores" ]
3402
1
3403
null
9
699
I'm a newbie at stats, so if I make any mistaken assumptions here, please tell me. There's a population `N` of people. (For example `N` can be 1,000,000.) Some of the people are redheads. I take a sample `n` of people (say 10) and find that `j` of them are redheads. What can I say about the general proportion of redheads in the population? I mean, my best approximation is probably `j/n`, but what would be the standard deviation of that approximation? By the way, what is the accepted term for this?
What's the accuracy of data obtained through a random sample?
CC BY-SA 2.5
null
2010-10-08T00:51:55.783
2010-10-08T23:30:44.833
2010-10-08T02:39:46.587
8
5793
[ "standard-deviation", "sample-size", "binomial-distribution", "standard-error" ]
3403
2
null
3402
8
null
You can think of this as a binomial trial -- your trials are sampling "redhead" or "not redhead". In that case, you can build a confidence interval for your sample proportion ($j/n$) as documented on Wikipedia: - Binomial proportion confidence interval A 95% confidence interval basically says that, using the same sampling algorithm, if you repeated this 100 times, the true proportion would lie in the stated interval 95 times. Update By the way, I think the term you're looking for might be [standard error](http://en.wikipedia.org/wiki/Standard_error_%28statistics%29), which is the standard deviation of the sampled proportions. In this case, it's $\sqrt{\frac{p(1-p)}{n}}$ where $p$ is your estimated proportion. Note that as $n$ increases, the standard error decreases.
null
CC BY-SA 2.5
null
2010-10-08T01:01:57.537
2010-10-08T01:12:34.190
2010-10-08T01:12:34.190
251
251
null
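A short R sketch of the calculation in the answer above, using made-up numbers (a sample of n = 10 with j = 3 redheads) purely for illustration.

```
# Estimated proportion, its standard error, and a 95% Wald interval.
n <- 10; j <- 3                      # hypothetical sample size and count
p_hat <- j / n
se <- sqrt(p_hat * (1 - p_hat) / n)
c(estimate = p_hat, std_error = se,
  lower = p_hat - 1.96 * se, upper = p_hat + 1.96 * se)

# With n this small the normal approximation is shaky; an exact interval:
binom.test(j, n)$conf.int
```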
3404
2
null
3400
8
null
Only a prospective randomized trial can determine causality. In observational studies, there will always be the chance of an unmeasured or unknown covariate which makes ascribing causality impossible. However, observational trials can provide evidence of a strong association between x and y, and are therefore useful for hypothesis generation. These hypotheses then need to be confirmed with a randomized trial.
null
CC BY-SA 2.5
null
2010-10-08T01:39:26.280
2010-10-08T01:39:26.280
null
null
561
null
3405
2
null
3392
20
null
An interesting paper about using Excel in a Bioinformatics setting is: > Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics, BMC Bioinformatics, 2004 (link). This short paper describes the problem of automatic type conversions in Excel (in particular [date](http://www.biomedcentral.com/1471-2105/5/80/figure/F1) and floating point conversions). For example, the gene name Sept2 is converted to 2-Sept. You can actually find this error in [online databases](http://www.biomedcentral.com/1471-2105/5/80/figure/F2). Using Excel to manage medium to large amounts of data is dangerous. Mistakes can easily creep in without the user noticing.
null
CC BY-SA 2.5
null
2010-10-08T02:35:37.343
2010-10-08T13:01:56.017
2010-10-08T13:01:56.017
919
8
null
3407
1
3410
null
15
4555
I am building an Android application that records accelerometer data during sleep, so as to analyze sleep trends and optionally wake the user near a desired time during light sleep. I have already built the component that collects and stores data, as well as the alarm. I still need to tackle the beast of displaying and saving sleep data in a really meaningful and clear way, one that preferably also lends itself to analysis. A couple of pictures say two thousand words: (I can only post one link due to low rep) Here's the unfiltered data, the sum of movement, collected at 30-second intervals [](https://i.stack.imgur.com/byEZJ.png) And the same data, smoothed by my own manifestation of moving average smoothing [](https://i.stack.imgur.com/Q9Lvh.png) Edit: both charts reflect calibration -- there is a minimum 'noise' filter and a maximum cutoff filter, as well as an alarm trigger level (the white line). Unfortunately, neither of these is an optimal solution -- the first is a little hard to understand for the average user, and the second, which is easier to understand, hides a lot of what is really going on. In particular, the averaging removes the detail of spikes in movement -- and I think those can be meaningful. So why are these charts so important? These time-series are displayed throughout the night as feedback to the user, and will be stored for reviewing/analysis later. The smoothing will ideally lower memory cost (both RAM and storage), and make rendering faster on these resource-starved phones/devices. Clearly there is a better way to smooth the data -- I have some vague ideas, such as using linear regression to figure out 'sharp' changes in movement and modifying my moving average smoothing accordingly. I really need some more guidance and input before I dive headfirst into something that could be solved more optimally. Thanks!
Smoothing time series data
CC BY-SA 4.0
null
2010-10-08T07:59:32.177
2019-01-11T19:04:52.047
2019-01-11T19:04:52.047
79696
1520
[ "time-series", "smoothing", "signal-processing", "java" ]
3408
2
null
3407
10
null
There are many nonparametric smoothing algorithms including splines and loess. But they will smooth out the sudden changes too. So will low-pass filters. I think you might need a wavelet-based smoother which allows the sudden jumps but still smooths the noise. Check out [Percival and Walden (2000)](http://rads.stackoverflow.com/amzn/click/0521640687) and the associated [R package](http://cran.r-project.org/web/packages/wmtsa/index.html). Although you want a java solution, the algorithms in the R package are open-source and you might be able to translate them.
null
CC BY-SA 2.5
null
2010-10-08T09:35:30.260
2010-10-08T09:35:30.260
null
null
159
null
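A sketch of what the suggested wavelet-based smoothing might look like in R. It assumes the wmtsa package mentioned in the answer (and its `wavShrink()` function with default settings); the spiky toy "movement" series is invented, and the moving average is included only for comparison.

```
# Wavelet shrinkage of a spiky toy series: noise is suppressed while the
# occasional sharp jumps survive better than under a plain moving average.
library(wmtsa)

set.seed(1)
x <- pmax(0, rnorm(512, sd = 2)) + ifelse(runif(512) < 0.05, 30, 0)  # toy movement sums

x_wave <- wavShrink(x)                         # wavelet denoising (default settings)
x_ma   <- stats::filter(x, rep(1 / 15, 15))    # moving average, for comparison

plot(x, type = "l", col = "grey")
lines(x_wave, lwd = 2)
lines(x_ma, lty = 2)
```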
3409
2
null
3407
3
null
This is somewhat tangential to what you're asking, but it may be worth taking a look at the Kalman filter.
null
CC BY-SA 2.5
null
2010-10-08T09:54:24.820
2010-10-08T09:54:24.820
null
null
439
null
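To make the pointer above concrete, here is a minimal, self-contained local-level Kalman filter in R; the process and measurement variances (`q`, `r`) are arbitrary tuning constants, not values suggested by the answer, and the simulated series is invented.

```
# One-dimensional Kalman filter for a random-walk-plus-noise (local level) model.
kalman1d <- function(y, q = 0.1, r = 4) {
  n <- length(y)
  xhat <- numeric(n); P <- numeric(n)
  xhat[1] <- y[1]; P[1] <- r
  for (t in 2:n) {
    x_pred <- xhat[t - 1]                     # predict: level follows a random walk
    P_pred <- P[t - 1] + q
    K <- P_pred / (P_pred + r)                # Kalman gain
    xhat[t] <- x_pred + K * (y[t] - x_pred)   # update with the new observation
    P[t] <- (1 - K) * P_pred
  }
  xhat
}

set.seed(1)
y <- cumsum(rnorm(300, sd = 0.3)) + rnorm(300, sd = 2)   # noisy random walk
plot(y, type = "l", col = "grey"); lines(kalman1d(y), lwd = 2)
```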
3410
2
null
3407
16
null
First up, the requirements for compression and analysis/presentation are not necessarily the same -- indeed, for analysis you might want to keep all the raw data and have the ability to slice and dice it in various ways. And what works best for you will depend very much on what you want to get out of it. But there are a number of standard tricks that you could try: - Use differences rather than raw data - Use thresholding to remove low-level noise. (Combine with differencing to ignore small changes.) - Use variance over some time window rather than average, to capture activity level rather than movement - Change the time base from fixed intervals to variable length runs and accumulate into a single data point sequences of changes for which some criterion holds (eg, differences in same direction, up to some threshold) - Transform data from real values to ordinal (eg low, medium, high); you could also do this on time bins rather than individual samples -- eg, activity level for each 5 minute stretch - Use an appropriate convolution kernel* to smooth more subtly than your moving average or pick out features of interest such as sharp changes. - Use an FFT library to calculate a power spectrum The last may be a bit expensive for your purposes, but would probably give you some very useful presentation options, in terms of "sleep rhythms" and such. (I know next to nothing about Android, but it's conceivable that some/many/all handsets might have built in DSP hardware that you can take advantage of.) --- * Given how central convolution is to digital signal processing, it's surprisingly difficult to find an accessible intro online. Or at least in 3 minutes of googling. Suggestions welcome!
null
CC BY-SA 2.5
null
2010-10-08T09:57:22.490
2010-10-08T10:11:31.953
2010-10-08T10:11:31.953
174
174
null
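Base-R sketches of a few of the tricks listed in the answer above (differencing, thresholding, a rolling variance, kernel convolution, and a crude power spectrum); the toy series and the cutoff values are arbitrary choices for illustration only.

```
# Toy 30-second movement sums with occasional large spikes.
set.seed(1)
x <- abs(rnorm(300)) + ifelse(runif(300) < 0.03, 20, 0)

dx <- diff(x)                                        # differences rather than raw data
dx[abs(dx) < 1] <- 0                                 # threshold away low-level noise

k <- rep(1 / 15, 15)                                 # 15-point moving-average kernel
x_smooth <- stats::filter(x, k, sides = 2)           # convolution smoothing
roll_var <- stats::filter(x^2, k, sides = 2) - x_smooth^2   # rolling variance: E[x^2] - E[x]^2

spec <- Mod(fft(x - mean(x)))^2                      # crude power spectrum via FFT
```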
3411
2
null
3400
17
null
At the beginning of an article aimed at promoting the use of PSs in epidemiology, Oakes and Church (1) cited Hernán and Robins's claims about confounding effects in epidemiology (2): > Can you guarantee that the results from your observational study are unaffected by unmeasured confounding? The only answer an epidemiologist can provide is ‘no’. This is not just to say that we cannot ensure that results from observational studies are unbiased or useless (because, as @propofol said, their results can be useful for designing RCTs), but also that PSs certainly do not offer a complete solution to this problem, or at least do not necessarily yield better results than other matching or multivariate methods (see e.g. (10)). Propensity scores (PS) are, by construction, probabilistic, not causal, indicators. The choice of the covariates that enter the propensity score function is a key element for ensuring its reliability, and their weakness, as has been said, mainly stems from not controlling for unobserved confounders (which is quite likely in retrospective or [case-control](http://en.wikipedia.org/wiki/Case-control_study) studies). Other factors have to be considered: (a) model misspecification will impact direct effect estimates (not really more than in the OLS case, though), (b) there may be missing data at the level of the covariates, (c) PSs do not overcome synergistic effects, which are known to affect causal interpretation (8,9). As for references, I found Roger Newson's slides -- [Causality, confounders, and propensity scores](http://www.imperial.ac.uk/nhli/r.newson/miscdocs/causconf1.pdf) -- relatively well-balanced about the pros and cons of using propensity scores, with illustrations from real studies. There were also several good papers discussing the use of propensity scores in observational studies or environmental epidemiology two years ago in Statistics in Medicine, and I enclose a couple of them at the end (3-6). But I like Pearl's review (7) because it offers a larger perspective on causality issues (PSs are discussed pp. 117 and 130). Obviously, you will find many more illustrations by looking at applied research. I would like to add two recent articles from William R. Shadish that I came across on Andrew Gelman's website (11,12). The use of propensity scores is discussed, but the two papers more largely focus on causal inference in observational studies (and how it compares to randomized settings). References - Oakes, J.M. and Church, T.R. (2007). Invited Commentary: Advancing Propensity Score Methods in Epidemiology. American Journal of Epidemiology, 165(10), 1119-1121. - Hernan, M.A. and Robins, J.M. (2006). Instruments for causal inference: an epidemiologist's dream? Epidemiology, 17, 360-372. - Rubin, D. (2007). The design versus the analysis of observational studies for causal effects: Parallels with the design of randomized trials. Statistics in Medicine, 26, 20-36. - Shrier, I. (2008). Letter to the editor. Statistics in Medicine, 27, 2740-2741. - Pearl, J. (2009). Remarks on the method of propensity score. Statistics in Medicine, 28, 1415-1424. - Stuart, E.A. (2008). Developing practical recommendations for the use of propensity scores: Discussion of 'A critical appraisal of propensity score matching in the medical literature between 1996 and 2003' by Peter Austin. Statistics in Medicine, 27, 2062-2065. - Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96-146. - Oakes, J.M. and Johnson, P.J. (2006). Propensity score matching for social epidemiology. In Methods in Social Epidemiology, J.M. Oakes and S. Kaufman (Eds.), pp. 364-386. Jossey-Bass. - Höfler, M. (2005). Causal inference based on counterfactuals. BMC Medical Research Methodology, 5, 28. - Winkelmayer, W.C. and Kurth, T. (2004). Propensity scores: help or hype? Nephrology Dialysis Transplantation, 19(7), 1671-1673. - Shadish, W.R., Clark, M.H., and Steiner, P.M. (2008). Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments. JASA, 103(484), 1334-1356. - Cook, T.D., Shadish, W.R., and Wong, V.C. (2008). Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons. Journal of Policy Analysis and Management, 27(4), 724-750.
null
CC BY-SA 3.0
null
2010-10-08T11:30:29.323
2013-10-30T21:28:34.690
2013-10-30T21:28:34.690
930
930
null
3412
1
3415
null
17
3674
I have an experiment that I'll try to abstract here. Imagine I toss three white stones in front of you and ask you to make a judgment about their position. I record a variety of properties of the stones and your response. I do this over a number of subjects. I generate two models. One is that the nearest stone to you predicts your response, and the other is that the geometric center of the stones predicts your response. So, using lmer in R, I could write: ``` mNear <- lmer(resp ~ nearest + (1|subject), REML = FALSE) mCenter <- lmer(resp ~ center + (1|subject), REML = FALSE) ``` UPDATE AND CHANGE - a more direct version that incorporates several helpful comments. I could try ``` anova(mNear, mCenter) ``` Which is incorrect, of course, because they're not nested and I can't really compare them that way. I was expecting anova.mer to throw an error but it didn't. But the possible nesting that I could try here isn't natural and still leaves me with somewhat less analytical statements. When models are nested naturally (e.g. quadratic on linear) the test is only one way. But in this case what would it mean to have asymmetric findings? For example, I could make a third model: ``` mBoth <- lmer(resp ~ center + nearest + (1|subject), REML = FALSE) ``` Then I can run anova: ``` anova(mCenter, mBoth) anova(mNear, mBoth) ``` This is fair to do, and now I find that the center adds to the nearest effect (the second command) but BIC actually goes up when nearest is added to center (correction for the lower parsimony). This confirms what was suspected. But is finding this sufficient? And is this fair when center and nearest are so highly correlated? Is there a better way to analytically compare the models when it's not about adding and subtracting explanatory variables (degrees of freedom)?
Comparing mixed effect models with the same number of degrees of freedom
CC BY-SA 2.5
null
2010-10-08T12:34:11.673
2022-09-18T20:11:05.137
2011-03-13T16:27:57.167
601
601
[ "r", "mixed-model", "model-selection" ]
3413
1
3414
null
4
3327
I am looking for the Hurst exponent calculation methodology. Please suggest online materials / methodology papers.
Hurst exponent calculation methodology
CC BY-SA 2.5
null
2010-10-08T13:24:33.177
2015-11-18T14:26:04.613
2015-11-18T14:26:04.613
22468
1250
[ "references", "fractal" ]
3414
2
null
3413
8
null
The calculation is covered on [the related wikipedia page](http://en.wikipedia.org/wiki/Hurst_exponent). R has several implementations for this: - The fArma package provides 10 different functions to estimate the Hurst exponent (see LrdModelling). - The Rwave package has the hurst.est() function. - The fractal package has the hurstACVF() function. - The dvfBm package is intended entirely for this purpose: "Hurst exponent estimation of a fractional Brownian motion by using discrete variations methods in presence of outliers and/or an additive noise". The methods covered by fArma are taken from the ["Estimators for Long-Range Dependence: An Empirical Study"](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.8251) (Taqqu, Teverovsky, Willinger 1995). Edit: Just to add from the M. Tibbits comment below, you can find the [Hurst exponent](http://www.mathworks.com/matlabcentral/fileexchange/9842) code for Matlab offered under a BSD license. The description: > This is an implementation of the Hurst exponent calculation that is smaller, simpler, and quicker than most others. It does a dispersional analysis on the data and then uses Matlab's polyfit to estimate the Hurst exponent. It comes with a test driver that you can delete.
null
CC BY-SA 2.5
null
2010-10-08T13:33:08.560
2010-10-08T15:05:16.480
2010-10-08T15:05:16.480
5
5
null
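Alongside the packages listed in the answer above, a from-scratch rescaled-range (R/S) sketch in R can make the idea concrete. The block sizes and simulated series are arbitrary, and this naive estimator is biased for short series, so prefer the listed packages for real work.

```
# Naive R/S estimate of the Hurst exponent: average the rescaled range over
# blocks of several sizes and regress log(R/S) on log(block size).
hurst_rs <- function(x, block_sizes = 2^(3:7)) {
  rs <- sapply(block_sizes, function(n) {
    blocks <- split(x, ceiling(seq_along(x) / n))
    blocks <- blocks[sapply(blocks, length) == n]     # drop a short trailing block
    mean(sapply(blocks, function(b) {
      z <- cumsum(b - mean(b))
      (max(z) - min(z)) / sd(b)                       # rescaled range of the block
    }))
  })
  unname(coef(lm(log(rs) ~ log(block_sizes)))[2])     # slope ~ Hurst exponent
}

set.seed(1)
hurst_rs(rnorm(4096))   # white noise: roughly 0.5 (naive R/S is biased upward for small blocks)
```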
3415
2
null
3412
9
null
Still, you can compute confidence intervals for your fixed effects, and report AIC or BIC (see e.g. [Cnann et al.](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.2052&rep=rep1&type=pdf), Stat Med 1997 16: 2349). Now, you may be interested in taking a look at [Assessing model mimicry using the parametric bootstrap](http://star.psy.ohio-state.edu/coglab/People/roger/pdf/mathpsy04.pdf), from Wagenmakers et al. which seems to more closely resemble your initial question about assessing the quality of two competing models. Otherwise, the two papers about measures of explained variance in LMM that come to my mind are: - Lloyd J. Edwards, Keith E. Muller, Russell D. Wolfinger, Bahjat F. Qaqish and Oliver Schabenberger (2008). An R2 statistic for fixed effects in the linear mixed model, Statistics in Medicine, 27(29), 6137–6157. - Ronghui Xu (2003). Measuring explained variation in linear mixed effects models, Statistics in Medicine, 22(22), 3527–3541. But maybe there are better options.
null
CC BY-SA 2.5
null
2010-10-08T13:41:22.563
2010-10-08T13:49:37.653
2010-10-08T13:49:37.653
930
930
null
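A rough R sketch of the parametric-bootstrap "model mimicry" idea referenced in the answer above (Wagenmakers et al.), applied to the two lmer models from the question. It assumes a reasonably recent lme4 (for `simulate()` and `refit()`), a hypothetical data frame `d` with `resp`, `nearest`, `center`, and `subject` columns, and an arbitrary 200 bootstrap draws.

```
# Simulate data under one model, refit both, and see where the observed
# log-likelihood difference falls in the simulated reference distribution.
library(lme4)

mNear   <- lmer(resp ~ nearest + (1 | subject), data = d, REML = FALSE)
mCenter <- lmer(resp ~ center  + (1 | subject), data = d, REML = FALSE)

obs_diff <- as.numeric(logLik(mNear) - logLik(mCenter))

sim_diff <- replicate(200, {
  y <- simulate(mNear)[[1]]                    # data generated under mNear
  as.numeric(logLik(refit(mNear, y)) - logLik(refit(mCenter, y)))
})

mean(sim_diff <= obs_diff)   # observed difference relative to the mimicry distribution
```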
3416
2
null
3412
3
null
I do not know R well enough to parse your code but here is one idea: Estimate a model where you have both center and near as covariates (call this mBoth). Then mCenter and mNear are nested in mBoth and you could use mBoth as a benchmark to compare the relative performance of mCenter and mNear.
null
CC BY-SA 2.5
null
2010-10-08T13:53:52.570
2010-10-08T13:53:52.570
null
null
null
null
3417
2
null
3287
5
null
I'm an ecologist, so I apologise in advance if this sounds a bit strange :-) I like to think of these plots in terms of weighted averages. The region points are at the weighted averages of the smoking status classes and vice versa. The problem with the above figure is the axis scaling and the fact that you can't display all the relationships (chi-square distance between regions and chi-square distance between smoking status) on the one figure. By the looks of it, the figure is using what is known as symmetric scaling, which has been shown to be a good compromise preserving as much of the information in the sets of scores as possible. I'm not familiar with the `ca` package but I am with the vegan package and its `cca` function: ``` require(vegan) df <- data.frame(df) ord <- cca(df) plot(ord, scaling = 3) ``` The last plot is a bit easier to read than the one you show but AFAICT they are the same (or at least similarly scaled). So I would say that occasional smokers are lower in number than expected in QC, BC and AB, and most associated with ON, but that in all regions, occasional smokers are low in number - they differ markedly from the expected number. However, there is a single dominant "gradient" or axis of variation in these data, and as the second axis represents so little variation, I would likely not interpret this component at all.
null
CC BY-SA 2.5
null
2010-10-08T16:16:58.473
2010-10-08T16:22:10.900
2010-10-08T16:22:10.900
1390
1390
null
3419
1
3673
null
7
452
There are umpteen million research papers regarding relationships between various patient attributes (e.g. how does gene x affect condition y?). What I am interested in though is a distance metric between patients in toto. Sort of like if I were constructing a dating site, I'd want to know how similar two people are. (Except in this case "similarity" means health similarity rather than personality similarity or whatever dating sites look at.) Could anyone point me to research regarding this problem? So far the only paper I've found that really attempts to tackle this is: Melton, G. B., S. Parsons, F. P. Morrison, A. S. Rothschild, M. Markatou, and G. Hripcsak. “Inter-patient distance metrics using SNOMED CT defining relationships.” Journal of Biomedical Informatics 39, no. 6 (2006): 697-705. EDIT: To clarify my question (because it's slightly different than many on this site): I am not asking "I have some data set, how can I analyze it?" I am asking "if I gave a doctor a data set, how would they analyze it?" I don't particularly care if there is some relation between attribute X and Y. What I care about is if a doctor thinks there is a relation between X and Y. I.e. my question is: I give a doctor two patient charts. How can I predict what they think the similarity is? Do they look at certain attributes? Is it even possible to make a statement like "they are .9 similar?" Is a better statement "they are .9 similar on dimension X and .8 similar on dimension Y?" How are chronic conditions different than temporary ones? etc. etc. This is maybe on the fringes of what this site is intended for, but I'm hoping someone has dealt with this and can point me in a good direction, even though it's not a question about a given statistical technique per se. EDIT 2: Thank you all for your suggestions. However, I was really looking for people who have done this work before - I don't have access to a lot of data, so I was hoping to find someone who did have these data and utilize their conclusions.
Patient distance metrics
CC BY-SA 2.5
null
2010-10-08T17:52:20.097
2017-11-16T13:21:24.017
2010-10-15T16:18:12.350
900
900
[ "clustering", "biostatistics" ]
3420
2
null
3296
1
null
Addressing the issue mentioned under Update 2. You are dealing with outliers. Those outliers have a significant impact on your Logistic Regression coefficients. By removing them, you found that your models performed better on the validation set. Does it mean that the outliers are "bad"? No. It means that they are influential. There are several measures of statistical distances to confirm how far away and influential such outliers are. Those include Cook's D and DFFITS. Having identified the trouble makers, you are struggling with whether to keep them in or not. Ultimately, this may be a qualitative judgment rather than a statistical question. Here are a couple of investigative questions that may be helpful in making this qualitative decision: 1) First, are the outliers truly bad due to poor measurements? 2) Is it more important for your models to be correct in the tails where outliers reside or be more accurate in the vast majority of the cases?
null
CC BY-SA 2.5
null
2010-10-08T19:38:39.063
2010-10-08T19:38:39.063
null
null
1329
null
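A short R sketch of the influence diagnostics named in the answer above; the model formula and data frame `d` are hypothetical stand-ins for the poster's logistic regression, and the cutoffs are common rules of thumb rather than anything prescribed by the post.

```
# Influence diagnostics for a fitted logistic regression.
fit <- glm(y ~ x1 + x2, family = binomial, data = d)   # hypothetical model

cd <- cooks.distance(fit)
df <- dffits(fit)

# Flag observations whose influence stands out.
which(cd > 4 / nrow(d))
which(abs(df) > 2 * sqrt(length(coef(fit)) / nrow(d)))

summary(influence.measures(fit))   # a combined table of the usual measures
```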
3421
2
null
3419
3
null
The whole field of [Cluster Analysis](http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Dstripbooks&field-keywords=Cluster+analysis&x=0&y=0) is relevant to your concept of multi-variable statistical distance. The linked book on the subject is very short and pretty good.
null
CC BY-SA 2.5
null
2010-10-08T20:05:14.780
2010-10-08T20:05:14.780
null
null
1329
null
3422
2
null
3412
12
null
Following ronaf's suggestion leads to a more recent paper by Vuong for a Likelihood Ratio Test on nonnested models. It's based on the KLIC (Kullback-Leibler Information Criterion), which is similar to the AIC in that it minimizes the KL distance. But it sets up a probabilistic specification for the hypothesis, so the use of the LRT leads to a more principled comparison. A more accessible version of the Cox and Vuong tests is presented by Clarke et al.; in particular, see Figure 3, which presents the algorithm for computing the Vuong LRT test. - Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses (Vuong, 1999) - Testing Nonnested Models of International Relations: Reevaluating Realism (Clarke et al., 2000) It seems there are R implementations of the Vuong test for other models, but not for lmer. Still, the outline mentioned above should be sufficient to implement one. I don't think you can obtain the likelihood evaluated at each data point from lmer as required for the computation. In a note on sig-ME, Douglas Bates has [some pointers](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2007q3/000246.html) that might be helpful (in particular, the [vignette](https://cran.r-project.org/web/packages/lme4/vignettes/Theory.pdf) he mentions). --- Older Another option is to consider the fitted values from the models in a test of prediction accuracy. The Williams-Kloot statistic may be appropriate here. The basic approach is to regress the actual values against a linear combination of the fitted values from the two models and test the slope: - A Test for Discriminating Between Models (Atkinson, 1969) - Growth and the Welfare State in the EU: A Causality Analysis (Herce et al., 2001) The first paper describes the test (and others), while the second has an application of it in an econometric panel model. --- When using `lmer` and comparing AICs, the function's default is to use the REML method (Restricted Maximum Likelihood). This is fine for obtaining less biased estimates, but when comparing models, you should re-fit with `REML=FALSE`, which uses the Maximum Likelihood method for fitting. The [Pinheiro/Bates book](https://rads.stackoverflow.com/amzn/click/com/1441903178) mentions some conditions under which it's OK to compare AIC/Likelihood with either REML or ML, and these may very well apply in your case. However, the general recommendation is to simply re-fit. For example, see Douglas Bates' post here: - How can I extract the AIC score from a mixed model object produced using lmer?
null
CC BY-SA 4.0
null
2010-10-08T20:58:17.027
2022-06-19T15:54:27.863
2022-06-19T15:54:27.863
361019
251
null
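An informal R sketch of the fitted-values comparison described in the "Older" part of the answer above. The exact Williams-Kloot construction should be checked against Atkinson (1969); the version below (regressing the centred response on the difference of the two sets of fitted values) is one common presentation, and `d$resp` and the fitted models are hypothetical, following the question's setup.

```
# Compare two non-nested fits through their fitted values: a significantly
# positive slope favours the model whose fits enter with a plus sign (f2).
f1 <- fitted(mCenter)
f2 <- fitted(mNear)
y  <- d$resp                                   # hypothetical observed response

wk <- lm(I(y - (f1 + f2) / 2) ~ 0 + I(f2 - f1))
summary(wk)$coefficients
```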
3423
1
null
null
8
412
I'm working on a web app, and I'm creating some data viz tools for it. For one particular series, I've got an extremely wide variance in data values (0 to millions). We're using a column chart to view the data now, which of course results in some columns that are a pixel high or smaller. We already have some ways to slice the data that helps a bit, but I was wondering if there were different kinds of visualizations out there in common use that deal with this type of situation better. And if so, if there were JS libraries that help implement them.
Recommendations for visualization type when data has an extremely wide variance
CC BY-SA 2.5
null
2010-10-08T21:01:20.033
2010-10-09T15:40:21.837
2010-10-09T15:40:21.837
null
1531
[ "data-visualization" ]
3424
2
null
3423
9
null
A standard approach to dealing with data that has a wide variance is to use a [log scale](http://en.wikipedia.org/wiki/Logarithmic_scale) (or some other kind of scaling approach) regardless of the visualization itself. This could be applied in any graphical package (including a JS library like [Protovis](http://vis.stanford.edu/protovis/)). Another strategy is to use bands, and fold the data over several times (as [in this example](http://vis.stanford.edu/protovis/ex/horizon.html)), although personally I find this approach to be harder to read. This ends up [looking like](http://vis.berkeley.edu/papers/horizon/): ![alt text](https://i.stack.imgur.com/j03UP.png)
null
CC BY-SA 2.5
null
2010-10-08T21:20:22.510
2010-10-08T21:36:33.643
2010-10-08T21:36:33.643
5
5
null
3425
1
3433
null
44
61699
I am not sure how this should be termed, so please correct me if you know a better term. I've got two lists. One of 55 items (e.g., a vector of strings), the other of 92. The item names are similar but not identical. I wish to find the best candidates in the 92-item list for the items in the 55-item list (I will then go through the candidates and pick the correct matches). How can it be done? Ideas I had were to: - See all the ones that match exactly (using something like ?match) - Try a distance matrix between the string vectors, but I am not sure how best to define it (number of identical letters? what about the order of the letters?) So what package/functions/field-of-research deals with such a task, and how? Update: Here is an example of the vectors I wish to match ``` vec55 <- c("Aeropyrum pernix", "Archaeoglobus fulgidus", "Candidatus_Korarchaeum_cryptofilum", "Candidatus_Methanoregula_boonei_6A8", "Cenarchaeum_symbiosum", "Desulfurococcus_kamchatkensis", "Ferroplasma acidarmanus", "Haloarcula_marismortui_ATCC_43049", "Halobacterium sp.", "Halobacterium_salinarum_R1", "Haloferax volcanii", "Haloquadratum_walsbyi", "Hyperthermus_butylicus", "Ignicoccus_hospitalis_KIN4", "Metallosphaera_sedula_DSM_5348", "Methanobacterium thermautotrophicus", "Methanobrevibacter_smithii_ATCC_35061", "Methanococcoides_burtonii_DSM_6242" ) vec91 <- c("Acidilobus saccharovorans 345-15", "Aciduliprofundum boonei T469", "Aeropyrum pernix K1", "Archaeoglobus fulgidus DSM 4304", "Archaeoglobus profundus DSM 5631", "Caldivirga maquilingensis IC-167", "Candidatus Korarchaeum cryptofilum OPF8", "Candidatus Methanoregula boonei 6A8", "Cenarchaeum symbiosum A", "Desulfurococcus kamchatkensis 1221n", "Ferroglobus placidus DSM 10642", "Halalkalicoccus jeotgali B3", "Haloarcula marismortui ATCC 43049", "Halobacterium salinarum R1", "Halobacterium sp. NRC-1", "Haloferax volcanii DS2", "Halomicrobium mukohataei DSM 12286", "Haloquadratum walsbyi DSM 16790", "Halorhabdus utahensis DSM 12940", "Halorubrum lacusprofundi ATCC 49239", "Haloterrigena turkmenica DSM 5511", "Hyperthermus butylicus DSM 5456", "Ignicoccus hospitalis KIN4/I", "Ignisphaera aggregans DSM 17230", "Metallosphaera sedula DSM 5348", "Methanobrevibacter ruminantium M1", "Methanobrevibacter smithii ATCC 35061", "Methanocaldococcus fervens AG86", "Methanocaldococcus infernus ME", "Methanocaldococcus jannaschii DSM 2661", "Methanocaldococcus sp. FS406-22", "Methanocaldococcus vulcanius M7", "Methanocella paludicola SANAE", "Methanococcoides burtonii DSM 6242", "Methanococcus aeolicus Nankai-3", "Methanococcus maripaludis C5", "Methanococcus maripaludis C6", "Methanococcus maripaludis C7", "Methanococcus maripaludis S2", "Methanococcus vannielii SB", "Methanococcus voltae A3", "Methanocorpusculum labreanum Z", "Methanoculleus marisnigri JR1", "Methanohalobium evestigatum Z-7303", "Methanohalophilus mahii DSM 5219", "Methanoplanus petrolearius DSM 11571", "Methanopyrus kandleri AV19", "Methanosaeta thermophila PT", "Methanosarcina acetivorans C2A", "Methanosarcina barkeri str. Fusaro", "Methanosarcina mazei Go1", "Methanosphaera stadtmanae DSM 3091", "Methanosphaerula palustris E1-9c", "Methanospirillum hungatei JF-1", "Methanothermobacter marburgensis str. Marburg", "Methanothermobacter thermautotrophicus str. Delta H", "Nanoarchaeum equitans Kin4-M", "Natrialba magadii ATCC 43099", "Natronomonas pharaonis DSM 2160", "Nitrosopumilus maritimus SCM1", "Picrophilus torridus DSM 9790", "Pyrobaculum aerophilum str. IM2", "Pyrobaculum arsenaticum DSM 13514", "Pyrobaculum calidifontis JCM 11548", "Pyrobaculum islandicum DSM 4184", "Pyrococcus abyssi GE5", "Pyrococcus furiosus DSM 3638", "Pyrococcus horikoshii OT3", "Staphylothermus hellenicus DSM 12710", "Staphylothermus marinus F1", "Sulfolobus acidocaldarius DSM 639", "Sulfolobus islandicus L.D.8.5", "Sulfolobus islandicus L.S.2.15", "Sulfolobus islandicus M.14.25", "Sulfolobus islandicus M.16.27", "Sulfolobus islandicus M.16.4", "Sulfolobus islandicus Y.G.57.14", "Sulfolobus islandicus Y.N.15.51", "Sulfolobus solfataricus P2", "Sulfolobus tokodaii str. 7", "Thermococcus gammatolerans EJ3", "Thermococcus kodakarensis KOD1", "Thermococcus onnurineus NA1", "Thermococcus sibiricus MM 739", "Thermofilum pendens Hrk 5", "Thermoplasma acidophilum DSM 1728", "Thermoplasma volcanium GSS1", "Thermoproteus neutrophilus V24Sta", "Thermosphaera aggregans DSM 11486", "Vulcanisaeta distributa DSM 14429", "uncultured methanogenic archaeon RC-I" ) ```
How to quasi match two vectors of strings (in R)?
CC BY-SA 4.0
null
2010-10-08T21:31:00.867
2020-10-16T16:12:09.383
2018-12-15T23:43:20.467
11887
253
[ "r", "text-mining" ]
3426
2
null
2948
2
null
I have a Java implementation for non-overlapping, weighted/unweighted networks that can probably handle 3 million nodes (I've tested it on a million-node dataset). However, it works like k-means and needs the number of partitions to detect as an input (k in k-means). You can find more info [here](http://www.google.ca/url?sa=t&source=web&cd=2&sqi=2&ved=0CBgQhgIwAQ&url=http%3A%2F%2Frepository.library.ualberta.ca%2Fdspace%2Fbitstream%2F10048%2F1529%2F1%2Fthesis.pdf&ei=T4-vTM2INpKCsQPGxfH9Aw&usg=AFQjCNE97bt6zDadI3BU0jfgbJz8C2i9ng), and here is the [code](http://www.reirab.com/TopLeader/index.html), [on GitHub](https://github.com/rabbanyk/CommunityEvaluation/tree/master/src/algorithms/communityMining/topleaders). Cheers,
null
CC BY-SA 3.0
null
2010-10-08T21:42:44.397
2017-03-01T19:47:15.617
2017-03-01T19:47:15.617
-1
null
null
3427
2
null
3425
15
null
There are many ways to measure distances between two strings. Two important (standard) approaches widely implemented in R are the Levenshtein and the Hamming distance. The former is available in package 'MiscPsycho' and the latter in 'e1071'. Using these, I would simply compute a 92 by 55 matrix of pairwise distances, then proceed from there (i.e. the best candidate match for string "1" in list 1 is the string "x" from list 2 with the smallest distance to string "1"). Alternatively, there is a function compare() in package RecordLinkage that seems to be designed to do what you want and uses the so-called Jaro-Winkler [distance](http://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance), which seems more appropriate for the task at hand, but I've had no experience with it. EDIT: I'm editing my answer to include Brandon's comment as well as Tal's code, to find a match to "Aeropyrum pernix", the first entry of vec55: ``` agrep(vec55[1],vec91,ignore.case=T,value=T,max.distance = 0.1, useBytes = FALSE) [1] "Aeropyrum pernix K1" ```
null
CC BY-SA 2.5
null
2010-10-08T21:45:29.480
2010-10-09T20:14:40.313
2010-10-09T20:14:40.313
603
603
null
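A small base-R sketch of the distance-matrix approach in the answer above, using `adist()` (generalized Levenshtein distance) instead of the MiscPsycho/e1071 functions it mentions; `vec55` and `vec91` are the vectors defined in the question.

```
# Pairwise edit distances between the two vectors, then the nearest candidate
# in vec91 for each entry of vec55.
dmat <- adist(vec55, vec91, ignore.case = TRUE)   # length(vec55) x length(vec91)

best <- vec91[apply(dmat, 1, which.min)]
data.frame(query = vec55, best_match = best, distance = apply(dmat, 1, min))
```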
3428
2
null
3412
4
null
there is a paper by [d.r.cox](https://projecteuclid.org/ebooks/berkeley-symposium-on-mathematical-statistics-and-probability/Proceedings%20of%20the%20Fourth%20Berkeley%20Symposium%20on%20Mathematical%20Statistics%20and%20Probability,%20Volume%201:%20Contributions%20to%20the%20Theory%20of%20Statistics/chapter/Tests%20of%20Separate%20Families%20of%20Hypotheses/bsmsp/1200512162) that discusses testing separate [unnested] models. it considers a few examples, which do not rise to the complexity of mixed models. [as my facility with R code is limited, i'm not quite sure what your models are.] altho cox's paper may not solve your problem directly, it may be helpful in two possible ways. - you can search google scholar for citations to his paper, to see if subsequent such results come closer to what you want. - if you are of an analytical bent, you could try applying cox's method to your problem. [perhaps not for the faint-hearted.] btw - cox does mention in passing the idea srikant broached of combining the two models into a larger one. he doesn't pursue how one would then decide which model is better, but he remarks that even if neither model is very good, the combined model might give an adequate fit to the data. [it's not clear in your situation that a combined model would make sense.]
null
CC BY-SA 4.0
null
2010-10-08T22:27:14.820
2022-09-18T20:11:05.137
2022-09-18T20:11:05.137
79696
1112
null
3429
2
null
346
12
null
I re-direct you to my answer to a similar [question](https://stats.stackexchange.com/questions/3372/is-it-possible-to-accumulate-a-set-of-statistics-that-describes-a-large-number-of/3376#3376). In a nutshell, it's a read once, 'on the fly' algorithm with $O(n)$ worst case complexity to compute the (exact) median.
null
CC BY-SA 2.5
null
2010-10-08T22:49:46.743
2010-10-08T22:49:46.743
2017-04-13T12:44:55.360
-1
603
null
3430
2
null
3402
0
null
if your sample size $n$ is not such a tiny fraction of the population size $N$ as in your example, and if you sample without replacement [Sw/oR], a better expression for the [estimated] SE is $$\hat{SE} = \sqrt{\frac{N - n}{N}\frac{\hat p \hat q}{n}},$$ where $\hat p$ is the estimated proportion $j/n$ and $\hat q = 1- \hat p$. [the term $\frac{N-n}{N}$ is called the FPC [finite population correction]. altho whuber's remark is technically correct, it seems to suggest that nothing can be done to get, say, a confidence interval for the true proportion $p$. if $n$ is large enough to make a normal approximation reasonable [$np > 10$, say], it is unlikely one would get $j=0$. also, if the sample size is large enough for a normal approximation using the true $SE$ to be reasonable, using $\hat{SE}$ instead also gives a reasonable approximation. [if your $n$ is really small and you use Sw/oR, you may have to use the exact hypergeometric distribution for $j$ instead of a normal approximation. if you do SwR, the size of $N$ is irrelevant and you can use exact binomial methods to get a CI for $p$.] in any case, since $p(1-p) \le 1/4$, one could always be conservative and use $\frac{1}{2\sqrt{n}}$ in place of $\sqrt{\frac{\hat p \hat q}{n}}$ in the above. if you do that, it takes a sample of $n = 1,111$ to get an estimated ME [margin of error = 2$\hat {SE}$] of $\pm$.03 [regardless of how big $N$ is!].
null
CC BY-SA 2.5
null
2010-10-08T23:11:33.593
2010-10-08T23:30:44.833
2010-10-08T23:30:44.833
1112
1112
null
3431
2
null
3413
3
null
[Octave](http://www.gnu.org/software/octave/) has a built-in Hurst Exponent function.
null
CC BY-SA 2.5
null
2010-10-08T23:53:06.367
2010-10-08T23:53:06.367
null
null
226
null
3432
2
null
3425
7
null
To supplement Kwak's useful answer, allow me to add some simple principles and ideas. A good way to determine the metric is by considering how the strings might vary from their target. "Edit distance" is useful when the variation is a combination of typographic errors like transposing neighbors or mis-typing a single key. Another useful approach (with a slightly different philosophy) is to map every string into one representative of a class of related strings. The "[Soundex](http://en.wikipedia.org/wiki/Soundex)" method does this: the Soundex code for a word is a sequence of four characters encoding the principal consonant and groups of similar-sounding internal consonants. It is used when words are phonetic misspellings or variants of one another. In the example application you would fetch all target words whose Soundex code equals the Soundex code for each probe word. (There could be zero or multiple targets fetched this way.)
null
CC BY-SA 2.5
null
2010-10-09T00:12:17.553
2010-10-09T00:12:17.553
null
null
919
null
3433
2
null
3425
22
null
I've had similar problems. (seen here: [https://stackoverflow.com/questions/2231993/merging-two-data-frames-using-fuzzy-approximate-string-matching-in-r](https://stackoverflow.com/questions/2231993/merging-two-data-frames-using-fuzzy-approximate-string-matching-in-r)) Most of the recommendations that I received fell around `pmatch()`, `agrep()`, `grep()`, and `grepl()`: functions that, if you take the time to look through them, will provide you with some insight into approximate string matching, either by approximate string or approximate regex. Without seeing the strings, it's hard to provide you with a concrete example of how to match them. If you could provide us with some example data I'm sure we could come to a solution. Another option that I found works well is to flatten the strings with `tolower()`, looking at the first letter of each word within the string and then comparing. Sometimes that works without a hitch. Then there are more complicated things like the distances mentioned in other answers. Sometimes these work, sometimes they're horrible - it really depends on the strings. Can we see them? ## Update It looks like agrep() will do the trick for most of these. Note that agrep() is just R's implementation of approximate matching based on the Levenshtein distance. ``` agrep(vec55[1],vec91,value=T) ``` Some don't work, although I'm not even sure if Ferroplasm acidaramus is the same as Ferroglobus placidus DSM 10642, for example: ``` agrep(vec55[7],vec91,value=T) ``` I think you may be a bit SOL for some of these and perhaps creating an index from scratch is the best bet. i.e., create a table with id numbers for vec55, and then manually create a reference to the id's in vec55 in vec91. Painful, I know, but a lot of it can be done with agrep().
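For what it's worth, the "flatten and compare first letters" idea mentioned above might look something like this (the organism names here are made up purely for illustration):

```
a <- "Ferroglobus placidus DSM 10642"   # hypothetical target string
b <- "ferroglobus Placidus"             # hypothetical probe string
initials <- function(s) {
  paste(substr(strsplit(tolower(s), "\\s+")[[1]], 1, 1), collapse = "")
}
initials(a)                             # "fpd1"
initials(b)                             # "fp"
startsWith(initials(a), initials(b))    # TRUE -> keep as a candidate match
```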
null
CC BY-SA 2.5
null
2010-10-09T02:41:55.300
2010-10-09T20:00:20.550
2017-05-23T12:39:26.167
-1
776
null
3434
2
null
3377
4
null
I've used the following approach in the past to calculate absolute deviation moderately efficiently (note, this is a programmer's approach, not a statistician's, so indubitably there may be clever tricks like [shabbychef's](https://stats.stackexchange.com/questions/3377/online-algorithm-for-mean-absolute-deviation-and-large-data-set/3378#3378) that might be more efficient). WARNING: This is not an online algorithm. It requires `O(n)` memory. Furthermore, it has a worst case performance of `O(n)`, for datasets like `[1, -2, 4, -8, 16, -32, ...]` (i.e. the same as the full recalculation). [1] However, because it still performs well in many use cases it might be worth posting here. For example, in order to calculate the absolute deviance of 10000 random numbers between -100 and 100 as each item arrives, my algorithm takes less than one second, while the full recalculation takes over 17 seconds (on my machine, will vary per machine and according to input data). You need to maintain the entire vector in memory however, which may be a constraint for some uses. The outline of the algorithm is as follows: - Instead of having a single vector to store past measurements, use three sorted priority queues (something like a min/max heap). These three lists partition the input into three: items greater than the mean, items less than the mean and items equal to the mean. - (Almost) every time you add an item the mean changes, so we need to repartition. The crucial thing is the sorted nature of the partitions which means that instead of scanning every item in the list to repartition, we only need to read those items we are moving. While in the worst case this will still require O(n) move operations, for many use-cases this is not so. - Using some clever bookkeeping, we can make sure that the deviance is correctly calculated at all times, when repartitioning and when adding new items. Some sample code, in Python, is below. Note that it only allows items to be added to the list, not removed. This could easily be added, but at the time I wrote this I had no need for it. Rather than implement the priority queues myself, I have used the [sortedlist](http://stutzbachenterprises.com/blist/sortedlist.html) from Daniel Stutzbach's excellent [blist package](http://stutzbachenterprises.com/blist/), which uses [B+Tree](http://en.wikipedia.org/wiki/B%2B_tree)s internally. Consider this code licensed under the [MIT license](http://www.opensource.org/licenses/mit-license.html). It has not been significantly optimised or polished, but has worked for me in the past. New versions will be available [here](http://github.com/fmark/phes-code/blob/master/deviance_list.py). Let me know if you have any questions, or find any bugs.
``` from blist import sortedlist import operator class deviance_list: def __init__(self): self.mean = 0.0 self._old_mean = 0.0 self._sum = 0L self._n = 0 #n items # items greater than the mean self._toplist = sortedlist() # items less than the mean self._bottomlist = sortedlist(key = operator.neg) # Since all items in the "eq list" have the same value (self.mean) we don't need # to maintain an eq list, only a count self._eqlistlen = 0 self._top_deviance = 0 self._bottom_deviance = 0 @property def absolute_deviance(self): return self._top_deviance + self._bottom_deviance def append(self, n): # Update summary stats self._sum += n self._n += 1 self._old_mean = self.mean self.mean = self._sum / float(self._n) # Move existing things around going_up = self.mean > self._old_mean self._rebalance(going_up) # Add new item to appropriate list if n > self.mean: self._toplist.add(n) self._top_deviance += n - self.mean elif n == self.mean: self._eqlistlen += 1 else: self._bottomlist.add(n) self._bottom_deviance += self.mean - n def _move_eqs(self, going_up): if going_up: self._bottomlist.update([self._old_mean] * self._eqlistlen) self._bottom_deviance += (self.mean - self._old_mean) * self._eqlistlen self._eqlistlen = 0 else: self._toplist.update([self._old_mean] * self._eqlistlen) self._top_deviance += (self._old_mean - self.mean) * self._eqlistlen self._eqlistlen = 0 def _rebalance(self, going_up): move_count, eq_move_count = 0, 0 if going_up: # increase the bottom deviance of the items already in the bottomlist if self.mean != self._old_mean: self._bottom_deviance += len(self._bottomlist) * (self.mean - self._old_mean) self._move_eqs(going_up) # transfer items from top to bottom (or eq) list, and change the deviances for n in iter(self._toplist): if n < self.mean: self._top_deviance -= n - self._old_mean self._bottom_deviance += (self.mean - n) # we increment movecount and move them after the list # has finished iterating so we don't modify the list during iteration move_count += 1 elif n == self.mean: self._top_deviance -= n - self._old_mean self._eqlistlen += 1 eq_move_count += 1 else: break for _ in xrange(0, move_count): self._bottomlist.add(self._toplist.pop(0)) for _ in xrange(0, eq_move_count): self._toplist.pop(0) # decrease the top deviance of the items remain in the toplist self._top_deviance -= len(self._toplist) * (self.mean - self._old_mean) else: if self.mean != self._old_mean: self._top_deviance += len(self._toplist) * (self._old_mean - self.mean) self._move_eqs(going_up) for n in iter(self._bottomlist): if n > self.mean: self._bottom_deviance -= self._old_mean - n self._top_deviance += n - self.mean move_count += 1 elif n == self.mean: self._bottom_deviance -= self._old_mean - n self._eqlistlen += 1 eq_move_count += 1 else: break for _ in xrange(0, move_count): self._toplist.add(self._bottomlist.pop(0)) for _ in xrange(0, eq_move_count): self._bottomlist.pop(0) # decrease the bottom deviance of the items remain in the bottomlist self._bottom_deviance -= len(self._bottomlist) * (self._old_mean - self.mean) if __name__ == "__main__": import random dv = deviance_list() # Test against some random data, and calculate result manually (nb. slowly) to ensure correctness rands = [random.randint(-100, 100) for _ in range(0, 1000)] ns = [] for n in rands: dv.append(n) ns.append(n) print("added:%4d, mean:%3.2f, oldmean:%3.2f, mean ad:%3.2f" % (n, dv.mean, dv._old_mean, dv.absolute_deviance / dv.mean)) assert sum(ns) == dv._sum, "Sums not equal!" assert len(ns) == dv._n, "Counts not equal!" 
m = sum(ns) / float(len(ns)) assert m == dv.mean, "Means not equal!" real_abs_dev = sum([abs(m - x) for x in ns]) # Due to floating point imprecision, we check if the difference between the # two ways of calculating the asb. dev. is small rather than checking equality assert abs(real_abs_dev - dv.absolute_deviance) < 0.01, ( "Absolute deviances not equal. Real:%.2f, calc:%.2f" % (real_abs_dev, dv.absolute_deviance)) ``` [1] If symptoms persist, see your doctor.
null
CC BY-SA 2.5
null
2010-10-09T03:27:11.417
2010-11-09T05:33:11.947
2017-04-13T12:44:36.923
-1
179
null
3435
2
null
3425
3
null
I would also suggest you check out [N-grams](http://en.wikipedia.org/wiki/N-gram) and the [Damerau–Levenshtein](http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) distance besides the other suggestions of Kwak. This [paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.15.178&rep=rep1&type=pdf) compares the accuracy of a few different edit distances mentioned here (and is highly cited according to Google Scholar). As you can see there are many different ways to approach this, and you can even combine different metrics (the paper I linked to talks about this a little bit). I think the Levenshtein and related metrics make the most intuitive sense, especially if errors occur because of human typing. N-grams are also simple and make sense for data that is not names or words per se. While Soundex is an option, in the little bit of work I have seen (which is admittedly a very small amount) Soundex does not perform as well as Levenshtein or other edit distances for matching names. And Soundex is limited to phonetic phrases likely typed by humans, whereas Levenshtein and N-grams have a potentially broader scope (especially N-grams, but I would expect the Levenshtein distance to perform better for non-words as well). I can't help as far as packages go, but the concept of N-grams is pretty simple (I did make an SPSS macro to do N-grams recently, but for such a small project I would just go with the already made packages in R the other posters have suggested). [Here](http://hetland.org/coding/python/levenshtein.py) is an example of calculating the Levenshtein distance in Python.
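To make the N-gram idea concrete, here is a tiny character-bigram sketch in R (not taken from the linked paper, just an illustration of the concept): strings are decomposed into overlapping n-grams and compared with a Jaccard similarity.

```
ngrams <- function(s, n = 2) {
  s <- tolower(s)
  if (nchar(s) < n) return(s)
  substring(s, 1:(nchar(s) - n + 1), n:nchar(s))
}
jaccard <- function(a, b, n = 2) {
  ga <- unique(ngrams(a, n))
  gb <- unique(ngrams(b, n))
  length(intersect(ga, gb)) / length(union(ga, gb))
}
jaccard("Levenshtein", "Levenshstein")  # high (0.75): robust to the inserted letter
jaccard("Levenshtein", "Soundex")       # low: unrelated strings share few bigrams
```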
null
CC BY-SA 2.5
null
2010-10-09T03:32:47.390
2010-10-09T03:32:47.390
null
null
1036
null
3436
2
null
3381
3
null
I would see each histogram as a different model (parametrized by the width). Fitting a smoothing spline or some other kind of smoother for each of the models is simple. You can then do model selection (such as cross-validation) to choose the histogram width that gives the best results, or do model stacking to fit least-squares weights on the models. However, why not directly smooth the data instead of clustering it into histogram bars first? There are finite-window width kernels that don't use the entire dataset for prediction at a given point. Practicality and speed depends on what you are really trying to obtain, but I am sure there exist simpler solutions.
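If it helps, here is a rough base-R sketch of the two routes described above, on simulated data (the original dataset isn't shown): a spline smoothed through the bars of one candidate histogram, versus a kernel density estimate fitted to the raw data with a cross-validated bandwidth.

```
set.seed(1)
x <- rgamma(500, shape = 2)          # stand-in for the real data

# Route 1: pick a histogram width, then smooth the bar heights with a spline
h  <- hist(x, breaks = seq(0, max(x) + 0.5, by = 0.5), plot = FALSE)
sp <- smooth.spline(h$mids, h$density)

# Route 2: skip the histogram and smooth the data directly; the bandwidth plays
# the role of the bin width, and bw.ucv() chooses it by cross-validation
d <- density(x, bw = bw.ucv(x))

plot(h, freq = FALSE, border = "grey")
lines(predict(sp, seq(0, max(x), length.out = 200)), col = "red")
lines(d, col = "blue")
```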
null
CC BY-SA 2.5
null
2010-10-09T09:16:11.677
2010-10-09T09:16:11.677
null
null
1526
null
3437
2
null
3419
3
null
The simple idea is to run PCA and base the distance on the first few components (though I don't like this technique because of the assumptions it makes). The complex idea is to use machine learning; the resulting distances will expose the classifier structure, so they will be about as good as the classification accuracy. The simplest approach here is just the random forest object proximity/distance ([Breiman's example](http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#cluster)), but you can also use the kernel implied by an SVM; see for instance [Winters-Hilt & Merat 2007](http://www.biomedcentral.com/1471-2105/8/S7/S18).
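As a small illustration of the random-forest route (using the built-in iris data in place of the question's classified dataset, and assuming the randomForest package is installed):

```
library(randomForest)
set.seed(42)
rf <- randomForest(Species ~ ., data = iris, proximity = TRUE)
d  <- as.dist(1 - rf$proximity)   # dissimilarity induced by the classifier
plot(hclust(d), labels = FALSE)   # e.g. feed it to hierarchical clustering or MDS
```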
null
CC BY-SA 2.5
null
2010-10-09T11:59:06.653
2010-10-09T11:59:06.653
null
null
null
null
3438
1
3440
null
14
55594
See this Wikipedia page: [Binomial proportion confidence interval](http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Agresti-Coull_Interval). To get the Agresti-Coull Interval, one needs to calculate a percentile of the normal distribution, called $z$. How do I calculate the percentile? Is there a ready-made function that does this in Wolfram Mathematica and/or Python/NumPy/SciPy?
Calculating percentile of normal distribution
CC BY-SA 4.0
null
2010-10-09T13:34:40.713
2020-08-23T04:02:16.183
2020-08-23T04:02:16.183
236645
5793
[ "python", "normal-distribution" ]
3439
2
null
3438
4
null
Well, you didn't ask about R, but in R you do it using ?qnorm (It's actually the quantile, not the percentile, or so I believe) ``` > qnorm(.5) [1] 0 > qnorm(.95) [1] 1.644854 ```
null
CC BY-SA 2.5
null
2010-10-09T13:40:55.500
2010-10-09T13:40:55.500
null
null
253
null
3440
2
null
3438
3
null
For Mathematica `$VersionNumber > 5` you can use ``` Quantile[NormalDistribution[μ, σ], q/100] ``` for the `q`-th percentile (`Quantile` expects a probability between 0 and 1). Otherwise, you have to load the appropriate Statistics package first.
null
CC BY-SA 3.0
null
2010-10-09T14:08:55.643
2017-01-17T09:38:14.200
2017-01-17T09:38:14.200
830
830
null
3441
2
null
3438
4
null
In Python, you can use the [stats](http://www.scipy.org/SciPyPackages/Stats) module from the [scipy](http://www.scipy.org/) package (look for `ppf()`, the inverse of `cdf()`, as in the following [example](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html)). (It seems the [transcendantal](http://bonsai.hgc.jp/~mdehoon/software/python/special.html) package also includes the usual cumulative distributions.)
null
CC BY-SA 2.5
null
2010-10-09T14:20:58.783
2010-10-09T14:20:58.783
null
null
930
null
3442
2
null
97
28
null
For basic summaries, I agree that reporting frequency tables and some indication about central tendency is fine. For inference, a recent article published in PARE discussed t- vs. MWW-test, [Five-Point Likert Items: t test versus Mann-Whitney-Wilcoxon](http://pareonline.net/pdf/v15n11.pdf). For a more elaborate treatment, I would recommend reading Agresti's review on ordered categorical variables: > Liu, Y and Agresti, A (2005). The analysis of ordered categorical data: An overview and a survey of recent developments. Sociedad de Estadística e Investigación Operativa Test, 14(1), 1-73. It extends well beyond the usual statistics to, e.g., threshold-based models (such as the proportional-odds model), and is worth reading in place of Agresti's [CDA](http://www.stat.ufl.edu/~aa/cda/cda.html) book. Below I show a picture of three different ways of treating a Likert item; from top to bottom, the "frequency" (nominal) view, the "numerical" view, and the "probabilistic" view (a [Partial Credit Model](http://en.wikipedia.org/wiki/Polytomous_Rasch_model)): ![alt text](https://i.stack.imgur.com/ppdQg.png) The data come from the `Science` dataset in the `ltm` package, where the item concerned technology ("New technology does not depend on basic scientific research", with response "strongly disagree" to "strongly agree", on a four-point scale).
null
CC BY-SA 3.0
null
2010-10-09T15:03:51.913
2014-01-21T21:33:29.013
2014-01-21T21:33:29.013
2921
930
null
3443
2
null
3400
11
null
Propensity scores are typically used in the matching literature. Propensity scores use pre-treatment covariates to estimate the probability of receiving treatment. Essentially, a regression (either just regular OLS or logit, probit, etc) is used to calculate the propensity score with treatment as your outcome and pre-treatment variables as your covariates. Once a good estimate of the propensity score is obtained, subjects with similar propensity scores, but different treatments received, are matched to one another. The treatment effect is the difference in means between these two groups. Rosenbaum and Rubin (1983) show that matching treated and control subjects using just the propensity score is sufficient to remove all bias in the estimate of the treatment effect stemming from the observed pre-treatment covariates used to construct the score. Note that this proof requires the use of the true propensity score, rather than an estimate. The advantage of this approach is that it turns a problem of matching in multiple dimensions (one for each pre-treatment covariate) into a univariate matching case---a great simplification. Rosenbaum, Paul R. and Donald B. Rubin. 1983. "[The Central Role of the Propensity Score in Observational Studies for Causal Effects](http://faculty.smu.edu/millimet/classes/eco7377/papers/rosenbaum%20rubin%2083a.pdf)." Biometrika. 70(1): 41--55.
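A bare-bones R sketch of the procedure just described, on simulated data (all names and numbers are made up; a real application would use a proper matching package and check covariate balance):

```
set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
treat <- rbinom(n, 1, plogis(0.5 * x1 - 0.5 * x2))
y <- 2 * treat + x1 + x2 + rnorm(n)          # true treatment effect = 2

# Step 1: propensity score from a logit of treatment on pre-treatment covariates
ps <- fitted(glm(treat ~ x1 + x2, family = binomial))

# Step 2: 1:1 nearest-neighbour matching on the score (with replacement, crude)
treated  <- which(treat == 1)
controls <- which(treat == 0)
match_id <- sapply(treated, function(i) controls[which.min(abs(ps[controls] - ps[i]))])

# Step 3: treatment effect as the difference in means across matched groups
mean(y[treated]) - mean(y[match_id])
```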
null
CC BY-SA 2.5
null
2010-10-09T16:02:58.157
2010-10-09T18:23:45.387
2010-10-09T18:23:45.387
930
401
null
3444
2
null
3438
21
null
John Cook's page, [Distributions in Scipy](http://www.johndcook.com/distributions_scipy.html), is a good reference for this type of stuff: ``` In [15]: import scipy.stats In [16]: scipy.stats.norm.ppf(0.975) Out[16]: 1.959963984540054 ```
null
CC BY-SA 2.5
null
2010-10-09T16:09:00.780
2010-10-09T16:09:00.780
null
null
251
null
3445
1
null
null
3
10854
I have the following data, which is the output from the [MS Hudson](http://bioinformatics.oxfordjournals.org/content/18/2/337.full.pdf+html) software. ``` segsites: 6 positions: 0.1256 0.3122 0.3218 0.4970 0.5951 0.7943 001010 110101 010100 001010 010100 ``` I want to make an R function to calculate the R-Squared across pairs separated by <10% (the difference between positions of SNPs must be < 0.10) of the simulated genomic region. How would I go about doing this?
How to calculate the pairwise LD for the given data?
CC BY-SA 3.0
null
2010-10-09T16:28:24.350
2017-02-02T21:14:02.180
2013-04-14T10:54:52.443
null
null
[ "r", "correlation", "genetics" ]
3446
1
3454
null
9
2543
I've been looking at some of the packages from the High perf task [view](http://cran.r-project.org/web/views/HighPerformanceComputing.html) dealing with GPU computations, and given that most GPU seem to be an order of magnitude stronger at performing single precision arithmetics than DP [ones](http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units), I was wondering: - why none of the packages gives more control to the user on the type of precision required? I can see many applications in statistics where SP arithmetics (i.e., number coded with 7 digit accuracy) are good enough for practical use (if I am overestimating the gains involved, let me know). - is Python more flexible on this? If so, why? I don't see why the absence of a 'single' type in R would make including such an option (together with a warning) in say GPUtools or magma impossible (though I'll be happy to be shown wrong). PS: I'm specifically thinking of applications where the numbers are already dimension-wise scaled and centered (so that Chebychev's [inequality](http://en.wikipedia.org/wiki/Chebyshev%27s_inequality) is binding) dimension-wise.
Significance of single precision floating point
CC BY-SA 4.0
null
2010-10-09T17:47:05.413
2020-11-15T18:55:57.833
2020-11-15T18:55:57.833
265676
603
[ "r", "python", "gpu" ]
3447
2
null
3445
4
null
I know the `LDheatmap` function/package can calculate the pairwise LDs, see `ldhm$LDmatrix` in the example below. I'm not familiar with the software you mention or how to get data into the required format for `LDheatmap`. ``` > library(LDheatmap) > data(CEUData) > ldhm <- LDheatmap(CEUSNP, genetic.distances=CEUDist, LDmeasure="r") > head(ldhm$LDmatrix, 1) rs4615512 rs2283089 rs1894731 rs2283092 rs2283093 rs6978939 rs6979287 rs4615512 NA 0.003329066 0.1657227 0.1657227 0.1269654 0.1657227 0.1209495 rs6979572 rs2283094 rs2283095 rs6467111 rs1111183 rs2237789 rs2299531 rs2299532 rs4615512 0.1197102 0.8395061 0.1197102 0.099289 0.3470553 0.1197102 0.3746656 0.3470553 ```
null
CC BY-SA 2.5
null
2010-10-09T18:14:22.367
2010-10-09T18:14:22.367
null
null
251
null
3448
2
null
3445
3
null
There are various R/Bioconductor packages that allow you to compute pairwise correlation for SNPs in linkage disequilibrium; see the CRAN Task View [Statistical Genetics](http://cran.r-project.org/web/views/Genetics.html). As I worked directly with whole genome scans, I've mainly been using `snpMatrix`, but [LDheatmap](http://cran.r-project.org/web/packages/LDheatmap/index.html) or [mapLD](http://cran.r-project.org/web/packages/mapLD/index.html) are fine. However, usually they expect genotype data (AA, AB, or BB), so I guess you will have to first convert your binary-encoded haplotypes... About the filter on location, I also guess you just have to consider the pairwise $R^2$ or $D'$ for proximal SNPs (usually, we draw a so-called heatmap of pairwise LD, which is roughly speaking the lower-diagonal elements of the correlation matrix, so you just have to consider the very first off-diagonal elements). Update Now that I've read some papers, I'm not sure you will achieve your goals with the aforementioned method. To my knowledge, few packages can cope with multiallelic loci or haplotype blocks, one example being the [gap](http://cran.r-project.org/web/packages/gap/index.html) package from JH Zhao (see also a review in the [Journal of Statistical Software](http://www.jstatsoft.org/v23/i08/paper)). The `LDkl()` function for example computes D' and $\rho$ from a vector of haplotype frequencies, which can easily be plotted using `image()` or `levelplot()` from the `lattice` package.
null
CC BY-SA 2.5
null
2010-10-09T18:14:36.530
2010-10-10T14:05:27.840
2010-10-10T14:05:27.840
930
930
null
3449
2
null
134
14
null
# Edit: As @Hunaphu points out (and @whuber below in his answer), the original answer I gave to the OP (below) is wrong. It is indeed quicker to first sort the initial batch and then keep updating the median up or down (depending on whether a new data point falls to the left or to the right of the current median). --- It's bad form to sort an array to compute a median. Medians (and other quantiles) are typically computed using the [quickselect](http://www.ics.uci.edu/%7Eeppstein/161/960125.html) algorithm, with $O(n)$ complexity. You may also want to look at my answer to a recent related question [here](https://stats.stackexchange.com/questions/3372/is-it-possible-to-accumulate-a-set-of-statistics-that-describes-a-large-number-of/3376#3376).
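For what it's worth, base R's `sort(partial = )` performs selection-style partial sorting, so a quickselect-flavoured median can be sketched without any extra packages (purely an illustration, not a replacement for `median()`):

```
median_select <- function(x) {
  n <- length(x)
  h <- (n + 1) %/% 2
  if (n %% 2 == 1) {
    sort(x, partial = h)[h]                          # only the h-th order statistic is needed
  } else {
    mean(sort(x, partial = c(h, h + 1))[c(h, h + 1)])
  }
}
x <- rnorm(1e5)
all.equal(median_select(x), median(x))   # TRUE
```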
null
CC BY-SA 4.0
null
2010-10-09T19:02:09.717
2021-08-19T04:28:21.460
2021-08-19T04:28:21.460
603
603
null
3450
2
null
3400
7
null
The question seems to involve two things that really ought to be considered separately. First is whether one can infer causality from an observational study, and on that you might contrast the views of, say, Pearl (2009), who argues yes so long as you can model the process properly, versus the view of @propofol, who will find many allies in experimental disciplines and who may share some of the thoughts expressed in a (rather obscure but nonetheless good) essay by Gerber et al (2004). Second, assuming that you do think that causality can be inferred from observational data, you might wonder whether propensity score methods are useful in doing so. Propensity score methods include various conditioning strategies as well as inverse propensity weighting. A nice review is given by Lunceford and Davidian (2004). They have good properties but certain assumptions are required (most specifically, "conditional independence") for them to be consistent. A little wrinkle though: propensity score matching and weighting are also used in the analysis of randomized experiments when, for example, there is an interest in computing "indirect effects" and also when there are problems of potentially non-random attrition or drop out (in which case what you have resembles an observational study). References Gerber A, et al. 2004. "The illusion of learning from observational research." In Shapiro I, et al, Problems and Methods in the Study of Politics, Cambridge University Press. Lunceford JK, Davidian M. 2004. "Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study." Statistics in Medicine 23(19):2937–2960. Pearl J. 2009. Causality (2nd Ed.), Cambridge University Press.
null
CC BY-SA 2.5
null
2010-10-09T19:17:12.947
2010-10-09T19:17:12.947
null
null
96
null
3451
2
null
2849
4
null
I guess the current (econometrics) industry standard for this setting is fixed effects regression. Take a look at the section on panel data in [this paper](http://www-personal.umich.edu/~nicholsa/ciwod.pdf) by Austin Nichols for a concise discussion. For these kinds of analyses you want larger N, typically, though. (By the way, for those with a background in statistics rather than econometrics, the usage of "fixed effects" is different for econometricians. For them, it means a regression that uses differencing or dummy variables to account for unmeasured (linearly additive) confounders in a repeat observation setting. This is different than what statisticians usually mean, which is usually in contrasting "fixed" and "random" effects.)
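A minimal illustration of the dummy-variable ("within") estimator on simulated panel data (all names and coefficients here are made up): the unit dummies absorb any time-constant unmeasured confounder.

```
set.seed(1)
id <- rep(1:50, each = 5)              # 50 units observed 5 times
u  <- rep(rnorm(50), each = 5)         # unobserved, time-constant unit effect
x  <- rnorm(250) + u                   # regressor correlated with the confounder
y  <- 1 + 2 * x + 3 * u + rnorm(250)   # true effect of x is 2

coef(lm(y ~ x))["x"]                   # pooled OLS: biased upward
coef(lm(y ~ x + factor(id)))["x"]      # fixed effects via dummies: close to 2
```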
null
CC BY-SA 2.5
null
2010-10-09T19:40:28.550
2010-10-09T19:40:28.550
null
null
96
null
3452
2
null
3446
6
null
- Because before GPUs there was no practical point in using single-precision reals; you can never have too much accuracy and memory is usually not a problem. And supporting only doubles made R's design simpler. (Although R supports reading/writing single-precision reals.) - Yes, because Python is aimed to be more compatible with compiled languages. Yet you are right that it is possible for R libraries' wrappers to do on-the-fly conversion (this of course takes time, but this is a minor problem); you can try e-mailing the GPU packages' maintainers to request such changes.
null
CC BY-SA 2.5
null
2010-10-09T20:12:24.353
2010-10-09T20:12:24.353
null
null
null
null
3453
2
null
3446
4
null
I presume that by GPU programming, you mean programming nvidia cards? In that case, the underlying calls from R and Python are to C/[CUDA](http://en.wikipedia.org/wiki/CUDA). --- The simple reason that only single precision is offered is that this is what most GPU cards support. However, the new nvidia [Fermi](http://www.nvidia.com/object/fermi_architecture.html) architecture does support double precision. If you bought an nvidia graphics card this year, then it's probably a Fermi. Even here things aren't simple: - You get a slight performance hit if you compile with double precision (a factor of two if I remember correctly). - On the cheaper Fermi cards, nvidia intentionally disabled double precision. However, it is possible to get round this and run double precision programs. I managed to do this on my GeForce GTX 465 under linux. To answer the question in your title, "Is single precision OK?", it depends on your application (sorry crap answer!). I suppose everyone now uses double precision because it no longer gives a performance hit. When I dabbled with GPUs, programming suddenly became far more complicated. You have to worry about things like: - warpsize and arranging your memory properly. - #threads per kernel. - debugging is horrible - there's no print statement inside GPU kernels - lack of random number generators - Single precision.
null
CC BY-SA 2.5
null
2010-10-09T20:40:43.653
2010-10-09T20:47:49.703
2010-10-09T20:47:49.703
8
8
null
3454
2
null
3446
5
null
From the [GPUtools help file](http://cran.r-project.org/web/packages/gputools/gputools.pdf), it seems that `useSingle=TRUE` is the default for the functions.
null
CC BY-SA 2.5
null
2010-10-09T22:56:45.643
2010-10-09T22:56:45.643
null
null
251
null
3455
2
null
3446
1
null
The vast majority of GPUs in circulation only support single precision floating point. As far as the title question, you need to look at the data you'll be handling to determine if single precision is enough for you. Often, you'll find that singles are perfectly acceptable for >90% of the data you handle, but will fail spectacularly for that last 10%; unless you have an easy way of determining whether your particular data set will fail or not, you're stuck using double precision for everything.
null
CC BY-SA 2.5
null
2010-10-10T05:22:38.977
2010-10-10T05:22:38.977
null
null
1539
null
3456
2
null
3419
3
null
There is a subfield called Distance Metric Learning. One such method is Information Theoretic Metric Learning (ITML).
null
CC BY-SA 2.5
null
2010-10-10T06:12:37.237
2010-10-10T06:12:37.237
null
null
1540
null
3457
2
null
138
4
null
One more: R bloggers has many posts with tutorials materials: [http://www.r-bloggers.com/?s=tutorial](http://www.r-bloggers.com/?s=tutorial)
null
CC BY-SA 2.5
null
2010-10-10T06:27:50.547
2010-10-10T06:27:50.547
null
null
253
null
3458
1
3459
null
25
12284
I am looking for an alternative to Classification Trees which might yield better predictive power. The data I am dealing with has factors for both the explanatory and the explained variables. I remember coming across random forests and neural networks in this context, although I have never tried them before. Are there other good candidates for such a modeling task (in R, obviously)?
Alternatives to classification trees, with better predictive (e.g: CV) performance?
CC BY-SA 2.5
null
2010-10-10T09:27:49.817
2013-10-09T17:51:28.310
2010-10-10T13:24:22.520
null
253
[ "r", "machine-learning", "classification", "cart" ]
3459
2
null
3458
31
null
I think it would be worth giving Random Forests ([randomForest](http://cran.r-project.org/web/packages/randomForest/index.html)) a try; some references were provided in response to related questions: [Feature selection for “final” model when performing cross-validation in machine learning](https://stats.stackexchange.com/questions/2306/feature-selection-for-final-model-when-performing-cross-validation-in-machine-l/2307#2307); [Can CART models be made robust?](https://stats.stackexchange.com/questions/2410/can-cart-models-be-made-robust). Boosting/bagging render them more stable than a single CART, which is known to be very sensitive to small perturbations. Some authors have argued that they perform as well as penalized SVM or [Gradient Boosting Machines](http://en.wikipedia.org/wiki/Gradient_boosting) (see, e.g. Cutler et al., 2009). I think they certainly outperform NNs. Boulesteix and Strobl provide a nice overview of several classifiers in [Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction](http://www.biomedcentral.com/1471-2288/9/85/) (BMC MRM 2009 9: 85). I've heard of another good study at the [IV EAM meeting](http://www.iqb.hu-berlin.de/veranst/EAM-SMABS), which should be under review in Statistics in Medicine, > João Maroco, Dina Silva, Manuela Guerreiro, Alexandre de Mendonça. Do Random Forests Outperform Neural Networks, Support Vector Machines and Discriminant Analysis classifiers? A case study in the evolution to dementia in elderly patients with cognitive complaints I also like the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package: it is well documented and allows you to compare the predictive accuracy of different classifiers on the same data set. It takes care of managing training/test samples, computing accuracy, etc. in a few user-friendly functions. The [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) package, from Friedman and coll., implements penalized GLM (see the review in the [Journal of Statistical Software](http://www.jstatsoft.org/v33/i01/)), so you remain in a well-known modeling framework. Otherwise, you can also look for association-rule-based classifiers (see the CRAN Task View on [Machine Learning](http://cran.r-project.org/web/views/MachineLearning.html) or the [Top 10 algorithms in data mining](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.5575&rep=rep1&type=pdf) for a gentle introduction to some of them). I'd like to mention another interesting approach that I plan to re-implement in R (actually, it's Matlab code) which is [Discriminant Correspondence Analysis](http://www.utdallas.edu/~herve/abdi-wafo2010-mudica-inpress.pdf) from Hervé Abdi. Although initially developed to cope with small-sample studies with a lot of explanatory variables (possibly grouped into coherent blocks), it seems to efficiently combine classical DA with data reduction techniques. References - Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer. - Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517.
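As a small, hedged illustration of the kind of comparison caret makes easy (on a built-in dataset standing in for the factor-only data in the question; it assumes the randomForest and kernlab packages are installed, and for a strictly paired comparison you would reuse the same resampling index in `trainControl`):

```
library(caret)
set.seed(123)
ctrl <- trainControl(method = "cv", number = 5)       # 5-fold CV for each model
fit_rf  <- train(Species ~ ., data = iris, method = "rf",        trControl = ctrl)
fit_svm <- train(Species ~ ., data = iris, method = "svmRadial", trControl = ctrl)
summary(resamples(list(RF = fit_rf, SVM = fit_svm)))  # compare CV accuracy / kappa
```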
null
CC BY-SA 3.0
null
2010-10-10T09:50:16.577
2012-05-01T10:48:25.300
2017-04-13T12:44:29.923
-1
930
null
3460
1
3464
null
45
1916
For some of us, refereeing papers is part of the job. When refereeing statistical methodology papers, I think advice from other subject areas is fairly useful, e.g. [computer science](https://cstheory.stackexchange.com/questions/1893/how-do-i-referee-a-paper) and [Maths](https://mathoverflow.net/questions/36596/refereeing-a-paper). This question concerns reviewing more applied statistical papers. By this I mean, the paper is submitted to a non-statistical/mathematical journal and statistics is just mentioned in the "methods" section. Some particular questions: - How much effort should we put in to understand the application area? - How much time should I spend on a report? - How picky are you when looking at figures/tables? - How do you cope with the data not being available? - Do you try and rerun the analysis used? - What's the maximum number of papers you would review in a year? Have I missed any questions? Feel free to edit or add a comment. Edit: I'm coming to this question as a statistician reviewing a biology paper, but I'm interested in the statistical review of any non-mathematical discipline. --- I'm not sure if this should be a CW. On one hand it's a bit open, but on the other I can see myself accepting an answer. Also, answers will probably be fairly long.
Reviewing statistics in papers
CC BY-SA 2.5
null
2010-10-10T09:55:00.890
2010-10-12T09:49:10.387
2017-04-13T12:58:32.177
-1
8
[ "references", "referee" ]
3461
2
null
3458
8
null
For multi-class classification, support vector machines are also a good choice. I typically use the the R kernlab package for this. See the following JSS paper for a good discussion: [http://www.jstatsoft.org/v15/i09/](http://www.jstatsoft.org/v15/i09/)
null
CC BY-SA 2.5
null
2010-10-10T10:19:27.863
2010-10-10T10:19:27.863
null
null
5
null
3462
2
null
3296
3
null
I think you are suffering from the presence of outliers in your design matrix. The remedy is to detect them using a multivariate robust estimator of location/scale (just as you can use the median to detect outliers in a univariate setting, but you can't use the mean because the mean itself is sensitive to the presence of outliers). High-quality estimators are already available in R (through the MASS package). I advise you to read the following (non-technical) summary introduction to multivariate robust methods: P. J. Rousseeuw and K. van Driessen (1999) A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212-223. There are many good implementations in R; one I particularly recommend is covMcd() in package robustbase (better than the MASS implementation because it includes the small-sample correction factor). A typical use would be: ``` x<-mydata #your 300 by 40 matrix of **design variables** out<-covMcd(x) ind.out<-which(out$mcd.wt==0) ``` Now, ind.out contains the indices of the observations flagged as outliers. You should exclude them from your sample and re-run your classification procedure on the 'decontaminated' sample. I think it will stabilize your results and solve your problem. Let us know :) EDIT: As pointed out by chl in the comments below, it could be advisable, in your case, to supplement the hard rejection rule used in the code above with a graphical method (an implementation of which can be found in the R package mvoutlier). This is wholly consistent with the approach proposed in my answer; in fact, it is well explained (and illustrated) in the paper I cite above. Therefore, I will just point out two arguments in its favor that may be particularly relevant to your case (assuming that you indeed have an outlier problem and that these can be found by the MCD): - It provides a visually strong illustration of the problem with outliers, as each observation is associated with a measure of its influence on the resulting estimates (observations with outsized influence then stand out). - The approach I proposed applies a strong rejection rule: in a nutshell, any observation whose influence over the final estimates is larger than some threshold is considered an outlier. The graphical approach might help you save some observations, by trying to recover those observations whose influence over the estimator is beyond the threshold but only by a small amount. It is important in the context of your model because 300 observations in a 40-dimensional space is rather sparse already.
null
CC BY-SA 2.5
null
2010-10-10T11:10:14.780
2010-10-12T15:02:24.580
2010-10-12T15:02:24.580
603
603
null
3463
1
null
null
15
9228
I have two time series S, and T. they have the same frequency and the same length. I would like to calculate (using R), the correlation between this pair (i.e. S and T), and also be able to calculate the significance of the correlation), so I can determine whether the correlation is due to chance or not. I would like to do this in R, and am looking for pointers/skeletal framework to get me started.
Computing correlation (and the significance of said correlation) between a pair of time series
CC BY-SA 2.5
null
2010-10-10T11:11:52.523
2010-10-13T06:37:12.110
null
null
1216
[ "r", "time-series", "correlation" ]
3464
2
null
3460
23
null
I am not sure about which area of science you are referring to (I'm sure the answer would be really different if dealing with biology vs physics for instance...) Anyway, as a biologist, I will answer from a "biological" point of view: > How much effort should we put in to understand the application area? I tend at least to read the previous papers from the same authors and look for a few reviews on the subject if I am not too familiar with it. This is especially true when dealing with new techniques I don't know, because I need to understand if they did all the proper controls etc. > How much time should I spend on a report? As much as needed (OK, dumb answer, I know! :P) In general I would not like someone reviewing my paper to do a sloppy job just because he/she has other things to do, so I try not to do it myself. > How picky are you when looking at figures/tables. Quite picky. Figures are the first thing you look at when browsing through a paper. They need to be consistent (e.g. right titles on the axes, correct legend etc.). On occasion I have suggested using a different kind of plot to show data when I thought the one used was not the best. This happens a lot in biology, a field that is dominated by the "barplot +/- SEM" type of graph. I'm also quite picky on the "materials and methods" section: a perfect statistical analysis on an inherently wrong biological model is completely useless. > How do you cope with the data not being available. You just do and trust the Authors, I guess. In many cases in biology there's not much you can do, especially when dealing with things like imaging or animal behaviour and similar. Unless you want people to publish tons of images, videos etc (that you most likely would not go through anyways), but that may be very impractical. If you think the data are really necessary, ask the authors to provide them as supplementary data/figures. > Do you try and rerun the analysis used. Only if I have serious doubts about the conclusions drawn by the authors. In biology there's often a difference between what is (or not) "statistically significant" and what is "biologically significant". I prefer a thinner statistical analysis with good biological reasoning than the other way around. But again, in the very unlikely event that I were to review a bio-statistics paper (ahah, that would be some fun!!!) I would probably pay much more attention to the stats than to the biology in there.
null
CC BY-SA 2.5
null
2010-10-10T11:27:35.330
2010-10-10T11:33:27.747
2010-10-10T11:33:27.747
582
582
null
3465
2
null
3458
3
null
As already mentioned, Random Forests are a natural "upgrade" and, these days, SVMs are generally the recommended technique to use. I want to add that, more often than not, switching to SVMs yields very disappointing results. The thing is, whilst techniques like random trees are almost trivial to use, SVMs are a bit trickier. I found this paper invaluable back when I used SVMs for the first time (A Practical Guide to Support Vector Classification): [http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf](http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf) In R you can use the e1071 package for SVMs; it links against the de facto standard (in free software at least!) libSVM library.
null
CC BY-SA 2.5
null
2010-10-10T12:08:40.070
2010-10-10T17:31:45.723
2010-10-10T17:31:45.723
300
300
null
3466
1
3467
null
63
69981
Imagine the following common design: - 100 participants are randomly allocated to either a treatment or a control group - the dependent variable is numeric and measured pre- and post- treatment Three obvious options for analysing such data are: - Test the group by time interaction effect in mixed ANOVA - Do an ANCOVA with condition as the IV and the pre- measure as the covariate and post measure as the DV - Do a t-test with condition as the IV and pre-post change scores as the DV Question: - What is the best way to analyse such data? - Are there reasons to prefer one approach over another?
Best practice when analysing pre-post treatment-control designs
CC BY-SA 2.5
null
2010-10-10T13:04:18.347
2022-08-21T17:03:50.260
2022-08-21T17:03:50.260
121522
183
[ "ancova", "clinical-trials", "pre-post-comparison", "faq" ]
3467
2
null
3466
43
null
There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g. > Senn, S (2007). Statistical issues in drug development. Wiley (chap. 7 pp. 96-112) In biomedical research, interesting work has also been done in the study of [cross-over trials](http://en.wikipedia.org/wiki/Crossover_study) (esp. in relation to carry-over effects, although I don't know how applicable it is to your study). [From Gain Score t to ANCOVA F (and vice versa)](http://pareonline.net/pdf/v14n6.pdf), from Knapp & Schaffer, provides an interesting review of the ANCOVA vs. t approach (the so-called Lord's Paradox). The simple analysis of change scores is not the recommended way to handle a pre/post design according to Senn in his article [Change from baseline and analysis of covariance revisited](http://onlinelibrary.wiley.com/doi/10.1002/sim.2682/abstract) (Stat. Med. 2006 25(24)). Moreover, using a mixed-effects model (e.g. to account for the correlation between the two time points) is not better because you really need to use the "pre" measurement as a covariate to increase precision (through adjustment). Very briefly: - The use of change scores (post $-$ pre, or outcome $-$ baseline) does not solve the problem of imbalance; the correlation between pre and post measurement is < 1, and the correlation between pre and (post $-$ pre) is generally negative -- it follows that if the treatment group, as measured by raw scores, happens to be at an unfair disadvantage compared to control, it will have an unfair advantage with change scores. - The variance of the estimator used in ANCOVA is generally lower than that for raw or change scores (unless correlation between pre and post equals 1). - If the pre/post relationships differ between the two groups (slope), it is not as much of a problem as it is for the other methods (the change scores approach also assumes that the relationship is identical between the two groups -- the parallel slope hypothesis). - Under the null hypothesis of equality of treatment (on the outcome), no treatment x baseline interaction is expected; it is dangerous to fit such a model, but in this case one must use centered baselines (otherwise, the treatment effect is estimated at the covariate origin). I also like [Ten Difference Score Myths](http://public.kenan-flagler.unc.edu/faculty/edwardsj/Edwards2001b.pdf) from Edwards, although it focuses on difference scores in a different context; but here is an [annotated bibliography](https://homes.ori.org/keiths/bibliography/statistics-prepost.html) on the analysis of pre-post change (unfortunately, it doesn't cover very recent work). Van Breukelen also compared ANOVA vs. ANCOVA in randomized and non-randomized settings, and his conclusions support the idea that ANCOVA is to be preferred, at least in randomized studies (which protect against regression-to-the-mean effects).
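To make the comparison concrete, here is a tiny simulation sketch in R (true treatment effect set to 1) of the change-score and ANCOVA analyses discussed in the question; for two time points, the group-by-time interaction of the mixed ANOVA is equivalent to the change-score t test.

```
set.seed(1)
n    <- 100
grp  <- factor(rep(c("control", "treatment"), each = n / 2))
pre  <- rnorm(n)
post <- 0.6 * pre + 1 * (grp == "treatment") + rnorm(n)
chg  <- post - pre

t.test(chg ~ grp)                  # change-score analysis
summary(lm(post ~ pre + grp))      # ANCOVA: baseline as covariate, typically more precise
```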
null
CC BY-SA 4.0
null
2010-10-10T13:59:47.777
2020-09-14T18:03:39.107
2020-09-14T18:03:39.107
930
930
null
3471
1
3493
null
0
159
Is it possible to load an S-PLUS Linux workspace in Windows? If I try it I get this error: "Problem in exists(name, where = db): This directory has both Unix style __nonfile and Windows style __nonfi" The __nonfi file is created when I first try to load that Linux workspace in Windows. Is there any way to convert it to a Windows workspace? I'm using S-PLUS 8.0.
Load Linux workspace in S-PLUS for Windows
CC BY-SA 2.5
null
2010-10-10T20:35:54.560
2010-10-11T20:27:59.543
null
null
749
[ "splus" ]
3472
2
null
3307
4
null
I doubt you're going to find a single answer to this, given the space of fractal dimensions. Most papers (in physics, geology) looking at correlation simply stick to a Pearson correlation with fractal math reserved for identifying dimension/self-similarity, etc. But you might be interested in the following papers which use a "Correlation Fractal Dimension" as a similarity metric. The second paper mentions a fractal clustering algorithm which employs this metric. - Estimating the Selectivity of Spatial Queries Using the `Correlation' Fractal Dimension (Belussi, Faloutsos, 1995) - Characterizing Datasets Using Fractal Methods (Abrahao, Barbosa, 2003)
null
CC BY-SA 2.5
null
2010-10-10T21:16:38.417
2010-10-10T21:16:38.417
null
null
251
null
3474
1
3481
null
9
2462
As the title says, I'm looking for the marginal densities of $$f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1.$$ So far I have found $c$ to be $\frac{3}{2 \pi}$. I figured that out through converting $f(x,y)$ into polar coordinates and integrating over $drd\theta$, which is why I'm stuck on the marginal densities portion. I know that $f_x(x) = \int_{-\infty}^\infty f(x,y)dy$, but I'm not sure how to solve that without getting a big messy integral, and I know the answer isn't supposed to be a big messy integral. Is it possible to instead find $F(x,y)$, and then take $\frac{dF}{dx}$ to find $f_x(x)$? That seems like the intuitive way to do it but I can't seem to find anything in my textbook that states those relationships, so I didn't want to make the wrong assumptions.
Finding marginal densities of $f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1$
CC BY-SA 3.0
null
2010-10-11T03:48:14.103
2014-03-13T21:55:48.787
2014-03-13T21:55:48.787
919
1545
[ "self-study", "marginal-distribution", "multivariable" ]
3475
2
null
2467
7
null
Caution: I'm assuming that when you said "classification", you are rather referring to cluster analysis (as understood in French), that is, an unsupervised method for allocating individuals to homogeneous groups without any prior information/label. It's not obvious to me how class membership might come into play in your question. I'll take a different perspective from the other answers and suggest you try a data reduction (through PCA) of your $p$ variables followed by a mix of Ward's hierarchical and k-means clustering (this is called mixed clustering in the French literature; the basic idea is that HC is combined with a weighted k-means to consolidate the partition) on the first two or three factorial axes. This was proposed by Ludovic Lebart et coll. and is actually implemented in the [FactoClass](http://cran.r-project.org/web/packages/FactoClass/index.html) package. The advantages are as follows: - If any part of your survey is not clearly unidimensional, you will be able to gauge item contribution to the second axis, and this may help to flag those items for further inspection; - Clustering is done on the PCA scores (or you can work with a multiple correspondence analysis, though in the case of binary items it yields essentially the same results as a scaled PCA), and thanks to the mixed clustering the resulting partition is more stable and allows you to spot potential extreme respondents; you can also introduce supplementary variables (like gender, SES or age), which is useful for inspecting between-group homogeneity. In this case, no rotation is supposed to be applied to the principal axes. Considering a subspace with q < p allows one to remove the random fluctuations that often make up the variance in the remaining p - q axes. This can be viewed as some kind of "smoothing" on the data. Instead of PCA, as I said, you can use Multiple Correspondence Analysis (MCA), which is basically a non-linear PCA where numerical scores are assigned to respondents and modalities of dummy-coded variables. I have had some success using this method in characterizing clinical subgroups assessed on a wide-ranging testing battery for neuropsychological impairment, and this generally yields results that are more or less comparable (wrt. interpretation) to model-based clustering (aka latent trait analysis, in the psychometric literature). The `FactoClass` package relies on [ade4](http://cran.r-project.org/web/packages/ade4/index.html) for the factorial methods, and allows you to visualize clusters in the factorial space, as shown below: ![alt text](https://i.stack.imgur.com/JFtTa.png) Now, the problem with the so-called tandem approach is that there is no guarantee that the low-dimensional representation that is produced by PCA or MCA will be an optimal representation for identifying cluster structures. This is nicely discussed in Hwang et al. (2006), but I'm not aware of any implementation of the algorithm they proposed. Basically, the idea is to combine MCA and k-means in a single step, which amounts to minimizing two criteria simultaneously (the standard homogeneity criterion and the residual SS). References - Lebart, L, Morineau, A, and Piron, M (2000). Statistique exploratoire multidimensionnelle (3rd ed.). Dunod. - Hwang, H, Dillon, WR, and Takane, Y (2006). An extension of multiple correspondence analysis for identifying heterogeneous subgroups of respondents. Psychometrika, 71, 161-171.
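A rough base-R sketch of the "mixed clustering" step described above (PCA scores, Ward's hierarchical clustering, then a k-means consolidation started at the class centroids), on a built-in dataset standing in for the survey items:

```
X  <- scale(USArrests)
pc <- prcomp(X)$x[, 1:2]                   # first two principal axes

hc   <- hclust(dist(pc), method = "ward.D2")
init <- cutree(hc, k = 3)                  # initial Ward partition

# consolidate: k-means initialised at the Ward class centroids
centers <- apply(pc, 2, function(col) tapply(col, init, mean))
km <- kmeans(pc, centers = centers)
table(init, km$cluster)                    # how much the consolidation reshuffles
```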
null
CC BY-SA 3.0
null
2010-10-11T06:30:02.287
2012-02-03T12:00:08.003
2012-02-03T12:00:08.003
930
930
null
3476
1
3477
null
20
180938
Python [matplotlib](http://matplotlib.sourceforge.net/) has a [boxplot command](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.boxplot). Normally, all the parts of the graph are numerically ticked. How can I change the ticks to names instead of positions? For illustration, I mean the Mon Tue Wed labels like in this boxplot: ![alt text](https://i.stack.imgur.com/ZFENf.png)
How to name the ticks in a python matplotlib boxplot
CC BY-SA 2.5
null
2010-10-11T06:39:50.770
2016-01-27T20:05:46.097
null
null
190
[ "python", "matplotlib" ]
3477
2
null
3476
30
null
Use the second argument of `xticks` to set the labels: ``` import numpy as np import matplotlib.pyplot as plt data = [np.random.rand(100) for i in range(3)] plt.boxplot(data) plt.xticks([1, 2, 3], ['mon', 'tue', 'wed']) ``` Edited to remove `pylab` because [pylab is a convenience module that bulk imports matplotlib.pyplot (for plotting) and numpy (for mathematics and working with arrays) in a single name space. Although many examples use pylab, it is no longer recommended.](http://matplotlib.org/faq/usage_faq.html#matplotlib-pyplot-and-pylab-how-are-they-related)
null
CC BY-SA 3.0
null
2010-10-11T07:12:12.017
2016-01-27T20:05:46.097
2016-01-27T20:05:46.097
94986
251
null
3478
2
null
3460
12
null
My POV would be reviewing a paper in psychology or forecasting on its statistical merits. I'll mostly second Nico's very good remarks. > How much effort should we put in to understand the application area? Quite a lot, actually. I wouldn't trust myself to comment on more than the most basic statistical problems without having understood the area. Fortunately, this is often not very hard in many branches of psychology. > How much time should I spend on a report? I'll go out on a limb and state a specific time: I'll spend anything between two and eight hours on a review, sometimes more. If I find that I'm spending more than a day on a paper, it probably means that I'm really not qualified to understand it, so I'll recommend the journal find someone else (and try to suggest some people). > How picky are you when looking at figures/tables. Very picky indeed. The figures are going to be what people remember of a paper and what ends up in lecture presentations without much context, so these really need to be done well. > How do you cope with the data not being available. In psychology, the data are usually not shared - measuring 50 people by MRI is very expensive, and the authors will want to use these data for further papers, so I kind of understand their reluctance to just give out the data. So anyone who does share their data gets a big bonus in my book, but not sharing is understandable. In forecasting, many datasets are publicly available. In this case I usually recommend that the authors share their code (and do so myself). > Do you try and rerun the analysis used. Without the data, there is only so much one can learn from this. I'll play around with simulated data if something is very surprising about the paper's results; otherwise one can often tell appropriate from inappropriate methods without the data (once one understands the area, see above). > What's the maximum number of papers your would review in a year? There is really little to add to whuber's point above - assuming that every paper with on average n coauthors I (co-)submit gets 3 reviews, one should really aim at reviewing at least 3/(n+1) papers for each own submission (counting submissions rather than own papers which may be rejected and resubmitted). And of course, the number of submissions as well as the number of coauthors varies strongly with the discipline.
null
CC BY-SA 2.5
null
2010-10-11T08:09:36.607
2010-10-12T09:49:10.387
2010-10-12T09:49:10.387
1352
1352
null
3479
1
3482
null
11
5005
What is the rationale, if any, for using Discriminant Analysis (DA) on the results of a clustering algorithm like k-means, as I see it from time to time in the literature (essentially on clinical subtyping of mental disorders)? It is generally not recommended to test for group differences on the variables that were used during cluster construction since they support the maximisation (resp. minimisation) of between-class (resp. within-class) inertia. So, I am not sure I fully appreciate the added value of predictive DA, unless we seek to embed individuals in a factorial space of lower dimension and get an idea of the "generalizability" of such a partition. But even in this case, cluster analysis remains fundamentally an exploratory tool, so using class membership computed this way to further derive a scoring rule seems strange at first sight. Any recommendations, ideas or pointers to relevant papers?
Cluster Analysis followed by Discriminant Analysis
CC BY-SA 2.5
null
2010-10-11T08:37:31.890
2010-10-11T15:10:57.870
null
null
930
[ "clustering", "discriminant-analysis" ]
3480
2
null
3471
2
null
I don't know for certain, but that won't stop me from wildly speculating: The __nonfi file lists what's in the workspace. You can open it with a text editor and look at the contents. It might be possible to either manipulate the unix version (e.g. using dos2unix) or else copy the contents over into your new file. That said, I doubt that this will work since some of the S-Plus files are in binary format, and I have run into trouble in the past when I have tried to manually change the __nonfi file. This question might be better served by Tibco technical support. Please update us here if you get an answer.
null
CC BY-SA 2.5
null
2010-10-11T13:11:51.507
2010-10-11T13:11:51.507
null
null
5
null
3481
2
null
3474
15
null
Geometry helps here. The graph of $f$ is a spherical dome of unit radius. (It follows immediately that its volume is half that of a unit sphere, $(4 \pi /3)/2$, whence $c=3/(2 \pi)$.) The marginal densities are given by areas of vertical cross-sections through this sphere. Obviously each cross-section is a semicircle: to obtain the marginal density, find its radius as a function of the remaining variable and use the formula for the area of a circle. Normalizing the resulting univariate function to have unit area turns it into a density.
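Carrying out that computation explicitly for the marginal in $x$ (the one in $y$ is identical by symmetry): at fixed $x$ the cross-section is a semicircle of radius $\sqrt{1-x^2}$, whose area is $\frac{\pi}{2}\left(1-x^2\right)$, so $$f_X(x) = \frac{3}{2\pi}\cdot\frac{\pi}{2}\left(1-x^2\right) = \frac{3}{4}\left(1-x^2\right), \qquad -1 \le x \le 1,$$ which indeed integrates to $1$.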
null
CC BY-SA 2.5
null
2010-10-11T14:55:07.220
2010-10-11T14:55:07.220
null
null
919
null
3482
2
null
3479
5
null
I don't know of any papers on this. I've used this approach for descriptive purposes. DFA provides a nice way to summarize group differences and dimensionality with respect to the original variables. One might more easily just profile the groups on the original variables; however, this loses the inherently multivariate nature of the clustering problem. DFA allows you to describe the groups while keeping the multivariate character of the problem intact, so it can assist with the interpretation of the clusters, where that is a goal. This works particularly well when there is a close relationship between your clustering method and your classification method, for example DFA and Ward's method. You are right about the testing problem. I published a paper using cluster analysis with a DFA follow-up to describe the clustering solution. I presented the DFA results with no test statistics. A reviewer took issue with that. I conceded and put the test statistics and p-values in there, with the disclaimer that these p-values should not be interpreted in the traditional manner.
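For what it's worth, this descriptive use is only a few lines in R. The data below are a stand-in, and Ward's method with three clusters is chosen purely for illustration:

```
library(MASS)                         # lda()

set.seed(42)
X <- scale(iris[, 1:4])               # stand-in for the clustering variables

## cluster with Ward's method and cut the tree into 3 groups
hc  <- hclust(dist(X), method = "ward.D2")
grp <- factor(cutree(hc, k = 3))

## DFA used descriptively, not inferentially
fit <- lda(X, grouping = grp)
fit$scaling                           # discriminant coefficients, for interpreting the clusters
plot(fit, dimen = 2)                  # clusters in the space of the first two functions
```

The point is the interpretation of `fit$scaling` and the plot, not any test statistics.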
null
CC BY-SA 2.5
null
2010-10-11T15:10:57.870
2010-10-11T15:10:57.870
null
null
485
null
3483
2
null
3460
17
null
This addresses the new question #6: "What's the maximum number of papers you would review in a year?" I'm responding as a member of several editorial boards. The perennial problem is finding enough reviewers. Depending on the journal, every submitted paper needs one to three peer reviewers, usually three. If the journal has an $x$% acceptance rate, then the mean number of reviews per accepted paper obviously is around $3/(x/100)$. E.g., if the acceptance rate is 33%, the editors need to obtain nine reviews for every paper published. If you, as an author, take this seriously, then you should attempt to provide nine reviews (or whatever the number turns out to be for your target journals) for every paper you publish! I was moved to write this due to the strong parallel with voting on this site: in order for you to garner a reputation of $r$, other people have to upvote some combination of $r/10$ of your answers and $r/5$ of your questions. Thus, if you're pulling your weight, a check of your profile should show at least $r/10$ upvotes. That is the case for many but certainly not all of the most active members of this site. Something to think about... Remember to vote!
null
CC BY-SA 2.5
null
2010-10-11T15:35:26.567
2010-10-11T15:56:43.347
2010-10-11T15:56:43.347
919
919
null
3484
1
3487
null
8
7209
I have been looking at analyst job postings, and one of the most common requirements is experience with SAS.

- Unless your organisation currently uses SAS, how can you train as a SAS user?
- What programming language would employers be happy to accept as equivalent to SAS?
Obtaining SAS experience
CC BY-SA 2.5
null
2010-10-11T15:42:06.377
2017-02-22T22:44:58.320
null
null
1077
[ "sas", "careers" ]
3485
2
null
3199
4
null
Good question. A trivial way to find "cluster of high values in the upper left" (as opposed to correlations) is to split the image into tiles and look at tile means. For example,

```
means of 100 x 100 tiles:
[[ 82  78  80  94  99 100]
 [ 80  53  66  62  80 100]
 [ 82  61  65  64  72  98]
 [ 87  83  99  81  80 100]
 [100 100 100 100 100 100]]

means of 50 x 50 tiles:
[[100  85  84 100  70  96 100 100 100 100 100]
 [ 83  59  57  71  67  88  89  86  98 100 100]
 [ 87  58  54  49  71  74  71  61  61 100 100]
 [100  76  58  52  59  61  55  59  65  95 100]
 [100  62  59  60  57  63  60  60  59  97 100]
 [100  68  65  59  59  82  76  61  61  70  95]
 [ 83  64  76  66  96 100  96  61  80  67 100]
 [100 100  97  92 100 100  84  82  83  88 100]
 [100 100 100 100 100 100 100 100 100 100 100]]
```

(A plot with average height / colour in each tile would be 10x better.)

(If you're looking for features in images, what's a "feature"? E.g. a red stop sign, as in [Histograms for feature representation](http://www.cse.yorku.ca/~sizints/histogram_search/histogram_search_main.html).)
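In case it helps, a rough R version of the tiling (the tile sizes and the toy image below are made up):

```
## mean of every (tile x tile) block of a numeric matrix
tile.means <- function(img, tile = 50) {
  nr <- floor(nrow(img) / tile)
  nc <- floor(ncol(img) / tile)
  out <- matrix(NA_real_, nr, nc)
  for (i in seq_len(nr)) {
    for (j in seq_len(nc)) {
      rows <- ((i - 1) * tile + 1):(i * tile)
      cols <- ((j - 1) * tile + 1):(j * tile)
      out[i, j] <- mean(img[rows, cols])
    }
  }
  out
}

## toy 500 x 600 "image" with a low-valued patch near the upper left
img <- matrix(100, nrow = 500, ncol = 600)
img[51:250, 101:300] <- 60

round(tile.means(img, tile = 100))    # coarse summary, like the tables above
image(t(tile.means(img, tile = 50)))  # quick average-value-per-tile plot
```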
null
CC BY-SA 2.5
null
2010-10-11T15:46:43.540
2010-10-11T15:46:43.540
null
null
557
null
3487
2
null
3484
5
null
I would recommend going through a self-study course such as the [UCLA website](http://www.ats.ucla.edu/stat/sas/) and specifically the [SAS Starter Kit](http://www.ats.ucla.edu/stat/sas/sk/default.htm). If you learn better within an interactive environment, I would suggest checking out online course offerings such as the [World Campus SAS courses](http://www.worldcampus.psu.edu/AppliedStatisticsCertificate_CourseList.shtml) offered at Penn State University (Stat 480, 481, & 482). Update: Sorry, I should've read more carefully. I agree with @Christoper.Aden that there aren't really any languages equivalent to SAS. You can learn R to perform statistical calculations, but if you need to use SAS, then learning R will only be a small step in the right direction (general programming knowledge - the two languages are incredibly different in practice). I would recommend getting an academic-discount version of SAS if you enroll in a program like the one I mentioned above - Penn State currently sells a 1-year licensed copy of SAS for $30 (only to students).
null
CC BY-SA 2.5
null
2010-10-11T17:08:44.130
2010-10-11T17:37:01.560
2010-10-11T17:37:01.560
1499
1499
null
3488
2
null
3463
6
null
You can use the ccf function to get the cross-correlations, but by default this only gives you a plot. If an estimated cross-correlation falls outside the dashed reference lines, you can take that as evidence of a statistically significant cross-correlation at that lag. I do not know of a package with a formally encapsulated test, though. Example from the ccf documentation:

```
require(graphics)
## Example from Venables & Ripley (provided in the ccf help file)
ccf(mdeaths, fdeaths, ylab = "cross-correlation")
```

Note that the question of a significance test is also discussed [here](https://stats.stackexchange.com/questions/3115/cross-correlation-significance-in-r).
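If you want the numbers rather than just the plot, ccf() returns them as well; assuming the dashed lines are the usual approximate bounds at qnorm(0.975)/sqrt(n), a rough lag-by-lag check (not a formal joint test) could look like:

```
r <- ccf(mdeaths, fdeaths, plot = FALSE)
crit <- qnorm(0.975) / sqrt(r$n.used)      # approximate 95% bounds used by the plot

## lags whose estimated cross-correlation falls outside the bounds
sig <- abs(r$acf) > crit
data.frame(lag = r$lag[sig], ccf = r$acf[sig])
```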
null
CC BY-SA 2.5
null
2010-10-11T17:17:26.393
2010-10-11T17:17:26.393
2017-04-13T12:44:40.807
-1
1499
null
3489
1
3508
null
6
10089
I have a biometric system that outputs a distribution of scores resembling a Gaussian distribution (similar to the example graph in the following link: [LINK](http://support.bioid.com/sdk/docs/About_EER.htm)). My point of confusion is how to calculate the False Acceptance Rate. How does the threshold factor into the whole problem?
Calculating False Acceptance Rate for a Gaussian Distribution of scores
CC BY-SA 2.5
null
2010-10-11T17:21:37.217
2010-10-20T21:12:08.117
2010-10-20T21:12:08.117
8
1224
[ "bioinformatics" ]
3490
2
null
3484
5
null
As far as SAS goes, getting [certified is resume gold](http://support.sas.com/certify/). The SAS Institute offers [classes and exams](http://support.sas.com/certify/creds/prep.html) to receive the certification. There are also books you can use if you are self-motivated. Getting SAS is quite difficult if your company does not have it. I'm on a college campus, and they offer academic discounts on student licenses, and the campus labs have it installed on some machines. If you want something a little similar, but cheaper, give JMP a try. It's probably the closest thing to the SAS feel. For similar languages, it would probably depend on your field. The social sciences would probably be more receptive to seeing SPSS on your resume than economics-related employers would be.
null
CC BY-SA 2.5
null
2010-10-11T17:27:37.583
2010-10-11T17:27:37.583
null
null
1118
null
3491
2
null
3489
5
null
I'm not certain, and I'm curious about the other responses you get. However, I think you'll need to clarify a bit: does your Gaussian distribution represent the scores for a population of individuals who should be rejected by your biometric system? If so, then I think you simply need to compute a cumulative probability, i.e. the percentage of individuals who should be rejected but who, by random chance, fall above your threshold and are "falsely accepted" by your biometric device. So it could be as simple as computing the number of people who randomly fall above your threshold divided by the total number of "should be rejected" people. But again, I'm not certain of my response, and I think you need to clarify what your assumptions are, what your threshold is, and how you wish to classify individuals as "falsely accepted".
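If the "should be rejected" scores really are approximately Gaussian, that cumulative probability is a one-liner in R; the mean, standard deviation and threshold below are invented purely to show where the threshold enters:

```
mu.imp <- 40       # mean score of people who should be rejected (made-up value)
sd.imp <- 10       # their standard deviation (made-up value)
thresh <- 62       # accept anyone scoring above this threshold (made-up value)

## FAR = P(score > threshold) for someone who should be rejected
FAR <- 1 - pnorm(thresh, mean = mu.imp, sd = sd.imp)
FAR
```

Raising the threshold lowers this number, at the cost of rejecting more genuine users.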
null
CC BY-SA 2.5
null
2010-10-11T17:33:16.783
2010-10-11T17:33:16.783
null
null
1499
null
3492
2
null
3484
1
null
The programming language most similar to SAS is... SAS. You can run it with [WPS, which will interpret SAS code, evidently costs substantially less than a SAS license](http://en.wikipedia.org/wiki/World_Programming_System) and has [a 30-day free trial](http://www.teamwpc.co.uk/tryorbuy). I haven't used it myself, but it should get you started programming in the SAS language. As M. Tibbits suggests, I don't think that experience with R would be helpful in most corporate settings. I don't think that SPSS experience will be all that helpful either, and my sense is that it has a less-than-stellar reputation outside of the social sciences.
null
CC BY-SA 2.5
null
2010-10-11T18:07:43.963
2010-10-11T18:07:43.963
null
null
71
null
3493
2
null
3471
3
null
Tibco support gave me a solution:

- Create a new Windows workspace
- Attach the Linux workspace:

```
attach("C:\\Linux\\Workspace\\Path")
```

- Copy the contents of the Linux workspace to the Windows workspace:

```
objs <- objects(2)
for (i in objs) assign(i, value=get(i, where=2), where=1)
objs <- objects(2, meta=1)
for (i in objs) assign(i, value=get(i, where=2, meta=1), where=1, meta=1)
```

Almost everything was copied. There were problems with functions named like `lBounds<-`, which were renamed to `lBounds_-`.
null
CC BY-SA 2.5
null
2010-10-11T20:27:59.543
2010-10-11T20:27:59.543
null
null
749
null
3495
2
null
3377
1
null
The following is an online algorithm, but it only approximates the mean absolute deviation, and the size of the error depends on the distribution of the input data. It is based on a [well known algorithm](http://www.johndcook.com/blog/2008/09/26/comparing-three-methods-of-computing-standard-deviation/) for calculating variance online, described by [Welford](http://www.jstor.org/stable/1266577) in the 1960s. His algorithm, translated into R, looks like:

```
M2 <- 0
mean <- 0
n <- 0

var.online <- function(x){
    n <<- n + 1
    diff <- x - mean
    mean <<- mean + diff / n
    M2 <<- M2 + diff * (x - mean)
    variance <- M2 / (n - 1)
    return(variance)
}
```

It performs very similarly to R's built-in variance function:

```
set.seed(2099)
n.testitems <- 1000
n.tests <- 100
differences <- rep(NA, n.tests)
for (i in 1:n.tests){
    # Reset counters
    M2 <- 0
    mean <- 0
    n <- 0

    xs <- rnorm(n.testitems)
    for (j in 1:n.testitems){
        v <- var.online(xs[j])
    }

    differences[i] <- abs(v - var(xs))
}
summary(differences)
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
0.000e+00 2.220e-16 4.996e-16 6.595e-16 9.992e-16 1.887e-15
```

Modifying the algorithm to calculate the absolute deviation simply involves an additional `sqrt` call. However, the `sqrt` introduces inaccuracies that are reflected in the result:

```
absolute.deviance.online <- function(x){
    n <<- n + 1
    diff <- x - mean
    mean <<- mean + diff / n
    a.dev <<- a.dev + sqrt(diff * (x - mean))
    return(a.dev)
}
```

The errors, calculated as above (with `a.dev` reset to 0 alongside the other counters), are much greater than for the variance calculation:

```
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
0.005126 0.364600 0.808000 0.958800 1.360000 3.312000
```

However, depending on your use case, this magnitude of error might be acceptable.

![histogram of differences](https://i.stack.imgur.com/bLZr0.png)
null
CC BY-SA 2.5
null
2010-10-11T23:15:33.223
2010-11-09T05:36:27.843
2010-11-09T05:36:27.843
179
179
null