Dataset columns (one field value per line in each record below): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags.
1942
1
null
null
7
830
What are the most significant annual conferences focusing on quantitative methods in psychology? This could include but is not limited to psychometrics, mathematical psychology, and statistical methods in psychology. Rules: - One conference per answer - Include a link to the conference
Quantitative methods and statistics conferences in psychology?
CC BY-SA 2.5
null
2010-08-20T02:25:32.253
2010-08-22T03:05:50.557
null
null
183
[ "conferences", "psychometrics", "psychology" ]
1943
2
null
1844
7
null
If I'm not mistaken, the log-rank estimator you reference is also known as the Pike estimator. I believe it's generally recommended for HR < 3 because it exhibits less bias in that range. The following paper may be of interest (note that the paper refers to it as O/E): - Estimation of the Proportional Hazard in Two-Treatment-Group Clinical Trials (Bernstein, Anderson, Pike) > [...] The O/E method is biased but, within the range of values of the ratio of the hazard rates of interest in clinical trials, it is more efficient in terms of mean square error than either CML or the Mantel-Haenszel method for all but the largest trials. The Mantel-Haenszel method is minimally biased, gives answers very close to those obtained using CML, and may be used to provide satisfactory approximate confidence intervals.
null
CC BY-SA 2.5
null
2010-08-20T06:46:21.083
2010-08-20T06:46:21.083
null
null
251
null
1944
1
1945
null
7
3596
Is there a specific name for normalizing some data so that it has mean=0 and sd=1? Or do I just say "data was normalized to have mean=0 and sd=1"?
What is the name of this normalization (which yields mean 0 and sd 1)?
CC BY-SA 4.0
null
2010-08-20T08:31:16.120
2019-12-28T22:14:37.003
2019-12-28T07:45:11.480
3277
582
[ "data-transformation", "terminology", "normalization", "standardization" ]
1945
2
null
1944
10
null
The quantity $z = \frac{X - \mu}{\sigma}$ is a [standard score](http://en.wikipedia.org/wiki/Standard_score). So, standardization is a common way to refer to it.
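A minimal R sketch of this standardization (the data vector is made up; base R's scale() does the same computation):
```
x <- c(2.1, 4.5, 3.3, 5.0, 1.8)          # made-up data
z_manual <- (x - mean(x)) / sd(x)        # (X - mean) / sd, i.e. the standard score
z_scale  <- as.numeric(scale(x))         # scale() centers and divides by the sample sd
all.equal(z_manual, z_scale)             # TRUE
```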
null
CC BY-SA 2.5
null
2010-08-20T08:42:39.977
2010-08-20T08:42:39.977
null
null
251
null
1946
2
null
1944
4
null
I think it is just called z-score. [@ttnphns remark: that is correct, however "z-score" also has other meanings in statistics. z-standardization, z standard value - probably the most widely used terms for the linear transform to mean 0 and sd 1]
null
CC BY-SA 4.0
null
2010-08-20T08:45:26.740
2019-12-28T07:56:27.200
2019-12-28T07:56:27.200
3277
null
null
1947
2
null
1927
17
null
As @Ars said, there is no accepted definition (and this is a good point). There are general alternative families of ways to generalize quantiles to $\mathbb{R}^d$; I think the most significant are: - Generalized quantile process. Let $P_n(A)$ be the empirical measure (= the proportion of observations in $A$). Then, with $\mathbb{A}$ a well-chosen subset of the Borel sets in $\mathbb{R}^d$ and $\lambda$ a real-valued measure, you can define the empirical quantile function $U_n(t)=\inf \{\lambda(A) : P_n(A)\geq t,\ A\in\mathbb{A}\}$. Suppose you can find one $A_{t}$ that gives you the minimum. Then the set (or an element of the set) $A_{1/2-\epsilon}\cap A_{1/2+\epsilon}$ gives you the median when $\epsilon$ is made small enough. The definition of the median is recovered when using $\mathbb{A}=\{\,]-\infty,x] : x\in\mathbb{R}\,\}$ and $\lambda(]-\infty,x])=x$. Ars's answer falls into that framework, I guess... Tukey's halfspace location may be obtained using $\mathbb{A}(a)=\{ H_{x}=\{t\in \mathbb{R}^d : \langle a, t \rangle \leq x \}\}$ and $\lambda(H_{x})=x$ (with $x\in \mathbb{R}$, $a\in\mathbb{R}^d$). - Variational definition and M-estimation. The idea here is that the $\alpha$-quantile $Q_{\alpha}$ of a random variable $Y$ in $\mathbb{R}$ can be defined through a variational equality. - The most common definition uses the quantile regression loss function $\rho_{\alpha}$ (also known as the pinball loss, guess why?): $Q_{\alpha}=\arg\inf_{x\in \mathbb{R}}\mathbb{E}[\rho_{\alpha}(Y-x)]$. The case $\alpha=1/2$ gives $\rho_{1/2}(y)=|y|$, and you can generalize that to higher dimensions using $l^1$ distances, as done in @Srikant's answer. This is the theoretical median, but it gives you the empirical median if you replace the expectation by the empirical expectation (mean). - But Koltchinskii proposes to use the Legendre-Fenchel transform, since $Q_{\alpha}=\arg\sup_s (s\alpha-f(s))$ where $f(s)=\frac{1}{2}\mathbb{E} [|s-Y|-|Y|+s]$ for $s\in \mathbb{R}$. He gives a lot of deep reasons for that (see the paper ;)). Generalizing this to higher dimensions requires working with a vectorial $\alpha$ and replacing $s\alpha$ by $\langle s,\alpha\rangle$, but you can take $\alpha=(1/2,\dots,1/2)$. - Partial ordering. You can generalize the definition of quantiles in $\mathbb{R}^d$ as soon as you can create a partial order (with equivalence classes). Obviously there are bridges between the different formulations. They are not all obvious...
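As a quick check of the variational characterization above, here is a base-R sketch (with arbitrary simulated data) showing that minimizing the empirical pinball loss recovers the usual sample quantile:
```
set.seed(1)
y <- rexp(1000)                                    # arbitrary sample
pinball <- function(u, alpha) ifelse(u >= 0, alpha * u, (alpha - 1) * u)
alpha <- 0.25
obj <- function(x) mean(pinball(y - x, alpha))     # empirical E[rho_alpha(Y - x)]
optimize(obj, range(y))$minimum                    # close to...
quantile(y, alpha)                                 # ...the usual sample quantile
```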
null
CC BY-SA 4.0
null
2010-08-20T09:52:03.863
2022-08-06T19:09:10.103
2022-08-06T19:09:10.103
79696
223
null
1948
2
null
1927
4
null
A definition that comes close to it, for unimodal distributions, is the Tukey halfspace median - http://cgm.cs.mcgill.ca/~athens/Geometric-Estimators/halfspace.html - http://www.isical.ac.in/~statmath/html/publication/Tukey_tech_rep.pdf - https://www.isical.ac.in/~statmath/report/11310-15.pdf
null
CC BY-SA 4.0
null
2010-08-20T10:05:35.663
2019-09-15T13:16:58.333
2019-09-15T13:16:58.333
603
603
null
1949
2
null
1519
13
null
Really, these approaches have not been actively developed for a very long time. For univariate outliers, the optimal (most efficient) filter is median $\pm\delta\times$ MAD, or better yet (if you have access to R) median $\pm\delta\times$ Qn (so you don't assume the underlying distribution to be symmetric). The Qn estimator is implemented in the package robustbase. See: Rousseeuw, P.J. and Croux, C. (1993) Alternatives to the Median Absolute Deviation, Journal of the American Statistical Association *88*, 1273-1283. Response to comment: Two levels. A) Philosophical. Both the Dixon and Grubbs tests are only able to detect a particular type of (isolated, single) outlier. For the last 20-30 years the concept of an outlier has evolved into "any observation that departs from the main body of the data", without further specification of what the particular departure is. This characterization-free approach renders the idea of building tests to detect outliers void. The emphasis shifted to the concept of estimators (a classical example of which is the median) that retain their values (i.e. are insensitive) even under a large rate of contamination by outliers (such an estimator is then said to be robust), and the question of detecting outliers becomes void. B) Weaknesses. You can see that the Grubbs and Dixon tests easily break down: one can easily generate contaminated data that would pass either test with ease (i.e. without breaking the null). This is particularly obvious for the Grubbs test, because outliers will break down the mean and s.d. used in the construction of the test statistic. It's less obvious for the Dixon test, until one learns that order statistics are not robust to outliers either. I think you will find more explanation of these facts in papers oriented towards a general non-statistician audience, such as the one cited above (I can also think of the Fast-MCD paper by Rousseeuw). If you consult any recent book/intro to robust analysis, you will notice that neither Grubbs nor Dixon is mentioned.
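A sketch of such a filter in R (the cutoff delta = 3 and the data are illustrative; the Qn-based variant assumes the robustbase package mentioned above):
```
set.seed(42)
x <- c(rnorm(95), rnorm(5, mean = 10))            # contaminated sample
delta <- 3                                        # illustrative choice of delta
lims <- median(x) + c(-1, 1) * delta * mad(x)     # median +/- delta * MAD
which(x < lims[1] | x > lims[2])                  # flagged observations
# library(robustbase)                             # Qn-based version (no symmetry assumption):
# lims_qn <- median(x) + c(-1, 1) * delta * Qn(x)
```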
null
CC BY-SA 3.0
null
2010-08-20T10:19:39.907
2013-12-02T16:02:40.650
2013-12-02T16:02:40.650
603
603
null
1950
2
null
1935
3
null
The bootstrap is essentially a simulation of repeating the experiment. Say you have a box with balls and want to obtain the average size of a ball, so you draw some of them, measure them, and take the mean. Now you want to repeat this to get the distribution, for instance to get a standard deviation, but you find out that someone stole the box. What can be done now is to use what you have: this one series of measurements. The idea is to put the balls into a new box and simulate the original experiment by drawing the same number of balls with replacement, both to have the same sample size and some variability. This can now be replicated many times to get a series of means, which can finally be used to approximate the distribution of the mean.
null
CC BY-SA 2.5
null
2010-08-20T11:16:27.923
2010-08-20T11:16:27.923
null
null
null
null
1951
2
null
1942
2
null
The annual meeting of the Society for Computers in Psychology often features content on quantitative methods: [http://sites.google.com/site/scipws/](http://sites.google.com/site/scipws/)
null
CC BY-SA 2.5
null
2010-08-20T12:28:09.067
2010-08-20T12:28:09.067
null
null
364
null
1952
2
null
1935
4
null
I like to think of it as follows: if you obtain a random sample data set from a population, then presumably that sample will have characteristics that roughly match those of the source population. So, if you're interested in obtaining confidence intervals on a particular feature of the distribution, its skewness for example, you can treat the sample as a pseudo-population from which you can obtain many sets of random pseudo-samples, computing the value of the feature of interest in each. The assumption that the original sample roughly matches the population also means that you can obtain the pseudo-samples by sampling from the pseudo-population "with replacement" (e.g., you sample a value, record it, then put it back; thus each value has a chance of being observed multiple times). Sampling with replacement means that the computed value of the feature of interest will vary from pseudo-sample to pseudo-sample, yielding a distribution of values from which you can compute, say, the 2.5th and 97.5th percentiles to obtain the 95% confidence interval for the value of the feature of interest.
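A small base-R sketch of that procedure, using sample skewness as the feature of interest (the data and the number of pseudo-samples are arbitrary):
```
set.seed(123)
x <- rexp(200)                                         # original sample = pseudo-population
skew <- function(v) mean((v - mean(v))^3) / sd(v)^3    # feature of interest
B <- 2000
boot_skew <- replicate(B, skew(sample(x, replace = TRUE)))  # pseudo-samples with replacement
quantile(boot_skew, c(0.025, 0.975))                   # percentile 95% CI for the skewness
```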
null
CC BY-SA 2.5
null
2010-08-20T12:52:31.310
2010-08-20T12:52:31.310
null
null
364
null
1953
2
null
1853
6
null
L-moments might be useful here? [Wikipedia article](http://en.wikipedia.org/wiki/L-moment) [The L-moments page (Jonathan R.M. Hosking, IBM Research)](http://www.research.ibm.com/people/h/hosking/lmoments.html) They provide quantities analogous to conventional moments such as skewness and kurtosis, called the l-skewness and l-kurtosis. These have the advantage that they don't require calculation of high moments, as they are computed from linear combinations of the data and defined as linear combinations of expected values of order statistics. This also means they are less sensitive to outliers. I believe you only need second-order moments to calculate their sample variances, which presumably you'd need for your test. Also, their asymptotic distribution converges to a normal distribution much faster than conventional moments. It seems the expressions for their sample variances get quite complicated (Elamir and Seheult 2004), but I know they've been programmed in downloadable packages for both R and Stata (available from their standard repositories), and maybe in other packages too for all I know. As your samples are independent, once you've got the estimates and standard errors you could just plug them into a two-sample z-test if your sample sizes are "large enough" (Elamir and Seheult report some limited simulations that appear to show that 100 isn't large enough, but not what is). Or you could bootstrap the difference in l-skewness. The above properties suggest that may perform considerably better than bootstrapping based on the conventional skewness.
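For illustration, the first few sample L-moments can be computed directly from probability-weighted moments in a few lines of base R (a sketch; dedicated packages such as lmom implement the same quantities):
```
samp_lmom <- function(x) {
  x <- sort(x); n <- length(x); j <- seq_len(n)
  b0 <- mean(x)                                        # probability-weighted moments b_r
  b1 <- sum((j - 1) / (n - 1) * x) / n
  b2 <- sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
  l2 <- 2 * b1 - b0; l3 <- 6 * b2 - 6 * b1 + b0
  c(l1 = b0, l2 = l2, t3 = l3 / l2)                    # t3 = sample l-skewness
}
set.seed(1)
samp_lmom(rexp(500))                                   # l-skewness of an exponential is 1/3
```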
null
CC BY-SA 2.5
null
2010-08-20T13:42:01.620
2010-08-20T13:53:11.397
2010-08-20T13:53:11.397
449
449
null
1954
2
null
1844
7
null
There are actually several more methods, and the choice often depends on whether you are most interested in looking for early differences, later differences, or (as for the log-rank test & the Mantel-Haenszel test) giving equal weight to all time points. To the question at hand. The log-rank test is in fact a form of the Mantel-Haenszel test applied to survival data. The Mantel-Haenszel test is usually used to test for independence in stratified contingency tables. If we try to apply the MH test to survival data, we can start by assuming that events at each failure time are independent. We then stratify by failure time: we use the MH method by making each failure time a stratum. Not surprisingly, they often give the same result. The exception occurs when more than one event occurs simultaneously - multiple deaths at exactly the same time point. I can't remember how the treatment then differs. I think the log-rank test averages over the possible orderings of the tied failure times. So the log-rank test is the MH test for survival data and can deal with ties. I've never used the MH test for survival data.
null
CC BY-SA 2.5
null
2010-08-20T13:54:04.360
2010-08-20T13:54:04.360
null
null
521
null
1955
1
null
null
6
349
(Prompted to some extent by the answers already given by Shane and Srikant, I've rewritten this to try to clarify what I'm getting at, if only to myself.) Suppose we have several similar systems, each with behaviour that approximates a continuous time Markov process. That is, there are some number of discrete states the system can be in and associated probabilities of transitioning from one state to another at any instant, depending solely on the current state. For now, consider the processes to be stationary, ie the transition probabilities do not change over time, and unaffected by seasonality or other external considerations. Unfortunately, we cannot measure the state of any system directly, but have instead to measure a proxy quantity, which varies with state but is not discrete and is subject to various sources of noise and error. The principal question is this: Q1: Given two data sequences produced independently from two such systems, how can we decide whether the underlying Markov processes are the same? Now, it may be that the best way to approach this is as two separate problems: - Convert the imperfect proxy sequence into an idealised time series of (categorical) states - Determine whether the state sequences correspond On the other hand, such a separation might involve discarding some information from the data in step 1 (eg, about its variability) that would be useful in step 2. Which leads to: Q2: Does it make sense to decompose the problem in this way or is it better to compare the proxy data directly? If such a decomposition does make sense, that opens up a whole other issue about how to do the idealisation, but that's definitely a question for another day. Shane, below, mentions goodness-of-fit and distributional tests such as Anderson-Darling, and that seems like a promising approach. But I'd like to check I'm understanding the idea correctly. Given sufficient samples in a sequence, we would expect the proportion of time spent in each state to tend to the stationary distribution. So one could test the distributions of occupancies in the two sequences for similarity. (I have the vague sense a two-sample Kolmogorov-Smirnov might suit for this, but please set me right about that.) The thing is, I'm not sure how good this can be as evidence. If the distributions are very different, that seems like a reasonable strike against the underlying processes being the same, but what if they're very similar? Can we draw conclusions in that direction? Q3: Does a good fit of occupancy distributions tell us anything useful? It seems like there could be an infinite number of processes that will tend to the same stationary distribution. I think this is very unlikely in practice, and that different systems will tend to have distinctly different behaviour, but still it's worth considering. Finally, we will often have a model of the underlying process that we are looking for, although it may not be perfect. So we could compare each sequence with the expected behaviour of the model, instead of with each other. We may also have more than two sequences to test. Q4: Is it better to compare multiple sequences to a single model, even an approximate one, or are we better off comparing data directly?
Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process
CC BY-SA 2.5
null
2010-08-20T13:59:34.987
2010-08-22T14:07:19.200
2010-08-22T14:07:19.200
174
174
[ "time-series", "markov-process", "goodness-of-fit" ]
1956
2
null
1955
4
null
You can perhaps use a [hidden Markov model](http://en.wikipedia.org/wiki/Hidden_Markov_model) (HMM). I know that there is an R package that estimates HMMs, but I cannot recall its name right now.
null
CC BY-SA 2.5
null
2010-08-20T14:08:11.677
2010-08-20T14:08:11.677
null
null
null
null
1957
2
null
1935
7
null
Very broadly: the intuition, as well as the origin of the name ("pulling oneself up by the bootstraps"), derive from the observation that in using properties of a sample to draw inferences about a population (the "inverse" problem of statistical inference), we expect to err. To find out the nature of that error, treat the sample itself as a population in its own right and study how your inferential procedure works when you draw samples from it. That's a "forward" problem: you know all about your sample-qua-population and don't have to guess anything about it. Your study will suggest (a) the extent to which your inferential procedure may be biased and (b) the size and nature of the statistical error of your procedure. So, use this information to adjust your original estimates. In many (but definitely not all) situations, the adjusted bias is asymptotically much lower. One insight provided by this schematic description is that bootstrapping does not require simulation or repeated subsampling: those just happen to be omnibus, computationally tractable ways to study any kind of statistical procedure when the population is known. There exist plenty of bootstrap estimates that can be computed mathematically. This answer owes much to Peter Hall's book "The Bootstrap and Edgeworth Expansion" (Springer 1992), especially his description of the "Main Principle" of bootstrapping.
null
CC BY-SA 2.5
null
2010-08-20T14:37:06.270
2010-08-20T14:37:06.270
null
null
919
null
1958
2
null
1955
4
null
A few thoughts: - Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) instead of the original series, since this is often easier to model. There are also relative distribution functions (see, for instance, the reldist package). - You could look at whether the two series are cointegrated (use the Johansen test). This is available in the urca package (and related book). - There are many multivariate time series models such as VAR that could be applied to model the dependencies (see the vars package). - You could try using a copula, which is used for dependence modeling, and is available in the copula package. If the noise is a serious concern, then try using a filter on the data before analyzing it.
null
CC BY-SA 2.5
null
2010-08-20T15:35:33.223
2010-08-20T15:50:42.683
2010-08-20T15:50:42.683
5
5
null
1959
2
null
1912
4
null
I still feel negatively about what seems to be a gratuitous insult on King's part but I can see where he might be coming from. "Scale-invariance" is a restriction on a statistical procedure. Thus, limiting our choice of procedures to scale-invariant ones (or to linear ones or to unbiased ones or minimax ones, etc.) potentially excludes procedures that might perform better. Whether this is actually the case or not depends. In many situations, data are reported in units that are essentially independent of what is being studied. It shouldn't matter whether you measure distances in angstroms or parsecs, for example. In this context, any procedure that is not scale invariant is therefore an arbitrary one--and arbitrariness is not a positive attribute in this field. In other situations, though, there is a natural scale. The most obvious of these concern counted data. A procedure that treats counted data as if they were measurements on a continuous scale (e.g., using OLS for a counted response) is potentially inferior to other available procedures and may be (likely is, I suspect) inadmissible in the decision-theoretic sense. This can be a tricky and subtle point because it's not always obvious when we have counted data. One example I'm familiar with concerns many chemical or radioactivity measurements, which ultimately originate as counts on some machine. Said counts get converted by the laboratory into a concentration or activity that forever after is treated as a real number. (However, attempts to exploit this fact in the chemometrics literature have not yielded superior statistical procedures.) Just to stave off one possible misunderstanding: I wouldn't view a selection of an informative prior for a scale parameter (in a Bayesian analysis) as a scale-dependent procedure. Such a prior obviously favors some ranges of values over others, but does not affect the scale invariance of the procedure itself.
null
CC BY-SA 2.5
null
2010-08-20T16:52:39.990
2010-08-27T15:21:05.230
2010-08-27T15:21:05.230
919
919
null
1960
2
null
1906
1
null
M2010 - 13th Annual Data Mining Conference [http://www.sas.com/m2010](http://www.sas.com/m2010)
null
CC BY-SA 2.5
null
2010-08-20T16:54:44.217
2010-08-20T16:54:44.217
null
null
null
null
1961
1
1981
null
4
3852
When one wants to compute the correlation of two vectors of continuous variables, one uses the Pearson (or Spearman) correlation. But what should (or can) one use for the case of two vectors with only 2 (or 3) ordered levels? Is Spearman enough, or does it require another method? I remember someone once claiming to me that the OR (odds ratio) is more fitting for such situations (for 2 by 2 tables, where order has no meaning); is this true? Here is example R code, so that answers can relate to the same example:
```
set.seed(10)
x2 <- sample(c(-1, 1), 50, replace = TRUE)
x3 <- sample(c(-1:1), 50, replace = TRUE)
y3 <- sample(c(-1:1), 50, replace = TRUE)
y2 <- sample(c(-1, 1), 50, replace = TRUE)
cor(x3, y3, method = "spearman")
cor(x2, y2, method = "spearman")
cor(x3, y2, method = "spearman")
```
P.S.: for the 2 by 2 case, I gathered from the comments that categorical "measures of association" is the term to look for. However, part of the time I am comparing 2 by 3 tables, where the factor with 3 levels is ordered, so I would like to make use of that information.
Can you use normal correlation for vectors with only 2 (or 3), ordered, levels?
CC BY-SA 2.5
0
2010-08-20T17:15:43.887
2010-09-25T07:49:22.363
2010-09-25T07:49:22.363
930
253
[ "r", "correlation", "association-measure", "ordinal-data" ]
1962
2
null
1961
2
null
The OR is a good measure of association, but sometimes people prefer a correlation coefficient for interpretation because it has a [-1, 1] scale. For binary variables, the Phi statistic provides Pearson's correlation (see Jeromy's comment). Cramer's V is applicable when you have more than 2x2 cases. For details, see the following references: - Effect size - Correlation - Cramer's V I've never used any of these, so hopefully someone will jump in and say if there are good reasons for preferring them.
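Both phi and Cramer's V can be computed from a chi-squared statistic in base R; a sketch with made-up data (for a 2x2 table this reduces to |phi|):
```
set.seed(2)
x <- sample(c("low", "mid", "high"), 200, replace = TRUE)   # made-up 3-level variable
y <- sample(c("no", "yes"), 200, replace = TRUE)            # made-up binary variable
tab  <- table(x, y)
chi2 <- chisq.test(tab, correct = FALSE)$statistic
sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))               # Cramer's V
```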
null
CC BY-SA 2.5
null
2010-08-20T17:38:51.850
2010-08-21T05:49:42.250
2010-08-21T05:49:42.250
251
251
null
1963
1
1976
null
14
1620
A (non-statistician) colleague has been encountering meta-analyses in papers he reviews for medical journals and is looking for a good introductory level treatment so he can educate himself. Any recommendations? Favorites? Books, monographs, nontechnical survey articles would all be fine. (Yes, he's familiar with the Wikipedia entry and other stuff readily accessible by a Google search, such as [Jerry Dallal's nice little article](http://www.jerrydallal.com/LHSP/meta.htm).)
Looking for good introductory treatment of meta-analysis
CC BY-SA 2.5
null
2010-08-20T18:06:04.580
2016-10-27T07:35:40.520
2010-08-20T18:22:35.203
5
919
[ "modeling", "meta-analysis" ]
1964
1
null
null
10
2972
I am doing some kernel density estimation with a weighted point set (i.e., each sample has a weight which is not necessarily one), in N dimensions. Also, these samples just live in a metric space (i.e., we can define a distance between them) but nothing else. For example, we cannot determine the mean of the sample points, nor the standard deviation, nor scale one variable compared to another. The kernel is affected only by this distance and by the weight of each sample: $$f(x) = \frac{1}{\sum_i \mathrm{weight}_i} \sum_i \frac{\mathrm{weight}_i}{h}\, \mathrm{Kernel}\!\left(\frac{\mathrm{distance}(x,x_i)}{h}\right)$$ In this context, I am trying to find a robust estimate of the kernel bandwidth $h$, possibly spatially varying, and preferably one which gives an exact reconstruction on the training dataset $x_i$. If necessary, we could assume that the function is relatively smooth. I tried using the distance to the first or second nearest neighbor, but it gives quite bad results. I tried leave-one-out optimization, but I have difficulties finding a good measure to optimize in this context in N dimensions, so it finds very bad estimates, especially for the training samples themselves. I cannot use the greedy estimate based on the normal assumption since I cannot compute the standard deviation. I found references using covariance matrices to get anisotropic kernels, but again, that wouldn't hold in this space... Does someone have an idea or a reference?
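For what it's worth, here is a minimal base-R sketch (with made-up 1-D data standing in for the abstract metric space) of one leave-one-out criterion that needs only the pairwise distances and the weights: choose $h$ to maximize the weighted leave-one-out log-likelihood. Whether this is a good criterion for your problem is exactly the open question above.
```
set.seed(3)
x <- c(rnorm(60), rnorm(40, 5))             # made-up points; only distances are used below
w <- runif(100)                             # arbitrary positive weights
D <- as.matrix(dist(x))                     # pairwise distance matrix
loo_loglik <- function(h) {
  K <- exp(-(D / h)^2 / 2) / h              # Gaussian kernel of distance / h
  diag(K) <- 0                              # leave-one-out: drop each point's own contribution
  f <- (K %*% w) / (sum(w) - w)             # weighted LOO density estimate at each x_i
  sum(w * log(pmax(f, .Machine$double.xmin)))
}
hs <- exp(seq(log(0.05), log(5), length.out = 40))
hs[which.max(sapply(hs, loo_loglik))]       # selected global bandwidth
```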
Kernel bandwidth in Kernel density estimation
CC BY-SA 3.0
null
2010-08-20T18:34:22.683
2016-04-12T16:53:54.970
2016-04-12T16:19:26.500
10416
1025
[ "density-function", "smoothing", "kernel-smoothing" ]
1965
2
null
1249
3
null
(I will delete my other non-answer, the edit of which had this nugget in it) No. Asymptotically, the 'trivial' upper bound is the least upper bound. To see this, let $P_n = P(Z = n)$. Trivially, $E[\exp{(Z^2)}] \ge P_n \exp{(n^2)} = L$, where $L$ is the lower bound of interest. Since $Z$ is binomial, we have $P_n = {n\choose n} (n^{-\beta})^n (1-n^{-\beta})^0 = n^{-n\beta}$. Then $\log{L} = -\beta n \log{n} + n^2$. It is easily shown that this is $\Omega(n^2)$, and thus $L$ is $\Omega(\exp(n^2))$. Thus the trivial upper bound is, asymptotically, the least upper bound, i.e. $E[\exp{(Z^2)}] \in \Theta(\exp(n^2))$.
null
CC BY-SA 2.5
null
2010-08-20T19:39:20.197
2010-08-23T16:37:56.203
2010-08-23T16:37:56.203
795
795
null
1966
1
null
null
8
1619
What nonparametric or semiparametric methods for estimating a probability density from a data sample are you using? (Please do not include more than one method per answer.)
Density estimation methods?
CC BY-SA 2.5
null
2010-08-20T20:10:24.127
2011-01-22T00:50:41.847
null
null
961
[ "probability", "estimation", "nonparametric" ]
1967
2
null
1249
2
null
Numerical experiments (for 2 <= n <= 4000 and all values of beta) indicate the estimate n^2 - (n Ln(n))*beta exceeds the logarithm of the expectation by an amount on the order of beta*Exp(-n). The error appears to increase monotonically in beta for each n. This should provide some useful clues about how to proceed (for those with the time and interest). In particular, an upper bound for the expectation exists of the form Exp(n^2 - (n Ln(n))*beta + C*Exp(-n)*beta) with C << 1. Update After staring at the summation expression for the expectation, it became evident where the nLn(n)*beta term comes from: break each binomial coefficient Comb(n,k) into its sum Comb(n-1,k) + Comb(n-1,k-1) and write Exp(k^2) = Exp((k-1)^2)*Exp(2k-1). This decomposes the expectation e(n,beta) into the sum of two parts, one of which looks like the expectation e(n-1,beta) and the other of which is messy (because each term is multiplied by Exp(2k-1)) but can be bounded above by replacing all those exponential terms by their obvious upper bound Exp(2n-1). (This is not too bad, because the last term with the highest exponent strongly dominates the entire sum.) This gives a recursive inequality for the expectation, > e(n,beta) <= (n^-beta * Exp(2n-1) + 1 - n^-beta) * e(n-1,beta) Doing this n times creates a polynomial whose highest term is in fact Exp(n^2)*n^(-n*beta), with the remaining terms decreasing fairly rapidly. At this point any reasonable bound on the remainder will produce an improved bound for the expectation essentially of the form suggested by the numerical experiments. At this point you have to decide how hard you want to work to obtain a tighter upper bound; the numerical experiments suggest this additional work is not going to pay off unless you're interested in the smallest values of n.
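For anyone who wants to reproduce these numerical experiments, the expectation can be evaluated safely in log space (a base-R sketch; the grid of n and beta values is arbitrary):
```
log_E <- function(n, beta) {
  k <- 0:n
  terms <- dbinom(k, n, n^(-beta), log = TRUE) + k^2   # log of each summand of E[exp(Z^2)]
  m <- max(terms)
  m + log(sum(exp(terms - m)))                         # log-sum-exp, avoids overflow
}
n <- c(10, 100, 1000); beta <- 0.5
cbind(n, logE = sapply(n, log_E, beta = beta), estimate = n^2 - beta * n * log(n))
```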
null
CC BY-SA 2.5
null
2010-08-20T20:37:23.543
2010-08-20T21:38:08.627
2010-08-20T21:38:08.627
919
919
null
1968
2
null
498
0
null
I haven't had the problem you're talking about for a while, but I used to. It might be that when you're selecting the output that you want, you release your mouse drag somewhere off of the output (i.e., you're selecting a large region and you start at the bottom right, drag to the upper left, and let go of the mouse somewhere in the explorer area). I know when I did that it wouldn't go through with the copy. I needed to release the drag inside of the output window to allow the copy to go through. Like I said, the version I'm using now doesn't have this problem, so I can't actually test this, but I believe that's how I fixed that problem before.
null
CC BY-SA 2.5
null
2010-08-20T20:45:24.150
2010-08-20T20:45:24.150
null
null
1028
null
1969
2
null
1966
3
null
I use Silverman's adaptive kernel density estimator; see, e.g., the [akj help page](http://bm2.genes.nig.ac.jp/RGM2/R_current/library/quantreg/man/akj.html).
null
CC BY-SA 2.5
null
2010-08-20T20:51:48.023
2010-08-20T20:51:48.023
null
null
795
null
1970
1
1989
null
7
532
What are good statistical journals with quick turnaround (fast review cycle), suitable for short notes in mathematical statistics and preferably with open access. An example is Statistics & Probability Letters, however, that journal only has sponsored open access.
What is a statistical journal with quick turnaround?
CC BY-SA 2.5
null
2010-08-20T21:30:13.107
2010-08-21T21:19:39.280
null
null
168
[ "open-source" ]
1971
2
null
1908
5
null
[AISTATS](http://www.aistats.org/) -- Conference on Artificial Intelligence and Statistics Similar flavor of papers to NIPS, although papers may be of slightly lower quality. It is much smaller than ICML or NIPS, which allows people to have deeper interactions.
null
CC BY-SA 2.5
null
2010-08-20T21:52:06.853
2010-08-20T21:52:06.853
null
null
168
null
1972
1
1974
null
5
3854
I have a bunch of variables organized into 10 different levels of a grouping factor. I'm doing some ANCOVA on particular variables and also plotting the data using boxplots. I'd like to add 84% confidence intervals to all the groups (since non-overlapping 84% CIs indicate a significant difference at alpha .05 - at least for two groups). I can do all this quite easily in R. My question is - should I be applying "family-wise" 84% CIs to all the groups? In other words, just as one would adjust an alpha level by the number of groups to obtain a family-wise alpha, should I inflate the CI a reciprocal amount to achieve a family-wise interval? This seems reasonable to me, but I haven't seen this discussed in the literature. If alpha and CI were interchangeable for two or more groups, then the family-wise 84% CI would be 99.5%, but I've read that alpha and CI are only interchangeable for one-sample situations. If this is the case, how would I go about calculating the family-wise confidence intervals for 10 (or any number of) groups? Any advice would be welcome. best, Steve
Family-wise confidence intervals
CC BY-SA 2.5
null
2010-08-20T22:12:36.160
2010-08-21T17:13:07.467
null
null
1029
[ "confidence-interval" ]
1973
2
null
1966
2
null
Half-space depth a.k.a. bag-plots. [http://www.r-project.org/user-2006/Slides/Mizera.pdf](http://www.r-project.org/user-2006/Slides/Mizera.pdf)
null
CC BY-SA 2.5
null
2010-08-20T22:35:55.070
2010-08-20T22:35:55.070
null
null
603
null
1974
2
null
1972
3
null
It sounds like a reasonable solution if this is what is important for you to present in the plot. What this will give you (besides many questions, in case you are working with people who like statistics less than you) is a CI that is applicable to your situation, which requires correction for multiple comparisons. What this won't give you is the ability to compare differences between groups based on the CIs. Regarding the computation of the CI, you could use p.adjust with something like Simes' procedure, which will still keep your FWE (family-wise error) but will give you a wider window. As to why you didn't find people writing about this, that is a good question; I don't know.
null
CC BY-SA 2.5
null
2010-08-20T22:40:51.053
2010-08-20T22:40:51.053
null
null
253
null
1975
2
null
1961
3
null
If you have more than two levels, you can use (M)CA: [http://en.wikipedia.org/wiki/Correspondence_analysis](http://en.wikipedia.org/wiki/Correspondence_analysis)
null
CC BY-SA 2.5
null
2010-08-20T22:45:24.010
2010-08-21T10:48:49.690
2010-08-21T10:48:49.690
603
603
null
1976
2
null
1963
11
null
I have two suggestions: - Systematic Reviews in Health Care: Meta-Analysis in Context (Amazon link) - Introduction to Meta-Analysis (Statistics in Practice) (Amazon link) Both books are very good, including introductory information as well as detailed information about how to actually perform meta-analyses.
null
CC BY-SA 2.5
null
2010-08-20T23:58:49.150
2010-08-20T23:58:49.150
null
null
561
null
1977
2
null
1966
5
null
[Dirichlet Process](http://en.wikipedia.org/wiki/Dirichlet_process) mixture models can be a very flexible nonparametric Bayesian approach to density modeling, and can also be used as building blocks in more complex models. They are essentially an infinite generalization of parametric Gaussian mixture models and don't require specifying the number of mixture components in advance.
null
CC BY-SA 2.5
null
2010-08-21T00:00:26.493
2010-08-21T00:00:26.493
null
null
881
null
1978
2
null
1966
5
null
Gaussian Processes can also be another nonparametric Bayesian approach for density estimation. See this [Gaussian Process Density Sampler](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.3937&rep=rep1&type=pdf) paper.
null
CC BY-SA 2.5
null
2010-08-21T00:04:13.697
2010-08-21T00:04:13.697
null
null
881
null
1979
2
null
1963
7
null
I wrote a post a while back on [getting started with meta analysis](http://jeromyanglim.blogspot.com/2009/12/meta-analysis-tips-resources-and.html) with: (a) tips on getting started, (b) links to online introductory texts, and (c) links to free software for meta analysis. Specifically, you might want to read [James DeCoster's notes](http://www.stat-help.com/meta.pdf).
null
CC BY-SA 2.5
null
2010-08-21T04:42:39.203
2010-08-21T04:42:39.203
null
null
183
null
1980
1
null
null
73
5292
The Question: Are there any good examples of [reproducible research](http://reproducibleresearch.net/index.php/Main_Page) using R that are freely available online? Ideal Example: Specifically, ideal examples would provide: - The raw data (and ideally meta data explaining the data), - All R code including data import, processing, analyses, and output generation, - Sweave or some other approach for linking the final output to the final document, - All in a format that is easily downloadable and compilable on a reader's computer. Ideally, the example would be a journal article or a thesis where the emphasis is on an actual applied topic as opposed to a statistical teaching example. Reasons for interest: I'm particularly interested in applied topics in journal articles and theses, because in these situations, several additional issues arise: - Issues arise related to data cleaning and processing, - Issues arise related to managing metadata, - Journals and theses often have style guide expectations regarding the appearance and formatting of tables and figures, - Many journals and theses often have a wide range of analyses which raise issues regarding workflow (i.e., how to sequence analyses) and processing time (e.g., issues of caching analyses, etc.). Seeing complete working examples could provide good instructional material for researchers starting out with reproducible research.
Complete substantive examples of reproducible research using R
CC BY-SA 3.0
null
2010-08-21T04:58:11.600
2017-03-23T14:00:50.230
2017-02-22T23:08:43.460
35989
183
[ "r", "references", "reproducible-research" ]
1981
2
null
1961
4
null
A few thoughts: - There are many different binary-binary and ordinal-ordinal measures of association. SPSS provides names and algorithms for many of them under proximities and crosstabs. - I'm also intrigued by tetrachoric (binary-binary) and polychoric (ordinal-ordinal) correlations that aim to estimate the correlation between theorised latent continuous variables. - You can use Pearson's correlation. However, it is not always the most meaningful metric of association. Also, confidence intervals and p-values that assume continuous normal variables won't be perfectly accurate.
null
CC BY-SA 2.5
null
2010-08-21T05:30:40.953
2010-08-21T05:30:40.953
null
null
183
null
1982
2
null
1980
7
null
Charles Geyer's [page on Sweave](http://www.stat.umn.edu/~charlie/Sweave/#exam) has an example from a thesis, which meets some of your requirements (the raw data is simply from an R package, but the R/sweave code and final PDF are available): > A paper on the theory in Yun Ju Sung's thesis, Monte Carlo Likelihood Inference for Missing Data Models (preprint) contained computing examples. Every number in the paper and every plot was taken (by cut-and-paste, I must admit) from a "supplementary materials" document done in Sweave. (The [source file](http://www.stat.umn.edu/geyer/bernor/library/bernor/doc/examples.Rnw) is linked under the "Supplementary Materials for a Paper" section.) I know I've come across at least one R example browsing the [ReproducibleResearch.net material](http://www.stat.umn.edu/geyer/bernor/library/bernor/doc/examples.Rnw) page before, but unfortunately didn't bookmark it.
null
CC BY-SA 2.5
null
2010-08-21T05:47:09.693
2010-08-21T05:47:09.693
null
null
251
null
1983
2
null
1980
4
null
I have found good ones in the past and will post once I dig them up, but some quick general suggestions: - You may be able to find some interesting examples by searching google with keywords and ext:rnw (which will search for files with the sweave extension). Here's an example search. This is the third result from my search: http://www.ne.su.se/paper/araietal_source.Rnw. Here's another example from my search: http://www.stat.umn.edu/geyer/gdor/. - Many R packages have interesting vignettes which essentially amount to the same thing. An example: https://r-forge.r-project.org/scm/viewvc.php/paper/maxLik.Rnw
null
CC BY-SA 2.5
null
2010-08-21T07:15:43.827
2010-08-21T07:36:31.883
2010-08-21T07:36:31.883
5
5
null
1984
2
null
1980
4
null
Also look at [Journal Of Statistical Software](http://www.jstatsoft.org/); they encourage making papers in Sweave.
null
CC BY-SA 2.5
null
2010-08-21T07:30:48.057
2010-08-21T07:30:48.057
null
null
null
null
1985
2
null
1980
15
null
Frank Harrell has been beating the drum on reproducible research and reports for many, many years. You could start [at this wiki page](http://biostat.mc.vanderbilt.edu/wiki/Main/StatReport) which lists plenty of other resources, including published research and also covers Charles Geyer's page.
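For readers who have not seen Sweave, a minimal .Rnw skeleton looks like this (the file name, chunk name and contents are illustrative); running Sweave("report.Rnw") in R turns it into a .tex file with the code, its printed output, the figure, and the \Sexpr{} value filled in:
```
\documentclass{article}
\begin{document}

<<analysis, echo=TRUE, fig=TRUE>>=
x <- rnorm(100)            # toy data
m <- mean(x)
hist(x, main = "Toy data")
@

The sample mean was \Sexpr{round(m, 2)}.

\end{document}
```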
null
CC BY-SA 2.5
null
2010-08-21T14:03:03.573
2010-08-21T14:03:03.573
null
null
334
null
1986
2
null
1972
0
null
For nearly equal sample sizes you can translate Tukey's HSD (Google it) into a set of individual CIs. For unequal sample sizes your approach may be doomed, because all pairwise comparisons cannot be reduced to pairwise comparisons of intervals: check out the literature on the Tukey-Kramer Method for details. (I know Stata and SAS both do these computations; contributed package DTK does it in R.)
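In base R the Tukey(-Kramer) intervals come straight out of TukeyHSD() applied to an aov fit; a toy sketch with ten equal-sized groups of made-up data:
```
set.seed(4)
d <- data.frame(g = gl(10, 15), y = rnorm(150))   # 10 groups, 15 observations each
fit <- aov(y ~ g, data = d)
TukeyHSD(fit, conf.level = 0.95)                  # simultaneous CIs for all pairwise differences
```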
null
CC BY-SA 2.5
null
2010-08-21T17:13:07.467
2010-08-21T17:13:07.467
null
null
919
null
1987
2
null
1460
2
null
You can also use the HST (the inverse hyperbolic sine transform mentioned here): [How should I transform non-negative data including zeros?](https://stats.stackexchange.com/questions/1444/how-should-i-transform-non-negative-data-including-zeros/1630#1630) If $x$ is p.c. consumption, create a variable $x'=x-\bar{x}$ (i.e. de-mean $x$). Then use $f(x',\theta=1)$ as your explanatory variable (where $f()$ is the inverse hyperbolic sine transform). For positive values of $x'$ (i.e. people who consume more than the average), $f(x',\theta=1)$ behaves as $\log(x')$. For negative values of $x'$ (i.e. people who consume less than the average), $f(x',\theta=1)$ behaves as $-\log(-x')$. ($f(x',\theta=1)$ looks like a big 'S' passing through the origin.)
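A small R sketch of this transform (asinh() is base R; the consumption values are made up):
```
ihs <- function(x, theta = 1) asinh(theta * x) / theta   # inverse hyperbolic sine transform
x  <- c(0.2, 0.7, 1.5, 3.0, 9.0)       # made-up consumption values
xp <- x - mean(x)                      # de-meaned consumption x'
f  <- ihs(xp)                          # ~ log(x') for large positive x', ~ -log(-x') for large negative x'
plot(xp, f, type = "b")                # the S-shaped curve through the origin
```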
null
CC BY-SA 2.5
null
2010-08-21T17:59:43.010
2010-09-16T14:17:36.323
2017-04-13T12:44:46.680
-1
603
null
1988
2
null
1841
1
null
You can do it all on Excel. Plotting the six time series should give you a hint of the shapes of the curves. Let's say that, as you mentioned, five of the curves look like they're exponential and the sixth looks like it grows sub-linearly. Insert a trendline for each curve. If you are right, five of them will provide the best fit (as measured by r squared) with an exponential trendline, while the sixth will be best fitted to a logarithmic trendline. This may sound non-deterministic, but if all six values of r squared are close to 1 you can be pretty confident of your result.
null
CC BY-SA 2.5
null
2010-08-21T20:33:14.980
2010-08-21T20:33:14.980
null
null
666
null
1989
2
null
1970
3
null
Maybe [Statistics Surveys](http://www.i-journals.org/ss/) (but I think they are seeking review more than short note), [Statistica Sinica](http://www3.stat.sinica.edu.tw/statistica/), or the [Electronic Journal of Statistics](http://www.imstat.org/ejs/). They are not as quoted as SPL, but I hope this may help.
null
CC BY-SA 2.5
null
2010-08-21T21:19:39.280
2010-08-21T21:19:39.280
null
null
930
null
1990
2
null
1942
4
null
The [European Association of Methodology](http://www.eam-online.org/) holds a meeting centered on statistics and psychometrics for applied research in social, educational and psychological science every two years. The latest was held in [Potsdam](http://www.iqb.hu-berlin.de/veranst/EAM-SMABS) two months ago.
null
CC BY-SA 2.5
null
2010-08-21T21:24:35.643
2010-08-21T21:24:35.643
null
null
930
null
1991
2
null
1912
3
null
We have a tendency to crunch data according to pre-established algorithms and methods, and forget that "data" is actually information about the real world. I recall as a child in school solving a second-degree equation where the teacher had stated that the answer represented the length of a pencil. Some students actually reported that the answer was "one inch plus or minus two inches". Before you plug your data into any software, you should first get to really know and understand it, which you can only accomplish if you keep the subject matter in mind. That's the only way you can spot any quirky data points (such as a pencil measuring -1 inch) or determine which scales make sense in the real world.
null
CC BY-SA 2.5
null
2010-08-21T21:34:42.003
2010-08-21T21:34:42.003
null
null
666
null
1992
2
null
1908
1
null
[AAAI (in Atlanta this year)](http://www.aaai.org/Conferences/conferences.php)
null
CC BY-SA 2.5
null
2010-08-21T21:36:07.480
2010-08-21T21:36:07.480
null
null
1033
null
1993
2
null
1980
8
null
We wrote a paper explaining how to use R/Bioconductor when analysing microarray data. The paper was written in Sweave and all the code used to generate the graphs is included as supplementary material. Gillespie, C. S., Lei, G., Boys, R. J., Greenall, A. J., Wilkinson, D. J., 2010. [Analysing yeast time course microarray data using BioConductor: a case study using yeast2 Affymetrix arrays](http://www.mas.ncl.ac.uk/~ncsg3/microarray/) BMC Research Notes, 3:81.
null
CC BY-SA 2.5
null
2010-08-21T21:59:56.010
2010-09-02T09:45:08.890
2010-09-02T09:45:08.890
8
8
null
1994
2
null
1460
1
null
Using variables in logs is actually quite common in economics, since the estimated coefficients can be interpreted as sensitivities to relative changes in RHS variables (or as elasticities, if both LHS and RHS variables are in logs). For example, say that you have the model $y = b \ln(x)$, and $x$ changes to $x(1+r)$. Then you can use the approximation $\ln(1+r) \approx r$ to see how $y$ changes: $$y = b \ln(x(1+r)) = b \ln(x) + b \ln(1+r) \approx b \ln(x) + b r.$$ So if $r$ is 0.01 ($x$ increases by 1%), $y$ increases by $b r = 0.01 b$ (of course, this works only for small $r$). In the case of your probit model, if the coefficient on log-consumption is $b$, it can be interpreted as saying that an increase in consumption of 1% increases the probability of enrollment by roughly $b$% (more precisely, it increases the probit index by $0.01 b$).
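A two-line numerical check of that approximation in R (the values of b, x and r are arbitrary):
```
b <- 2; x <- 10; r <- 0.01
b * log(x * (1 + r)) - b * log(x)   # exact change in y
b * r                               # the approximation b*r
```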
null
CC BY-SA 2.5
null
2010-08-21T22:50:51.183
2010-08-21T22:50:51.183
null
null
1034
null
1995
1
null
null
40
24631
Under which conditions should someone consider using multilevel/hierarchical analysis as opposed to more basic/traditional analyses (e.g., ANOVA, OLS regression, etc.)? Are there any situations in which this could be considered mandatory? Are there situations in which using multilevel/hierarchical analysis is inappropriate? Finally, what are some good resources for beginners to learn multilevel/hierarchical analysis?
Under what conditions should one use multilevel/hierarchical analysis?
CC BY-SA 2.5
null
2010-08-22T00:22:33.757
2017-02-02T22:13:52.610
2017-02-02T22:13:52.610
28666
835
[ "mixed-model", "multilevel-analysis" ]
1996
2
null
726
19
null
> It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so. Mark Twain (okay, so he's not a statistician)
null
CC BY-SA 2.5
null
2010-08-22T00:34:17.460
2010-12-05T05:32:57.497
2010-12-05T05:32:57.497
74
74
null
1997
2
null
1995
24
null
When the structure of your data is naturally hierarchical or nested, multilevel modeling is a good candidate. More generally, it's one method to model interactions. A natural example is when your data is from an organized structure such as country, state, districts, where you want to examine effects at those levels. Another example where you can fit such a structure is longitudinal analysis, where you have repeated measurements from many subjects over time (e.g. some biological response to a drug dose). One level of your model assumes a group mean response for all subjects over time. Another level of your model then allows for perturbations (random effects) from the group mean, to model individual differences. A popular and good book to start with is Gelman's [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://rads.stackoverflow.com/amzn/click/052168689X).
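As a concrete sketch of the longitudinal example, a random-intercept model could be fit with the lme4 package (the package choice, simulated data, and variable names are all assumptions for illustration, not something the answer prescribes):
```
library(lme4)                                       # assumed package for mixed models
set.seed(5)
d <- expand.grid(subject = factor(1:30), time = 0:5)
d$y <- 1 + 0.5 * d$time + rnorm(30)[d$subject] + rnorm(nrow(d), sd = 0.3)
fit <- lmer(y ~ time + (1 | subject), data = d)     # fixed group trend + subject-level random intercepts
summary(fit)
```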
null
CC BY-SA 2.5
null
2010-08-22T00:40:31.490
2010-08-22T00:40:31.490
null
null
251
null
1998
1
null
null
12
3772
Miller and Chapman (2001) argue that it is absolutely inappropriate to control for non-independent covariates that are related to both the independent and dependent variables in an observational (non-randomized) study - even though this is routinely done in the social sciences. How problematic is it to do so? What is the best way to deal with this problem? If you routinely control for non-independent covariates in an observational study in your own research, how do you justify it? Finally, is this a fight worth picking when arguing methodology with one's colleagues (i.e., does it really matter)? ## Thanks Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110, 40-48. - [http://mres.gmu.edu/pmwiki/uploads/Main/ancova.pdf](http://mres.gmu.edu/pmwiki/uploads/Main/ancova.pdf)
How problematic is it to control for non-independent covariates in an observational (i.e., non-randomized) study?
CC BY-SA 2.5
null
2010-08-22T00:53:28.873
2012-04-19T09:29:21.603
2010-09-16T06:42:57.313
null
835
[ "non-independent" ]
1999
2
null
1995
4
null
Generally speaking, a hierarchical Bayesian (HB) analysis will lead to efficient and stable individual-level estimates unless your data are such that individual-level effects are completely homogeneous (an unrealistic scenario). The efficiency and stability of HB parameter estimates become really important when you have sparse data (e.g., fewer observations than parameters at the individual level) and when you want to estimate individual-level effects. However, HB models are not always easy to estimate. Therefore, while HB analysis usually trumps non-HB analysis, you have to weigh the relative costs vs. benefits based on your past experience and your current priorities in terms of time and cost. Having said that, if you are not interested in individual-level estimates then you can simply estimate an aggregate-level model, but even in these contexts estimating aggregate models via HB using individual-level estimates may make a lot of sense. In summary, fitting HB models is the recommended approach as long as you have the time and the patience to fit them. You can then use aggregate models as a benchmark to assess the performance of your HB model.
null
CC BY-SA 2.5
null
2010-08-22T01:04:54.670
2010-08-22T01:04:54.670
null
null
null
null
2000
2
null
1998
2
null
I read the first page of their paper and so I may have misunderstood their point, but it seems to me that they are basically discussing the problem of including multicollinear independent variables in the analysis. The example they take of age and grade illustrates this idea, as they state that: > Age is so intimately associated with grade in school that removal of variance in basketball ability associated with age would remove considerable (perhaps nearly all) variance in basketball ability associated with grade ANCOVA is linear regression with the levels represented as dummy variables and the covariates also appearing as independent variables in the regression equation. Thus, unless I have misunderstood their point (which is quite possible as I have not read their paper completely), it seems they are saying 'do not include dependent covariates', which is equivalent to saying 'avoid multicollinear variables'.
null
CC BY-SA 2.5
null
2010-08-22T01:22:31.077
2010-08-22T01:22:31.077
null
null
null
null
2001
2
null
1998
4
null
It is as problematic as the degree of correlation. The irony is that you wouldn't bother controlling for the covariate if there weren't some expected correlation with one of the variables. And if you expect your independent variable to affect your dependent variable, then the covariate is necessarily somewhat correlated with both. However, if it's highly correlated, then perhaps you shouldn't be controlling for it, since that's tantamount to controlling out the actual independent or dependent variable.
null
CC BY-SA 2.5
null
2010-08-22T02:10:22.197
2010-08-22T02:10:22.197
null
null
601
null
2002
1
null
null
32
43066
I am running LOESS regression models in R, and I want to compare the outputs of 12 different models with varying sample sizes. I can describe the actual models in more detail if it helps with answering the question. Here are the sample sizes:
```
Fastballs vs RHH 2008-09: 2002
Fastballs vs LHH 2008-09: 2209
Fastballs vs RHH 2010: 527
Fastballs vs LHH 2010: 449
Changeups vs RHH 2008-09: 365
Changeups vs LHH 2008-09: 824
Changeups vs RHH 2010: 201
Changeups vs LHH 2010: 330
Curveballs vs RHH 2008-09: 488
Curveballs vs LHH 2008-09: 483
Curveballs vs RHH 2010: 213
Curveballs vs LHH 2010: 162
```
The LOESS regression model is a surface fit, where the X location and the Y location of each baseball pitch are used to predict sw, the swinging-strike probability. However, I'd like to compare all 12 of these models, but setting the same span (e.g., span = 0.5) will yield different results since there is such a wide range of sample sizes. My basic question is: how do you determine the span of your model? A higher span smooths out the fit more, while a lower span captures more trends but introduces statistical noise if there is too little data. I use a higher span for smaller sample sizes and a lower span for larger sample sizes. What should I do? What's a good rule of thumb when setting the span for LOESS regression models in R? Thanks in advance!
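One data-driven alternative to a rule of thumb is to choose the span separately for each dataset by cross-validation; below is a base-R sketch with one predictor and made-up data (the candidate spans and fold count are arbitrary, and with a binary outcome like sw you would be smoothing 0/1 values):
```
set.seed(6)
n <- 500
dat <- data.frame(x = runif(n))
dat$y <- sin(2 * pi * dat$x) + rnorm(n, sd = 0.4)        # made-up data
folds <- sample(rep(1:5, length.out = n))
cv_err <- function(span) {
  sq <- 0
  for (k in 1:5) {
    fit <- loess(y ~ x, data = dat[folds != k, ], span = span,
                 control = loess.control(surface = "direct"))  # "direct" allows extrapolation
    pred <- predict(fit, newdata = dat[folds == k, ])
    sq <- sq + sum((dat$y[folds == k] - pred)^2, na.rm = TRUE)
  }
  sq / n
}
spans <- seq(0.2, 0.9, by = 0.1)
spans[which.min(sapply(spans, cv_err))]                  # CV-selected span
```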
How do I decide what span to use in LOESS regression in R?
CC BY-SA 2.5
null
2010-08-22T02:24:24.947
2023-04-29T06:37:55.367
2017-09-12T03:19:50.470
9162
null
[ "r", "regression", "loess" ]
2003
2
null
2002
9
null
I suggest checking out generalized additive models (GAM, see the mgcv package in R). I'm just learning about them myself, but they seem to automatically figure out how much "wiggliness" is justified by the data. I also see that you're dealing with binomial data (strike vs. not a strike), so be sure to analyze the raw data (i.e., don't aggregate to proportions; use the raw pitch-by-pitch data) and use family='binomial' (assuming that you're going to use R). If you have information about what individual pitchers and hitters are contributing to the data, you can probably increase your power by doing a generalized additive mixed model (GAMM, see the gamm4 package in R) and specifying pitcher and hitter as random effects (and again, setting family='binomial'). Finally, you probably want to allow for an interaction between the smooths of X & Y, but I've never tried this myself so I don't know how to go about that. A gamm4 model without the X*Y interaction would look like:
```
fit = gamm4(
    formula = strike ~ s(X) + s(Y) + pitch_type*batter_handedness + (1|pitcher) + (1|batter)
    , data = my_data
    , family = 'binomial'
)
summary(fit$gam)
```
Come to think of it, you probably want to let the smooths vary within each level of pitch type and batter handedness. This makes the problem more difficult as I've not yet found out how to let the smooths vary by multiple variables in a way that subsequently produces meaningful analytic tests ([see my queries to the R-SIG-Mixed-Models list](https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q3/004170.html)). You could try:
```
my_data$dummy = factor(paste(my_data$pitch_type, my_data$batter_handedness))
fit = gamm4(
    formula = strike ~ s(X, by = dummy) + s(Y, by = dummy) + pitch_type*batter_handedness + (1|pitcher) + (1|batter)
    , data = my_data
    , family = 'binomial'
)
summary(fit$gam)
```
But this won't give meaningful tests of the smooths. In attempting to solve this problem myself, I've used bootstrap resampling where on each iteration I obtain the model predictions for the full data space, then compute the bootstrap 95% CIs for each point in the space and any effects I care to compute.
null
CC BY-SA 2.5
null
2010-08-22T02:46:05.200
2010-08-22T03:14:30.497
2010-08-22T03:14:30.497
364
364
null
2004
2
null
1995
18
null
[The Centre for Multilevel Modelling](http://www.cmm.bristol.ac.uk/learning-training/index.shtml) has some good free online tutorials for multilevel modeling, and they have software tutorials for fitting models in both their MLwiN software and Stata. Take this as hearsay, because I have not read more than a chapter in the book, but Hierarchical Linear Models: Applications and Data Analysis Methods by Stephen W. Raudenbush and Anthony S. Bryk comes highly recommended. I also swore there was a book on multilevel modeling using R software in the Springer Use R! series, but I can't seem to find it at the moment (I thought it was written by the same people who wrote the A Beginner’s Guide to R book). edit: The book on using R for multilevel models is [Mixed Effects Models and Extensions in Ecology with R by Zuur, A.F., Ieno, E.N., Walker, N., Saveliev, A.A., Smith, G.M.](http://www.springer.com/life+sciences/ecology/book/978-0-387-87457-9) Good luck!
null
CC BY-SA 2.5
null
2010-08-22T02:48:20.607
2010-08-23T16:26:02.707
2010-08-23T16:26:02.707
1036
1036
null
2005
2
null
1942
4
null
You're probably already aware of it, but the Society for Mathematical Psychology has an annual conference, MathPsych, which is attached to CogSci (it generally happens in the same city either before or after) and blends statistical methodology and psychological modeling. They do a pretty good job getting big names to come present; it's pretty cutting edge. 2010 conference site: [http://www.mathpsych.org/conferences/2010/](http://www.mathpsych.org/conferences/2010/)
null
CC BY-SA 2.5
null
2010-08-22T03:05:50.557
2010-08-22T03:05:50.557
null
null
5186
null
2006
2
null
1998
3
null
As I see it, there are two basic problems with observational studies that "control for" a number of independent variables. 1) You have the problem of missing explanatory variables and thus model misspecification. 2) You have the problem of multiple correlated independent variables--a problem that does not exist in (well) designed experiments--and the fact that regression coefficients and ANCOVA tests of covariates are based on partials, making them difficult to interpret. The first is intrinsic to the nature of observational research and is addressed in scientific context and the process of competitive elaboration. The latter is an issue of education and relies on a clear understanding of regression and ANCOVA models and exactly what those coefficients represent. With respect to the first issue, it is easy enough to demonstrate that if all of the influences on some dependent variable are known and included in a model, statistical methods of control are effective and produce good predictions and estimates of effects for individual variables. The problem in the "soft sciences" is that all of the relevant influences are rarely included or even known, and thus the models are poorly specified and difficult to interpret. Yet, many worthwhile problems exist in these domains. The answers simply lack certainty. The beauty of the scientific process is that it is self-correcting: models are questioned, elaborated, and refined. The alternative is to suggest that we cannot investigate these issues scientifically when we can't design experiments. The second issue is a technical issue in the nature of ANCOVA and regression models. Analysts need to be clear about what these coefficients and tests represent. Correlations among the independent variables influence regression coefficients and ANCOVA tests. They are tests of partials. These models take out the variance in a given independent variable and the dependent variable that is associated with all of the other variables in the model and then examine the relationship in those residuals. As a result, the individual coefficients and tests are very difficult to interpret outside of the context of a clear conceptual understanding of the entire set of variables included and their interrelationships. This, however, produces NO problems for prediction--just be cautious about interpreting specific tests and coefficients. A side note: The latter issue is related to a problem discussed previously in this forum on the reversing of regression signs--e.g., from negative to positive--when other predictors are introduced into a model. In the presence of correlated predictors and without a clear understanding of the multiple and complex relationships among the entire set of predictors, there is no reason to EXPECT a (by nature partial) regression coefficient to have a particular sign. When there is strong theory and a clear understanding of those interrelationships, such sign "reversals" can be enlightening and theoretically useful. Given the complexity of many social science problems, though, I would expect such understanding to be uncommon. Disclaimer: I'm a sociologist and public policy analyst by training.
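To illustrate the sign-reversal side note with a toy simulation (entirely made-up data): y depends negatively on x1 and positively on x2, and x1 is strongly correlated with x2, so the marginal slope of x1 comes out positive while its partial slope is negative.
```
# Made-up simulation of a sign reversal with correlated predictors
set.seed(1)
n  <- 1000
x2 <- rnorm(n)
x1 <- 0.9 * x2 + sqrt(1 - 0.9^2) * rnorm(n)  # corr(x1, x2) is about 0.9
y  <- -1 * x1 + 2 * x2 + rnorm(n)
coef(lm(y ~ x1))       # marginal slope of x1 is positive (x1 proxies for x2)
coef(lm(y ~ x1 + x2))  # partial slope of x1 is close to -1
```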
null
CC BY-SA 2.5
null
2010-08-22T03:11:06.507
2010-08-22T16:22:23.530
2010-08-22T16:22:23.530
485
485
null
2007
1
2026
null
16
1070
Although I was trained as an engineer, I find that I'm becoming more interested in data mining. Right now I'm trying to investigate the field further. In particular, I would like to understand the different categories of software tools that exist and which tools are notable in each category and why. (Note that I didn't say the "best" tools, just the notable ones lest we start a flame war.) Especially make note of the tools that are open-source and freely available - although don't take this to mean that I'm only interested in open-source and free.
A survey of data-mining software tools
CC BY-SA 2.5
null
2010-08-22T03:24:28.493
2016-03-15T02:46:33.770
null
null
1026
[ "data-mining" ]
2008
1
null
null
4
261
I am looking through articles that cite an article I'm reading, and I wish to judge how "important" or "good" each citing article is on its own. One way of doing so would be to know how "distinguished" the journal it appeared in is. Which leads me to my question: What measures are there (and where can I find them) for a journal's "importance" or "impact"? (I know of the Impact Factor score, but I wonder whether there are other such measures, and which are more applicable to statistics journals.)
Measures of publication "importance" in statistics?
CC BY-SA 2.5
null
2010-08-22T04:16:03.970
2013-05-20T00:51:02.750
null
null
253
[ "references" ]
2010
1
2019
null
0
273
Let $X_1, X_2, \dots$ be identically distributed random variables with a given mean, and let $N$ be a non-negative integer-valued random variable that is independent of the $X_i$. If $Y = X_1 + X_2 + \dots + X_N$, how do we find $\mathrm{E}(Y|N=n)$? With $D$ being the domain of $y$, I know the conditional expectation is defined as $\mathrm{E}(Y|N=n) = \sum\limits_{y \in D} y \cdot \mathrm{P}(Y=y|N=n) = \sum\limits_{y \in D} y \cdot \dfrac{P(Y=y,N=n)}{P(N=n)}$. I am not sure how to count/compute $P(Y=y,N=n)$. How can it be computed?
Conditional expectation given number of outcomes
CC BY-SA 3.0
null
2010-08-22T04:51:46.790
2014-12-03T02:46:55.200
2014-12-03T02:46:55.200
38160
862
[ "probability", "conditional-probability", "conditional-expectation" ]
2012
2
null
2008
4
null
This might be a good place to start. [Eigenfactor](http://www.eigenfactor.org/whyeigenfactor.php) is a scholarly journal and article scoring and ranking system. It is a free service offered by the University of Washington. The algorithm, methodology, data set composition, and update frequency are described on the Eigenfactor website.
null
CC BY-SA 3.0
null
2010-08-22T06:37:15.933
2013-05-20T00:51:02.750
2013-05-20T00:51:02.750
5505
74
null
2013
2
null
2007
7
null
Have a look at - Weka (java, strong in classification) - Orange (python scripting, mostly classification) - GNU R (R language, somewhat vector table oriented, see the Machine Learning taskview, and Rattle UI) - ELKI (java, strong on clustering and outlier detection, index structure support for speedups, algorithm list) - Mahout (Java, belongs to Hadoop, if you have a cluster and huge data sets) and the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/) for data sets.
null
CC BY-SA 3.0
null
2010-08-22T07:27:32.010
2012-10-29T22:51:33.887
2012-10-29T22:51:33.887
7828
930
null
2016
2
null
2008
4
null
Anne-Wil Harzing has a useful site with some free software called [Publish or Perish](http://www.harzing.com/pop.htm). The site discusses a number of journal, article, and author impact factor metrics. The software uses Google Scholar to calculate citation based impact factor metrics.
null
CC BY-SA 2.5
null
2010-08-22T10:02:58.580
2010-08-22T10:02:58.580
null
null
183
null
2017
2
null
2007
3
null
[Rattle](http://rattle.togaware.com/) is a data mining GUI that provides a front end to a wide range of R packages.
null
CC BY-SA 2.5
null
2010-08-22T10:07:23.333
2010-08-22T10:07:23.333
null
null
183
null
2018
2
null
1998
0
null
Some of the matching tools developed by Gary King and colleagues look promising: - Software - Video providing a tutorial in R
null
CC BY-SA 2.5
null
2010-08-22T10:19:09.027
2010-08-22T10:19:09.027
null
null
183
null
2019
2
null
2010
1
null
You can use the fact that expectation is [linear](http://en.wikipedia.org/wiki/Expected_value#Linearity) and compute $E(Y|N=n)$. The fact that $N$ is a random variable does not matter as you are computing the expected value of $Y$ conditional on a specific value of $N$.
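Spelled out, writing $\mu$ for the common mean of the $X_i$ and reading the sum as running up to $N$:
$$\mathrm{E}(Y \mid N = n) = \mathrm{E}\left(\sum_{i=1}^{n} X_i \,\middle|\, N = n\right) = \sum_{i=1}^{n} \mathrm{E}(X_i \mid N = n) = \sum_{i=1}^{n} \mathrm{E}(X_i) = n\mu,$$
where the independence of $N$ and the $X_i$ is what lets you drop the conditioning in the third step; no joint probability $P(Y=y, N=n)$ is needed.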
null
CC BY-SA 2.5
null
2010-08-22T10:29:46.923
2010-08-22T10:29:46.923
null
null
null
null
2020
2
null
1935
3
null
> This is the essence of bootstrapping: taking different samples of your data, getting a statistic for each sample (e.g., the mean, median, correlation, regression coefficient, etc.), and using the variability in the statistic across samples to indicate something about the standard error and confidence intervals for the statistic. - Bootstrapping and the boot package in R
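A minimal sketch of that recipe with the boot package, on made-up data and with the median as the statistic:
```
library(boot)
set.seed(1)
x   <- rexp(100)                          # made-up sample
med <- function(d, i) median(d[i])        # statistic(data, resample indices)
b   <- boot(x, statistic = med, R = 2000) # 2000 bootstrap resamples
boot.ci(b, type = "perc")                 # percentile confidence interval
```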
null
CC BY-SA 2.5
null
2010-08-22T10:30:13.163
2010-08-22T10:30:13.163
null
null
183
null
2021
2
null
2008
3
null
There's a nice website: [http://www.arnetminer.org/page/conference-rank/html/Journal.html](http://www.arnetminer.org/page/conference-rank/html/Journal.html) - however, it only contains computer science conferences. It's an interesting question because publication venue is an extremely important factor for the reception of an article - most of these concepts, however, live inside the heads of the people working in the field; all these metrics, in my opinion, just try to mimic those concepts. Although I argued in a comment to your question in favor of metrics, actually looking (even superficially) at as many articles as possible from different sources should let you develop an intuition about good and bad publication venues, which is, in the long run, probably the most efficient tool you can get.
null
CC BY-SA 2.5
null
2010-08-22T10:47:03.253
2010-08-22T10:47:03.253
null
null
979
null
2022
1
null
null
5
194
Assume: - A previous study looking at the relationship between $X$ and $Y$ obtained a correlation of $r = 0.50$ using a sample of $n = 100$. The raw data is not available. - The current study also looking at the relationship between $X$ and $Y$ obtained a correlation of $r = 0.45$ with $n = 50$. How would you do the following tasks: - Give your best estimate of the population correlation between $X$ and $Y$ assuming the two studies are estimating the same correlation. - Give your best estimate of the population correlation between $X$ and $Y$ in the current study assuming that the first study is slightly different than the current study (e.g., it used a different measurement procedure, a different type of sample, etc.). Of course the weight given to the previous study would depend on perceived similarity with the second study. Thus, are there standard ways of quantifying similarity between studies in such calculations?
Estimating population correlation based on current data and a previous study
CC BY-SA 2.5
null
2010-08-22T11:24:41.663
2010-08-22T22:56:57.690
null
null
183
[ "bayesian", "correlation", "meta-analysis" ]
2023
2
null
2022
2
null
The article on [Combinative Properties of Correlation Coefficients](http://www.jstor.org/stable/20150454) may have the answer to your question.
null
CC BY-SA 2.5
null
2010-08-22T13:52:09.993
2010-08-22T13:52:09.993
null
null
null
null
2024
1
2027
null
3
403
This is a very basic question; however, I have not been able to find an answer. When you make a regular bar plot with standard errors of the means for comparison, do you plot the means and standard errors computed from the raw data, or the ones predicted by the model you are fitting? Thank you in advance for your help. I usually use the latter; please advise. Alfred
Plotting standard errors
CC BY-SA 2.5
null
2010-08-22T14:18:56.710
2010-08-22T15:54:54.607
null
null
null
[ "confidence-interval" ]
2026
2
null
2007
7
null
This is probably the most comprehensive list you'll find: [mloss.org](http://mloss.org/software/)
null
CC BY-SA 2.5
null
2010-08-22T15:08:47.093
2010-08-22T15:08:47.093
null
null
635
null
2027
2
null
2024
4
null
The list of things to say here... As Tai's question suggests, it's hard to directly answer your question without information on the actual model. Nevertheless, it's usually good to present data reflective of the model. Typically that is the means with t-tests or ANOVAs. With something else it's probably close to that. How does a standard error on a graph work for comparison? Are you going to be putting the N on the graph and the multiplying factor needed for comparisons? How many data points do you have? You could probably just put up the entire data set, with a line indicating your predicted value and some kind of measure of variability around it. Perhaps even an overlayed boxplot. The measure of variability should reflect what you want to say about the data. If you just want people to compare values then std. err. isn't a very good idea because it's dependent upon n and requires some value of multiplication greater than 2 (i.e. bars have to not overlap by some amount for significant effects). Instead, put up 0.5*LSD bars (comparison bars) (about an 84% confidence interval, or a 0.5 * 95%CI * sqrt(2)). Those bars would show significant differences at the point of bar overlap. Or, you could be wanting to represent how well you estimated the predicted values. In that case a more convenient confidence interval (about 95%) would be best or even the std. err would be ok. If you want to reflect the estimated variability of the population you put up the standard deviation.
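If it helps, here is a rough, made-up sketch of those 0.5*LSD "comparison bars" (equal group sizes assumed; bars drawn at mean plus or minus t * SE * sqrt(2)/2, so non-overlapping bars correspond roughly to a significant pairwise difference at the 5% level):
```
# Made-up example: comparison bars of half the least significant difference
set.seed(1)
d    <- data.frame(g = rep(c("A", "B", "C"), each = 20), y = rnorm(60, mean = 5))
m    <- tapply(d$y, d$g, mean)
se   <- tapply(d$y, d$g, function(v) sd(v) / sqrt(length(v)))
half <- qt(0.975, df = 20 - 1) * se * sqrt(2) / 2   # 0.5 * LSD half-width per bar
bp   <- barplot(m, ylim = range(0, m - half, m + half))
arrows(bp, m - half, bp, m + half, angle = 90, code = 3, length = 0.1)
```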
null
CC BY-SA 2.5
null
2010-08-22T15:33:16.123
2010-08-22T15:54:54.607
2010-08-22T15:54:54.607
601
601
null
2029
2
null
2007
3
null
Have a look at [KNIME](http://www.knime.org/). Very easy to learn. With lots of scope for further progress. Integrates nicely with Weka and R.
null
CC BY-SA 2.5
null
2010-08-22T18:15:09.023
2010-08-22T18:15:09.023
null
null
22
null
2030
2
null
2007
2
null
From the popularity perspective, this paper (2008) surveys [top 10 algorithms in data mining](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.5575&rep=rep1&type=pdf).
null
CC BY-SA 2.5
null
2010-08-22T19:24:35.670
2010-08-22T19:24:35.670
null
null
881
null
2031
2
null
203
6
null
IMHO you cannot use a t-test for Likert scales. The Likert scale is ordinal and "knows" only about relations between values of a variable: e.g. "totally dissatisfied" is worse than "somewhat dissatisfied". A t-test, on the other hand, needs to calculate means (and variances) and thus needs interval data. You can map Likert scale scores to interval data ("totally dissatisfied" is 1 and so on), but nobody guarantees that "totally dissatisfied" is the same distance from "somewhat dissatisfied" as "somewhat dissatisfied" is from "neither nor". By the way: what is the difference between "totally dissatisfied" and "somewhat dissatisfied"? So in the end, you'd be doing a t-test on arbitrarily coded values of your ordinal data, and that just doesn't make sense.
null
CC BY-SA 2.5
null
2010-08-22T19:33:49.787
2010-08-22T19:33:49.787
null
null
1048
null
2032
1
null
null
10
5616
What learning material would you suggest for a CS person / novice statistician / novice mathematician to get into predictive analytics?
Recommend some books/articles/guides to enter predictive analytics?
CC BY-SA 2.5
null
2010-08-22T19:49:20.753
2018-04-13T19:27:35.470
null
null
1049
[ "references", "predictive-models" ]
2033
2
null
1493
0
null
```
set.seed(10)
x <- matrix(rnorm(15000), ncol = 15)                       # 1000 rows of 15 standard normals
plot(density(rowSums(x^2)), col = 2, xlim = c(-100, 100))  # n = 2: sums of 15 squared normals
for (i in 3:10) {
  lines(density(rowSums(x^i)), col = i)                    # overlay sums of higher powers
}
```
can give us a plot. I don't have an exact theoretical result. Another question: what would the distribution be if $n$ were not an integer? For example, what is $(-3.5)^{3.4}$? Oh, maybe you're looking for this [article](http://biomet.oxfordjournals.org/cgi/pdf_extract/46/3-4/296)?
null
CC BY-SA 2.5
null
2010-08-22T21:41:03.547
2010-08-23T02:22:47.720
2010-08-23T02:22:47.720
1043
1043
null
2034
2
null
2022
1
null
You can't hope to combine correlations with any legitimacy unless you also know the means and variances of the X's and Y's in each case, as well as their counts (n) and correlations (r). The article Srikant Vadali refers to (Jack Dunlap, 1937) starts off by making exactly this assumption. (It's easy to construct examples with the given values of your n and r statistics where the combined value of r is arbitrarily close to +1 or -1 or anything in between.) Having these full second-moment statistics is crucial in the case of question (2) where one should expect there to be some systematic differences between the two studies.
null
CC BY-SA 2.5
null
2010-08-22T22:56:57.690
2010-08-22T22:56:57.690
null
null
919
null
2035
1
2036
null
10
6004
If $X_i\sim\Gamma(\alpha_i,\beta_i)$ for $1\leq i\leq n$, let $Y = \sum_{i=1}^n c_iX_i$ where the $c_i$ are positive real numbers. Assuming the parameters $\alpha_i$ and $\beta_i$ are all known, what is the distribution of $Y$?
The distribution of the linear combination of Gamma random variables
CC BY-SA 2.5
null
2010-08-23T03:12:13.720
2011-05-20T08:17:10.563
null
null
1043
[ "distributions" ]
2036
2
null
2035
9
null
See Theorem 1 given in [Moschopoulos](http://www.ism.ac.jp/editsec/aism/pdf/037_3_0541.pdf) (1985) for the distribution of a sum of independent gamma variables. You can extend this result using the [scaling property](http://en.wikipedia.org/wiki/Gamma_distribution#Scaling) for linear combinations.
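To make the scaling step explicit (using the scale parameterization, so that $X \sim \Gamma(\alpha,\beta)$ has mean $\alpha\beta$; with a rate parameterization the constant divides instead of multiplies):
$$X_i \sim \Gamma(\alpha_i, \beta_i) \;\Rightarrow\; c_i X_i \sim \Gamma(\alpha_i, c_i \beta_i), \qquad \text{so} \qquad Y = \sum_{i=1}^n c_i X_i = \sum_{i=1}^n Z_i, \quad Z_i \sim \Gamma(\alpha_i, c_i \beta_i) \ \text{independent},$$
and Moschopoulos' theorem then gives the density of $Y$, a sum of independent gamma variables with shapes $\alpha_i$ and scales $c_i\beta_i$, as a single series expansion.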
null
CC BY-SA 2.5
null
2010-08-23T06:07:35.467
2010-08-23T06:07:35.467
null
null
251
null
2037
1
null
null
15
4876
Suppose I have a black box that generates data following a normal distribution with mean m and standard deviation s. Suppose, however, that whenever it outputs a value < 0 it does not record anything (we can't even tell that it has produced such a value). We have a truncated Gaussian distribution without a spike. How can I estimate these parameters?
Estimating mean and st dev of a truncated gaussian curve without spike
CC BY-SA 2.5
null
2010-08-23T09:34:51.700
2021-11-24T09:05:51.270
2021-11-24T09:05:51.270
1352
null
[ "distributions", "estimation", "truncated-normal-distribution", "truncated-distributions" ]
2038
1
2040
null
36
87237
I came across this nice tutorial: [A Handbook of Statistical Analyses Using R. Chapter 13. Principal Component Analysis: The Olympic Heptathlon](http://cran.r-project.org/web/packages/HSAUR/vignettes/Ch_principal_components_analysis.pdf) on how to do PCA in the R language. I don't understand the interpretation of Figure 13.3: ![biplot](https://i.stack.imgur.com/SXvjv.png) So I am plotting the first eigenvector vs. the second eigenvector. What does that mean? Suppose the eigenvalue corresponding to the first eigenvector explains 60% of the variation in the data set and the second eigenvalue/eigenvector pair explains 20% of the variation. What does it mean to plot these against each other?
Interpretation of biplots in principal components analysis
CC BY-SA 3.0
null
2010-08-23T09:48:44.153
2020-09-14T18:37:52.917
2017-10-28T22:37:19.690
28666
862
[ "r", "pca", "data-visualization", "interpretation", "biplot" ]
2039
2
null
2038
22
null
The plot is showing: - the score of each case (i.e., athlete) on the first two principal components - the loading of each variable (i.e., each sporting event) on the first two principal components. The left and bottom axes show the [normalized] principal component scores; the top and right axes show the loadings. In general this assumes that two components explain a sufficient amount of the variance to provide a meaningful visual representation of the structure of cases and variables. You can look to see which events are close together in the space. Where this applies, it may suggest that athletes who are good at one event are likely also to be good at the other proximal events. Alternatively, you can use the plot to see which events are distant. For example, javelin appears to be a bit of an outlier and a major event defining the second principal component. Perhaps a different kind of athlete is good at javelin than is good at most of the other events. Of course, more could be said about substantive interpretation.
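If you want to reproduce something like this figure yourself, here is a minimal sketch assuming the heptathlon data shipped with the HSAUR package; if I recall correctly, the chapter also recodes the running events before its analysis, so this won't match the published figure exactly, but it shows the mechanics:
```
# Minimal sketch; assumes the HSAUR package is installed
data("heptathlon", package = "HSAUR")
pca <- prcomp(heptathlon[, names(heptathlon) != "score"], scale. = TRUE)
summary(pca)   # proportion of variance explained by each component
biplot(pca)    # athletes as points, events as arrows (loadings)
```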
null
CC BY-SA 3.0
null
2010-08-23T10:22:08.937
2015-01-14T18:09:01.110
2015-01-14T18:09:01.110
28666
183
null
2040
2
null
2038
25
null
PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when data are projected onto a line (which stands for a direction in the $p$-dimensional space, assuming you have $p$ variables), and the second one is orthogonal to it and still maximizes the remaining variance. This is the reason why using the first two axes should yield the best approximation of the original variable space (say, a matrix $X$ of dim $n \times p$) when it is projected onto a plane. Principal components are just linear combinations of the original variables. Therefore, plotting individual factor scores (defined as $Xu$, where $u$ is the vector of loadings of any principal component) may help to highlight groups of homogeneous individuals, for example, or to interpret one's overall scoring when considering all variables at the same time. In other words, this is a way to summarize one's location with respect to one's values on the $p$ variables, or a combination thereof. In your case, Fig. 13.3 in HSAUR shows that Joyner-Kersee (Jy-K) has a high (negative) score on the 1st axis, suggesting she performed quite well overall across events. The same line of reasoning applies for interpreting the second axis. I took only a very brief look at the figure, so I will not go into details and my interpretation is certainly superficial. I assume that you will find further information in the HSAUR textbook. Here it is worth noting that both variables and individuals are shown on the same diagram (this is called a biplot), which helps to interpret the factorial axes while looking at individuals' locations. Usually, we plot the variables in a so-called correlation circle (where the angle formed by any two variables, represented here as vectors, reflects their actual pairwise correlation, since the cosine of the angle between pairs of vectors amounts to the correlation between the variables). I think, however, you'd better start reading some introductory book on multivariate analysis to get deep insight into PCA-based methods. For example, B.S. Everitt wrote an excellent textbook on this topic, An R and S-Plus® Companion to Multivariate Analysis, and you can check the [companion website](http://biostatistics.iop.kcl.ac.uk/publications/everitt/) for illustration. There are other great R packages for applied multivariate data analysis, like [ade4](http://cran.r-project.org/web/packages/ade4/index.html) and [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html).
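As a small numeric check of the claim that component scores are $Xu$ (simulated data, unrelated to the heptathlon):
```
# Simulated illustration: component scores equal X %*% loadings
set.seed(1)
X   <- scale(matrix(rnorm(100 * 5), ncol = 5))  # centred, scaled data matrix
pca <- prcomp(X)
u   <- pca$rotation[, 1]                        # loadings of the first component
all.equal(as.numeric(X %*% u), as.numeric(pca$x[, 1]))  # should be TRUE
```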
null
CC BY-SA 4.0
null
2010-08-23T10:23:50.200
2020-09-14T18:37:52.917
2020-09-14T18:37:52.917
930
930
null
2041
2
null
2037
7
null
The model for your data would be: $y_i \sim N(\mu,\sigma^2)\, I(y_i > 0)$ Thus, the density function is: $$f(y_i \mid \mu, \sigma) = \frac{\exp\left(-\frac{(y_i-\mu)^2}{2 \sigma^2}\right)}{\sqrt{2 \pi}\,\sigma \left(1 - \Phi\!\left(-\frac{\mu}{\sigma}\right)\right)}$$ where $\Phi(\cdot)$ is the standard normal cdf (note that $1 - \Phi(-\mu/\sigma) = \Phi(\mu/\sigma)$). You can then estimate the parameters $\mu$ and $\sigma$ using either maximum likelihood or Bayesian methods.
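A minimal maximum-likelihood sketch in R (my own illustration, with simulated data standing in for the black box):
```
# Truncated-at-zero normal: fit mu and sigma by maximum likelihood with optim()
negloglik <- function(par, y) {
    mu <- par[1]; sigma <- exp(par[2])   # optimize log(sigma) to keep sigma > 0
    -sum(dnorm(y, mu, sigma, log = TRUE) -
         pnorm(0, mu, sigma, lower.tail = FALSE, log.p = TRUE))
}
set.seed(1)
y <- rnorm(5000, mean = 1, sd = 2)
y <- y[y > 0]                            # the black box drops negative values
fit <- optim(c(mean(y), log(sd(y))), negloglik, y = y)
c(mu = fit$par[1], sigma = exp(fit$par[2]))
```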
null
CC BY-SA 2.5
null
2010-08-23T10:46:36.267
2010-08-23T10:46:36.267
null
null
null
null
2042
2
null
213
27
null
Have a look at the [mvoutlier](http://cran.r-project.org/web/packages/mvoutlier/index.html) package which relies on ordered robust mahalanobis distances, as suggested by @drknexus.
null
CC BY-SA 2.5
null
2010-08-23T10:48:27.980
2010-08-23T10:48:27.980
null
null
930
null
2043
2
null
1812
6
null
The R [psych](http://cran.r-project.org/web/packages/psych/index.html) package includes various routines to apply Factor Analysis (whether it be PCA-, ML- or FA-based), but see my short review on [crantastic](http://crantastic.org/packages/psych). Most of the usual rotation techniques are available, as well as algorithms relying on simple-structure criteria; you might want to have a look at W. Revelle's paper on this topic, [Very Simple Structure: An Alternative Procedure For Estimating The Optimal Number Of Interpretable Factors](http://personality-project.org/revelle/publications/vss.pdf) (MBR 1979 (14)) and the `VSS()` function. Many authors use orthogonal rotation (VARIMAX), considering loadings higher than, say, 0.3 or 0.4 (which amounts to 9 or 16% of variance explained by the factor), as it provides simpler structures for interpretation and scoring purposes (e.g., in quality of life research); others (e.g. Cattell, 1978; Kline, 1979) would recommend oblique rotations since "in the real world, it is not unreasonable to think that factors, as important determiners of behavior, would be correlated" (I'm quoting Kline, Intelligence. The Psychometric View, 1991, p. 19). To my knowledge, researchers generally start with FA (or PCA), using a scree plot together with simulated data (parallel analysis) to help choose the right number of factors. I have often found that item cluster analysis and VSS nicely complement such an approach. When one is interested in second-order factors, or wants to carry on with SEM-based methods, then obviously you need to use oblique rotation and factor the resulting correlation matrix. Other packages/software: - lavaan, for latent variable analysis in R; - OpenMx based on Mx, a general purpose software including a matrix algebra interpreter and numerical optimizer for structural equation modeling. References 1. Cattell, R.B. (1978). The scientific use of factor analysis in behavioural and life sciences. New York, Plenum. 2. Kline, P. (1979). Psychometrics and Psychology. London, Academic Press.
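For what it's worth, a short sketch of that workflow with the psych package, run on Harman's 24-tests correlation matrix that ships with base R; the number of factors and the ML/oblimin choices are just for illustration:
```
library(psych)
# Harman's 24 psychological tests correlation matrix (base R datasets), n = 145
R <- Harman74.cor$cov
fa.parallel(R, n.obs = 145, fa = "fa")   # parallel analysis: how many factors?
VSS(R, n.obs = 145)                      # Revelle's Very Simple Structure criterion
fit <- fa(R, nfactors = 4, n.obs = 145, rotate = "oblimin", fm = "ml")
print(fit$loadings, cutoff = 0.3)        # only report loadings above the usual 0.3
# (oblique rotation requires the GPArotation package to be installed)
```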
null
CC BY-SA 2.5
null
2010-08-23T11:18:49.273
2010-08-23T11:18:49.273
null
null
930
null
2044
2
null
726
119
null
> If you torture the data enough, nature will always confess. --Ronald Coase (quoted from Coase, R. H. 1982. How should economists choose? American Enterprise Institute, Washington, D.C.). I think most who hear this quote misunderstand its profound message against data dredging.
null
CC BY-SA 2.5
null
2010-08-23T13:47:21.180
2010-12-03T04:01:49.103
2010-12-03T04:01:49.103
795
null
null
2045
2
null
2037
5
null
As Srikant Vadali has suggested, Cohen and Hald solved this problem using ML (with a Newton-Raphson root finder) around 1950. Another paper is Max Halperin's "Estimation in the Truncated Normal Distribution" available on [JSTOR](http://www.jstor.org/pss/2281315) (for those with access). Googling "truncated gaussian estimation" produces lots of useful-looking hits. --- Details are provided in a thread that generalizes this question (to truncated distributions generally). See [Maximum likelihood estimators for a truncated distribution](https://stats.stackexchange.com/questions/48897). It might also be of interest to compare the Maximum Likelihood estimators to the Maximum Entropy solution given (with code) at [Max Entropy Solver in R](https://stats.stackexchange.com/questions/21173).
null
CC BY-SA 4.0
null
2010-08-23T14:14:25.510
2019-06-05T12:17:22.973
2019-06-05T12:17:22.973
919
919
null
2046
2
null
726
10
null
'Figures fool when fools figure'. Henry Oliver Lancaster
null
CC BY-SA 2.5
null
2010-08-23T14:14:57.677
2010-08-23T14:14:57.677
null
null
null
null
2048
2
null
1908
0
null
[European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning](http://www.dice.ucl.ac.be/esann/)
null
CC BY-SA 2.5
null
2010-08-23T14:53:02.577
2010-08-23T14:53:02.577
null
null
976
null