Columns: question (string, lengths 37 to 38.8k) · group_id (int64, values 0 to 74.5k)
<p>My objective is to implement a topic model for a large number of documents (20M or 30M). Let us assume that the number of topics is fixed at 50.</p> <p>I think implementing an LDA for the above problem would not be difficult. However, I have yet to find an answer for an NMF model. I have read that it is NOT easy to implement an NMF model for a large number of documents.</p> <p>Is it really not possible to implement an NMF model for my problem?</p>
73,901
<p>I wonder if someone can explain the main difference between omega and alpha reliabilities?</p> <p>I understand that omega reliability is based on a hierarchical factor model, as shown in the following picture, while alpha uses average inter-item correlations.</p> <p><img src="http://i.stack.imgur.com/QH4Mf.png" alt="enter image description here"></p> <p>What I don't understand is under what conditions the omega reliability coefficient would be higher than the alpha coefficient, and vice versa?</p> <p>Can I assume that if the correlations between the subfactors and the variables are higher, the omega coefficient would also be higher (as shown in the above picture)?</p> <p>Any advice is appreciated!</p>
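<p>A quick way to get a feel for the two coefficients is to compute both on the same items. Below is a minimal sketch, assuming the <code>psych</code>, <code>GPArotation</code> and <code>MASS</code> packages; the loading structure and all numbers are made up purely for illustration:</p> <pre><code>library(psych)
library(GPArotation)
library(MASS)
set.seed(1)

# 9 items: three groups of 3, with stronger correlations within each group
R &lt;- matrix(0.3, 9, 9)
R[1:3, 1:3] &lt;- R[4:6, 4:6] &lt;- R[7:9, 7:9] &lt;- 0.6
diag(R) &lt;- 1
items &lt;- data.frame(mvrnorm(n = 500, mu = rep(0, 9), Sigma = R))

alpha(items)$total$raw_alpha        # coefficient alpha
omega(items, nfactors = 3)$omega_h  # omega hierarchical (general-factor saturation)
</code></pre>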
73,902
<p>Note that there are some distributions that can be derived from others (continuous as well as discrete).</p> <p>For example, the Student t and chi-squared distributions are derived from the normal distribution, and the binomial distribution can be derived from the Bernoulli distribution.</p> <p>Of course, the term "derived" can be understood in two ways - directly derived (for example normal - Student) or via some limit (binomial - Poisson).</p> <p>Is there any diagram depicting this kind of relation? If not, is it possible to draw one with at least the best-known discrete and continuous distributions? I would prefer a graph with nodes as distributions and directed edges meaning that one distribution can be derived from another (limit cases should also be depicted).</p>
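<p>Not a diagram, but the edges of such a graph can at least be checked numerically; here is a small sketch in R illustrating two of the relationships mentioned above (the binomial-to-Poisson limit, and the chi-squared distribution as a sum of squared standard normals):</p> <pre><code># binomial(n, lambda/n) approaches Poisson(lambda) as n grows
lambda &lt;- 3; k &lt;- 0:10
round(cbind(poisson     = dpois(k, lambda),
            binom_n20   = dbinom(k, 20,   lambda / 20),
            binom_n2000 = dbinom(k, 2000, lambda / 2000)), 4)

# chi-squared(3) as the distribution of a sum of 3 squared standard normals
set.seed(1)
x &lt;- rowSums(matrix(rnorm(3e4), ncol = 3)^2)
ks.test(x, "pchisq", df = 3)   # should not reject
</code></pre>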
98
<p><strong>Modified question to better explain the context of my problem:</strong></p> <p>I am studying young stars. When a star is born, it is surrounded by a disk of dust called a "protoplanetary disk". Planets form in these disks, so understanding how they evolve gives information on planet formation. Current theories and observations suggest that every star is born with one of these disks. However, different processes make these disks dissipate in about 10 million years. The usual way to study this subject is to study the fraction of stars with protoplanetary disks at different ages to see how they dissipate. Past studies have found "hints" of massive stars losing their disks earlier than low-mass stars, and therefore they may form different planetary systems. My aim is to determine the truthfulness of this dependence on stellar mass.</p> <p>To study these disks, we look at the flux measured at infrared wavelengths. When you know what type of star it is (let's say, you know its temperature), you can apply a stellar model. If the flux you measure is significantly higher (defined in some way) than that expected from the stellar model (a naked star), that could mean you have additional infrared flux emitted by the protoplanetary disk. Also, you need an age estimate for the star, and another one for the stellar mass if you want to compare different masses. So, there are several sources of uncertainty:</p> <ul> <li><p>errors from the infrared measurements</p></li> <li><p>errors from the estimated temperature of the star</p></li> <li><p>errors from the age estimate</p></li> <li><p>errors from the mass estimate.</p></li> </ul> <p>The origin and behaviour of these uncertainties are very complicated, and they are usually not included in the calculations.</p> <p>I have built a large sample of young stars, and I want to see what evidence there is of the stellar mass affecting the evolution/dissipation of protoplanetary disks. To do so, I have subdivided the sample into two mass and two age bins (the cuts having some physical meaning). As a result, I have four bins: "young low-mass", "young high-mass", "old high-mass", "old low-mass". Computing the % of protoplanetary disks for each of these bins is simple, but that is not enough to prove or discard the mass influence. On the other hand, assigning errors to that % by error propagation is extremely complicated. Usually, one assumes simple Poisson errors, but that is not correct as it does not account for these uncertainties. That is why I thought I could use bootstrapping, and vary these quantities within reasonable ranges during the iterations to account for them. </p> <p>As a result of that process, I end up with a list of % values for each bin, and therefore I can get statistical quantities from them (mean, standard deviation,…). They also provide an estimate of the corresponding PDFs.</p> <p><em>I would like to know how to quantify the statistical evidence of these bins having different protoplanetary disk fractions, which translates into evidence of stellar mass having an impact on their evolution.</em></p> <p>This is an example of the outcome. sample1 is "young, low-mass stars". sample2 is "young, high-mass stars". And their means and standard deviations are:</p> <p>sample1: 61 +- 2</p> <p>sample2: 47 +- 5 </p> <p>Also, these are the obtained PDFs.</p> <p><img src="http://i.stack.imgur.com/mMnIx.png" alt="enter image description here"></p>
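<p>One simple way to turn those per-bin bootstrap distributions into a statement of evidence is to look at the bootstrap distribution of the difference; a minimal sketch, with normal draws standing in for the actual bootstrap output quoted above (61 ± 2 and 47 ± 5):</p> <pre><code>set.seed(1)
boot1 &lt;- rnorm(5000, mean = 61, sd = 2)   # stand-in for "young, low-mass" bootstrap fractions
boot2 &lt;- rnorm(5000, mean = 47, sd = 5)   # stand-in for "young, high-mass" bootstrap fractions

d &lt;- boot1 - boot2                        # bootstrap distribution of the difference
quantile(d, c(0.025, 0.975))              # interval for the difference in disk fractions
mean(d &lt;= 0)                              # fraction of resamples in which the order reverses
</code></pre>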
36,447
<p>Note that this is a simplified example:</p> <p>I have some time series that I made stationary by differencing twice. Then I ran <code>arima</code> on it, and set d = 0 to prevent additional differencing (I'm aware that <code>auto.arima</code> could detect the order of integration, but I'm hard-coding this myself for other reasons). Now I want to use <code>fitted</code> data from my <code>arima</code> object to determine what the <em>non-stationary</em> fit would look like.</p> <p>For example:</p> <pre><code>library('forecast') # simulate ARIMA(1,2,0) time series: rawData &lt;- arima.sim(n = 20, list(order = c(1,2,0), ar = 0.7)) # use diff function to make the series stationary: stationaryData &lt;- diff(diff(rawData)) # fit ARIMA on them appropriately rawDataFit &lt;- arima(rawData, c(1,2,0)) # include.mean = FALSE by default stationaryDataFit &lt;- arima(stationaryData, c(1,0,0), include.mean = FALSE) # stationaryData is already twice differenced # notice that there is very small variance between the AR(1) coefficients: coef(rawDataFit) coef(stationaryDataFit) </code></pre> <p>In this particular instance, my AR(1) coefficients are 0.5511049 and 0.5511048. I also forced my ARIMA to exclude an intercept, so these ARIMA objects should be similar.</p> <pre><code># plot of rawData and the fitted values plot(rawData, type = "l") lines(fitted(rawDataFit), col = "slategrey") </code></pre> <p>Here's an example of what that plot could look like: <img src="http://i.stack.imgur.com/5R0cM.png" alt="Here&#39;s an example of what that plot could look like"></p> <p>I want to recreate the above plot, <em>without</em> using the rawDataFit object.</p> <pre><code># using the diffinv function, I can easily replicate the rawData: recoveredRawData &lt;- diffinv(stationaryData, differences = 2, xi = rawData[1:2]) # Now I also want to "recover" the non-stationary data from the fitted AR(1) object: recoveredFit &lt;- diffinv(fitted(stationaryDataFit), differences = 2, xi = c(0,0)) # plot of rawData and the fitted values plot(recoveredRawData, type = "l") lines(recoveredFit, col = "slategrey") </code></pre> <p>Here's the attempt to recreate the above plot, using the results from my stationaryDataFit:</p> <p><img src="http://i.stack.imgur.com/xZuKH.png" alt="Here&#39;s the attempt to recreate the above plot, using the results from my stationaryDataFit"></p> <p>The shape looks correct, but the values are clearly off. I am <em>not</em> expecting to recover exactly the same results from both methods of fitting, but I still expect them to be reasonably close. </p> <p>I strongly suspect the problem is with my choice of xi in the <code>diffinv</code> function, since that's really the only place I'm making any assumptions. But I'm having trouble reconciling the issue.</p> <p>To integrate the data, <code>diffinv</code> requires the first observations of the integrated data. This is how I can convert the stationaryData back to the rawData, by passing the first two values of rawData to the <code>diffinv</code> xi argument. But I'm unsure what to use as the starting values to integrate <code>fitted(stationaryDataFit)</code>. The first two values of the (integrated) rawData are 0, so that's what I'm trying for now...</p> <p>Any ideas?</p> <p><strong>EDIT:</strong> Is this a legitimate work-around? Take the residuals from my stationaryDataFit object, and just subtract those from my rawData?
For example:</p> <pre><code># prefix 2 zeros, so the vectors are the same length (due to second-differencing): recoveredFit &lt;- rawData - c(rep(0, 2), stationaryDataFit$residuals) </code></pre> <p>My concern is about whether I need to transform my residuals from the stationaryDataFit somehow? In fact, the residuals from both fits are extremely close (within several decimals).</p> <p>Thank you!</p>
19,043
<p>I am looking at streaming data (i.e. an online model), and looking for a specific discrete event. I want to stochastically model the time until this event happens, or, if easier, say, model the probability that it happens within the next 30 seconds. What is a simple, practical way to tackle this problem? What kind of technique can I use, and how can I train the model and backtest it?</p> <p>Note that the training is happening offline on historical data, and then the model is applied online, on live, streaming data.</p>
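<p>A very simple baseline for the "probability of the event within 30 seconds" version, assuming you can extract historical inter-event times in seconds (the numbers below are made up):</p> <pre><code>gaps &lt;- c(12, 45, 7, 88, 30, 19, 64, 25, 41, 9)   # historical gaps between events (hypothetical)

mean(gaps &lt;= 30)         # empirical estimate of P(event within 30 s)

rate &lt;- 1 / mean(gaps)   # exponential (memoryless) waiting-time model
pexp(30, rate)           # same probability under that parametric model

# backtest idea: fit the rate on the first part of the history and check how well
# pexp(30, rate) is calibrated against the held-out gaps
</code></pre>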
73,903
<p>If the inflation index was reset to 100 in the 3rd quarter, what would be the fourth-quarter inflation index for food? I have no idea how to do this. Below is the graph:</p> <p><img src="http://i.stack.imgur.com/geO3v.png" alt="enter image description here"></p>
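<p>The graph itself is not reproduced here, but the rebasing is a one-line calculation: divide the later index value by the value in the new base quarter and multiply by 100. With made-up numbers purely for illustration (say the food index read 120 in Q3 and 126 in Q4 on the old base): $$\text{rebased index}_{Q4} = 100\times\frac{\text{old index}_{Q4}}{\text{old index}_{Q3}} = 100\times\frac{126}{120}=105.$$</p>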
73,904
<p><strong>I have applied lm() to a data set. The independent variables are categorical. First, I used lm() with an intercept and got the following results:</strong></p> <pre><code>&gt; model &lt;- lm(y ~ factor(x)) &gt; summary(model) Call: lm(formula = y ~ factor(x)) Residuals: Min 1Q Median 3Q Max -5.3085 -1.8132 -0.4136 1.4323 11.2480 Coefficients: Estimate Std. Error t value Pr(&gt;|t|) (Intercept) 9.3085 0.4064 22.907 &lt;2e-16 *** factor(x)0.75 0.1435 0.6896 0.208 0.836 factor(x)1.5 0.9062 0.6272 1.445 0.151 factor(x)3 0.9040 0.6989 1.293 0.198 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.786 on 126 degrees of freedom Multiple R-squared: 0.0238, Adjusted R-squared: 0.0005601 F-statistic: 1.024 on 3 and 126 DF, p-value: 0.3844 </code></pre> <p><strong>In the second model I don't use an intercept:</strong></p> <pre><code>&gt; model.1 &lt;- lm(y ~ factor(x) - 1) &gt; summary(model.1) Call: lm(formula = y ~ factor(x) - 1) Residuals: Min 1Q Median 3Q Max -5.3085 -1.8132 -0.4136 1.4323 11.2480 Coefficients: Estimate Std. Error t value Pr(&gt;|t|) factor(x)0.25 9.3085 0.4064 22.91 &lt;2e-16 *** factor(x)0.75 9.4520 0.5572 16.96 &lt;2e-16 *** factor(x)1.5 10.2147 0.4778 21.38 &lt;2e-16 *** factor(x)3 10.2125 0.5687 17.96 &lt;2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.786 on 126 degrees of freedom Multiple R-squared: 0.9267, Adjusted R-squared: 0.9243 F-statistic: 398 on 4 and 126 DF, p-value: &lt; 2.2e-16 </code></pre> <p>I don't understand the difference between their R-squared values. Could I accept the second one as a well-fitting model?</p> <p>Would somebody help me to understand this problem?</p>
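<p>For what it's worth, the two numbers are not comparable because <code>summary.lm</code> changes the baseline when the intercept is dropped: with an intercept, R-squared compares the residuals against the mean of y; without one, it compares them against zero. A small sketch with made-up data showing where each reported value comes from:</p> <pre><code>set.seed(1)
x &lt;- factor(sample(c(0.25, 0.75, 1.5, 3), 130, replace = TRUE))
y &lt;- 9 + 0.5 * as.numeric(x) + rnorm(130, sd = 2.8)

fit1 &lt;- lm(y ~ x)       # with intercept
fit2 &lt;- lm(y ~ x - 1)   # without intercept (same fitted values, same residuals)

c(summary(fit1)$r.squared, summary(fit2)$r.squared)

1 - sum(resid(fit1)^2) / sum((y - mean(y))^2)   # baseline: the mean of y
1 - sum(resid(fit2)^2) / sum(y^2)               # baseline: zero
</code></pre>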
73,905
<p>I am working on a self-study question where </p> <blockquote> <p>A study indicates that the typical American woman spends USD 340 per year for personal care products. The distribution of the amount follows a right-skewed distribution with a standard deviation of USD 80 per year. If a random sample of 100 women is selected, what is the probability that the sample mean of this sample will be between USD320 and USD350?</p> </blockquote> <p>I made two attempts to answer the question: one using normal distribution and another using CLT. However, neither approach has helped me achieve the goal answer of 0.8882.</p> <p><strong><em>My Normal Distribution Approach</em></strong></p> <blockquote> <p><img src="http://i.stack.imgur.com/umt1i.jpg" alt="enter image description here"></p> </blockquote> <p><strong><em>My CLT Approach</em></strong></p> <blockquote> <p><img src="http://i.stack.imgur.com/Ap99x.jpg" alt="enter image description here"></p> </blockquote> <p>Appreciate some guidance and advice please</p>
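<p>For reference, the standard CLT calculation does reproduce the quoted target: the sample mean of n = 100 has standard error 80 / sqrt(100) = 8, and at this sample size the right-skew of the individual amounts largely washes out. A sketch of the arithmetic in R:</p> <pre><code>mu &lt;- 340; sigma &lt;- 80; n &lt;- 100
se &lt;- sigma / sqrt(n)            # 8

z_low  &lt;- (320 - mu) / se        # -2.5
z_high &lt;- (350 - mu) / se        #  1.25

pnorm(z_high) - pnorm(z_low)     # about 0.888, i.e. the quoted 0.8882 up to table rounding
</code></pre>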
73,906
<p>I want to use VIF to check the multicollinearity between some ordinal variables and continuous variables. When I put one variable as dependent and the other as independent, the regression gives one VIF value, and when I exchange these two, then the VIF is different. One time the VIF value is more than 3, and the other time it is less than 3. Then how do I make a decision to keep the variable or not, and which one should I keep? Ultimately I am going to use these variables in a logistic regression; how important is it to check for multicollinearity in logistic regression? Thanks, aruna</p>
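<p>For reference, VIF is not defined by regressing one predictor on a single other predictor: each predictor is regressed on all of the remaining predictors, so every variable gets its own VIF and there is no "which direction" ambiguity. A minimal sketch, assuming the <code>car</code> package and made-up variables:</p> <pre><code>library(car)
set.seed(1)
n  &lt;- 200
x1 &lt;- rnorm(n)                                 # continuous predictor
x2 &lt;- cut(x1 + rnorm(n), 4, labels = FALSE)    # ordinal predictor correlated with x1
x3 &lt;- rnorm(n)
y  &lt;- rbinom(n, 1, plogis(0.5 * x1 - 0.3 * x2))

fit &lt;- glm(y ~ x1 + x2 + x3, family = binomial)
vif(fit)   # one value per predictor; VIF_j = 1 / (1 - R_j^2)
</code></pre>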
36,454
<p>G. Monette and J. Fox provide in <a href="http://www.r-project.org/conferences/useR-2009/slides/Monette+Fox.pdf" rel="nofollow">these slides</a> a framework for the Type II Analysis of Variance/Deviance tests in terms of <em>conditional hypothesis</em>. My questions are:</p> <ul> <li><p>In this frequentist approach, the "conditional hypothesis" $L_1\beta=0 \mid L_2\beta=0$ is only symbolic (isn't it?). Is there a Bayesian analogue to this approach such as inference on $L_1 \beta$ under the posterior distribution of $\beta$ taken conditionally to $L_2 \beta =0$?</p></li> <li><p>Monette &amp; Fox rigorously define the conditional hypothesis as a classical hypothesis $L_{1\mid 2}\beta =0$ for a certain matrix $L_{1\mid 2}$ but this matrix depends on the estimated asymptotic covariance matrix of the parameters $\beta$. That sounds strange. Does it actually depend on the true asymptotic covariance matrix, and then $L_{1\mid 2}$ is only an estimate of a theoretical matrix? Even for the true asymptotic covariance that sounds strange because it still depends on the choice of the estimating method.</p></li> </ul> <p>In fact I have never seen the notion of conditional hypothesis before, is it presented in some textbooks?</p> <h2>Update1</h2> <p>Still sick today but here are some thoughts. Consider a classical linear model $y = X\beta+\sigma\epsilon$ and the Jeffreys prior. Then the posterior distribution of $(\beta \mid \sigma)$ is ${\cal N}(\hat\beta, V)$ where $V$ is the asymptotic covariance matrix of the least-squares estimator $\hat\beta$, or (I do not remember), $V$ is this matrix up to a factor close to $1$. Then it is easy to see that $(L_1 \beta \mid L_2\beta=0)$ has the distribution of $L_{1 \mid 2} \beta$ under the conditional posterior distribution $(\beta \mid \sigma)$, where $L_{1 \mid 2}$ is a $V$-orthogonal complement as defined in Monette &amp; Fox's slides. And the Wald statistic $Z_{1|2}$ should be related to the norm of $L_{1 \mid 2} \beta$. </p> <p>For more general models the approach should asymptotically coincide with the Bayesian approach when $\hat\beta$ is taken to be the maximum-likelihood estimate.</p> <p>Too sick to continue...</p> <h2>Update2</h2> <p>I really wonder whether this is an old or a recent approach. As shown in <a href="http://stats.stackexchange.com/a/53041/8402">my answer to myself here</a>, this is not the way used by SAS. But the "old" <code>anova()</code> R function uses this approach. Indeed, for a generalized least-squares model such as </p> <pre><code>glsfit &lt;- gls(value ~ group*variable, data=ldat, correlation=corSymm(form= ~ 1 | id), weights=varIdent(form = ~1 | variable)) </code></pre> <p>the type II hypothesis Wald F-test statistic of the <code>variable</code> factor is provided by:</p> <pre><code>&gt; anova(glsfit) Denom. DF: 45 numDF F-value p-value (Intercept) 1 1401.9971 &lt;.0001 group 4 2.3793 0.0658 variable 2 79.5687 &lt;.0001 group:variable 8 1.4759 0.1929 </code></pre> <p>(and for the <code>group</code> factor one has to exchange the order of the factors:</p> <pre><code>glsfit.reverse &lt;- update(glsfit, model = value ~ variable*group) anova(glsfit.reverse) </code></pre> <p>) </p> <p>Or is it a new theoretical justification of an old approach?..</p>
6,873
<p>I have to calculate the half-life of an advertisement using a Kalman filter in R. The paper 'Estimating the Half-life of Advertisements' (<a href="http://link.springer.com/article/10.1023%2FA%3A1008158119567" rel="nofollow">Naik, 1999</a><sup>[1]</sup>) provides the basis, but I am unable to understand how exactly the estimation is being done, and since a lot of the parameters that go in as input to the Kalman filter are unknown, I am unable to move forward with my project.</p> <p>Any suggestions on how to go forward? </p> <p>[1]: Naik, P.A. (1999),<br> "Estimating the Half-life of Advertisements,"<br> <em>Marketing Letters</em> 10:3, 351-362 </p>
73,907
<p>In relation to web usage mining from a log file, can you cluster data without performing User and/or Session identification? </p> <p>I mean, let's say I have these entries:</p> <blockquote> <p>123.234.324.122 [timestamp] "GET /cars/sport/porsche.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd" </p> <p>120.23.324.122 [timestamp] "GET /bikes/sport/r1.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd" </p> <p>13.234.324.122 [timestamp] "GET /cars/utility/micra.jpg" 200 23432 "http://topgear.com/cars" "Mozilladsfsd" </p> </blockquote> <p>So, in this scenario, I just need to cluster based on which cars have been viewed more frequently, etc. Do I need user identification and session identification then? Or can I just consider the URLs and cluster on them? </p> <p>Because as far as the traditional Web Usage Mining approach goes, and all the papers I've gone through suggest, you do preprocessing, then the pattern discovery comes along.</p> <p>My question is: why not jump to the pattern discovery straight away?</p>
45,351
<p>I want to determine the significance of a particular variable, among many confounders. If I fit a model on the training set and observe a small p value, should I discard the model because it extrapolates poorly on the test set (negative $R^2$ value), or should I keep it since I wasn't interested in predictions anyway?</p>
73,908
<p>I'm doing my PhD in geomechanics. I thought we use a Poisson-Weibull distribution (for the variability of a parameter of the rock), but reading more about the subject I think maybe it is a Poisson-Weibull process, and I don't know the difference (sorry, I'm just a stupid engineer). To complicate the problem, I'm not too knowledgeable about the language of mathematics, so if you could give me an example it would be awesome!</p>
36,456
<p>I'm given two random sample datasets of sample size n=20, where the first dataset represents the weights of random boys, and the second represents the weights of random girls. I need to find the significance level of saying that boys on average weigh more than girls.<br> To solve this problem I'm assuming that the null hypothesis of saying that boys weigh the same as girls (no difference) is true, and building a sampling distribution of <strong>differences</strong> of weights between boys and girls with a mean difference of 0 (since I'm assuming that there is no difference in general).<br> I've calculated the standard deviation for this sampling distribution and I know the <strong>difference</strong> of mean weights of the two <strong>sample</strong> datasets (which is supposedly a part of this sampling distribution).<br> Now if I divide the difference of sample means by the standard deviation of this sampling distribution, what value do I get? Is that a t-value? How do I find the significance level from that?</p>
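<p>A sketch of that calculation with made-up weights: if the standard deviation you computed for the difference uses the sample variances, the ratio is the usual two-sample t statistic, and the one-sided tail probability of the t distribution gives the significance level:</p> <pre><code>set.seed(1)
boys  &lt;- rnorm(20, mean = 70, sd = 8)   # hypothetical weights
girls &lt;- rnorm(20, mean = 64, sd = 8)

se     &lt;- sqrt(var(boys) / 20 + var(girls) / 20)
t_stat &lt;- (mean(boys) - mean(girls)) / se
p_val  &lt;- pt(t_stat, df = 38, lower.tail = FALSE)   # one-sided; df = n1 + n2 - 2 as an approximation

t.test(boys, girls, alternative = "greater")        # built-in version (Welch df by default)
</code></pre>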
73,909
<p>I have a data set of repeated observations and I am trying to determine if any of the observations are outliers. The research I've done has only shown methods that would determine if one value (maximum, minimum, or one questioned value) is an outlier, or if both the highest and lowest value are outliers. What I would like to be able to show is whether multiple values throughout the data set are outliers, as I suspect, without knowing exactly how many outliers are present. Any help or direction you could give me would be much appreciated. </p>
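<p>One simple screen that can flag any number of points at once is the boxplot/IQR rule; a minimal sketch in base R (the 1.5-IQR cut-off is a convention rather than a formal test — for approximately normal data, procedures such as Rosner's generalized ESD test handle an unknown number of outliers more formally):</p> <pre><code>set.seed(1)
x &lt;- c(rnorm(50, mean = 10, sd = 1), 17, 2.5, 16)   # data with a few planted outliers

q     &lt;- quantile(x, c(0.25, 0.75))
iqr   &lt;- diff(q)
lower &lt;- q[1] - 1.5 * iqr
upper &lt;- q[2] + 1.5 * iqr

which(x &lt; lower | x &gt; upper)   # indices of every flagged point
x[x &lt; lower | x &gt; upper]       # the flagged values themselves
</code></pre>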
73,910
<p>I've just come across a statement in </p> <p>Good, P. I.: Resampling methods. Springer, 2001 </p> <p>and wondered if someone could explain it to me.<br> If you want to construct a 95% confidence interval for, let's say, the mean of a normal distribution, then the bounds would be $\bar{X}\pm 1.96s/\sqrt{n}$ with $\bar{X}$ being the sample mean and $s$ being the standard deviation of the sample. </p> <p>So obviously if you want to halve the size of your confidence interval you'd have to multiply the sample size by 4 (i.e. take 400 samples instead of 100). So the width of the CI is proportional to $n^{-\frac{1}{2}}$. CI's with this feature are called first order exact (I guess most of you know that, I'm just repeating it because I didn't). </p> <p>Now what Good says is that $BC_\alpha$ intervals are second order exact, that is their width for large samples is proportional to $n^{-1}$. Why is that so?</p>
73,911
<p>I have annual data that seem to have a bimodal density function. My explanation is that there is a distinction between wet and dry years. For my work I would like to use an AR(1) model for this. When I simulate from a fitted AR(1) model with normally distributed innovations I of course get normally distributed data. Since the simulation results look different from the historical data, this is not satisfactory. When I generate my innovations by just sampling from the residuals, which also show a bimodal density function, I get better results concerning the density function. By doing so my innovations are still white noise, but not Gaussian white noise any more. This approach is called "model based resampling" and was (I guess) introduced by Freedman (1984). That approach has some disadvantages but for me it seems to be an improvement. </p> <p>But I am not certain whether I can still use (standard) MLE to estimate the AR parameter(s) in the model-based resampling approach, or whether I should use least squares instead. I suppose that least squares is better since there is no Gaussian assumption. I have really been looking for publications on this, but the few I found are extremely difficult to understand. I found a lot concerning fitting of distributions and structural equation modeling, saying that small deviations from normal are not a problem for MLE as long as there are no extreme outliers. But I guess time series may not be the same.</p> <p>Can you help me?</p>
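<p>A sketch of the resampling scheme described above — fit the AR(1), then rebuild the series with innovations drawn from the empirical residuals instead of a normal distribution; the simulation step looks the same whether the coefficient came from ML or least squares:</p> <pre><code>set.seed(1)
x &lt;- arima.sim(n = 200, list(ar = 0.6))        # stand-in for the annual series

fit &lt;- arima(x, order = c(1, 0, 0))            # default CSS-ML fit
phi &lt;- coef(fit)["ar1"]
mu  &lt;- coef(fit)["intercept"]                  # arima's "intercept" is the mean
res &lt;- residuals(fit) - mean(residuals(fit))   # centred empirical innovations

n   &lt;- length(x)
sim &lt;- numeric(n)
sim[1] &lt;- x[1]
for (t in 2:n) sim[t] &lt;- mu + phi * (sim[t - 1] - mu) + sample(res, 1)

plot.ts(cbind(observed = x, resampled = sim))
</code></pre>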
36,457
<p>According to the WinBUGS manual, the gamma distribution is defined by:</p> <p><code>dgamma(r,mu)</code></p> <p>However, what is <code>r</code>? Is it the shape, scale or rate parameter? I'm pretty new to statistics and googling didn't really help. On top of that, in most explanations I've found, <code>r</code> is usually not mentioned. It's either <code>k</code>, <code>theta</code>, <code>beta</code> or <code>alpha</code>.</p>
73,912
<p>For Univariate Linear Regression I can calculate the parameters (And most everything else) from simple sum of squares. Is there a corresponding method for Logistic Regression? Any pointers to code (in any language) would be helpful.</p> <p>(Yes there are many solvers, but I want to implement as a simple Map-Reduce algorithm, as I have for Univariate Linear Regression)</p>
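<p>There is no closed-form sum-of-squares solution for logistic regression, but the iterative fits only need per-chunk sums: the log-likelihood gradient is a sum over observations, so each map task can return its partial sum and the reduce step adds them and takes a step. A minimal gradient-ascent sketch in R (single machine, but the summed quantity is the one you would distribute):</p> <pre><code>set.seed(1)
n &lt;- 1000
x &lt;- rnorm(n)
y &lt;- rbinom(n, 1, plogis(-1 + 2 * x))
X &lt;- cbind(1, x)                         # design matrix with intercept

beta &lt;- c(0, 0)
for (iter in 1:2000) {
  p    &lt;- plogis(X %*% beta)             # current predicted probabilities
  grad &lt;- crossprod(X, y - p) / n        # average of per-row gradient terms (map/reduce friendly)
  beta &lt;- beta + 0.5 * grad              # fixed-step gradient ascent (crude but simple)
}

cbind(gradient_ascent = as.numeric(beta), glm = coef(glm(y ~ x, family = binomial)))
</code></pre>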
73,913
<p>I am trying to cluster data. Each point in this dataset is connected to some other points. I want to define clusters "depending on how much the points are connected to each other". After some research, I read about k-core clustering (and saw it has applications in social networking for instance). I think this is the algorithm I want to apply, but, according to what I found, it can only be used to "visualize" a network. Isn't it possible to cluster data with this method?</p>
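<p>k-cores are not only a visualization device: every vertex gets a core number that you can group on directly. A minimal sketch, assuming the <code>igraph</code> package and a random graph standing in for the real connection data:</p> <pre><code>library(igraph)
set.seed(1)
g &lt;- sample_gnp(100, 0.05)     # stand-in for the observed connections

core &lt;- coreness(g)            # k-core number of every vertex
table(core)                    # how many points fall into each shell

V(g)[core == max(core)]        # e.g. treat the densest (maximum) core as one tight cluster
</code></pre>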
36,458
<p>Let $x[n]$ be some time series in 1D or $x[m,n]$ in 2D, of length $N$ (resp. $N^2$). How can I assess whether it is <em>stationary</em>, at least in the weak sense? I can check whether the <em>stdev</em> remains constant, but this is only a necessary condition.</p>
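<p>For the 1-D case there are standard tests that go beyond checking a constant standard deviation; a minimal sketch assuming the <code>tseries</code> package (these test specific departures — a unit root, or level stationarity — rather than weak stationarity in full generality):</p> <pre><code>library(tseries)
set.seed(1)
x_stat &lt;- arima.sim(n = 500, list(ar = 0.5))   # stationary AR(1)
x_walk &lt;- cumsum(rnorm(500))                   # random walk: not stationary

adf.test(x_stat)    # augmented Dickey-Fuller, H0 = unit root (non-stationary)
adf.test(x_walk)
kpss.test(x_stat)   # KPSS, H0 = level stationarity
kpss.test(x_walk)
</code></pre>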
36,459
<p>I am trying to apply simple Naive Bayes or SVM (libSVM) algorithms to a large data set, which I've constructed as an .arff file.</p> <p>The number of features in my set is ~180k and there are ~6k examples. Also, there are 8 classification classes. The data is of size ~3.2GB.</p> <p>I am working with Weka's Java API and Eclipse; I am increasing the JVM's memory to the maximum, but I am always getting a heap space error.</p> <p>I am on a MacBook Pro, 2.3 GHz Intel Core i5, 4GB 1333 MHz DDR3.</p> <p>Do I need to find another machine to work with, or is it possible that I have a memory leak in my code?</p>
31,626
<p>I have data of a continuous random variable within the range [-1,1], which is sometimes concentrated around 0, and at other times concentrated toward -1 and 1, with zero relatively underpopulated. </p> <p>What measure can I use for both these cases to measure the divergence from a uniform distribution?</p> <p>In other terms: I am looking for a measure of how evenly spread out the data is within the range, but standard dispersion measures (like variance) don't seem to work, since they favor distributions in which the tails are higher than the 'peak', e.g., when the region around zero is relatively underpopulated.</p>
73,914
<p>Here's <a href="http://stats.stackexchange.com/questions/125/what-is-the-best-introductory-bayesian-statistics-textbook">a link</a> to a good question regarding textbooks on Bayesian statistics from some time ago.</p> <p>People suggested John Kruschke's "Doing Bayesian Data Analysis: A Tutorial Introduction with R and BUGS" as one of the best options to get an introduction to Bayesian statistics. Meanwhile, a potentially interesting book called "Bayesian and Frequentist Regression Methods" by Jon Wakefield was released, which also provides code for R and BUGS. Thus, they essentially both seem to cover the same topics.</p> <p>Question 1: If you have read the book, would you recommend it to a frequentist economics masters graduate as both an introduction to Bayesian statistics and a reference book for both frequentist and Bayesian approaches?</p> <p>Question 2: If you have read both Wakefield's and Kruschke's books, which one would you recommend?</p>
36,463
<p>I am looking for a (commonly used) probability density function, which would look like a normal distribution flipped upside down. It would look like a uniform distribution with a dent in the middle.</p> <p>Just to be clear, I am dealing with a continuous random variable within some range, say <code>[-1,1]</code>.</p> <p>Sometimes I have data which is concentrated around zero, but other times I have data which is concentrated toward 1 and -1, while the region around zero is relatively underpopulated. Is there a kind of pdf which (depending on some parameter) can represent these two cases?</p>
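<p>One commonly used family with exactly this behaviour is a symmetric Beta distribution rescaled to [-1, 1]: Beta(a, a) is U-shaped (mass pushed toward the endpoints) for a &lt; 1 and hump-shaped (mass around zero) for a &gt; 1, so a single parameter moves between the two cases. A quick sketch:</p> <pre><code>x &lt;- seq(-0.99, 0.99, length.out = 400)
dens_scaled_beta &lt;- function(x, a) dbeta((x + 1) / 2, a, a) / 2   # Beta(a, a) mapped onto [-1, 1]

plot(x, dens_scaled_beta(x, 0.5), type = "l", ylab = "density")   # a = 0.5: U-shaped
lines(x, dens_scaled_beta(x, 3), lty = 2)                         # a = 3: concentrated near 0
legend("top", legend = c("a = 0.5", "a = 3"), lty = 1:2)
</code></pre>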
1,028
<p>In the context of REML estimation there is the result (ignoring some constants) that (my interest is in the matrix algebra so some notation is suppressed):</p> <p>$l(\mathbf V_0)=\log |\mathbf V_0| + \text{tr}(\mathbf V_0^{-1}\mathbf S) \tag{1}$</p> <p>Where both $\mathbf V_0$ and $\mathbf S$ are symmetric and invertible. I am told that the following expression can be obtained from (1) by differentiating with respect to a parameter $\sigma_i$ of $\mathbf V_0$:</p> <p>$\text{tr}\left[\mathbf V_0 \frac{\partial \mathbf V_0^{-1}}{\partial \sigma_i}\right]-\text{tr}\left[\frac{\partial \mathbf V_0^{-1}}{\partial \sigma_i}(\mathbf V_0 - \mathbf S)\right] \tag{2}$</p> <p>I'm trying to see what steps got this, but am stuck.</p> <p>Now I'm obviously lacking some machinery for dealing with these things but I don't know where to look. The issue is that, say, both terms in (1) are scalar, and we are differentiating w.r.t. a scalar, so it would seem the answer should be scalar. But my workings end up matrix-valued. E.g.</p> <p>$\frac{\partial \log |\mathbf V_0|}{\partial \sigma_i} = \mathbf V_0^{-1} \frac{\partial \mathbf V_0}{\partial \sigma_i} $</p> <p>Clearly the trace gets involved wrapping it all up into a scalar, but I don't know how or why!</p>
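<p>For reference, the machinery usually invoked here is the following pair of standard matrix-calculus identities (as given in standard references such as the Matrix Cookbook), which is where the trace comes from and what keeps everything scalar: $$\frac{\partial \log|\mathbf V_0|}{\partial \sigma_i}=\operatorname{tr}\!\left[\mathbf V_0^{-1}\frac{\partial \mathbf V_0}{\partial \sigma_i}\right],\qquad \frac{\partial\, \operatorname{tr}(\mathbf V_0^{-1}\mathbf S)}{\partial \sigma_i}=\operatorname{tr}\!\left[\frac{\partial \mathbf V_0^{-1}}{\partial \sigma_i}\,\mathbf S\right],$$ together with $\partial \mathbf V_0^{-1}/\partial \sigma_i = -\mathbf V_0^{-1}\,(\partial \mathbf V_0/\partial \sigma_i)\,\mathbf V_0^{-1}$, which lets either term be rewritten in terms of the derivative of $\mathbf V_0$ or of $\mathbf V_0^{-1}$ as needed.</p>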
73,915
<p>Can the standard deviation be calculated for the harmonic mean? I understand that the standard deviation can be calculated for arithmetic mean, but if you have harmonic mean, how do you calculate the standard deviation or CV?</p>
73,916
<p>I have a classification problem with two classes, working on nominal data. I want to apply <a href="http://www.jair.org/media/953/live-953-2037-jair.pdf" rel="nofollow">SMOTE-N</a> to deal with imbalanced data. However, it is not clear to me how to use SMOTE-N for generating N synthetic data points for each feature vector in the minority class. SMOTE-N uses a modified version of the value difference metric (VDM) to find the k-nearest neighbors for each feature vector in the minority class, and then the new minority-class feature vector is generated by creating a new set of feature values by taking the majority vote of the feature vector under consideration and its k nearest neighbors (k-NN). But how is this process repeated to generate multiple synthetic feature vectors for each feature vector in the minority class? The way the algorithm is stated, it seems that one feature vector from the minority class can generate only one synthetic feature vector (using its k-NN)?</p>
36,467
<p>I have got a set $\{Y_t\}$ of observations consisting of two subsets $\{Y_{t,1}\}$ and $\{Y_{t,2}\} \subset \{Y_t\}$ with $\{Y_{t,1}\} \sim \mathcal{N}(\mu_1,\sigma^2)$ and $\{Y_{t,2}\} \sim \mathcal{N}(\mu_2,\sigma^2)$ i.e. <em>different means but the same variance</em> (resulting from a regime switching model).</p> <p>I know the means and want to draw a sample of $\sigma^2$ in a step of a MCMC estimation. </p> <p>In the case of $\mu_1 = \mu_2$ I would have used the conjugate prior of the Inverse Gamma distribution (see [1], "Normal with known mean").</p> <p>Can I use a conjugate prior in the case of $\mu_1 \neq \mu_2$ as well? For example by setting $\beta = \beta_0 + \frac{1}{2}\sum_{i=0}^n(Y_i - \mu_{I_i})^2 $ ($I_i$ being the correct indices according to the observations)?</p> <p>Or will I have to use Metropolis-Hastings to get my sample of $\sigma^2$?</p> <p>Best, Matt</p> <p>[1] <a href="http://en.wikipedia.org/wiki/Conjugate_prior#Continuous_distributions" rel="nofollow">http://en.wikipedia.org/wiki/Conjugate_prior#Continuous_distributions</a></p>
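<p>For what it's worth, here is a sketch of the conjugate update proposed in the question: with both means known, the likelihood involves $\sigma^2$ only through the pooled sum of squared deviations, so the inverse-gamma prior stays conjugate and no Metropolis step is needed (data and prior values below are made up):</p> <pre><code>set.seed(1)
mu1 &lt;- 0; mu2 &lt;- 3; sigma2_true &lt;- 1.5
y1 &lt;- rnorm(80,  mu1, sqrt(sigma2_true))      # regime-1 observations
y2 &lt;- rnorm(120, mu2, sqrt(sigma2_true))      # regime-2 observations

alpha0 &lt;- 2; beta0 &lt;- 2                       # inverse-gamma prior IG(alpha0, beta0)
n  &lt;- length(y1) + length(y2)
ss &lt;- sum((y1 - mu1)^2) + sum((y2 - mu2)^2)   # pooled sum of squares around the known means

# one Gibbs draw: sigma^2 ~ IG(alpha0 + n/2, beta0 + ss/2)
sigma2_draw &lt;- 1 / rgamma(1, shape = alpha0 + n / 2, rate = beta0 + ss / 2)
sigma2_draw
</code></pre>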
73,917
<p>I built a classifier with 13 features (no binary ones) and normalized each sample individually using the scikit-learn tool (Normalizer().transform).</p> <p>When I make predictions, it predicts all training samples as positive and all test samples as negative (irrespective of whether they are actually positive or negative).</p> <p>What anomalies should I focus on in my classifier, features, or data?</p> <p>Notes: 1) I normalize the test and training sets (individually for each sample) separately.</p> <p>2) I tried cross-validation, but the performance is the same.</p> <p>3) I used both linear and RBF SVM kernels.</p> <p>4) I also tried without normalizing, but got the same poor results.</p> <p>5) I have the same number of positive and negative training samples (400 each), and 34 positive and 1000+ negative test samples.</p>
73,918
<p>Given a $n$-dimensional multivariate normal distribution $X=(x_i) \sim \mathcal{N}(\mu, \Sigma)$ with mean $\mu$ and covariance matrix $\Sigma$, what is the probability that $\forall j\in {1,\ldots,n}:x_1 \geq x_j$?</p>
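<p>Numerically this reduces to an orthant probability for the differences $D_j = x_1 - x_j$, which are again jointly normal; a sketch assuming the <code>mvtnorm</code> package, with an arbitrary made-up $\mu$ and $\Sigma$:</p> <pre><code>library(mvtnorm)
set.seed(1)
n     &lt;- 4
mu    &lt;- c(0.5, 0, -0.2, 0.1)
Sigma &lt;- diag(n) + 0.3                    # some valid covariance matrix

A &lt;- cbind(1, -diag(n - 1))               # rows give D_j = x_1 - x_j, j = 2..n
pmvnorm(lower = rep(0, n - 1), upper = rep(Inf, n - 1),
        mean  = as.numeric(A %*% mu), sigma = A %*% Sigma %*% t(A))

x &lt;- rmvnorm(1e5, mu, Sigma)              # sanity check by simulation
mean(apply(x, 1, which.max) == 1)
</code></pre>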
73,919
<p>I have to write code to implement regression using random forests (by default Weka provides random forests for classification). Is this possible to do? </p>
73,920
<p>I was hoping someone could propose an argument explaining why the random variables $Y_1=X_2-X_1$ and $Y_2=X_1+X_2$, $X_i$ having the standard normal distribution, are statistically independent. The proof for that fact follows easily from the MGF technique, yet I find it extremely counter-intuitive.</p> <p>I would therefore appreciate the intuition here, if any.</p> <p>Thank you in advance.</p> <p><strong>EDIT</strong>: The subscripts do not indicate order statistics but IID observations from the standard normal distribution. </p>
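<p>For the record, the covariance behind that intuition is a one-line computation, and since $(Y_1, Y_2)$ is a linear transformation of the jointly normal pair $(X_1, X_2)$, zero covariance is equivalent to independence here: $$\operatorname{Cov}(Y_1,Y_2)=\operatorname{Cov}(X_2-X_1,\,X_1+X_2)=\operatorname{Var}(X_2)-\operatorname{Var}(X_1)=1-1=0,$$ because the cross terms $\operatorname{Cov}(X_2,X_1)$ and $-\operatorname{Cov}(X_1,X_2)$ cancel.</p>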
36,470
<p><strong>CONTEXT:</strong> I am modelling the relation between time (1 to 30) and a DV for a set of 60 participants. Each participant has their own time series. For each participant I am examining the fit of 5 different theoretically plausible functions within a nonlinear regression framework. One function has one parameter; three functions have three parameters; and one function has five parameters.</p> <p>I want to use a decision rule to determine which function provides the most "theoretically meaningful" fit. However, I don't want to reward over-fitting.</p> <p>Over-fitting seems to come in two varieties. One form is the standard sense whereby an additional parameter enables slightly more of the random variance to be explained. A second sense is where there is an outlier or some other slight systematic effect, which is of minimal theoretical interest. Functions with more parameters sometimes seem capable of capturing these anomalies and get rewarded.</p> <p>I initially used AIC. And I have also experimented with increasing the penalty for parameters. In addition to using $2k$: [$\mathit{AIC}=2k + n[\ln(2\pi \mathit{RSS}/n) + 1]$]; I've also tried $6k$ (what I call AICPenalised). I have inspected scatter plots with fit lines imposed and corresponding recommendations based on AIC and AICPenalised. Both AIC and AICPenalised provide reasonable recommendations. About 80% of the time they agree. However, where they disagree, AICPenalised seems to make recommendations that are more theoretically meaningful.</p> <p><strong>QUESTION:</strong> Given a set of nonlinear regression function fits:</p> <ul> <li>What is a good criterion for deciding on a best fitting function in nonlinear regression?</li> <li>What is a principled way of adjusting the penalty for number of parameters?</li> </ul>
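<p>One principled middle ground between AIC's $2k$ penalty and an ad hoc $6k$ is the small-sample correction AICc, whose extra term is derived rather than tuned and grows quickly when $k$ is large relative to the 30 time points. A sketch of computing it for a set of fitted functions (the object names are hypothetical, and $k$ here counts all estimated parameters including the residual variance):</p> <pre><code># fits: a named list of the fitted model objects for one participant,
#       e.g. fits &lt;- list(oneParam = nls(...), threeParamA = nls(...), ...)
aicc &lt;- function(fit, n) {
  k &lt;- length(coef(fit)) + 1                # +1 for the residual variance
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

n &lt;- 30                                      # time points per participant
# sapply(fits, aicc, n = n)                  # compare the candidate functions
</code></pre>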
36,471
<p>I am trying to do a multiple logistic regression for 2 similar groups. I have a few questions:</p> <ol> <li><p>In doing a univariate analysis, do I enter each independent variable, one at a time, first into the binary regression, before going on to do the multivariate analysis? Or are the significance values from a chi-square or t-test enough to go on with?</p></li> <li><p>I have a test group and a control group and I want to determine the effect of independent variables (e.g. HIV status, maternal weight, etc.) on a particular dependent variable (low birth weight). Should I perform the regression on a dataset with both the test and the control cases, or should I split the file? In this case I want to see the effect of HIV on birth weight and I am having a hard time knowing how to move on.</p></li> </ol>
19,045
<p>I have two autocorrelation functions measured from the same stationary process, but the two measurements were taken with different instruments that measure different lag times. I would like to combine these two autcorrelation functions together into one single curve that spans the entire lag time of both instruments. It seems that these could be combined in the frequency domain with Wiener-Khinchin theorem, but I have not gotten anything reasonable to come out from this. Thanks!</p> <p><img src="http://i.stack.imgur.com/soTIx.jpg" alt="Two autocorrelation functions"></p> <p>The autocorrelation data:</p> <p>Short lag times: {{0.0166638, 1.2427}, {0.0333277, 1.16926}, {0.0499915, 1.18007}, {0.0666553, 1.17344}, {0.0833192, 1.21829}, {0.099983, 1.19867}, {0.116647, 1.14627}, {0.133311, 1.17827}, {0.166638, 1.19614}, {0.199966, 1.19341}, {0.233294, 1.18352}, {0.266621, 1.18402}, {0.333277, 1.18672}, {0.399932, 1.18258}, {0.466587, 1.15333}, {0.533243, 1.17556}, {0.666553, 1.17179}, {0.799864, 1.17035}, {0.933175, 1.18451}, {1.06649, 1.16379}, {1.33311, 1.14078}, {1.59973, 1.13816}, {1.86635, 1.13299}, {2.13297, 1.13205}, {2.66621, 1.09647}, {3.19946, 1.09374}, {3.7327, 1.07922}, {4.26594, 1.06006}, {5.33243, 1.04623}, {6.39891, 1.03004}, {7.4654, 1.02369}, {8.53188, 1.01798}, {10.6649, 1.0145}, {12.7978, 1.00935}, {14.9308, 1.00495}, {17.0638, 1.00267}, {21.3297, 1.00247}, {25.5956, 1.00451}, {29.8616, 1.00155}, {34.1275, 1.00223}, {42.6594, 0.99827}, {51.1913, 1.00015}, {59.7232, 0.996647}}</p> <p>Longer lag times:</p> <p>{{1.483, 1.2196}, {2.966, 1.1595}, {4.4489, 1.1353}, {5.9319, 1.1261}, {7.4149, 1.1126}, {8.8979, 1.0907}, {10.381, 1.0796}, {11.864, 1.0697}, {14.83, 1.0471}, {17.796, 1.0397}, {20.762, 1.0229}, {23.728, 1.0199}, {29.66, 1.0097}, {35.592, 1.0077}, {41.524, 1.0048}, {47.455, 1.004}, {59.319, 1.0023}, {71.183, 1.0008}, {83.047, 1.0011}, {94.911, 0.99988}, {118.64, 1.0013}, {142.37, 1.001}, {166.09, 0.99821}, {189.82, 0.99895}, {237.28, 1.0015}, {284.73, 1.0006}, {332.19, 0.99739}}</p>
73,921
<p>I want to simulate data from the following model:</p> <p>$\textbf{z}_k=\textbf{H}\textbf{x}_k+\textbf{v}_k$ $\textbf{v}_k \sim N(\textbf{0},\textbf{R})$</p> <p>$\textbf{H}$ does not change over time<br> $\textbf{x}$ is a vector of loadings<br> $\textbf{R}$ is a diagonal of constants</p> <p>$\textbf{x}_k=\textbf{F}\textbf{x}_{k-1}+(\textbf{I}-\textbf{F}){\mu} + \textbf{w}_k $ $\textbf{w}_k \sim N(\textbf{0},\textbf{Q})$</p> <p>$\textbf{I}$ is the identity matrix<br> $\mu$ is the vector of mean values of $\textbf{x}$<br> $\textbf{F}$ is diagonal with the AR(1) params which do not change over time<br> $\textbf{Q}$ is diagonal with the innovation processes for $\textbf{x}$</p> <p>I have the following code in Matlab</p> <pre><code>nDates=20000; %number of dates mats=[1 2 3 4 5 6 7 8 9 10 12 15 20 25 30]'; %maturities nY=length(mats); %#number of yields z=zeros(nY,nDates); %declare vector for yields x=zeros(3,nDates); %declare vector for factors R=0.00001; %standard deviation I=eye(3); %3*3 identity matrix v=normrnd(0,R,nY,nDates); %generate residuals F=[0.9963 0 0; 0 0.9478 0; 0 0 0.774]; %AR(1) matrix mu=[0.0501; -0.0251;-.0116]; %mean of X lambda=0.5536; q = [0.0026^0.5 0 0;0 0.0027^0.5 0; 0 0 0.0035^0.5]; Q=q*q'; rng('default'); % For reproducibility r = randn(nDates,3); w= (r*Q)'; B= [ones(nY,1),((1-exp(-lambda*mats))./(lambda*mats)),((1-exp(-lambda*mats))./(lambda*mats))-exp(-lambda*mats)]; x(:,1)=mu; for t=2:nDates x(:,t)=F*(x(:,t-1))+(I-F)*mu+w(:,t); z(:,t)=B*x(:,t)+v(:,t); end z(:,1)=[]; </code></pre> <p>It all seems straightforward enough, but what tests can I do to ensure that it has been implemented correctly?</p> <p>Ones I have thought of:<br> Check that the correlations of the factors x with their one-period lags match the values given in matrix F<br> Check that the variances of v and w are correct<br> Check that the means of the simulated variables are correct </p> <p>I would like to check that the empirical variance of the parameters matches their theoretical equivalent, but I don't know what the theoretical equivalent should be.</p> <p>Please feel free to suggest further tests that will allow me to know for sure if the implementation is correct.</p>
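<p>One concrete version of the variance check mentioned at the end: since $\mathbf F$ and $\mathbf Q$ are diagonal, each factor in the model as written is a univariate AR(1) with stationary variance $$\operatorname{Var}(x_{i,t})=\frac{Q_{ii}}{1-F_{ii}^{2}},$$ so, for a long simulation, the sample variance of each row of <code>x</code> can be compared against this value, just as the lag-1 sample autocorrelations are compared against the diagonal of <code>F</code>.</p>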
73,922
<p>In considering (for example) homicide rates reported annually for each country, it occurs to me that the U.S., being larger in population, might thereby be expected to have a rate closer to the worldwide mean.</p> <p>However, I don't know how to evaluate that idea quantitatively. What kind of additional data (if any) and calculations would be required to support that idea (not sure I should call it a hypothesis, because it relates to the model rather than to the world). </p> <p>Another way of thinking about this: with respect to statistics reported per country, does the U.S. look like a group of smaller countries combined?</p>
73,923
<p>I'm doing a binary classification using an SVM classifier, libsvm, where roughly 95% of the data belongs to one class.</p> <p>The parameters C and gamma are to be set before the actual training takes place. I followed <a href="http://www.csie.ntu.edu.tw/~cjlin/libsvm/">the tutorial</a> but still can't get any good results.</p> <p>There is a script that comes with the library that is supposed to help with choosing the right values for the parameters, but what this script is doing is basically maximizing the accuracy metric (TP+TN)/ALL, so in my case it chooses the parameters to label all data with the prevailing class label.</p> <p>I would like to choose parameters using recall- and precision-based metrics. How can I approach this problem? Accuracy is a meaningless metric for what I'm doing. Also, I'm open to switching from libsvm to any other library that can help me with this problem, as long as it takes data in the same format.</p> <p>1 1:0.3 2:0.4 ... -1 1:0.4 2:0.23 and so on</p> <p>Can anybody help?</p> <p>UPDATE: yes, I did try both grid.py and easy.py, but even though the grid search uses a logarithmic scale it is extremely slow. I mean, even if I run it on just a small chunk of my data, it takes tens of hours to finish. Is this the most efficient way to use SVM? I have also tried SVMlight, but it does exactly the same: it labels all data with one label.</p> <p>UPDATE2: I reworded my question to better reflect what sort of issues I am facing.</p>
6,882
<p>I take vitamins in the morning, but one of them I only take a half tablet. </p> <p>So, I have an initial container with 100 full tablets, and every morning I take out a random tablet. If it's a full tablet, I break it in half, put half back, and take half. If it's a half tablet, I just take it. Given that, how many days must I do this before I get a $&gt; 50\%$ chance of getting a half tablet? (Or, what's the % chance I get a half tablet after $X$ days?)</p> <p><strong>NB</strong>: This isn't homework; it just occurred to me as I was doing my morning routine, and I really have no idea where to start on trying to solve this. Pointers welcome. It feels like something with an infinite series, but I'm not sure.</p>
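<p>The exact distribution takes a little combinatorics, but the process as described is easy to simulate, and the simulation doubles as a check on any analytic answer; a minimal sketch:</p> <pre><code>set.seed(1)
first_half_day &lt;- function(n_full = 100) {
  halves &lt;- 0
  day &lt;- 0
  repeat {
    day &lt;- day + 1
    # draw a tablet uniformly at random from what is currently in the container
    if (runif(1) &lt; halves / (n_full + halves)) return(day)   # drew a half tablet
    n_full &lt;- n_full - 1                                     # drew a full one:
    halves &lt;- halves + 1                                     # take half, put half back
  }
}

days &lt;- replicate(20000, first_half_day())
mean(days &lt;= 10)        # estimated chance of having drawn a half tablet within 10 days
quantile(days, 0.5)     # roughly the day by which that chance first exceeds 50%
</code></pre>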
36,476
<p>In a predictive linear regression model, does it make sense to combine binary and continuous data within a single scale?<br> Does it make statistical sense to convert binary data into a z score? </p> <p>I am wondering whether I could create a scale using a combination of binary and continuous items that I could convert into a Z score.</p>
36,479
<p>I have gotten a student job in the management department of a chain of 50 grocery stores. The job includes gathering daily statistics on the economy of the stores.</p> <p>Every day a <strong>per-store revenue statistic</strong> is made, comparing the revenue to matching day last year (sunday/sunday, monday/monday) and an accumulated statistic for the month. </p> <p>Also a <strong>per-store gross margin percentage statistic</strong> is made.</p> <p>Now these are simple measures and I feel that there could be a lot more to be gained from the data, the data is specific down to the (cost price)/(sale price)/(number of sales)-level on every type of item.</p> <p>Furthermore no or very few graphics are made and the data from the previous years are not taken into use.</p> <p>Have any of you seen a similar problem? Do you have any ideas on how to proceed? Any easy-to-read, informative types of graphs you would want to share regarding these types of data?</p>
73,924
<p>I am currently calculating reliability estimates for test-retest data.</p> <p>My question is regarding the difference between the standard error of measurement (SEM) and the minimum detectable change (MDC) when seeking to determine if there is a 'real' difference between two measurements.</p> <p>Here is my thinking thus far:</p> <p>Each measurement has an error band about it. For two measurements, if the error bands overlap, then there is no 'real' difference between the measurements.</p> <ol> <li><p>For example, at 95% confidence, each measurement has an error band of $\pm 1.96 \times SEM$. So, two measurements would need to be more than $2 \times 1.96 \times SEM =3.92 \times SEM$ apart to avoid each measurement's confidence interval overlapping and for there to be a real difference between the two measurements.</p></li> <li><p>Another method for determining if two measurements are 'different' is to use the MDC, where </p></li> </ol> <p>$$MDC = 1.96 \times \sqrt{2} \times SEM =2.77 \times SEM$$</p> <p>[EDIT: for the second formula, see e.g. p. 238 of Weir, J. P. (2005). Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. Journal of strength and conditioning research / National Strength &amp; Conditioning Association, 19(1), 231–240. doi:10.1519/15184.1]</p> <p>If the difference between the two measurements is greater than the MDC, then there is a real difference between the measurements.</p> <p>Obviously the two formulas are different and would produce different results. So which formula is correct?</p>
73,925
<p>I need help in knowing which test to perform for this analysis. </p> <p>I have two groups (let's say groups A and B). A study was performed in which the initial rates of heart failure in groups A and B (i.e., baseline rates) were measured. After 6 months of follow-up, the heart failure rates in groups A and B (i.e., re-measurement rates) were measured again. By simple mathematics, I know that the improvement is better in group A than in group B. </p> <ul> <li>How do I test whether the difference in rates between the groups is statistically significant?</li> </ul> <p>I am using a t-test for this but am not sure how to go about it.</p>
73,926
<p>This is a more general treatment of the issue posed by <a href="http://stats.stackexchange.com/questions/104875/question-about-standard-deviation-and-central-limit-theorem">this question</a>. After deriving the asymptotic distribution of the sample variance, we can apply the Delta method to arrive at the corresponding distribution for the standard deviation. </p> <p>Let a sample of size $n$ of i.i.d. <strong>non-normal</strong> random variables $\{X_i\},\;\; i=1,...,n$, with mean $\mu$ and variance $\sigma^2$. Set the sample mean and the sample variance as $$\bar x = \frac 1n \sum_{i=1}^nX_i,\;\;\; s^2 = \frac 1{n-1} \sum_{i=1}^n(X_i-\bar x)^2$$</p> <p>We know that $$E(s^2) = \sigma^2, \;\;\; \operatorname {Var}(s^2) = \frac{1}{n} \left(\mu_4 - \frac{n-3}{n-1}\sigma^4\right)$$</p> <p>where $\mu_4 = E(X_i -\mu)^4$, and we restrict our attention to distributions for which what moments need to exist and be finite, do exist and are finite.</p> <p>Does it hold that</p> <p>$$\sqrt n(s^2 - \sigma^2) \rightarrow_d N\left(0,\mu_4 - \sigma^4\right)\;\; ?$$</p>
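<p>Not a proof, but the claimed limit is easy to check by simulation for a specific non-normal choice of $X_i$; for $\text{exp}(1)$ data, $\sigma^2 = 1$ and $\mu_4 = 9$, so the limit variance should be $8$:</p> <pre><code>set.seed(1)
n    &lt;- 2000                              # "large" sample size
reps &lt;- 5000
s2   &lt;- replicate(reps, var(rexp(n)))     # sample variances of exp(1) samples

n * var(s2)                               # empirical variance of sqrt(n) * (s^2 - sigma^2);
                                          # should be close to mu4 - sigma^4 = 9 - 1 = 8
</code></pre>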
73,927
<p>In my research I'm comparing the variance of a method, and I would like to describe the overall variance between individuals and the variance across the replicates of these individuals. </p> <p>Phrases like 'comparing the intra-individual variance and the between-individual variance' seem to get people confused. I would like to mention this briefly without having to go too much into the details of the experiment.</p> <p>What would be a way of describing this setting more clearly, but still, if possible, within one sentence? </p> <p>To clarify: I have 10,000 measurements for 60 individuals. For each measurement I could calculate, for example, the standard deviation as a measure of variability. I also have 5 replicate measurements per individual. I could calculate the standard deviation for each of the 10,000 measurements across the replicates. So now I have the variance of the measurement when looking at the population AND the variance when looking at the replicates. If you now had to describe these two types of variance in a single sentence, how would you do that without going into too much detail? </p>
73,928
<p>If somebody can tell me what R commands I need to use for a repeated measures ANOVA, I'd really appreciate it. I have trouble with the random term. I've seen <code>random=id</code>, <code>random=id/(treatment*group)</code> and others. </p> <p>Also can you please indicate to me what the formula for the Bonferroni adjusted intervals is?</p>
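<p>A minimal sketch of one common base-R formulation with <code>aov</code> (all variable names are made up: <code>id</code> must be a factor identifying subjects, <code>treatment</code> is the repeated, within-subject factor, <code>group</code> is between-subjects); the <code>random = ~1|id</code> forms you have seen are the <code>nlme</code>/<code>lme4</code> mixed-model equivalents:</p> <pre><code>set.seed(1)
dat &lt;- expand.grid(id = factor(1:20), treatment = factor(c("A", "B", "C")))
dat$group &lt;- factor(ifelse(as.numeric(dat$id) &lt;= 10, "ctrl", "exp"))
dat$y &lt;- rnorm(nrow(dat)) + as.numeric(dat$treatment)

fit &lt;- aov(y ~ treatment * group + Error(id / treatment), data = dat)
summary(fit)

# Bonferroni-adjusted pairwise comparisons of the within-subject factor
with(dat, pairwise.t.test(y, treatment, paired = TRUE, p.adjust.method = "bonferroni"))

# Bonferroni-adjusted intervals simply replace alpha by alpha/m for m comparisons:
#   estimate +/- qt(1 - (0.05 / m) / 2, df) * SE
</code></pre>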
49,613
<p>Suppose $X$ and $Y$ are two random samples (not necessarily iid, but one can make this assumption) and that $Z=X+Y$. If one computes the order statistics of $X$ and $Y$, what can be said about the corresponding order statistics of $Z$?</p> <p>To be more clear, let $\tilde{X}_{0.99}$ and $\tilde{Y}_{0.99}$ be the 0.99th quantiles of $X$ and $Y$, respectively; is there a relationship $f(\cdot)$ with $\tilde{Z}_{0.99}$ (i.e., the 0.99th quantile of $Z$) such that $\tilde{Z}_{0.99}=f(\tilde{X}_{0.99},\tilde{Y}_{0.99})$?</p> <p>Sorry for the possibly ill-posed question... I'm not a statistician.</p>
73,929
<p>I'm learning about boosting. I think I understand how adaptive boosting works for classification. I'm trying to get some intuition for regression boosting.</p> <p>At each iteration, adaptive boosting forces the next weak learner to focus more on incorrectly classified points. Intuitively, I can see why that should lead to a good classifier. I could be wrong, but L2 boosting doesn't seem to do anything like that. In L2 boosting, at each iteration, you're fitting a weak learner to the previous iteration's residuals. In a regression tree, when you're splitting a node, aren't you "fitting" the residuals from that node and that node's parents? In both cases, you're "fitting" unweighted residuals, so I don't understand why they're different.</p> <p>Maybe the main advantage of L2 boosting over a single tree is that the former has many more regularization/bootstrap-ish options (e.g., randomly choosing subsets of features, the learning rate, the number of trees, individual tree depth, etc.)?</p>
36,488
<p>I am trying to analyse diagnostic plots of a spline object; I am using the package <strong>Fields</strong> from <strong>R</strong>.</p> <p>I am lost with the GCV function plot, since I can't find any guidelines on what to look for when comparing different models. Besides the value/degrees of freedom of the GCV function minimum for each model, can we also compare the GCV curve between models?</p> <p>This is an example of the different models and their corresponding diagnostic plots:</p> <p>(1)<strong>_________________________</strong>(2)<strong>__________________________</strong>(3)<strong>____________________</strong></p> <p><img src="http://i.stack.imgur.com/M8v7L.png" alt="enter image description here"></p> <p>The summary of each fit:</p> <pre><code> (1) Number of Observations: 745 Number of unique points: 745 Number of parameters in the null space 21 Parameters for fixed spatial drift 21 Effective degrees of freedom: 116.4 Residual degrees of freedom: 628.6 MLE sigma 0.4153 GCV sigma 0.4269 MLE rho 0.02785 Smoothing parameter lambda 6.193 DETAILS ON SMOOTHING PARAMETER: Method used: GCV Cost: 1 lambda trA GCV GCV.one GCV.model shat 6.1932 116.4326 0.2160 0.2160 NA 0.4269 (2) Number of Observations: 734 Number of unique points: 734 Number of parameters in the null space 21 Parameters for fixed spatial drift 21 Effective degrees of freedom: 134.4 Residual degrees of freedom: 599.6 MLE sigma 0.2788 GCV sigma 0.2912 MLE rho 0.02166 Smoothing parameter lambda 3.589 DETAILS ON SMOOTHING PARAMETER: Method used: GCV Cost: 1 lambda trA GCV GCV.one GCV.model shat 3.5890 134.3993 0.1038 0.1038 NA 0.2912 (3) Number of Observations: 716 Number of unique points: 716 Number of parameters in the null space 21 Parameters for fixed spatial drift 21 Effective degrees of freedom: 680.2 Residual degrees of freedom: 35.8 MLE sigma 0.04486 GCV sigma 0.0766 MLE rho 11.6 Smoothing parameter lambda 0.0001734 DETAILS ON SMOOTHING PARAMETER: Method used: GCV Cost: 1 lambda trA GCV GCV.one GCV.model shat 1.734e-04 6.802e+02 1.173e-01 1.173e-01 NA 7.660e-02 </code></pre> <p>The lower the value of lambda, the smoother the surface (3) and the higher the number of degrees of freedom (3). Although I know from the data that the observed values vary little for model (3), while for (1) and (2) I have higher variation in the observed values.</p>
73,930
<p>I have a one-dimensional List like this</p> <pre><code>public class Zeit_und_Eigenschaft { [Feature] public double Sekunden { get; set; } } //... List&lt;Zeit_und_Eigenschaft&gt; lzue = new List&lt;Zeit_und_Eigenschaft&gt;(); //fill lzue </code></pre> <p>lzue can be</p> <pre><code>lzue.Sekunden 1 2 3 4 8 9 10 22 55 ... </code></pre> <p>The goal is to find clusters in that list, i.e. elements that could form groups, as for instance in this example</p> <pre><code>lzue.Sekunden 1 2 3 4 8 9 10 22 55 </code></pre> <p>Which clustering algorithm is suitable (I don't know the number of clusters k)? GMM? PCA? K-means? Other?</p>
49,453
<p>I am running a 2x2x5 mixed factorial ANCOVA, with one within-groups variable and two between-groups variables. I want to check for violation of homogeneity of regression slopes. My covariate does not interact with my between-groups variables. It does, however, interact with my within-groups variable. Is this still a violation of the assumption? </p>
73,931
<p>Suppose we have a data set $X$. This data set consists of ordinal data (4 levels). To get estimates of the threshold coefficients and probit slope ($\beta_1, \dots, \beta_3$ and $\beta_4$ respectively), do most computational packages use maximum likelihood estimation? That is, given the data, MLE chooses the parameters that maximize the probability of observing the data?</p>
73,932
<p>In the chemical industry, samples are often analyzed multiple times, e.g. 5 samples each analyzed 2 times, with 10 data points in total. What would be the confidence interval for the mean estimated from these data? The degrees of freedom are somewhere between 4 and 10, depending on the correlation between repeats.</p>
36,495
<p>I have some data I'm studying where functional data analysis seems like a promising approach. But having never tackled FDA before, I'm having trouble wrapping my head around it.</p> <p>For background, I have Ramsay and Silverman's "Functional Data Analysis" and Ramsay, Hooker, and Graves "Functional Data Analysis with R and Matlab" but only got them recently, and am still very much the newbie with FDA. I am using R with package fda as the analysis software.</p> <p>My data are a sample of hundreds of young adult subjects, each of whom were measured annually by performing a task with a binary outcome over some number of trials. The outcome of interest is the success rate on the task. I'll include a short sample of data at the end of my question.</p> <p>Several parameters were collected on each subject, including height, weight, education level, amount of prior training on the task, and the results of an aptitude pre-test. </p> <p>The year-to-year change in success rate over time is the curve I'd like to model with FDA. There is significant variation across subjects in overall performance, but I am mostly interested in how a subject's success rate evolves over time rather than a subject's actual performance. </p> <p>The subjects generally follow a pattern of starting off with a low success rate, improving yearly until reaching a personal maximum, then declining with age. The typical subject enters the study at age 20 with a low success rate, improves until age 25, then declines until age 30, at which point no further data is collected. One of the goals is to characterize the "canonical" development curve for the population. FDA seems like a good fit for this aspect of the problem.</p> <p>Some of my challenges are:</p> <ul> <li><p>Subjects may reach their maximum success rate at different ages, and may improve/decline faster than others, so both phase variation and amplitude variation are potentially present. So it seems like curve registration will be an important part of the analysis. </p></li> <li><p>The number of trials in the test is relatively small each year, so the outcome data is noisy, and registering the data properly seems daunting since the noise in one subject can obscure the landmarks in the data. </p></li> <li><p>I am also interested in whether the other variables predict the registration characteristics of the subject's curve. i.e. Do college-educated subjects reach their maximum performance at an earlier age than others (phase), or do particularly tall subjects sustain their peak performance longer than short subjects do (amplitude)?</p></li> <li><p>We would like to be able to produce a customized curve for a subject. e.g. If we observed the tests through age 23, we would be able to predict how much improvement to expect, at what age the maximum performance would be reached, and how sharp the decline phase would be through age 30. </p></li> </ul> <p>My questions include:</p> <ol> <li><p>Is FDA an appropriate method for this problem?</p></li> <li><p>Are there techniques in FDA for predicting a subject's phase and amplitude variation using other variables? What I've read so far seems to treat registration as a correction for noise rather than as containing features worth examination on its own.</p></li> <li><p>What are some good ways to handle registration where the individual curves have a lot of noise?</p></li> <li><p>Should I be able to produce individualized forecast curves that include subject-specific predicted phase/amplitude variation? 
What difficulties should I anticipate?</p></li> </ol> <p>Thank you for any and all suggestions.</p> <p>Sample data </p> <pre><code>Subject Education Ht Wt Training Aptitude A College 72 200 0 Medium B High School 77 250 100 High C High School 68 160 50 Low Subject Age Trials Success Success% A 20 15 3 20% A 21 18 5 28% A 22 30 7 23% A 23 28 8 29% A 24 32 13 41% A 25 8 2 25% A 26 20 8 40% A 27 40 11 28% A 28 33 10 30% A 29 18 5 28% A 30 10 2 20% B 20 24 4 17% B 21 27 5 19% B 22 30 8 27% B 23 33 2 6% B 24 41 8 20% B 25 39 5 13% B 26 39 5 13% C 24 13 4 31% C 25 19 6 32% C 26 18 5 28% C 27 23 6 26% C 28 16 6 38% C 29 9 3 33% </code></pre>
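<p>As a purely illustrative sketch of the mechanics (not a recommendation for the final analysis), one subject's yearly success rates could be smoothed with the <code>fda</code> package roughly as below. The data-frame and column names (<code>subj</code>, <code>Age</code>, <code>SuccessPct</code>) are hypothetical, the basis size and smoothing parameter are arbitrary, and treating noisy binomial proportions as plain numbers is itself an assumption worth revisiting.</p>

<pre><code>library(fda)

# Hypothetical: 'subj' holds one subject's rows, with columns Age and SuccessPct
ages &lt;- subj$Age                 # e.g. 20:30
rate &lt;- subj$SuccessPct / 100    # yearly success proportion

# B-spline basis over the observed age range; nbasis and lambda are arbitrary here
basis  &lt;- create.bspline.basis(rangeval = range(ages), nbasis = 6)
fd_par &lt;- fdPar(basis, Lfdobj = 2, lambda = 0.1)   # penalize curvature

smooth_fit &lt;- smooth.basis(argvals = ages, y = rate, fdParobj = fd_par)
plot(smooth_fit$fd)              # smoothed success-rate curve
points(ages, rate)               # raw yearly proportions
</code></pre>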
37,510
<p>The question is from a typical example for E-M algorithm. </p> <p>Let's say $(y_1,y_2,y_3)$ $\sim$ $\text{multinomial}(n;p_1,p_2,p_3)$, where $p_1+p_2+p_3=1$. </p> <p>How can we derive the conditional distribution of $y_2$ given $y_2+y_3=n$?</p> <p>The answer is $y_2|y_2+y_3 \sim \text{binomial}(n, p_2/(p_2+p_3)$). </p> <p>Any idea on how to derive this rigorously? </p>
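<p>For what it's worth, here is a sketch of one standard derivation, written for a general conditioning value $m$ (the question's case is $m=n$, which forces $y_1=0$). Since collapsing the last two categories gives $y_2+y_3 \sim \text{binomial}(n, p_2+p_3)$,</p> <p>$$P(y_2=k \mid y_2+y_3=m)=\frac{P(y_1=n-m,\,y_2=k,\,y_3=m-k)}{P(y_2+y_3=m)} =\frac{\frac{n!}{(n-m)!\,k!\,(m-k)!}\,p_1^{n-m}p_2^{k}p_3^{m-k}}{\binom{n}{m}\,p_1^{n-m}\,(p_2+p_3)^{m}} =\binom{m}{k}\left(\frac{p_2}{p_2+p_3}\right)^{k}\left(\frac{p_3}{p_2+p_3}\right)^{m-k},$$</p> <p>which is the claimed $\text{binomial}\big(m,\ p_2/(p_2+p_3)\big)$ distribution.</p>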
73,933
<p>I do not understand why testing homogeneity of variance is so important. What are some examples of analyses that require the homogeneity-of-variance assumption?</p>
36,497
<p>I am running a structural panel vector autoregressive model on a panel of 13 countries over the period 1970-2012. I'm having problems implementing the model. Does anyone have estimation programs (in Matlab, EViews, RATS or any other time series package) that I could use to estimate my model? Thanks in advance.</p>
21,562
<p>When analyzing data, using MLE or Bayesian methods, one needs to assume a distribution for the data. For continuous data there are a number of distributions that are often considered, for example the normal distribution, the t-distribution, the log-normal, etc.</p> <p><strong>When analyzing the distribution of the SDs of a number of groups (or participants), what distributions could be appropriate?</strong> Of course this depends on the data and varies from case to case, but what could be some reasonable distributions to try?</p>
42,522
<p>I have a binary outcome (success/failure). I have 20 subjects, half of them gave 2 samples, the others just one, so I have 30 data points.</p> <p>All data points, without exception, were successes (1). So my point estimate of the success rate is clearly 100%. Now I want to calculate a CI for the success rate, mainly to see what the lower limit is: I want to be able to say that with 95% confidence the success rate is higher than ... (80%, 85%, whatever comes up).</p> <p>The problem, as you can see, is the clustering: I can't use n = 30 because that would ignore the correlation, and I feel it is a waste to use n = 20. Is there a way to calculate the variance of a single sample proportion while taking the clustering into account?</p> <p>Thanks!</p>
31,914
<p>I'm new to Stata and didn't find the answer in the help, navigating the menus, or online so far. I would just like to know how to access the quantile function of a distribution. For example in R, if I want to know the 0.95 point of the $\chi^2(1)$ distribution, I do:</p> <pre><code>&gt; qchisq(0.95,1) [1] 3.841459 </code></pre> <p>Is there an equivalent command in Stata ?</p>
48,119
<p>Let's say we have a simple "yes/no" question that we want to know the answer to, and there are N people "voting" for the correct answer. Every voter has a history - a list of 1's and 0's showing whether they were right or wrong about this kind of question in the past. If we model each history as binomial, we can find each voter's mean performance on such questions, its variation, a CI and any other kind of confidence metric. </p> <p>Basically, my question is: how do I incorporate <strong>confidence information</strong> into a <strong>voting system</strong>? </p> <p>For example, if we consider only the mean performance of each voter, then we can construct a simple weighted voting system: </p> <p>$$result = sign(\sum_{v \in voters}\mu_v \times (-1)^{1-vote})$$</p> <p>That is, we can just sum voters' weights multiplied either by $+1$ (for "yes") or by $-1$ (for "no"). It makes sense: if voter 1 has an average of correct answers equal to $.9$, and voter 2 has only $.8$, then, probably, the 1st person's vote should be considered more important. On the other hand, if the 1st person has answered only 10 questions of this kind, and the 2nd person has answered 1000 such questions, we are much more confident about the 2nd person's skill level than about the 1st's - it's quite possible that the 1st person was lucky, and after 10 relatively successful answers will continue with much worse results. </p> <p>So, a more precise question may sound like this: is there a statistical metric that incorporates both the <strong>strength</strong> of and the <strong>confidence</strong> in an estimate of some parameter? </p>
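<p>To make the baseline concrete, here is a minimal R sketch of the mean-weighted vote written above (confidence is not used yet - that is the open question); the voter histories and names are made up:</p>

<pre><code>set.seed(42)
# Each voter's history: a vector of 1 (correct) / 0 (wrong) past answers
histories &lt;- list(v1 = c(1, 1, 1, 0, 1, 1, 1, 1, 1, 1),  # 10 answers, mean 0.9
                  v2 = rbinom(1000, 1, 0.8))             # 1000 answers, mean ~0.8

votes &lt;- c(v1 = 1, v2 = 0)               # 1 = "yes", 0 = "no"

mu     &lt;- sapply(histories, mean)        # mean past performance, mu_v
signed &lt;- ifelse(votes == 1, 1, -1)      # (-1)^(1 - vote)
result &lt;- sign(sum(mu * signed))         # +1 means "yes" wins, -1 means "no"
result
</code></pre>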
36,500
<p>The following problem came up recently while analyzing data. If the random variable X follows a normal distribution and Y follows a $\chi^2_n$ distribution (with n dof), how is $Z = X^2 + Y^2$ distributed? Up to now I came up with the pdf of $Y^2$: \begin{eqnarray} \psi^2_n(x) &amp;=&amp; \frac{\partial F(\sqrt{x})}{\partial x} \\ &amp;=&amp; \left( \int_0^{\sqrt{x}} \frac{t^{n/2-1}\cdot e^{-t/2}}{2^{n/2}\Gamma(n/2)} \mathrm{d}t \right)^\prime_x \\ &amp;=&amp; \frac{1}{2^{n/2}\Gamma(n/2)} \cdot \left( \sqrt{x} \right)^{n/2-1} \cdot e^{-\sqrt{x}/2} \cdot \left( \sqrt{x} \right)^\prime_x \\ &amp;=&amp; \frac{1}{2^{n/2+1}\Gamma(n/2)} \cdot x^{n/4-1} \cdot e^{-\sqrt{x}/2} \end{eqnarray}</p> <p>as well as some simplifications for the convolution integral ($X^2$ has the pdf $\chi^2_m$ with m dof):</p> <p>\begin{eqnarray} K_{mn}(t) &amp;:=&amp; ( \chi^2_m \ast \psi^2_n )(t) \\ &amp;=&amp; \int_0^t \chi^2_m(x) \cdot \psi^2_n(t-x) \mathrm{d}x \\ &amp;=&amp; \left( 2^{\frac{(n+m)}{2}+1} \Gamma(\frac{m}{2}) \Gamma(\frac{n}{2}) \right)^{-1} \cdot \int_0^t (t-x)^{\frac{n}{4}-1} \cdot x^{\frac{m}{2}-1} \cdot \exp(-(\sqrt{t-x}+x)/2) \mathrm{d}x \end{eqnarray}</p> <p>Does anyone see a good way of calculating this integral for any real $t$, or does it have to be computed numerically? Or am I missing a much simpler solution?</p> <p>Thanks!</p>
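<p>In case no closed form turns up, a minimal R sketch of evaluating $K_{mn}(t)$ numerically with <code>integrate</code>; the endpoint singularities are integrable, but for small $m$ or $n$ the integral may need splitting, and the result should be checked against simulation:</p>

<pre><code>K &lt;- function(t, m, n) {
  const &lt;- 1 / (2^((n + m) / 2 + 1) * gamma(m / 2) * gamma(n / 2))
  integrand &lt;- function(x)
    (t - x)^(n / 4 - 1) * x^(m / 2 - 1) * exp(-(sqrt(t - x) + x) / 2)
  const * integrate(integrand, lower = 0, upper = t)$value
}

# Example: X ~ N(0,1) so X^2 is chi^2 with m = 1 df, and Y ~ chi^2 with n = 3 df
K(2, m = 1, n = 3)

# Rough sanity check by simulation: density of Z = X^2 + Y^2 near t = 2
z &lt;- rnorm(1e6)^2 + rchisq(1e6, df = 3)^2
mean(abs(z - 2) &lt; 0.05) / 0.1
</code></pre>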
73,934
<p>In hypothesis testing, one must decide between two probability distributions $P_1(x)$ and $P_2(x)$ on a finite set $X$, after observing $n$ i.i.d. samples $x_1,...,x_n$ drawn from the unknown distribution. Let $A_n\subseteq X^n$ denote the chosen acceptance region for $P_1$. The error probabilities of type I and II can be expressed thus</p> <p>$$ \alpha_n = P^n_1(A^c_n)$$ $$ \beta_n = P^n_2(A_n)$$</p> <p>(Cover &amp; Thomas, Ch. 11 is an excellent reference for the definitions and facts mentioned in this post). </p> <p>Assume we have chosen the acceptance regions $A_n$'s ($n\geq 1$), so that both error probabilities approach zero as the number of observations grows: $\alpha_n\rightarrow 0$ and $\beta_n\rightarrow 0$ as $n\rightarrow \infty$. Stein's Lemma tells us that the maximum rate of decrease of both error probabilities is determined, to the first order of the exponent, by the KL-distance between the given distributions. More precisely</p> <p>$$ -\frac 1 n \log \alpha_n \rightarrow D(P_2||P_1)\tag{1}$$ $$ -\frac 1 n \log \beta_n \rightarrow D(P_1||P_2)\tag{2}$$</p> <p>Now, consider the Bayesian version of the hypothesis testing problem. In this case, $P_1$ and $P_2$ are given prior probabilities $\pi_1$ and $\pi_2$, respectively, and the error probability is obtained by weighting $\alpha_n$ and $\beta_n$:</p> <p>$$ e_n = \pi_1\alpha_n + \pi_2\beta_n.\tag{3}$$</p> <p>In this case, the optimal exponent for $e_n$ is given by the Chernoff distance between the given distributions:</p> <p>$$ -\frac 1 n \log e_n \rightarrow C(P_1,P_2).$$</p> <p><strong>Question</strong>: what is wrong with the reasoning below? (Disclaimer: I'm <em>not</em> trying to be fully formal/detailed here).</p> <p>By (3), the decrease rate of $e_n$ is the minimum decrease rate of $\alpha_n$ and $\beta_n$:</p> <p>$$ \lim -\frac 1 n \log e_n = \min\{\lim -\frac 1 n \log \alpha_n, \lim -\frac 1 n \log \beta_n\}.$$</p> <p>Since $e_n\rightarrow 0$, one must have both $\alpha_n\rightarrow 0$ and $\beta_n\rightarrow 0$ as $n\rightarrow \infty$. So, by the previous considerations on Stein's Lemma, and (1) and (2), one would get </p> <p>$$ \lim -\frac 1 n \log e_n = \min\{D(P_1||P_2), \,\,D(P_2||P_1)\}$$</p> <p>which is quite different from $C(P_1,P_2)$.</p> <p><strong>EDIT</strong>: I realize now that (1) and (2) cannot hold simultaneously, for the same regions $A_n$'s, so this must be the bug in the reasoning.</p> <p>What one can infer through a similar reasoning is just, I think, </p> <p>$$C(P_1,P_2)\leq \min\{D(P_1||P_2), \,\,D(P_2||P_1)\}.$$</p>
73,935
<p>Conceptually, why would the correlation between the growth rates of A and B be different from the correlation between the levels of A and B themselves? Under what circumstances would the growth-rate correlation be higher? How about lower? How about equal? Thanks.</p>
73,936
<p>I have produced a model using the <code>ctree</code> function in R, and want to know whether this tree is actually explaining my data well. </p> <p>I am trying to explain the presence or absence of landscape disturbances using environmental factors. <code>rpart</code> gives an $R^2$ value, but since my data are not well-balanced (PRESENCE contains a relatively small number of 1s), I believe the <code>ctree</code> model will work better. A (simplified) version of my model is: </p> <pre><code>fit &lt;- ctree (PRESENCE ~ A + B + C + D + E + F, data=dat); </code></pre> <p>where PRESENCE is a factor containing values of (0,1), five of the independent variables (denoted by the letters A-F) are numerical values (environmental data), and the remaining variable is categorical.</p> <p>I believe there is a way to make an $R^2$ value using the <code>predict</code> function (as in <a href="http://stats.stackexchange.com/questions/23772/getting-r-square-value-from-ctree">Getting R square value from ctree</a>), but I'm fairly new to R and haven't had any success trying to code this.</p> <p>Any help would be much appreciated.</p>
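<p>Along the lines hinted at in the linked question, one rough sketch (assuming the <code>party</code> implementation of <code>ctree</code>; the behaviour of <code>predict</code> may differ in <code>partykit</code>) is to extract fitted class probabilities and compute a squared correlation with the observed 0/1 outcome - only one of several possible pseudo-$R^2$ measures for a binary response:</p>

<pre><code>library(party)

# Hypothetical: 'fit' is the ctree model and dat$PRESENCE is a factor with levels "0","1".
# In the party package, predict(..., type = "prob") is assumed to return a list with one
# class-probability vector per observation; check this for your version/package.
prob_list &lt;- predict(fit, type = "prob")
p1  &lt;- sapply(prob_list, "[", 2)              # assumed: 2nd element = Pr(PRESENCE == "1")
obs &lt;- as.numeric(as.character(dat$PRESENCE)) # observed 0/1

# One simple pseudo-R^2: squared correlation between fitted probabilities and the outcome
r2 &lt;- cor(p1, obs)^2
r2
</code></pre>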
36,504
<p>Based on the lack of responses to <a href="http://stats.stackexchange.com/questions/78593/centralization-measures-in-weighted-graphs">my previous network question</a>, perhaps this is not quite the place to ask this question, but I'll give it a try.</p> <p>I am planning a series of studies that involve small groups of people. In a typical study, the participants will meet in groups of 4 to have a discussion about a particular topic. As a group, the (student) participants will reach a decision about the topic (e.g., whether to support or oppose a proposed change in university policy). The discussion will be videotaped, and we will use transcripts of the discussion to obtain information about who is talking to whom about what topics. In addition, the participants will complete ratings of themselves and each other (e.g., What is your opinion about the proposed change? What do you think is this person's opinion of the proposed change?). I will also have a variety of data about the individual characteristics of all the participants. Thus, each group from each study will give me a 4-node network with directed, valued edges and a variety of attributes about each node. In a typical study, I might have 20 - 30 of these small groups (i.e., small, 4-node networks).</p> <p>In all the studies, I will be interested in group-level, relationship-level, and individual-level outcomes. I have provided below a sampling of the kinds of questions I would want to ask from these data: </p> <ol> <li>Which groups express the most agreement in their discussions? Which groups express the most disagreement? Is group composition related to agreement / disagreement and to the final decision outcome?</li> <li>Which pairs of people like each other during the discussion? Which pairs of people dislike each other? How are the pairwise patterns of liking related to the final decision outcome?</li> <li>Who has the most influence on the course of the discussion? Who has influence over the final decision of each group? Are there any individual difference characteristics that are related to influence over the discussion and / or final decision?</li> </ol> <p>And so on. What I am looking is the proper statistical model to use to investigate samples of networks, rather than individual networks.</p> <p>I have read a bit about single-network methods (like <a href="http://en.wikipedia.org/wiki/Exponential_random_graph_models" rel="nofollow">exponential random graph models</a>, for which there is <a href="http://cran.r-project.org/web/packages/ergm/index.html" rel="nofollow">a nice R package available</a>), but these methods do not seem appropriate, since I am dealing with a set of independent networks, rather than a single network. In addition, a plain linear mixed model does not seem appropriate because I am collecting explicitly relational data. Finally, although a method like the <a href="http://davidakenny.net/srm/soremo.htm" rel="nofollow">social relations model</a> seems appropriate for this situation at first blush, the social relations model seems to only partition the variance in round-robin ratings into perceiver, target, and relationship sources rather than allowing me to, for example, relate the patterns within a set of networks to overall network outcomes.</p> <p>Could anyone offer some advice about which statistical model might be appropriate for my situation? Any readings suggestions and / or software recommendations would be greatly appreciated. 
(FYI, I am a proficient R user, so R package recommendations would be especially appreciated).</p> <hr> <p><strong>Edit:</strong></p> <p>At the request of Alex Williams, I have posted some example data <a href="http://pastebin.com/t2Z00k7K" rel="nofollow">here</a> in csv format to give everyone a concrete example of the type of data I'm working with. In the sample data, participants, identified by <code>participant_id</code>, are each in 4-person groups, identified by <code>group_id</code>. The participants discuss a proposal and decide as a group whether the support or oppose the proposal. <code>group_decision</code> is the result of the group discussion. The participants also rate their own attitude toward the proposal (<code>pers_att</code>), their perceptions of the attitudes of the other group members (<code>p1_att</code> through <code>p4_att</code>; <code>person_id</code> tracks who <code>p1</code> through <code>p4</code> are within each group), and their own enjoyment of the discussion.</p> <p>In the sample data, I might be interested in the following sorts of questions:</p> <ol> <li>Are personal attitudes related to group-level decisions?</li> <li>Are personal attitudes related to personal enjoyment of the group discussion?</li> <li>Were people accurate in their ratings of other peoples' attitudes?</li> <li>Were the ratings of others related to others' enjoyment of the discussion?</li> <li>Does disagreement within a group (quantified, for example, by the difference between the minimum and maximum attitude ratings) relate to group-level decisions?</li> <li>Does disagreement within a group cause people to enjoy the discussion less?</li> <li>Did people who enjoyed the discussion more have more influence over the outcome?</li> </ol>
73,937
<p>Can data sparseness appear due to either high sample size or high dimension? How different are the situations in the two cases? Thanks!</p>
73,938
<p>I have the UMVUE $$\tilde\theta = \frac{(n-1)(U-n)}{(U-1)(U-2)}$$ for $P(Y=2)=\theta(1-\theta)$, where $U=\sum_{i=1}^n Y_i$ and $Y_i \sim \text{geometric} (\theta)$. I am using the delta method to find the variance of $\tilde\theta$.</p> <p>So far, I have defined: $$g(y)=\frac{(n-1)(y-n)}{(y-1)(y-2)}$$</p> <p>$$g'(y) = \frac{(n-1)[n(2y-3)-y^2+2]}{(y-2)^2(y-1)}$$</p> <p>and I replace $y$ with $\theta$ to get $g'(\theta)$.</p> <p>Since $\tilde \theta$ is an unbiased estimator, $E[\tilde \theta] = \theta$.</p> <p>$Var(\sum Y_i)=n^2(1-\theta)/\theta^2 $ [This is the step where I am confused: do I have to take the variance of one of the $Y_i$'s or of the sum of the $Y_i$'s?]</p> <p>So, $Var(\tilde \theta) = [g'(\theta)]^2Var (\sum Y_i)$.</p> <p>Is this approach correct? Any pointers would be helpful.</p>
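<p>For reference, the generic first-order delta-method statement being invoked here (with $U=\sum_i Y_i$) is $$\operatorname{Var}\big(g(U)\big) \approx \big[g'(\mu_U)\big]^2 \operatorname{Var}(U), \qquad \mu_U = E[U],$$ i.e. the derivative is evaluated at the mean of $U$ and the variance is that of the sum $U$, under whichever parameterization of the geometric distribution is in use; how that maps onto the expressions above is exactly the point to double-check.</p>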
73,939
<p>(I'm asking this question for a friend, honest...)</p> <blockquote> <p>Is there an easy way to convert from an SPSS file to a SAS file, which preserves the formats AND labels? Saving as a POR file gets me the labels (I think) but not the formats. I tried to save to a sas7bdat file but it didn't work. Thanks,</p> </blockquote>
36,505
<p>I have animals that can be virgin or mated (reproductive state is the fixed factor), which I've stimulated sequentially with 4 different doses of an odour (the doses are the repeated measures; the same animal was blown with 4 increasing doses of the same odorant). Then, I measure the neuronal response (variable: number of spikes) of each animal to each dose of the odorant. This would be a typical case of repeated measures, but I have some missing values for the doses - I do not have all the doses completed for some animals. For example, for animal 1, I missed recording 1 out of the 4 doses. What can I do? I have two statistical packages: SPSS 16 and Statistica. Thanks for your help!</p>
73,940
<p>Obviously events A and B are independent iff Pr$(A\cap B)$ = Pr$(A)$Pr$(B)$. Let's define a related quantity Q:</p> <p>$Q\equiv\frac{\mathrm{Pr}(A\cap B)}{\mathrm{Pr}(A)\mathrm{Pr}(B)}$</p> <p>So A and B are independent iff Q = 1 (assuming the denominator is nonzero). Does Q actually have a name though? I feel like it refers to some elementary concept that is escaping me right now and that I will feel quite silly for even asking this.</p>
49,547
<p>As the title says, I'd like to calculate the percentage difference for two sets of points. For example, suppose I have $S_{1}=\{(1,x_{1}),(2,x_{2}),(3,x_{3})\}$ and $S_{2}=\{(1,y_{1}),(2,y_{2}),(3,y_{3})\}$. How can I know the difference in percentage between the two sets of data? What is the correct way to do that? Is that kind of assessment meaningful for establishing to what degree of precision one set of data is preferred over the other?</p> <p>In my particular case, $S_{1}$ is simply a set of numerical results obtained by <a href="http://en.wikipedia.org/wiki/Direct_simulation_Monte_Carlo" rel="nofollow">DSMC</a> and $S_{2}$ was obtained from a theoretical result. I'd like to quantify how much difference exists between them in order to establish when it is convenient to use one or the other.</p> <p>By "difference in percentage" I mean <a href="http://en.wikipedia.org/wiki/Percent_difference" rel="nofollow">percent difference</a>. Hopefully that clarifies the question a bit.</p> <p><strong>UPDATE:</strong></p> <p>Another way to formulate my question would be: How can I arrive at conclusions such as "The results from experiment A are inaccurate by 10% with respect to experiment B", when experiments A and B are each a set of values?</p>
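<p>For concreteness, here is a minimal base-R sketch of the pointwise percent difference under the convention in the linked article (absolute difference divided by the average of the two magnitudes); the numbers are made up, and whether the mean, the maximum, or the full pointwise vector is the right summary depends on the comparison being made:</p>

<pre><code>x &lt;- c(1.02, 2.11, 2.95)   # e.g. DSMC results at points 1, 2, 3 (made-up numbers)
y &lt;- c(1.00, 2.00, 3.00)   # theoretical results at the same points

pct_diff &lt;- 100 * abs(x - y) / ((abs(x) + abs(y)) / 2)
pct_diff          # pointwise percent difference
mean(pct_diff)    # one possible overall summary
max(pct_diff)     # worst-case disagreement
</code></pre>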
73,941
<p>Alright, hopefully third time is the charm. I'm basically trying to build a predictive model with R and gbm. For various reasons, I can't explicitly state exactly what I'm doing. </p> <p>Basically I have a response variable that averages around zero. It has high volatility, is right skewed, and has very high kurtosis. I have a bunch of predictors I think contain information about the response. Some of them are numerical, others factors. I have partitioned the available data into three sets 1, 2, 3. What I've done is trained gbm on set 1 with parameters as follows: interaction depth of 10, shrinkage of .001, 3-fold cross validation, about 200000 iterations (maybe overkill), out-of-bag estimation 50%, and training % at 80%. I've then tested how well the model predicts the responses in set 2 and it's significantly better than chance. The predictions are distributed similarly to the responses (similar averages), but the responses have a higher stdev and kurtosis. I then proceed to update the model by training on set 1 + set 2 with the same parameters. I then test on set 3. The predictions now are extremely different from the actual responses (average of 25 for the predictions versus close to 0 for the responses). I'm not sure what's causing this. I haven't changed anything, so my initial thought was the data in set 2, where the range of the response variable is larger than in set 1. How do I narrow down what might be causing this discrepancy with the predictions for set 3?</p> <p>I realize this is somewhat open-ended, so please let me know what other information would be helpful in answering this question. I'm not sure what exactly you guys might find useful.</p> <p>Answer to comments: the data is a time series, so I just took three time periods of about 2 years, 1 year, and 3 months.</p>
31,217
<p>I read in <a href="http://www2.sas.com/proceedings/sugi30/203-30.pdf" rel="nofollow">this paper</a> (page 3) comparing PCA to factor analysis that both methods need a number of observations of about 5 times the number of variables. Why? And how would you reduce the number of variables if you have only a few observations? </p>
73,942
<p>I am trying to get upto speed in Bayesian Statistics. I have a little bit of stats background (STAT 101) but not too much - I think I can understand prior, posterior, and likelihood :D. </p> <p>I don't want to read a Bayesian textbook just yet. I'd prefer to read from a source (website preferred) that will ramp me up quickly. Something like <a href="http://www.stat.washington.edu/raftery/Research/PDF/bayescourse.pdf">this</a>, but that has more details.</p> <p>Any advice?</p>
73,943
<p>I have two tables (matrices) of the same dimensions, one containing correlation coefficients and the other p values. I want to combine them into one table. For example, let's say I have a correlation coefficient between variables A1 and A2 of 0.75 in table 1 and a p value of 0.045 in table 2. In my combined table 3, I want to use:</p> <p>condition 1 for table 1: if a coefficient value in a cell of table 1 is less than 0.4 then "+", if 0.4 &lt;= coefficient &lt; 0.7 then "++", else "+++";</p> <p>condition 2 for table 2: if a p value in a cell of table 2 is less than 0.01 then "+++", if 0.01 &lt;= p value &lt; .05 then "++", else "+".</p> <p>Thus the corresponding cell value for A1 and A2 in table 3 should look like +++/++, where "+++" corresponds to the table 1 value of 0.75, "++" corresponds to the table 2 p value of 0.045, and "/" is just a separator.</p> <p>I would like to do this in either SAS or R. Thanks in advance.</p>
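<p>A minimal R sketch of the rule described above, assuming the two tables are numeric matrices of the same dimensions called <code>coef_tab</code> and <code>p_tab</code> (hypothetical names):</p>

<pre><code># Symbols for the coefficients: &lt;0.4 -&gt; "+", [0.4, 0.7) -&gt; "++", else "+++"
coef_sym &lt;- matrix(as.character(cut(coef_tab, breaks = c(-Inf, 0.4, 0.7, Inf),
                                    labels = c("+", "++", "+++"), right = FALSE)),
                   nrow = nrow(coef_tab), dimnames = dimnames(coef_tab))

# Symbols for the p values: &lt;0.01 -&gt; "+++", [0.01, 0.05) -&gt; "++", else "+"
p_sym &lt;- matrix(as.character(cut(p_tab, breaks = c(-Inf, 0.01, 0.05, Inf),
                                 labels = c("+++", "++", "+"), right = FALSE)),
                nrow = nrow(p_tab), dimnames = dimnames(p_tab))

# Combined table, e.g. "+++/++" for r = 0.75 with p = 0.045
combined &lt;- matrix(paste(coef_sym, p_sym, sep = "/"),
                   nrow = nrow(coef_tab), dimnames = dimnames(coef_tab))
</code></pre>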
73,944
<p>I know this is a fairly specific <code>R</code> question, but I may be thinking about proportion variance explained, $R^2$, incorrectly. Here goes.</p> <p>I'm trying to use the <code>R</code> package <code>randomForest</code>. I have some training data and testing data. When I fit a random forest model, the <code>randomForest</code> function allows you to input new testing data to test. It then tells you the percentage of variance explained in this new data. When I look at this, I get one number.</p> <p>When I use the <code>predict()</code> function to predict the outcome value of the testing data based on the model fit from the training data, and I take the squared correlation coefficient between these values and the <em>actual</em> outcome values for the testing data, I get a different number. <em>These values don't match up</em>. </p> <p>Here's some <code>R</code> code to demonstrate the problem.</p> <pre><code># use the built in iris data data(iris) #load the randomForest library library(randomForest) # split the data into training and testing sets index &lt;- 1:nrow(iris) trainindex &lt;- sample(index, trunc(length(index)/2)) trainset &lt;- iris[trainindex, ] testset &lt;- iris[-trainindex, ] # fit a model to the training set (column 1, Sepal.Length, will be the outcome) set.seed(42) model &lt;- randomForest(x=trainset[ ,-1],y=trainset[ ,1]) # predict values for the testing set (the first column is the outcome, leave it out) predicted &lt;- predict(model, testset[ ,-1]) # what's the squared correlation coefficient between predicted and actual values? cor(predicted, testset[, 1])^2 # now, refit the model using built-in x.test and y.test set.seed(42) randomForest(x=trainset[ ,-1], y=trainset[ ,1], xtest=testset[ ,-1], ytest=testset[ ,1]) </code></pre> <p>Thanks for any help you might be willing to lend.</p>
73,945
<p>I found the following posts interesting and I was wondering if any of you guys know of good academic papers that describe methods/relationships of exogenous variables in VECM models. If so could you kindly point them out to me as I am very interested in learning. Thank you.</p> <p><a href="http://stats.stackexchange.com/questions/4030/finding-coefficients-for-vecm-exogenous-variables">Finding coefficients for VECM + exogenous variables</a></p> <p><a href="http://stats.stackexchange.com/questions/6487/lagged-exogenous-variables-in-vecm-with-r">Lagged Exogenous Variables in VECM with R</a></p>
48,143
<p>I've got a particular MCMC algorithm which I would like to port to C/C++. Much of the expensive computation is in C already via Cython, but I want to have the whole sampler written in a compiled language so that I can just write wrappers for Python/R/Matlab/whatever.</p> <p>After poking around I'm leaning towards C++. A couple of relevant libraries I know of are Armadillo (http://arma.sourceforge.net/) and Scythe (http://scythe.wustl.edu/). Both try to emulate some aspects of R/Matlab to ease the learning curve, which I like a lot. Scythe squares a little better with what I want to do I think. In particular, its RNG includes a lot of distributions where Armadillo only has uniform/normal, which is inconvenient. Armadillo seems to be under pretty active development while Scythe saw its last release in 2007.</p> <p>So what I'm wondering is if anyone has experience with these libraries -- or others I have almost surely missed -- and if so, whether there is anything to recommend one over the others for a statistician very familiar with Python/R/Matlab but less so with compiled languages (not completely ignorant, but not exactly proficient...).</p>
61
<p>I have a set of samples in which I assume there are 2 distinct subsets. I plotted their values in a histogram and found that there are two distinct modes, as shown in the figure below.</p> <p>My question is how to separate the two groups, i.e. how do I choose a value that differentiates the two subsets? </p> <p><img src="http://i.stack.imgur.com/wbIvt.png" alt="enter image description here"></p>
36,512
<p>I am currently trying to do the following in R:</p> <p>I have thousands of <strong>measured spectra</strong> (x,y; see below). Each spectrum has one or two peaks. I also have sets of <strong>"training" spectra</strong> obtained under more controlled conditions, and I would like to know which of my training spectra is the closest match to a given measured spectrum. </p> <p>I was thinking that some sort of pattern recognition would be useful, but I know too little to make an informed choice as this is a bit outside my usual work area.</p> <ul> <li>What is the most promising way/function in R to do the kind of pattern recognition I want?</li> <li>In case pattern recognition (like PCA) is not the most promising way, what other options are there?</li> </ul> <p>I am looking for sample bits of code or literature dealing with this kind of data analysis. </p> <p><img src="http://i.stack.imgur.com/5mpqU.png" alt="enter image description here"></p> <p><strong>EDIT</strong> The peak position will most probably always be the same, but the laser used to record the spectra is temperature controlled and slight variations are possible. The intensity will change depending on experimental conditions. The two peaks should be treated as independent peaks.</p>
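<p>As a simple baseline (a sketch only, assuming all spectra have already been interpolated onto a common x-grid - e.g. with <code>approx</code> - and stored as rows of a matrix with hypothetical names), each training spectrum can be scored against a measured one and the best match picked:</p>

<pre><code># measured: numeric vector of y-values of one measured spectrum
# training: matrix with one training spectrum per row, same x-grid as 'measured'

# Option 1: sum of squared differences after normalising intensities
sse &lt;- apply(training, 1, function(tr) sum((tr / max(tr) - measured / max(measured))^2))
best_sse &lt;- which.min(sse)

# Option 2: correlation between spectra (insensitive to overall intensity scale)
r &lt;- apply(training, 1, function(tr) cor(tr, measured))
best_cor &lt;- which.max(r)

best_sse
best_cor
</code></pre>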
73,946
<p>I am trying to understand standard error "clustering" and how to execute in R (it is trivial in Stata). In R I have been unsuccessful using either <code>plm</code> or writing my own function. I'll use the <code>diamonds</code> data from the <code>ggplot2</code> package.</p> <p>I can do fixed effects with either dummy variables</p> <pre><code>&gt; library(plyr) &gt; library(ggplot2) &gt; library(lmtest) &gt; library(sandwich) &gt; # with dummies to create fixed effects &gt; fe.lsdv &lt;- lm(price ~ carat + factor(cut) + 0, data = diamonds) &gt; ct.lsdv &lt;- coeftest(fe.lsdv, vcov. = vcovHC) &gt; ct.lsdv t test of coefficients: Estimate Std. Error t value Pr(&gt;|t|) carat 7871.082 24.892 316.207 &lt; 2.2e-16 *** factor(cut)Fair -3875.470 51.190 -75.707 &lt; 2.2e-16 *** factor(cut)Good -2755.138 26.570 -103.692 &lt; 2.2e-16 *** factor(cut)Very Good -2365.334 20.548 -115.111 &lt; 2.2e-16 *** factor(cut)Premium -2436.393 21.172 -115.075 &lt; 2.2e-16 *** factor(cut)Ideal -2074.546 16.092 -128.920 &lt; 2.2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 </code></pre> <p>or by de-meaning both left- and right-hand sides (no time invariant regressors here) and correcting degrees of freedom.</p> <pre><code>&gt; # by demeaning with degrees of freedom correction &gt; diamonds &lt;- ddply(diamonds, .(cut), transform, price.dm = price - mean(price), carat.dm = carat .... [TRUNCATED] &gt; fe.dm &lt;- lm(price.dm ~ carat.dm + 0, data = diamonds) &gt; ct.dm &lt;- coeftest(fe.dm, vcov. = vcovHC, df = nrow(diamonds) - 1 - 5) &gt; ct.dm t test of coefficients: Estimate Std. Error t value Pr(&gt;|t|) carat.dm 7871.082 24.888 316.26 &lt; 2.2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 </code></pre> <p>I can't replicate these results with <code>plm</code>, because I don't have a "time" index (i.e., this isn't really a panel, just clusters that could have a common bias in their error terms).</p> <pre><code>&gt; plm.temp &lt;- plm(price ~ carat, data = diamonds, index = "cut") duplicate couples (time-id) Error in pdim.default(index[[1]], index[[2]]) : </code></pre> <p>I also tried to code my own covariance matrix with clustered standard error using Stata's explanation of their <code>cluster</code> option (<a href="http://www.stata.com/support/faqs/stat/cluster.html">explained here</a>), which is to solve $$\hat V_{cluster} = (X&#39;X)^{-1} \left( \sum_{j=1}^{n_c} u_j&#39;u_j \right) (X&#39;X)^{-1}$$ where $u_j = \sum_{cluster~j} e_i * x_i$, $n_c$ si the number of clusters, $e_i$ is the residual for the $i^{th}$ observation and $x_i$ is the row vector of predictors, including the constant (this also appears as equation (7.22) in Wooldridge's <em>Cross Section and Panel Data</em>). But the following code gives very large covariance matrices. Are these very large values given the small number of clusters I have? 
Given that I can't get <code>plm</code> to do clusters on one factor, I'm not sure how to benchmark my code.</p> <pre><code>&gt; # with cluster robust se &gt; lm.temp &lt;- lm(price ~ carat + factor(cut) + 0, data = diamonds) &gt; &gt; # using the model that Stata uses &gt; stata.clustering &lt;- function(x, clu, res) { + x &lt;- as.matrix(x) + clu &lt;- as.vector(clu) + res &lt;- as.vector(res) + fac &lt;- unique(clu) + num.fac &lt;- length(fac) + num.reg &lt;- ncol(x) + u &lt;- matrix(NA, nrow = num.fac, ncol = num.reg) + meat &lt;- matrix(NA, nrow = num.reg, ncol = num.reg) + + # outer terms (X'X)^-1 + outer &lt;- solve(t(x) %*% x) + + # inner term sum_j u_j'u_j where u_j = sum_i e_i * x_i + for (i in seq(num.fac)) { + index.loop &lt;- clu == fac[i] + res.loop &lt;- res[index.loop] + x.loop &lt;- x[clu == fac[i], ] + u[i, ] &lt;- as.vector(colSums(res.loop * x.loop)) + } + inner &lt;- t(u) %*% u + + # + V &lt;- outer %*% inner %*% outer + return(V) + } &gt; x.temp &lt;- data.frame(const = 1, diamonds[, "carat"]) &gt; summary(lm.temp) Call: lm(formula = price ~ carat + factor(cut) + 0, data = diamonds) Residuals: Min 1Q Median 3Q Max -17540.7 -791.6 -37.6 522.1 12721.4 Coefficients: Estimate Std. Error t value Pr(&gt;|t|) carat 7871.08 13.98 563.0 &lt;2e-16 *** factor(cut)Fair -3875.47 40.41 -95.9 &lt;2e-16 *** factor(cut)Good -2755.14 24.63 -111.9 &lt;2e-16 *** factor(cut)Very Good -2365.33 17.78 -133.0 &lt;2e-16 *** factor(cut)Premium -2436.39 17.92 -136.0 &lt;2e-16 *** factor(cut)Ideal -2074.55 14.23 -145.8 &lt;2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 1511 on 53934 degrees of freedom Multiple R-squared: 0.9272, Adjusted R-squared: 0.9272 F-statistic: 1.145e+05 on 6 and 53934 DF, p-value: &lt; 2.2e-16 &gt; stata.clustering(x = x.temp, clu = diamonds$cut, res = lm.temp$residuals) const diamonds....carat.. const 11352.64 -14227.44 diamonds....carat.. -14227.44 17830.22 </code></pre> <p>Can this be done in R? It is a fairly common technique in econometrics (there's a brief tutorial in <a href="http://sekhon.berkeley.edu/causalinf/sp2010/section/week7.pdf">this lecture</a>), but I can't figure it out in R. Thanks!</p>
48,146
<p>I have a Poisson regression where I regress a count variable (called $H(x)$) on one covariate (called $x$) and one factor (categorical) variable (called $s$), which can take one of three values recoded to integers 1 through 3. I am including an offset variable called $P(x)$ and an intercept called $B_0$. I am trying to write the regression equation. </p> <p>I believe I have correctly expressed the dependent, offset, intercept, and $B_1$ terms, but am not sure how to express the categorical variable (the $s$) term. Since it's a categorical variable that takes an integer value in the range 1 through 3, I would like to know if I can write it like this:</p> <p>$$\log H(x) = \log P(x) + B_0 + B_1 \cdot x + B_2\cdot s$$</p> <p>or if instead I should express each possible value in the equation (since the regression results give one beta parameter estimate for each of the three values) like this:</p> <p>$$\log H(x) = \log P(x) + B_0 + B_1 \cdot x + B_2 \cdot 1 + B_3 \cdot 2 + B_4 \cdot 3$$ </p> <p>The level of this categorical factor coded as "3" is the reference level, and so its parameter estimate is set to 0 by the regression procedure in SPSS.</p>
73,947
<p>I'm using R to calculate the median absolute deviation for a few distributions, but some of the values I'm calculating do not seem realistic at all. I have the following distribution:</p> <pre><code>x &lt;- [1] NA NA NA -0.003 -0.009 0.004 -0.001 -0.001 -0.003 0.001 -0.002 0.000 -0.003 0.000 0.006 -0.011 -0.003 [18] 0.002 -0.007 -0.002 0.006 -0.005 0.000 0.008 0.001 0.009 -0.002 0.001 0.001 0.002 0.003 NA NA 0.001 [35] NA 0.005 -0.002 0.003 0.016 0.007 -0.003 -0.017 0.000 -0.013 0.000 0.002 0.002 0.000 NA 0.000 0.000 [52] 0.000 0.000 0.004 -0.001 0.000 -0.002 -0.003 -0.007 -0.001 -0.001 0.000 -0.002 0.001 0.003 0.000 -0.011 -0.002 [69] -0.003 0.004 -0.007 NA -0.009 0.005 -0.001 0.001 -0.001 0.001 -0.001 0.006 0.002 -0.006 0.002 -0.002 0.004 [86] 0.006 0.001 0.000 0.002 -0.002 0.007 0.004 0.003 0.004 0.005 -0.005 0.003 -0.003 0.002 0.004 0.003 -0.002 [103] -0.002 0.001 0.002 0.000 0.000 0.003 -0.001 0.004 0.001 0.001 0.005 -0.001 NA -0.005 0.000 -0.002 -0.004 [120] 0.004 NA 0.007 0.000 0.002 0.003 -0.006 -0.002 0.000 -0.002 -0.001 -0.001 -0.001 -0.006 -0.001 -0.001 -0.008 [137] 0.000 0.003 0.001 0.001 -0.001 0.000 0.011 -0.017 NA NA NA </code></pre> <p>Then I used the following code to generate my MAD value:</p> <pre><code>MADx &lt;- mad(x, center = median(x, na.rm = TRUE), constant = (1/(quantile(x, probs=0.75, na.rm = TRUE, names = FALSE, type = 1))), na.rm = TRUE, low = FALSE, high = FALSE) </code></pre> <p>I get a value of 1 when doing this, which seems unrealistic because the values I have are much less than 1.</p> <p>I used the quantile function to get the 75th quantile of the distribution.</p>
114
<p>So, it seems to me that the weights function in lm gives observations more weight the larger the associated observation's 'weight' value, while the lme function in lme does precisely the opposite. This can be verified with a simple simulation. </p> <pre><code>#make 3 vectors- c is used as an uninformative random effect for the lme model a&lt;-c(1:10) b&lt;-c(2,4,6,8,10,100,14,16,18,20) c&lt;-c(1,1,1,1,1,1,1,1,1,1) </code></pre> <p>If you were now to run a model where you weight the observations based on the inverse of the dependent variable in lm, you can only generate the exact same result in nlme if you weight by just the dependent variable, without taking the inverse.</p> <pre><code>summary(lm(b~a,weights=1/b)) summary(lme(b~a,random=~1|c,weights=~b)) </code></pre> <p>You can flip this and see the converse is true- specifying weights=b in lm requires weights=1/b to get a matching lme result. </p> <p>So, I understand this much, I just want validation on one thing and to ask a question about another.</p> <ol> <li>If I want to weight my data based on the inverse of the dependent variable, is it fine to just code weights=~(dependent variable) within lme?</li> <li>Why is lme written to handle weights completely differently than lm? What is the purpose of this other than to generate confusion? </li> </ol> <p>Any insight would be appreciated!</p>
73,948
<p>I am working with an investigator planning a study to validate a device to measure blood pressure (BP). The investigator states that a difference within +/- 5 mmHg, when the two devices are used by the same participant, counts as "similar". I obtained the following summary statistics (slightly modified) from a similar study which used the ANSI/AAMI SP10A-2002 protocol:</p> <p>n = 100 participants<br> mean difference +/- SD, systolic BP = -0.70 +/- 5.00<br> mean difference +/- SD, diastolic BP = 0.20 +/- 5.20</p> <p>How many participants would the investigator need to recruit to validate the device, assuming the typical 80% power and alpha level of 0.05? I am using R (specifically power.t.test). I assume I want to focus on diastolic BP due to the greater variation.</p> <p>I defined the null and alternative hypotheses as follows: Null hypothesis: The average difference between the two measurements is 5 mmHg. Alternative hypothesis: The average difference between the two measurements is no greater than 5 mmHg (one-sided hypothesis).</p> <p>But apparently my thinking is off... is there a piece of information I am lacking? Thank you in advance!</p>
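<p>Purely to illustrate the inputs, this is what plugging the numbers above into <code>power.t.test</code> looks like under a one-sample, one-sided framing, assuming the true mean difference equals the previously observed 0.20 mmHg (so the detectable shift from the 5 mmHg bound is about 4.8 mmHg); whether this framing is actually the right one is part of the question:</p>

<pre><code># Null: mean difference = 5 mmHg; Alt: mean difference &lt; 5 mmHg (one-sided)
# Assumed: the true mean difference is 0.20 and the SD is 5.20 (diastolic figures above)
power.t.test(delta = 5 - 0.20, sd = 5.20,
             sig.level = 0.05, power = 0.80,
             type = "one.sample", alternative = "one.sided")
</code></pre>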
73,949
<p>So I am given the following question</p> <blockquote> <p>Data set sample5.txt has a 20-dimensional input $x$ in $\mathbb{R}^{20}$ but we suspect that many of these are actually irrelevant. Could you model the function $y = f(x)$ while - at the same time - figuring which dimensions contribute to the output?</p> </blockquote> <p>So it is a feature selection task - I understand that. But I'm sort of confused by the </p> <blockquote> <p>at the same time</p> </blockquote> <p>part. I know many feature selection algorithms but they do not actually produce models for the data, they just produce decisions regarding which features are important and which are not. Conversely, a model (alone) doesn't really give much information regarding which features are important and which are not.</p> <p>Perhaps you could do simple linear regression and then select features based on the weights (but I have never heard of anyone doing this). Or do you think that I am over-analyzing the question and what I should do is simply do feature selection first, then create the model? </p>
36,518
<p>So I realize this has been asked before: e.g. <a href="http://stats.stackexchange.com/questions/48425/what-are-the-use-cases-related-to-cluster-analysis-of-different-distance-metrics">What are the use cases related to cluster analysis of different distance metrics?</a> but I've found the answers somewhat contradictory to what is suggested should be possible in the literature. </p> <p>Recently I have read two papers that have mention using the kmeans algorithm with other metrics, for example edit distance between strings and the "Earth Mover Distance" between distributions. Given that these papers mention using kmeans with other metrics without specifying <em>how</em>, particularly when it comes to computing the mean of set of points, suggests to me that maybe there is some "standard" method to dealing with this that I'm just not picking up on.</p> <p>Take for example this <a href="http://machinelearning.wustl.edu/mlpapers/paper_files/icml2003_Elkan03.pdf" rel="nofollow">paper</a>, which gives a faster implementation of the k-means algorithm. Quoting from paragraph 4 in the intro the author says his algorithm "can be used with any black box distance metric", and in the next paragraph he mentions edit distance as a specific example. His algorithm however still computes the mean of a set of points and doesn't mention how this might affect results with other metrics (I'm especially perplexed as to how mean would work with edit distance).</p> <p>This other <a href="http://www.cs.cmu.edu/~sganzfri/PotentialAware_AAAI14.pdf" rel="nofollow">paper</a> describes using k-means to cluster poker hands for a texas hold-em abstraction. If you jump to page 2 bottom of lefthand column the author's write "and then k-means is used to compute an abstraction with the desired number of clusters using the Earth Mover Distance between each pair of histograms as the distance metric".</p> <p>I'm not really looking for someone to explain these papers to me, but am I missing some standard method for using k-means with other metrics? Standard averaging with the earth mover distance seems like it could work heuristically, but edit distance seems to not fit the mold at all. I appreciate any insight someone could give.</p> <p><strong>(edit)</strong>: I went ahead and tried k-means on distribution histograms using the earth mover distance (similar to what is in the poker paper) and it seemed to have worked fine, the clusters it output looked pretty good for my use case. For averaging I just treated the histograms as vectors and averaged in the normal way. The one thing that I noticed is the sum over all points of the distances to the means did not always decrease in a monotone manner. In practice though, it would settle on a local min within 10 iterations despite monotone issues. I'm going to assume that this is what they did in the second paper, the only question that remains then is, how the heck would you average when using something like edit distance?</p>
49,393
<p>In a representative sample of country population I have very few missing data, around 3%. But when I checked the missing data among communities, I found that one of them has almost 30% of data lost. That's consistent in all ages.</p> <p>Should I try to impute the data, remove that community from the analysis, or keep all the communities but warn the reader about this possible bias?</p>
73,950
<p>I'm new here and have a question regarding ANOVA in R. </p> <p>I have an ANOVA table like this from running <code>anova(model)</code> in R, where <code>model</code> is a multiple linear regression model built with the <code>lm</code> command:</p> <pre><code>Analysis of Variance Table Response: log(price) Df Sum Sq Mean Sq F value Pr(&gt;F) mileage.residual 1 10.79 10.79 733.329 &lt; 2.2e-16 *** model 5 138.31 27.66 1880.427 &lt; 2.2e-16 *** body_type 4 108.87 27.22 1850.269 &lt; 2.2e-16 *** age 1 654.08 654.08 44463.584 &lt; 2.2e-16 *** transmission 1 3.87 3.87 263.416 &lt; 2.2e-16 *** four_wd 1 0.86 0.86 58.718 2.353e-14 *** nav 1 1.41 1.41 96.168 &lt; 2.2e-16 *** fuel 2 2.35 1.17 79.809 &lt; 2.2e-16 *** age:transmission 1 0.79 0.79 53.827 2.719e-13 *** Residuals 3429 50.44 0.01 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 </code></pre> <p>From reading this I have three questions:<br/></p> <ol> <li><p>How is the the model for MS(Model) chosen when calculating the different F-values for the independent variables? Is it the model with only that variable? This leads to the second question.<br/></p></li> <li><p>Are the different F-values comparable? Does it make sense to draw the conclusion that <code>age</code> is by far the strongest predictor among the considered covariates and that <code>model</code> and <code>body_type</code> are of about the same strength? <br/></p></li> <li><p>If I train a new model, but with different data (in this case representing a different car model), does it make sense to compare the Mean Sq(Residuals) as a measure across different R-models of how well the model describes the different car models. That is, if this R-model is first trained with car model A's data and has a MSE of 0.01 (as in the table) and the same R-model is then trained with car model B's data and then has a MSE of 0.1, can I conclude that the R-model is a much better fit for model A than B?</p></li> </ol> <p>This all seems like reasonable things, but I just want to be sure to not jump to any unjustified conclusions.</p> <p>For completeness a summary of the model is shown below:</p> <pre><code>Call: lm(formula = log(price) ~ mileage.residual + model + body_type + age * transmission + four_wd + nav + fuel, data = sample.data) Residuals: Min 1Q Median 3Q Max -0.74556 -0.07250 0.00126 0.07135 0.46688 Coefficients: Estimate Std. Error t value Pr(&gt;|t|) (Intercept) 12.5718774 0.0104270 1205.710 &lt; 2e-16 *** mileage.residual -0.0318929 0.0006863 -46.471 &lt; 2e-16 *** model320 0.0827985 0.0063347 13.071 &lt; 2e-16 *** model325 0.2139785 0.0088962 24.053 &lt; 2e-16 *** model328 0.2689928 0.0137284 19.594 &lt; 2e-16 *** model330 0.2956210 0.0105510 28.018 &lt; 2e-16 *** model335 0.4724337 0.0114792 41.156 &lt; 2e-16 *** body_typecab 0.3254419 0.0102776 31.665 &lt; 2e-16 *** body_typecoupe 0.1200356 0.0087123 13.778 &lt; 2e-16 *** body_typehatchback 0.1082495 0.0096499 11.218 &lt; 2e-16 *** body_typestation_wagon 0.0165039 0.0054116 3.050 0.00231 ** age -0.1588759 0.0013296 -119.491 &lt; 2e-16 *** transmissionautomatic 0.1061707 0.0076932 13.801 &lt; 2e-16 *** four_wdTRUE 0.0478496 0.0067139 7.127 1.25e-12 *** navTRUE 0.0567863 0.0061501 9.233 &lt; 2e-16 *** fueldiesel 0.0910481 0.0067196 13.550 &lt; 2e-16 *** fuelelectric 0.0309334 0.0471583 0.656 0.51190 age:transmissionautomatic -0.0113273 0.0015439 -7.337 2.72e-13 *** --- Signif. 
codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 0.1213 on 3429 degrees of freedom Multiple R-squared: 0.9481, Adjusted R-squared: 0.9478 F-statistic: 3684 on 17 and 3429 DF, p-value: &lt; 2.2e-16 </code></pre> <p>Thank you very much!</p>
73,951
<p>I have an interesting question that I think has not been asked here yet.</p> <p>I am building an AI whose goal is to predict how wrong a standard history-based model is. This is done with Natural Language Processing (NLP), i.e. from an external source (yes, I have thousands of articles timed to the minute to do this!).</p> <p>I have come to the point where I need to select my original model. The simplest way is a linear regression on bars 0-40 (where 40 is 'now') compared to bars 5-45: compare the difference between values 40 and 45, and voila, we have the 'difference' in our model that we can attempt to predict/classify using news articles. </p> <p>Of course, linear regression is not well suited for this kind of task, but what is? And even better, are there libraries in .NET (C#) that can do it, since my math-to-code skills are not that great?</p> <p>I have been looking at polynomial regression but find that the difference can be disproportionately large. I have also been looking at GARCH, but I cannot seem to find a library (so not in R) that can do this without costing a fortune. What also seems interesting is ARIMA, but there I stopped and decided to get some advice on my problem.</p> <p>What would be suited for such a 'supervisor'? Would I need a very complex regression, or would linear even be good enough? Or should I in this case just use a neural network (I already made a pretty good one for this once)?</p> <p>Greets</p>
28,680
<p><strong>Context:</strong> I'm reading up on sampling in <a href="http://cis.temple.edu/~latecki/Courses/RobotFall07/PapersFall07/andrieu03introduction.pdf" rel="nofollow">MCMC for Machine Learning</a>. On page 5, it mentions rejection sampling:</p> <pre><code>repeat sample xi ~ q(x) and u ~ U(0,1) if uMq(xi) &lt; p(xi): accept xi until enough samples </code></pre> <p>(where $M$ is chosen such that $p(x) \le Mq(x)$ for all $x$)</p> <p><strong>Question:</strong> In the analysis, the paper says that it may work bad for high-dimensional settings. While I can see that, the paper gives a reason that I don't understand: $P(x \text{ accepted}) = P \left(u &lt; \frac{p(x)}{Mq(x)} \right) = \frac{1}{M}$. This doesn't make sense to me. If the probability were constant, why even bother to evaluate $p(x)$? Should this be a "at most $\frac{1}{M}$? Or am I just misinterpreting the statement? </p>
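<p>For reference, here is a direct R translation of the pseudo-code above, with a made-up target $p$ (a Beta(2,5) density) and a uniform proposal $q$; $M$ just has to satisfy $p(x)\le Mq(x)$ everywhere:</p>

<pre><code>set.seed(1)
p &lt;- function(x) dbeta(x, 2, 5)   # target density (example); max is about 2.46
q &lt;- function(x) dunif(x, 0, 1)   # proposal density
M &lt;- 2.5                          # must satisfy p(x) &lt;= M * q(x) for all x

samples &lt;- numeric(0)
n_proposed &lt;- 0
while (length(samples) &lt; 1000) {
  xi &lt;- runif(1)                  # draw from q
  u  &lt;- runif(1)
  n_proposed &lt;- n_proposed + 1
  if (u * M * q(xi) &lt; p(xi)) samples &lt;- c(samples, xi)   # accept
}

length(samples) / n_proposed      # empirical acceptance rate
</code></pre>

<p>Running this, the empirical acceptance rate comes out close to $1/M$ (here $0.4$), even though whether each individual proposal is accepted still depends on evaluating $p(x_i)$.</p>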
36,531
<p>I asked a group of subjects to make a series of 12 binary choices regarding preferences. </p> <p>Let's say for arguments sake, these were between ugly (<code>ug</code>), attractive (<code>att</code>), and neutral (<code>neut</code>) faces. Hence, we have 4 <code>ug</code> vs <code>att</code>, 4 <code>ug</code> vs <code>neut</code> and 4 <code>att</code> vs <code>neut</code> choices. For each subject I summed the number of times each face was chosen. Hence, I have a 3 column table comprising a score (max 8) for <code>Att</code>, <code>Ug</code> and <code>Neut</code> for each subject. Each row sums to 12 hence the variables are negatively correlated.</p> <p>My questions:</p> <ul> <li>Are attractive faces preferred to ugly and if so:</li> <li>Is this driven by an attraction to <code>att</code> or an aversion to <code>ug</code> or both? - this is why we have choices with the neutral faces.</li> </ul> <p>I originally thought to do a repeated measures ANOVA followed by <em>post hoc</em> tests to look for differences in ratings but i'm wondering if the fact that the DVs all sum to a constant is problematic because in essence the third variable - say $neut = 12-(ug+att)$. If so, is MANOVA the way to go, or how about chi-square?</p>
36,534
<p>Assuming I have a data set with $d$ dimensions (e.g. $d=20$) so that each dimension is i.i.d. $X_i \sim U[0;1]$ (alternatively, each dimension $X_i \sim \mathcal N[0;1]$) and independent of each other.</p> <p>Now I draw a random object from this dataset and take the $k=3\cdot d$ nearest neighbors and compute PCA on this set. In contrast to what one might expect, the eigenvalues aren't all the same. In 20 dimensions uniform, a typical result looks like this:</p> <pre><code>0.11952316626613427, 0.1151758808663646, 0.11170020254046743, 0.1019390988585198, 0.0924502502204256, 0.08716272453538032, 0.0782945015348525, 0.06965903935713605, 0.06346159593226684, 0.054527131148532824, 0.05346303562884964, 0.04348400728546128, 0.042304834600062985, 0.03229641081461124, 0.031532033468325706, 0.0266801529298156, 0.020332085835946957, 0.01825531821510237, 0.01483790669963606, 0.0068195084468626625 </code></pre> <p>For normal distributed data, the results appear to be very similar, at least when rescaling them to a total sum of $1$ (the $\mathcal N[0;1]^d$ distribution clearly has a higher variance in the first place).</p> <p>I wonder if there is any result that predicts this behavior? I'm looking for a test if the series of eigenvalues is somewhat regular, and how many of the eigenvalues are as expected and which ones significantly differ from the expected values.</p> <p>For a given (small) sample size $k$, is there a result if a correlation coefficient for two variables is significant? Even i.i.d. variables will have a non-0 result occasionally for low $k$.</p>
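<p>For what it's worth, the experiment described above is easy to replay in a few lines of base R (the constants below are arbitrary), which makes it easy to repeat over many query points and look at the spread of the rescaled eigenvalues:</p>

<pre><code>set.seed(1)
N &lt;- 10000; d &lt;- 20; k &lt;- 3 * d

X &lt;- matrix(runif(N * d), N, d)          # i.i.d. U[0,1]^d data
x0 &lt;- X[sample(N, 1), ]                  # random query point

dist2 &lt;- rowSums(sweep(X, 2, x0)^2)      # squared distances to the query
nn &lt;- order(dist2)[1:k]                  # its k nearest neighbours (incl. itself)

ev &lt;- prcomp(X[nn, ])$sdev^2             # PCA eigenvalues of the neighbourhood
round(ev / sum(ev), 4)                   # rescaled to sum to 1, as in the question
</code></pre>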
73,952
<p>I know that widget X has a population of $N_x$. However, widget Y has an unknown population that I'd like to estimate. </p> <p>Both widgets appear with differing frequencies over time as I sample them.</p> <pre><code>Time   X counts   Y counts
  1       120         2
  2       212         3
  3       321         5
  4       149         0
  5       321         1
</code></pre> <p>What is the total population of Y? With what confidence? I'm particularly interested in any special considerations that I may need to make when widget Y is much rarer than the reference widget X.</p> <p>Best,</p> <p>Paul</p>
48,181
<p>A new virus breaks out on a cruise ship. I want to test the hypothesis that males and females are equally likely to contract the virus.</p> <p>I am going to test 100 men and 100 women. Presumably if I find 87 women infected and 89 men I cannot safely reject the null. If on the other hand I find 11 men are affected and 20 women this would seem a reasonable basis on which to reject the null. </p> <p>So before running the test I want to define a rejection region. The rejection region is to be defined so that the probability of rejecting the null given that it is true is <em>at most</em> 5%. Since this is a composite null – the probability of getting the disease can range from 0 to 1 – this condition must hold for each and every member of the null.</p> <p>How do I construct such a region?</p>
37,289
<p>We often average models together to create an aggregate prediction model. Some recent research suggests that simple model averages perform as well or better than model averages weighted by functions of information criterion scores. Weighted or simple, model averaging often (but not always) performs better than choosing the model with the best score. Many information criterion scores can be seen as approximations to different cross-validation schemes. Even though one model performs worse in cross-validation than another, it is still informative. I argue that an average model with weights as functions of cross-validation scores makes sense. Yet most people who do cross-validation to tune hyperparameters just choose the value of the hyperparameter that minimizes cross-validation error. In what circumstances might we expect a cross-validation-error-weighted average model to perform better than a single "best" model? I would love it if someone could point me to the most relevant research on this. Thank you.</p>
37,290