question: string, lengths 37 to 38.8k
group_id: int64, 0 to 74.5k
<p>I recently read a paper in which the authors claim that in order to compare the forecasting performance of two non-nested models, models A and B, a valid procedure is to fit models A and B on the same data set, and compare the average likelihood of the fitted models computed on the hold-out data set. All that is required is that models A and B are expressed as probability densities for the same variable. I am using the language of the paper: strictly speaking the quantities being compared are fitted models evaluated at data points, not likelihoods. Accepting this likelihood as the forecasting accuracy metric, this method for checking forecast accuracy has some intuitive appeal, although I have misgivings. Can these likelihoods be meaningfully compared without some sort of normalization? The out of sample probabilities will not add up to one. I'm not able to come up with a straightforward example in which this test would give a spurious result.</p> <p>update: I was able to produce a simple example in which direct comparison of out of sample likelihoods gave a misleading result: let the dependent variable y be a linear trend plus normally distributed error term, and let the explanatory variable x be a linear trend plus independent normally distributed error term. Generate 100 points for y and x. Model A is a linear regression, while for model B I used a regression with Student t-distributed errors (with two degrees of freedom). I trained on the first 50 points and tested on the second set of 50 points, and repeated with training and test sets interchanged. I repeated with three choices of variance for the data generating process. Model B gave higher average out of sample likelihoods in all cases. This example is a bit contrived but does illustrate my concern.</p>
74,016
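<p>A minimal R sketch of the simulation described in the question above (all names are illustrative; the average log predictive density stands in for the "average likelihood", and the Student-t model is fitted by maximum likelihood via <code>optim</code>, which is only one of several ways to do it):</p> <pre><code>set.seed(1)
n  &lt;- 100
x  &lt;- 1:n + rnorm(n, sd = 2)            # linear trend plus noise
y  &lt;- 1:n + rnorm(n, sd = 2)
tr &lt;- 1:50; te &lt;- 51:100                # training / hold-out split

# Model A: Gaussian errors
fitA &lt;- lm(y[tr] ~ x[tr])
muA  &lt;- coef(fitA)[1] + coef(fitA)[2] * x[te]
llA  &lt;- mean(dnorm(y[te], muA, summary(fitA)$sigma, log = TRUE))

# Model B: t errors with 2 df, scale parameterised on the log scale
nllB &lt;- function(p) -sum(dt((y[tr] - p[1] - p[2] * x[tr]) / exp(p[3]),
                            df = 2, log = TRUE) - p[3])
fitB &lt;- optim(c(0, 1, 0), nllB)
muB  &lt;- fitB$par[1] + fitB$par[2] * x[te]
llB  &lt;- mean(dt((y[te] - muB) / exp(fitB$par[3]), df = 2, log = TRUE) - fitB$par[3])

c(modelA = llA, modelB = llB)           # average out-of-sample log densities
</code></pre> <p>Swapping the roles of the two halves and varying the error variance, as the update describes, is then a matter of re-running the same block.</p>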
<p>Which 2-tailed test is best to use to compare means/medians when one variable has a normal distribution and the other does not?</p>
39,451
<p>Is there a difference between ANCOVA (as performed under the 'General Linear Model (GLM)') and Hierarchical Regression (as performed under 'Regression') in SPSS?</p> <p>I am testing the main effects and interaction of X1 (continuous) and X2 (categorical) on Y (continuous).</p> <p>I understand GLM incorporates regression and one of the main advantages of using GLM over the regression function is that GLM (factor) creates dummy coding. In regression, this must be done prior to inputting the categorical factor in the analysis box. The same also applies to the interaction product.</p> <p>I personally find GLM (ANCOVA) output easier to understand.</p>
74,017
<p>Can anyone tell me what exactly a cross-validation analysis gives as a result? Is it just the average accuracy, or does it also give a model with its parameters tuned?</p> <p>I ask because I heard somewhere that cross-validation is used for parameter tuning.</p>
36,638
<p>I am studying a population of individuals who all begin with a measurable score of interest (ranging from -2 to 2) [call it "old"], then they all undergo a change to a new score (also ranging from -2 to 2) ["new"]. Thus all the variation is in the change (which can be positive or negative), and there are also a variety of predictors that help to explain variation in the amount of change.</p> <p>My initial model is simply:</p> <pre><code>change = a + bx + e </code></pre> <p>where x is my vector of predictors.</p> <p>But now I'm concerned that some of these predictors could be correlated with the baseline (old) score. Is this, then, a better specification?</p> <pre><code>change = a + bx + old + e </code></pre> <p>Or perhaps</p> <pre><code>new = a + bx + old + e </code></pre> <p>Thanks!</p>
74,018
<p>I ran a within-subjects repeated measures experiment, where the independent variable had 3 levels. The dependent variable is a measure of correctness and is recorded as either correct / incorrect. Time taken to provide an answer was also recorded.</p> <p>A within-subjects repeated measures ANOVA is used to establish whether there are significant differences in correctness (DV) between the 3 levels of the IV, and there is a significant difference. Now, I'd like to analyze whether there are significant differences in the time taken to provide the answers when the answers are 1) correct, and 2) incorrect.</p> <p>My problem is: Across the levels there are different numbers of correct / incorrect answers, e.g. level 1 has 67 correct answers, level 2 has 30, level 3 has 25. </p> <p>How can I compare the time taken for all correct answers across the 3 levels? I think this means it's unbalanced? Can I do 3 one-way ANOVAs to do a pairwise comparison, while adjusting p downwards to account for each comparison?</p> <p>Thanks</p>
38,142
<p>Based on the responses from the question <a href="http://stats.stackexchange.com/questions/28086/how-to-draw-a-side-by-side-plot-mentioned-in-graphical-display-as-an-aid-to-ana">How to draw a side-by-side plot mentioned in &quot;Graphical Display as an Aid to Analysis&quot;</a>, I am considering implementing the function myself. The problem is that I don't know how to utilize the result of the <code>aov()</code> function. I need to see how many factors there are, the names of the factors, what their main effects and interaction effects are, etc. I can get residuals from <code>aov(y~x)$residuals</code>. I consulted the documentation for <code>aov()</code>, but I still have no clue how to do this. There must be some way to do this, though presumably more complex than something like <code>t.test(x,y)$p.value</code>.</p> <p>Would you help me?</p>
41,740
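<p>For what it is worth, a small sketch of how one might dig those pieces out of a fitted <code>aov</code> object (using a built-in data set; exactly which components the side-by-side plot needs is a guess at what the question is after):</p> <pre><code>fit &lt;- aov(breaks ~ wool * tension, data = warpbreaks)

attr(terms(fit), "term.labels")            # names of main effects and interactions
res &lt;- residuals(fit)                      # same as fit$residuals
eff &lt;- model.tables(fit, type = "effects") # estimated main and interaction effects
eff$tables$wool                            # e.g. the wool main effects
summary(fit)                               # the usual ANOVA table
</code></pre>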
<p>Standard deck has 52 cards, 26 Red and 26 Black. A run is a maximum contiguous block of cards, which has the same color.</p> <p>Eg.</p> <ul> <li>(R,B,R,B,...,R,B) has 52 runs.</li> <li>(R,R,R,...,R,B,B,B,...,B) has 2 runs.</li> </ul> <p>What is the expected number of runs in a shuffled deck of cards?</p>
74,019
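<p>One standard way to set the computation up for the question above (under the usual uniform-shuffle assumption) is with indicator variables for "the colour changes between positions $i-1$ and $i$"; each adjacent pair differs in colour with probability $26/51$, so</p> <p>$$E[\text{runs}] = 1 + \sum_{i=2}^{52} P(\text{colour}_i \neq \text{colour}_{i-1}) = 1 + 51\cdot\frac{26}{51} = 27.$$</p>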
<p>Following the recent questions we had <a href="http://stats.stackexchange.com/questions/1818/how-to-determine-the-sample-size-needed-for-repeated-measurement-anova/1823#1823">here</a>.</p> <p>I was hoping to find out if anyone has come across or can share <strong>R code for performing a custom power analysis based on simulation for a linear model?</strong></p> <p>Later I would obviously like to extend it to more complex models, but lm seems the right place to start. Thanks.</p>
74,020
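<p>A minimal sketch of the kind of simulation being asked for in the question above (the effect size, error SD and significance level are made-up values to be replaced by whatever the study design calls for):</p> <pre><code>power_lm &lt;- function(n, beta = 0.5, sd = 1, alpha = 0.05, nsim = 1000) {
  pvals &lt;- replicate(nsim, {
    x &lt;- rnorm(n)
    y &lt;- beta * x + rnorm(n, sd = sd)        # assumed data-generating model
    summary(lm(y ~ x))$coefficients[2, 4]    # p-value for the slope
  })
  mean(pvals &lt; alpha)                        # estimated power
}

sapply(c(20, 50, 100), power_lm)             # power at a few sample sizes
</code></pre> <p>Wrapping this in a search over <code>n</code> gives the usual "smallest n with power at least 0.8" answer, and the data-generating line is the natural place to extend it to more complex models.</p>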
<p>What is the distribution of the following monomial? $$X^a \cdot Y^b$$ where $X$ and $Y$ are normal random variables and $a$ and $b$ are natural numbers.</p> <p>For example, when $X \sim N(0,1)$, $a=2$, and $b=0$ it is a Chi-squared distribution, which has a variance of 2.</p> <p>What if we have $n$ independent variables $X_1, X_2, \dots , X_n$, with $X_i \sim N(0,\sigma^2)$ and some natural numbers $p_1, p_2, \dots,p_n$. What can we say about the variance of the following r.v.?</p> <p>$$X_1^{p_1} \cdot X_2^{p_2} \cdots X_n^{p_n}$$</p>
74,021
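<p>For the independent case described at the end of the question above, a sketch of the variance computation using only independence and the standard normal moments $E[X^{2k}] = \sigma^{2k}(2k-1)!!$ and $E[X^{2k+1}] = 0$ for $X \sim N(0,\sigma^2)$:</p> <p>$$\operatorname{Var}\left(\prod_{i=1}^n X_i^{p_i}\right) = \prod_{i=1}^n E\left[X_i^{2p_i}\right] - \left(\prod_{i=1}^n E\left[X_i^{p_i}\right]\right)^2 = \prod_{i=1}^n \sigma^{2p_i}(2p_i-1)!! - \left(\prod_{i=1}^n E\left[X_i^{p_i}\right]\right)^2,$$ where the second product vanishes as soon as any $p_i$ is odd. With $n=1$, $\sigma=1$, $p_1=2$ this reproduces the variance of 2 quoted for the chi-squared example.</p>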
<p>When studying time series, I once heard the statement that</p> <blockquote> <p>unit root test is less powerful.</p> </blockquote> <p>I have two questions:</p> <ol> <li>What does it mean for a test to be powerful?</li> <li>What causes the unit root test to be less powerful?</li> </ol>
36,642
<p>My lecturer just covered the <strong>Lindeberg-Levy central limit theorem</strong> and the multivariate version, the <strong>Lindeberg-Feller CLT</strong>. I understood the basic concept and I can derive it, etc. But it would help my understanding a lot if someone could explain how all this is used in real-life applications of <strong>econometric</strong> analysis.</p> <p>I've read some claims that the CLT is nice only on a piece of paper.</p> <p>Some really cool industry applications or references would be appreciated.</p>
74,022
<p>I tried to find some relation between these distance (loss) measures, but couldn't find any references. However, I think it must be something like this: $$ \sqrt 2*D_{KL} &lt; L_1 &lt; L_2 $$ Is that right?</p>
36,644
<p>Which correlation coefficient is the most appropriate to compare 2 time series? I want to compare the variation of one variable for 2 regions; I have regional data for the last 30 years. Is Pearson correlation OK, or should I rely on Kendall's tau-b or Spearman's rho, and why? I tried to google it and analyse what I found, but I'm still not sure.</p>
74,023
<p>I have learnt the definition of AIC for parametric models. What I want to know is whether there is a semi-parametric version of AIC. Have you heard of one? If so, please share a link so I can have a look. </p>
41,742
<p>Here is the problem: I am making a machine learning algorithm that takes the inputs and outputs of some software I've written, and I don't know how many datalines to produce to get results that are a 'good' fit. I realize the answer is 'the more the better' but I'm looking for any sort of minimum requirement. I also realize that the greater the number of variables the greater the number of datalines required. </p> <p>So I'm looking for a rule of thumb for number of variables to number of datalines. </p>
74,024
<p>I have implemented a three-way ANOVA with type III sums of squares in C++. Since some of my experiments (observations) are more important (more informative), I want to give them a higher weight in my analysis. For example, an experiment which is very important has a weight of 10, a relatively important one has a weight of 5, and so on. To implement this, I repeat each such observation according to its corresponding weight: 10 times, 5 times, ...<br> I used the same concept in a 2-way ANOVA, but there I used the conventional sum-of-squares formulas, because my design was balanced. So there I just multiplied the value of each observation, and the number of times it was observed, by the weight.<br> Here, repeating the items makes the design matrix very big and increases the computational complexity. Now the problem is: what if I don't want to repeat them, but use a weight matrix instead? And what if the weights are not integer values (so I really can't repeat one item 0.3 times)?<br> I found this formula <a href="http://www.stat.umn.edu/pub/macanova/docs/manual/manchp03.pdf" rel="nofollow">here</a>: </p> <pre><code>H^ = X(X'WX)^(-1)XW </code></pre> <p>So I put my weights into the W matrix and used this formula. To check whether it works, I used the conventional method, repeated each observation as often as its weight value, and gave that to MATLAB. But I got different SS values.</p> <p>Could you please kindly tell me how I should change this formula? </p> <p>Thanks,</p>
74,025
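<p>A small R check of the repeat-versus-weight equivalence discussed in the question above (for integer weights, weighted least squares and physically repeating the rows give the same fitted coefficients; the data here are made up):</p> <pre><code>set.seed(1)
d &lt;- expand.grid(A = factor(1:2), B = factor(1:3), C = factor(1:2), rep = 1:2)
d$y &lt;- rnorm(nrow(d))
w   &lt;- sample(1:5, nrow(d), replace = TRUE)                  # integer "importance" weights

fit_w   &lt;- lm(y ~ A * B * C, data = d, weights = w)          # weighted fit
fit_rep &lt;- lm(y ~ A * B * C, data = d[rep(1:nrow(d), w), ])  # rows repeated w times

all.equal(coef(fit_w), coef(fit_rep))                        # TRUE: identical coefficients
</code></pre> <p>The <code>weights</code> argument also accepts non-integer values, which covers the 0.3 case mentioned in the question.</p>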
<p>I iterated my 10-fold cross validation 100 times for several methods. Now I want to use a t-test to test if the results are significant. However, I'm not sure what the sample size is. Is the sample size the original number of samples, or is it the original number of samples x 100?</p> <h2>edit:</h2> <p>For university we need to classify 3 cancer types and give an estimation of how well our model will perform. We received a dataset with 100 samples. We split the data up into a training and test set using stratified sampling with a ratio of 0.3 and 0.7. The resulting training set consists of 69 samples, and the test set of 31 samples.</p> <p>We used repeated cross validation because of this paper: <a href="http://www.cse.iitb.ac.in/~tarung/smt/papers_ppt/ency-cross-validation.pdf" rel="nofollow">http://www.cse.iitb.ac.in/~tarung/smt/papers_ppt/ency-cross-validation.pdf</a></p> <p>The repeated cross-validation is done on the same training set, but the folds are randomly chosen every time, so they should be different each time.</p> <p>The significance we want to test is whether the accuracy of one model is significantly better than the accuracy of a different model. </p>
46,718
<p>I have a collection of $n$ datapoints $(y_i,\bf{x}_i)$ in $\mathbb{R}^{p+1}$ and would like to estimate the following model in <code>R</code>:</p> <p>$$\underset{\bf{b}\in\mathbb{R}^{p}}{\arg\min}\;\sum_{i=1}^n(y_i-\bf{x}_i'\bf{b})^2$$ </p> <p>$$\text{s.t.}\;\;\;0\leq \bf{x}_i'\bf{b}\leq 1\;\;\;\forall i$$</p> <p>Does anybody have a pointer to an efficient way of doing this? Is there a re-parametrization of the OLS problem that would allow for this? </p>
74,026
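<p>One way to attack this in <code>R</code> is as a quadratic program; a sketch using the <code>quadprog</code> package, assuming $X$ has full column rank so that $X'X$ is positive definite (the data below are simulated placeholders):</p> <pre><code>library(quadprog)

set.seed(1)
n &lt;- 100; p &lt;- 3
X &lt;- matrix(rnorm(n * p), n, p)
y &lt;- X %*% c(0.2, 0.3, 0.1) + rnorm(n, sd = 0.05)

# minimise ||y - Xb||^2  subject to  0 &lt;= Xb &lt;= 1 (row-wise)
Dmat &lt;- crossprod(X)                   # X'X
dvec &lt;- drop(crossprod(X, y))          # X'y
Amat &lt;- cbind(t(X), -t(X))             # constraints in the form A'b &gt;= b0
bvec &lt;- c(rep(0, n), rep(-1, n))       # Xb &gt;= 0  and  -Xb &gt;= -1

b_hat &lt;- solve.QP(Dmat, dvec, Amat, bvec)$solution
range(X %*% b_hat)                     # all fitted values inside [0, 1]
</code></pre>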
<p>Let's assume we have predictors x1-x5 and dependent variables y1-y9 in one dataset. We have a certain hypothesis about x1: it should have differently strong effects on our 9 dependent variables y1-y9. </p> <p>We perform 9 regressions, and find that x1 significantly predicts all y with p&lt;.001. Now we want to find out whether these (highly significant) effects are different from each other (just because they are significant effects does not mean they are equally strong). Two questions: </p> <p>(1) What information that a regression provides us with would give us insight into this question? Unstandardized beta? Standardized beta? t?</p> <p>(2) Is there a statistical test we can perform to find out whether the strengths of predictions are different between (y1 ON x1) and (y2 ON x1) and (y3 ON x1)? </p> <p>Software I can use is R and SPSS; field is psychology/medicine. </p>
36,649
<p>I'm trying to program impulse response functions for a VAR model using Cholesky decomposition. The thing is I do not completely understand how I should do this when I read in the literature. Suppose I have:</p> <p>$$ \begin{bmatrix}x_t\\y_t\\z_t\end{bmatrix}=\begin{bmatrix}\alpha_1\\\alpha_2\\\alpha_3\end{bmatrix}+\begin{bmatrix}b_{11}&amp;b_{12}&amp;b_{13}\\b_{21}&amp;b_{22}&amp;b_{23}\\b_{31}&amp;b_{32}&amp;b_{33}\end{bmatrix}\begin{bmatrix}x_{t-1}\\y_{t-1}\\z_{t-1}\end{bmatrix}+\begin{bmatrix}u_{1t}\\ u_{2t}\\ u_{3t}\end{bmatrix} $$ which we can write as $$ \mathbf{x}_t=\mathbf{a}+\mathbf{B}\mathbf{x}_{t-1}+\mathbf{u}_t. $$</p> <p>Further, suppose the covariance matrix of $\mathbf{u}_t$ is $$ Cov(\mathbf{u}_t)=\Sigma_u=PP^\prime. $$</p> <p>Now, let's say I want the impulse responses to a unit shock in $u_{1t}$. I want the effects then on say $\mathbf{x}_t, \, \mathbf{x}_{t+1}, \, \mathbf{x}_{t+2}$ and $\mathbf{x}_{t+3}$. As I understand it, the orthogonalization is done by multiplying the error vector with $P$. Let's call the responses in period $t$ to the shock $\mathbf{x}_t^*$. Would what I am interested in then be (assume unit shock in $u_{1t}$ such that $\mathbf{u}_t^*=\begin{bmatrix}1&amp;0&amp;0\end{bmatrix}^\prime$):</p> <p>$$ \mathbf{x}_t^*=P\mathbf{u}^*_t=P\begin{bmatrix}1\\0\\0\end{bmatrix}\\ \mathbf{x}_{t+1}^*=\mathbf{B}\mathbf{x}^*_t=\mathbf{B}P\mathbf{u}^*_t\\ \mathbf{x}_{t+2}^*=\mathbf{B}\mathbf{x}^*_{t+1}=\mathbf{B}\mathbf{B}P\mathbf{u}^*_t\\ \mathbf{x}_{t+3}^*=\mathbf{B}\mathbf{x}^*_{t+2}=\mathbf{B}\mathbf{B}\mathbf{B}P\mathbf{u}^*_t $$</p> <p>Now, extend this to include more lags (for example 4). The model is then $$ \mathbf{x}_t=\mathbf{a}+\sum_{k=1}^4\mathbf{B}_k\mathbf{x}_{t-k}+\mathbf{u}_t. $$</p> <p>Thus the impulses are:</p> <p>$$ \mathbf{x}_t^*=P\mathbf{u}^*_t=P\begin{bmatrix}1\\0\\0\end{bmatrix}\\ \mathbf{x}_{t+1}^*=\mathbf{B}_1\mathbf{x}^*_t=\mathbf{B}_1P\mathbf{u}^*_t\\ \mathbf{x}_{t+2}^*=\mathbf{B}_1\mathbf{x}^*_{t+1}+\mathbf{B}_2\mathbf{x}^*_t=\mathbf{B}_1\mathbf{B}_1P\mathbf{u}^*_t + \mathbf{B}_2P\mathbf{u}^*_t\\ \mathbf{x}_{t+3}^*=\mathbf{B}_1\mathbf{x}^*_{t+2}+\mathbf{B}_2\mathbf{x}^*_{t+1}+\mathbf{B}_3\mathbf{x}^*_{t}=\mathbf{B}_1\mathbf{B}_1\mathbf{B}_1P\mathbf{u}^*_t + \mathbf{B}_1\mathbf{B}_2P\mathbf{u}^*_t+\mathbf{B}_2\mathbf{B}_1P\mathbf{u}^*_t+\mathbf{B}_3P\mathbf{u}^*_t $$</p> <p>Is this line of thinking correct? If so, then this simple R code should be fine:</p> <pre><code>library(vars) set.seed(1) x &lt;- rnorm(100) set.seed(2) y &lt;- rnorm(100) set.seed(3) z &lt;- rnorm(100) data &lt;- cbind(x, y, z) model &lt;- VAR(data, p=4, type = "const") u &lt;- matrix(c(1, 0, 0), ncol=1) P &lt;- chol(cov(residuals(model))) B1 &lt;- Acoef(model)[[1]] B2 &lt;- Acoef(model)[[2]] B3 &lt;- Acoef(model)[[3]] B4 &lt;- Acoef(model)[[4]] xt &lt;- P %*% u xt1 &lt;- B1 %*% xt xt2 &lt;- B1 %*% xt1 + B2 %*% xt xt3 &lt;- B1 %*% xt2 + B2 %*% xt1 + B1 %*% xt </code></pre> <p>Any input would be very much appreciated!</p>
36,651
<p>I estimated a Partial Least Squares model where the X matrix had normalized columns. Now I want to predict the value for a new instance (which is a frequency vector summing to one.) I assume that if I just use the raw frequency values, the predicted value won't be on the same scale as the scenario where my 'new' instance was taken from the normalized X matrix. (i.e. Comparing the fitted values of the model with predicted value of new instance.)</p> <p>I was thinking of adding the new instance as the bottom row of the original non-normalized X matrix, normalizing, and then using the values from this new bottom row to predict.</p> <p>Alternatively, I could standardize by using the column means and standard deviations from the original non-normalized X. </p> <p>Is one method preferred to the other? Is there a better way?</p>
50
<p>My work involves building statistical / econometric models using R and SPSS Modeler. I am also doing my PhD (part time) in econometrics. In order to do more advanced data / model visualisation, I am thinking about picking up another programming language. Any suggestion will be much appreciated. </p>
74,027
<p>A treatment was given to one hand of a subject, and a single outcome metric is measured for both hands, twice pre and several times post treatment. </p> <p>What is best practice for assessing effectiveness of treatment? </p> <p>Treated and Untreated "groups" really are paired.</p>
74,028
<p>I have to design an experiment where tool wear is a factor (not a response). I'm not trying to study how to minimize tool wear; I'm trying to study the effect of tool wear, along with 3 other factors, on the number of defects of a certain type.</p> <p>What kind of design should I use? I considered dividing the tool wear into 3 conditions (low, med, high) and then running each treatment within the conditions. But I'm not sure that is a correct method. Tool wear is considered to be a highly significant contributor to the defect rate. Interactions are also important here.</p> <p>This is a forging operation. The other factors are billet temp, spotting location and pancake height. I have plenty of replication opportunities and plan at least 10 parts for each treatment.</p> <p>Many Thanks - Kelly</p>
36,653
<p>What is a good loss function for a predictive model used by gamblers?</p> <p>I've been reading a bit about loss functions recently. I've always just gone with MSE (e.g., for a couple of neural network projects) and didn't ask questions. I didn't realize exactly how arbitrary MSE actually is. And speaking of this, could somebody explain a simple practical situation where MSE can be derived as the "correct" loss function?</p> <p>Anyway, I came across Log-Loss (which is the same as cross entropy) and am interested because I'm curious about some probabilistic models. I understand how one derives this loss function from information theory, but when we talk about probabilities, we're often involved in some form of gambling on an outcome. It's not clear to me that efficient transmission of information translates to the type of utility one is usually looking for in a predictive/probabilistic model.</p> <p>If, for example, I had a model that was meant to predict the winner of the 2012 US presidential election, and I had used this to move money around on a futures market like Intrade, how might I determine a loss function for my prediction - assuming I'm able to continue making bets as the market fluctuates? The same type of thing should apply for any market where I am able to make handicapped bets on the occurrence of some event.</p> <p>Or is this really regret, and is regret completely different from a loss function?</p>
36,654
<p>I have hourly utility bills for two years. The first year's data are from before a retrofit was performed on a home and the second year is after the retrofit was performed. </p> <p>How do I compare the two time series statistically to make claims of demand savings during a specific time period of the day? Can this be done in R, and if so, how?</p>
48,321
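<p>A very rough R sketch of one way to frame the comparison asked about above, assuming an (invented) data frame <code>bills</code> with columns <code>kwh</code>, <code>hour</code> (0-23) and <code>period</code> ("pre"/"post"); the hour-by-period interaction terms estimate the demand change for each hour of the day, and weather or other confounders would still need to be added:</p> <pre><code>fit &lt;- lm(kwh ~ factor(hour) * period, data = bills)
summary(fit)    # period and hour:period terms = estimated change by hour
confint(fit)    # interval estimates for the hourly changes
</code></pre>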
<p>My first and I think naive question here.</p> <p>I am trying to model a certain business, and the simplest model I am willing to test is:</p> <ol> <li>there is a bag of differently biased coins.</li> <li>every step, a single coin is chosen with equal probability.</li> <li>the chosen coin is flipped and returned to the bag.</li> </ol> <p>The business goal is to predict the rate of heads in future trials (probably using Bayesian inference).</p> <p>This got me thinking - am I not over-complicating things? Isn't this process (observationally) equivalent to a single biased coin?</p> <p>Thanks in advance!</p>
74,029
<p>I want to study relations between sites categories and species abundances through PCoA or CAP using vegan::capscale. For doing so, I overlay species scores on my ordination. However, I am getting confused with the different scaling options and their interpretation. From what I understood: With scaling=1, arrow shows the direction from the origin for which sites have larger abundances for this species. With scaling=2, we want to analyze the correlations among species. Species that have a small angle between their arrows are expected to be strongly positively correlated. With scaling=-2, const = sqrt(nrow(dune)-1), we get correlations between species and axes. This comes from <a href="https://stat.ethz.ch/pipermail/r-sig-ecology/2010-August/001448.html" rel="nofollow">Jari Oksanen</a>.</p> <p>I did compare the 3 different options (see codes below), differences seem to be only a matter of arrow length. Hence, by considering that the most important are the relative length of arrows (relative to each other), am I allowed to use scaling=-2 (species axes correlations) for both analyzing site-species associations and species-species correlations? One more questions, is there a well admitted threshold in the value of species-axes correlations, below which we consider that species are not correlated (I mean species don't differ in abundance across the sites, excluding the cases of non linear relationships). If yes, do this threshold change depending on the scaling method used and on either the ordination is constrained or not. Morover, if I want to overlay a vector for a quantitative environnemental variable, may I use also scaling=-2, from which correlation threshold?</p> <pre><code>library(vegan) library(ggplot2) library(grid) data(dune) data(dune.env) dune=sqrt(dune) mcap=capscale(dune~1,dist="bray") #PCoA #sites scores dims=c(1,2) site=scores(mcap,display="wa",choices=dims) site.env=cbind(site,dune.env) #spider for management levels dev.new();plot(mcap);coord_spider=with(dune.env,ordispider(mcap,Management,col="blue",label=F,choices=dims));dev.off() coord_spider=as.data.frame(cbind(coord_spider[,],site.env)) names(coord_spider)[1:4]=c("MDS1","MDS2","MDS1end","MDS2end") #species scores #scaling1 cor.min=0.6 #below this threshold, arrows will be not plotted because correlation is considered too much week cor_sp=as.data.frame(scores(mcap, dis="sp", scaling=1,choices=dims)) cor_sp$cor=with(cor_sp,sqrt(MDS1^2+MDS2^2)) cor_sp$sup=FALSE;cor_sp$sup[cor_sp$cor&gt;=cor.min]&lt;-TRUE cor_sp$lab=row.names(cor_sp) cor_sp=cor_sp[cor_sp$sup==TRUE,] cor_sp_s1=cor_sp #scaling2 cor.min=0.6 #below this threshold, arrows will be not plotted because correlation is considered too much week cor_sp=as.data.frame(scores(mcap, dis="sp", scaling=2,choices=dims)) cor_sp$cor=with(cor_sp,sqrt(MDS1^2+MDS2^2)) cor_sp$sup=FALSE;cor_sp$sup[cor_sp$cor&gt;=cor.min]&lt;-TRUE cor_sp$lab=row.names(cor_sp) cor_sp=cor_sp[cor_sp$sup==TRUE,] cor_sp_s2=cor_sp #scaling -2, correlations between species and axes #from Jari Oksanen at https://stat.ethz.ch/pipermail/r-sig-ecology/2010-August/001448.html cor.min=0.6 #below this threshold, arrows will be not plotted because correlation is considered too much week cor_sp=as.data.frame(scores(mcap, dis="sp", scaling=-2, const = sqrt(nrow(dune)-1),choices=dims)) cor_sp$cor=with(cor_sp,sqrt(MDS1^2+MDS2^2)) cor_sp$sup=FALSE;cor_sp$sup[cor_sp$cor&gt;=cor.min]&lt;-TRUE cor_sp$lab=row.names(cor_sp) cor_sp=cor_sp[cor_sp$sup==TRUE,] cor_sp_s3=cor_sp #plot mon.plot1=ggplot(data=site.env)+theme_bw()+ 
geom_point(aes(x=MDS1,y=MDS2,color=Management))# les sites #add spider mon.plot2=mon.plot1+ geom_segment(data=coord_spider,aes(x=MDS1,y=MDS2,xend=MDS1end,yend=MDS2end,colour=Management),lwd=0.5,alpha=1/3)+ geom_point(data=coord_spider,aes(x=MDS1,y=MDS2,colour=Management),cex=3,shape=19) #add species scores as vector #scaling1 mon.plot_s1=mon.plot2+ggtitle("Scaling 1")+ geom_point(aes(x=0,y=0),shape=21,fill="black",color="black",size=3)+#central point geom_segment(data=cor_sp_s1,aes(x=0,y=0,xend=MDS1,yend=MDS2),arrow = arrow(length = unit(0.3,"cm")))+#arrows geom_text(data = cor_sp_s1, aes(x = MDS1, y = MDS2, label = lab), size = 3)#labels #scaling2 mon.plot_s2=mon.plot2+ggtitle("Scaling 2")+ geom_point(aes(x=0,y=0),shape=21,fill="black",color="black",size=3)+#central point geom_segment(data=cor_sp_s2,aes(x=0,y=0,xend=MDS1,yend=MDS2),arrow = arrow(length = unit(0.3,"cm")))+#arrows geom_text(data = cor_sp_s2, aes(x = MDS1, y = MDS2, label = lab), size = 3)#labels #scaling -2, correlations between species and axes mon.plot_s3=mon.plot2+ggtitle("Scaling -2")+ geom_point(aes(x=0,y=0),shape=21,fill="black",color="black",size=3)+#central point geom_segment(data=cor_sp_s3,aes(x=0,y=0,xend=MDS1,yend=MDS2),arrow = arrow(length = unit(0.3,"cm")))+#arrows geom_text(data = cor_sp_s3, aes(x = MDS1, y = MDS2, label = lab), size = 3)#labels #plot everything grid.newpage() pushViewport(viewport(layout = grid.layout(2, 2))) print(mon.plot_s1,vp=viewport(layout.pos.row = 1, layout.pos.col = 1)) print(mon.plot_s2,vp=viewport(layout.pos.row = 2, layout.pos.col = 1)) print(mon.plot_s3,vp=viewport(layout.pos.row = 1, layout.pos.col = 2)) </code></pre>
74,030
<p>This is probably going to sound extremely stupid, but it's part of a larger question, about the sampling distribution of means, that I'm having problems understanding, so please bear with me.</p> <p>If I record the time between planes landing, and have 250 observations, are those 250 observations my "population", or does the term "population" refer to all the values, i.e. if I had sat there till the end of time recording?</p> <p>I'm assuming it's the latter but just want to double check.</p>
74,031
<p>I don't know a lot about sampling methods. </p> <p>I have a large population of size 2,000,000. I used one of those sample size calculators. It says that I need sample size of approximately 10,000.</p> <p>I am trying to find the probability <strong>p</strong> of success for the population. It is not feasible for me to test all 2,000,000 members of the population. That is why I am sampling.</p> <p>I assume that the sample size calculator means a simple random sample without replacement. I have read that a simple random sample with replacement ensures that the covariance between two variables is 0<strike>, i.e., independent</strike>. </p> <p>When should one choose with replacement instead of without replacement?</p> <p>If we sample with replacement, then we are simply performing Bernoulli trials. I suppose this makes applying statistical tools (which?) easier.</p> <p>Again, sampling ignoramus here.</p>
74,032
<p>I am trying to estimate the price elasticity of supply for small-scale farmers in Malawi, and I have time series data for 34 years. I have two problems. First, the prices are very low, so when I take their logs I am getting negative values; as a remedy I added 1 to each value and got positive logs, but I don't know how to interpret the results given the 1 that I added. Second, I want to find out the specific way of estimating the model in <code>Stata</code>; I just used regress. I will appreciate your help.</p>
74,033
<p>In the English Championship division, there are 24 teams, 8 of which have names starting with the letter B (e.g. Bolton). Tonight (5th March 2013), all 24 teams in this division are playing each other. By a coincidence, the 8 teams starting with B are playing each other, i.e. 4 of the games involve these 8 teams (there are 2 teams per game!). What is the probability of this happening?</p>
74,034
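<p>For what it is worth, under the (strong) assumption that the 12 fixtures form a uniformly random pairing of the 24 teams, the probability for the question above can be built up by pairing off the B teams one at a time: $$P = \frac{7}{23}\cdot\frac{5}{21}\cdot\frac{3}{19}\cdot\frac{1}{17} = \frac{105}{156009} \approx 6.7\times 10^{-4}.$$ Real fixture lists are not drawn uniformly at random, so this is only a baseline figure.</p>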
<p>I performed principal components analysis on continuous variables describing 16 languages. Using the first two axes, which explain 76% of the variance, I need to calculate the distance between each pair of languages as they appear on those two axes, in order to test, with a Mantel test, the correlation between distances in the linguistic variables and geographic distances. Could anyone help me: how can I do that?</p> <p>cheers</p>
74,035
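<p>A minimal R sketch of one way to do this; <code>lang_vars</code> (the 16-language variable matrix) and <code>geo_dist</code> (the 16 x 16 geographic distance matrix) are placeholder names:</p> <pre><code>library(vegan)                              # for mantel()

pca      &lt;- prcomp(lang_vars, scale. = TRUE)
scores12 &lt;- pca$x[, 1:2]                    # coordinates on the first two axes
d_ling   &lt;- dist(scores12)                  # pairwise distances between languages

mantel(d_ling, as.dist(geo_dist), permutations = 999)
</code></pre>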
<p>I have time series data for a set of cities that goes back for about 10 years. I also have the data at the state level for almost 30 years. There was an event that occurred about 20 years ago, that is captured in the longer, state level data, but not the city data, that I would like to investigate at the city level. </p> <p>What I think might be useful is to create some kind of ARIMA model that regresses the state data as an exogenous variable. If I were to do this, how do I use the model to backfill the city data such that it ends at the same point as where the actual city data starts? Is there already a canonical method of doing something like this? Thanks for any help you can give (literature references, R libraries, etc.)</p>
36,659
<p>What is the best out-of-the-box 2-class classifier? Yes, I guess that's the million dollar question, and yes, I'm aware of the <a href="http://www.no-free-lunch.org/">no free lunch theorem</a>, and I've also read the previous questions:</p> <ul> <li><a href="http://stats.stackexchange.com/questions/258/poll-what-is-the-best-out-of-the-box-2-class-classifier-for-your-application">Poll: What is the best out-of-the-box 2-class classifier for your application?</a></li> <li>and <a href="http://stats.stackexchange.com/questions/5987/worst-classifier">Worst classifier</a></li> </ul> <p>Still, I'm interested in reading more on the subject.</p> <p>What is a good source of information that includes a general comparison of the characteristics, advantage, and features of different classifiers?</p>
48,332
<p>When estimating a confidence interval for the mean, I think both the bootstrap t method and the ordinary nonparametric bootstrap method can apply, but the former requires a little more computation. </p> <p>What are the advantages and disadvantages of the bootstrap t over the ordinary nonparametric bootstrap, and why? </p> <p>Are there references explaining this?</p>
74,036
<p>I have a total population of 31; of these, 20 finished the treatment.</p> <p>I want to run a test in SPSS to identify whether there are any significant differences between the 20 individuals that completed the treatment and the group as a whole.</p> <p>The variables I am trying to compare are age, weight, height and BMI.</p> <p>I have run descriptives for the entire population and the subgroup separately. I can see there is little difference between the groups, but I would like to know the statistical significance.</p>
74,037
<p>(Possibly related to <a href="http://stats.stackexchange.com/questions/17342/is-there-a-nonparametric-equivalent-of-tukey-hsd">Is there a nonparametric equivalent of Tukey HSD?</a>)</p> <p>Given a set of several exponentially distributed variables (representing data which is modeled this way), I'd like to check whether one of them has a significantly higher mean than the others. If the variables were normally distributed, I would have used Tukey's test; however, this is not the case, and I'm looking for as simple alternative as possible. </p> <p><strong>Some background on the data</strong> </p> <p>Alternative methods and markets from which we buy goods are compared every day. Data is aggregated on a daily basis (i.e. I only have prices and number of transactions per day, which translate to a mean). The distribution of funds for the next day is based on previous' day analysis: best method gets most of the funds, and the other methods share the rest. Typically, 5-10 alternative methods are compared, with 10,000 and more transactions per method per day.</p> <p><strong>Current Alternatives</strong></p> <p>Based on my limited knowledge, I came up with two possible ways of comparing methods:</p> <ol> <li><p>Based on the referenced question above, a nonparametric versions of Tukey's test can be used.</p></li> <li><p>Since only the highest-mean variable is compared to the rest of the variables, I think it can be compared to the second highest with <em>bonferroni correction</em>. i.e. use effective alpha = 0.05 / number of variables. This might be very conservative, but since the numbers are high, the results are usable (i.e. most days have significent results).</p></li> </ol> <p><strong>My questions</strong></p> <ol> <li><p>Do the methods above make sense as solutions to the presented case? specifically, method (2) is simple and yields usable results, but I'm not sure it's statistically sound.</p></li> <li><p>I'm interested in any other suggestions for performing this analysis. As noted, I'm more interested in simplicity and generalization, and less on getting the best significance values.</p></li> </ol>
74,038
<p>In the following problem, check that it is appropriate to use the normal approximation to the binomial. Then use the normal distribution to estimate the requested probabilities.</p> <p>It is known that 85% of all new products introduced in grocery stores fail (are taken off the market) within 2 years. If a grocery store chain introduces 68 new products, find the following probabilities. (Round your answers to four decimal places.)</p> <p>(a) within 2 years 47 or more fail</p> <p>(b) within 2 years 58 or fewer fail</p> <p>(c) within 2 years 15 or more succeed</p> <p>(d) within 2 years fewer than 10 succeed</p>
36,663
<p>I am trying to grasp what exactly is "estimated" in the E-step of the algorithm. </p> <p>According to all definitions, in the E-step the "conditional expectation values, or posterior probabilities of the hidden variables" are computed using the Bayes formula (posterior probability = prior (or marginal) probability x likelihood / probability of evidence). Now, my question is: are these posterior probabilities that I "estimate" or "compute" actually numbers, or are they functions of the current iteration's parameters (and they are the ones I need to plug into the M-step)? Here the probability of evidence would be the likelihood calculated with the previous argmax, i.e. the parameters obtained in the previous M-step. </p>
36,664
<p>I would like to set up a 2-way repeated measures ANOVA. I compare 4 models with 4 ages and see if the results differ; this should result in around 16 groups, since 4*4=16 (so far so good).</p> <p>However, for some groups I don't have the data, so I still have 2 factors to test for, but no longer 16 groups, rather 15 or 14.</p> <p>This is not due to mistakes but due to the fact that the groups under this condition are just bad and don't meet standard recommendations. </p> <p>If, however, I try to do the 2-way repeated measures analysis including my missing groups, I can't execute it.</p> <p>Is there a way to handle these missing groups?</p> <p>Greetings </p>
36,665
<p>I have a textual classification problem that consists of two categories- zero an one. Up until now I tried solving it by creating a Document Term Matrix, and to run it through SVM (using RTextTools package). Here's a code snippet: (R)</p> <pre><code>models &lt;- train_models(container, algorithms=c("SVM")) results &lt;- classify_models(container, models) analytics &lt;- create_analytics(container, results) View(summary(analytics)) &gt;&gt;ALGORITHM PERFORMANCE &gt;&gt;SVM_PRECISION SVM_RECALL SVM_FSCORE &gt;&gt; 0.64 0.63 0.63 </code></pre> <p>My questions are as follows:</p> <ol> <li><p>Why are all the predicted values in the result matrix between 0.5-1? isn't it supposed to be 0-1? </p></li> <li><p>How can I see (in R) under which <strong><em>threshold</em></strong> are these precision and recall values being calculated? How can I change this threshold to get different values?</p></li> <li>How exactly are the recall/precision scores being calculated? I mean, supposed we have <em>theta</em> to separate that all scores above it are of class 1, and the rest are 0. Are these output scores just a simple mean of the two classes precision/recall? Is it a weighted average?</li> <li>How can I create in R two different thresholds values for each class (with what's left in between labeled as "unidentified")? </li> </ol>
74,039
<p>If I would like to fit a random intercept and slope, and I write it as (color|writer) compared to (1+color|writer), are the two the same?</p>
74,040
<p>For example, the lists can be something like: $$\{1.123213, 5.154543, 2.134121, 7.34534, 12.223432, 8.16571, 100.45645, 222.423\}$$ I want to remove $\{100.45645, 222.423\}$.</p> <p>$$\{232.123213, 323.154543, 232.134121, 123.34534, 222.223432, 8545.16571,\\ 4335.45645, 1222.423\}$$ I want to remove $\{8545.16571, 4335.45645, 1222.423\}$.</p> <p>The size is not fixed, and the range may vary. Are there any ways to remove these "abnormally large" values from my lists?</p>
36,666
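<p>One common rule of thumb (certainly not the only option) is an upper fence based on the quartiles, which makes no normality assumption; a sketch in R, where the 1.5 multiplier is conventional but arbitrary:</p> <pre><code>trim_upper &lt;- function(x, k = 1.5) {
  q &lt;- quantile(x, c(0.25, 0.75))
  x[x &lt;= q[2] + k * (q[2] - q[1])]   # keep values below the upper fence
}

trim_upper(c(1.123213, 5.154543, 2.134121, 7.34534,
             12.223432, 8.16571, 100.45645, 222.423))
# the two largest values fall above the fence in this example
</code></pre>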
<p>Let's say that I have a picture like the one in the figure. This looks like a histogram, but actually it is not, in the sense that I did not produce the figure starting from a dataset of values. In fact, I only have the probability (the y axis) for each bar. I also have the center point of each bin, and the bin width of each bin. Notice that the graph is normalized such that the area under the bins is equal to one. </p> <p><img src="http://i.stack.imgur.com/CQWGd.jpg" alt="enter image description here"></p> <p>What I want to do now is to calculate the reciprocal of these probabilities. Usually I would use this formula: </p> <p>$g(y) = \frac{ 1 }{ y^2 } f\left( \frac{ 1 }{ y } \right) $</p> <p>But I cannot do that now, since I do not want to assume any f(x). If I had a histogram, I would just plot (1./dataset) to get the reciprocal (in histogram form), and then I would normalize (assuming that the original dataset was not already normalized like the one in the example), so as to have a good approximation of the rate distribution. However, in this case, I don't have the dataset. I only have the y points (the p(x), actually) and the x (bin center for each bin).</p> <p>I have been advised to use the following formula (in MATLAB): plot(1./x, y). This will give me a function with the same shape as the reciprocal function, but not normalized. I <em>think</em> I could take this result and normalize it, and that should work fine. However, I wonder if there is another method that uses only the x and the p(x) in order to get the reciprocal values.</p> <p>Thank you. </p>
74,041
<p>I'm a newbie in the image processing field, and I have come across many terms that are unfamiliar to me. They are:</p> <ol> <li><p>Moments</p></li> <li><p>Local and global characteristics</p></li> <li><p>Energy of image</p></li> <li><p>Color temperature</p></li> </ol> <p>What does each of them represent in an image?</p>
36,670
<p>Particles are suspended in a liquid medium at a concentration of 4 particles per mL. A large amount of the suspension is thoroughly agitated, and then 3 mL are withdrawn . Let $X$ be the number of the particles in the 3 mL. Answer the following:</p> <p>a- The distribution of X is a Poisson distribution, what is its parameter ($\lambda$)?</p> <p>b- Find the probability that no particle is withdrawn from the 3 mL.</p> <p>c- Find the probability that one particle is withdrawn from the 3 mL.</p>
74,042
<p>I noticed that in the Normal distribution, the probability $P(x=c)$ equals zero, while for the Poisson distribution, it will not equal zero when $c$ is a non-negative integer.</p> <p>My question is: Does the probability of any constant in the normal distribution equal zero because it represents the area under the curve at a single point? Or is it just a rule to memorize? </p>
36,675
<p>I've studied machine learning and statistics only very briefly. I've used linear regression to solve problems with fixed sets of variables, but I'm not sure how to approach the following problem.</p> <p>Given a bunch of tagged photos, I'd like to use the tags to estimate the probability that a specific object is contained within the photo. </p> <p>I have access to a wealth of data which has already been classified as either containing the objects we're looking for or not (however note that this is not a classification problem; the probabilities are being used to rank photos).</p> <p>For example. If a photo is tagged with <em>coke</em> and <em>drink</em>, there is a high probability it contains coca-cola. However if a photo is tagged with <em>coke</em> and <em>crack</em>, there is a high probability it does not contain coca-cola. If a photo is tagged with either <em>crack</em> or <em>drink</em> separately, that doesn't tell us much about what might be in the photo.</p> <p>How would one go about building a hypothesis formula for this problem?</p>
74,043
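<p>A sketch of one way to set this up as a logistic regression on tag indicators plus an interaction; all column names here are invented (<code>has_coke</code> etc. would be 0/1 indicators built from the tag lists, and <code>contains_obj</code> the existing labels):</p> <pre><code># photos: one row per photo, with 0/1 tag indicators and the known label
fit &lt;- glm(contains_obj ~ has_coke * has_crack + has_drink,
           family = binomial, data = photos)

# ranking score = predicted probability that the object is in the photo
photos$score &lt;- predict(fit, type = "response")
</code></pre> <p>The interaction term is what allows "coke + crack" to push the probability down even though "coke" alone pushes it up, which matches the example in the question.</p>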
<p>There is quite some content online interpreting odds in a logistic model with a dichotomous predictor. My problem is understanding coefficients when there are more than 2 levels for a categorical variable. How do you <em>define the odds</em> then?</p> <pre><code>Data: X is a single categorical predictor with 4 levels: teenager, adult, mature, senior. Y: 1=smoking, 0=non smoking. LR: We use n-1 dummy variables. I chose adult as the reference bin as it had the highest concentration. (ok??) ________ | Intercepts | p adult | -4.3801 | 0 teenager | -0.32456 | 0 mature | 1.45119 | 0 old | -0.9891 | 0 </code></pre> <p><strong>Interpreting the coefficients</strong></p> <p>Teenager: Teen is less likely to smoke (w.r.t adult?). In fact, a teen is 28% (exp-0.32456 -1) less likely to smoke THAN AN ADULT. Is odds of teenager smoking always mentioned against the reference group?</p> <p>Mature: Matures is more to smoke (w.r.t adult?). In fact, a mature is 326% more likely to smoke THAN AN ADULT. Is odds of mature smoking always mentioned against the reference group? </p>
33,684
<p>Let's say you have $X$ coins, each with a differing probability of landing heads (e.g. coin 1 has 10% chance of landing heads, coin 2 has 20% chance of landing heads, etc.).</p> <p>Now, let's say that you flip coin $i$ coin $Y_i$ times (each coin has a differing amount of flips). We know how many times each coin was flipped.</p> <p>Now let's say you do this every day for a really long time and record that info. Example, with 2 coins, on day 1: 50 flips total, 30 heads total, coin1 was flipped 20 times and coin2 flipped 30 times. Day 2: 80 flips, 66 heads total, coin1 60 flips, coin2 20 flips.</p> <p>What we know: total flips, total heads, how many times each coin was flipped.</p> <p>Given that, is there a way to determine an approximate probability that the given coin will flip heads? In the above example, coin1 has a 100% probability of heads and coin2 has a 33% probability.</p>
74,044
<p>Is it plausible to have a positive coefficient with a negative marginal / impact effect after running multinomial logit model?</p>
1,031
<p>I need to calculate the temporal trends for some climate variables with missing values. For example, the last frost day is defined as the last day of the year with minimum temperature less than 0C. However, there are no frost days at all in some years. </p> <p>My data look like: </p> <pre><code>lfd &lt;- c(NA, NA, NA, NA, NA, 190, 192, 189, 200, 185, 205, 203, 200, 207, NA, NA, 205) years &lt;- seq(1957, length.out = length(lfd)) </code></pre> <p>Now I use linear regression in R (the function lm) to calculate the temporal trend. It seems the results are unreasonable for datasets with many missing values.</p> <p>How can I calculate the temporal trends with missing values? Thanks for any suggestions. </p>
74,045
<p>I am interested in developing a computational search for some types of non-coding RNAs (ncRNAs) in a new genome. I have some questions related to the nature of my experiment, because it is an exploratory study, and I can generate different results by applying different BLAST search strategies. The final output consists of different counts of each type of ncRNA for each BLAST strategy. </p> <p>How can I design good search strategies, taking into account that this is the first search on this new genome?</p> <p>References will be appreciated. </p> <p>Thanks!</p>
48,340
<p>Let's say I ask a robot a question: "Why is partial differential equations no fun at all?" </p> <p>Suppose the robot is stupid and categorizes my question inside the class of "cats" so it replies with: "Cats are fluffy and awesome."</p> <p>So I reword my question over and over and sadly the robot keeps hitting the "cats" class. Sequential hits is an example of a risk factor. It indicates that perhaps something is wrong with the "cats" class in the robot's brain. Maybe the word "PDEs" appears as sample text under the class to draw from. </p> <p>Suppose I have a collection of classes (e.g. "cats", "dogs") that all share the same risk factors(e.g. "sequential hits","bad user scores",etc). </p> <p>Each risk factor has its own weight which I can adjust from 0-10 where 0 essentially means that I can completely drop the risk factor and 10 is the most important.</p> <p>For each class, I can multiply the occurrence of each risk factor by its respective weight and sum them all to obtain a risk score for the class.</p> <p>Thus, each class has a risk score and I can focus on fixing the classes with the highest risk scores.</p> <p>My question is this: How can I optimally choose the weights of the risk factors to give me what are truly the riskiest classes?</p> <p>This is a pretty open question. I am looking for ideas, papers, books, or statistical techniques that might be helpful to this endeavor. Much thanks.</p>
36,680
<p>I was reading this <code>[paper][1]</code> related to sparse online Gaussian processes. However, I didn't get how the denominator in equation 1 was derived.<img src="http://i.stack.imgur.com/vr3dG.png" alt="enter image description here"></p> <p>It was supposed to be P(D), and I didn't get how the denominator was obtained. Further, the paper says that the posterior may not be Gaussian, and I don't know why that is so.</p>
36,681
<p>Suppose I have a dataset with the following information:</p> <ul> <li>N objects, each of which can be rated</li> <li>The number of ratings per object is variable (raters are unidentified), e.g., there are 30 objects each with a unique id</li> <li>Each row has the predicted rating, the actual rating, and the difference between the two, which is an ordinal value, say between 0 and 10</li> </ul> <p>How do I measure the consistency of ratings per object and for the whole dataset? Basically, there is a distribution of "differences" per object. What are the various ways to approach this problem? Thank you. </p>
6,464
<p>I'm trying to validate a series of words that are provided by users. I'm trying to come up with a scoring system that will determine the likelihood that the series of words are indeed valid words.</p> <p>Assume the following input:</p> <pre><code>xxx yyy zzz </code></pre> <p>The first thing I do is check each word individually against a database of words that I have. So, let's say that <code>xxx</code> was in the database, so we are 100% sure it's a valid word. Then let's say that <code>yyy</code> doesn't exist in the database, but a possible variation of its spelling exist (say <code>yyyy</code>). We don't give <code>yyy</code> a score of 100%, but maybe something lower (let's say 90%). Then <code>zzz</code> just doesn't exist at all in the database. So, <code>zzz</code> gets a score of 0%.</p> <p>So we have something like this:</p> <pre><code>xxx = 100% yyy = 90% zzz = 0% </code></pre> <p>Assume further that the users are either going to either:</p> <ol> <li>Provide a list of all valid words (most likely)</li> <li>Provide a list of all invalid words (likely)</li> <li>Provide a list of a mix of valid and invalid words (not likely)</li> </ol> <p>As a whole, what is a good scoring system to determine a confidence score that <code>xxx yyy zzz</code> is a series of valid words? I'm not looking for anything too complex, but getting the average of the scores doesn't seem right. If some words in the list of words are valid, I think it increases the likelihood that the word not found in the database is an actual word also (it's just a limitation of the database that it doesn't contain that particular word).</p> <p>NOTE: The input will generally be a minimum of 2 words (and mostly 2 words), but can be 3, 4, 5 (and maybe even more in some rare cases).</p> <p><strong>EXAMPLE 1:</strong></p> <p>Say the following scores:</p> <pre><code>xxx = 100% yyy = 100% zzz = 0% </code></pre> <p>The average is 66.66%. But since already two words in the list exist in the database, the chances are that <code>zzz</code> is also a real word. Scoring this series of words as just 66.66% seems low.</p>
36,683
<p>I have seen an apparently interesting quantity called the Bayesian complexity, which in this case is defined as $-2(\overline{\ln\mathfrak{L}} - \ln\mathfrak{L}_\mathrm{max})$. I am not sure what this is used for. I.e., what sorts of tests can be made on this quantity and what does it imply? </p>
36,684
<p>I am currently working on a project in which I have to eliminate outliers from non-normally distributed data sets. </p> <p>The data sets are subsets of a fairly large database (order of millions of observations) segmented into groups ranging from 30-200+ observations based on the type of product being looked at.</p> <p>The data is pricing data which could be abnormal in the data set due to the possibility of rush orders and other types of abnormal circumstances under which purchases are made which can drive prices above or below a typical range). The difficulty in trying to identify these outlier values is that not every group of observations follows a normal distribution so tests for outliers based on normality are not accurate in testing for outliers in this type of data. </p> <p>My current approach to this problem is to measure the standard deviation from the mean to flag all values less than three deviations from the mean of the group, this is used to flag the observations for administrative review later.</p> <p>then I measure skewness using the Pearson's coefficient of skewness. If the value of this statistic is beyond a certain threshold ±.4 then I get rid of the 1st percentile (if skew is lower than -.4) or I remove the 99th percentile of data (if the skew is greater than .4).</p> <p>This measurement of skew is repeated with a new measurement of skew each iteration with the same type of observation removal based on the parameter listed in the previous paragraph. The removed values are then given over to administrative review to determine whether they are true outliers with special attention being paid to any values within 3 standard deviations. </p> <p>is this a good approach to removal of outliers in this case? I don't have much experience with outlier detection in data not belonging to a normal distribution. </p> <p>so the steps listed explicitly are</p> <ol> <li><p>Calculate deviation from mean for each observation</p></li> <li><p>Calculate skewness of the pool of observation if skew>.4 remove 99th percentile observations if skew&lt;.4 remove 1st percentile observations</p></li> <li><p>values removed are then put under review to determine outlier status with special attention paid to values falling within 3 standard deviations of mean.</p></li> </ol>
74,046
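<p>A sketch of the iteration described in the question above, using the moment coefficient of skewness (the Pearson coefficient mentioned there could be swapped in) and the same 0.4 threshold; this only mirrors the stated procedure, it is not an argument that the procedure is optimal:</p> <pre><code>trim_by_skew &lt;- function(x, threshold = 0.4, max_iter = 10) {
  for (i in seq_len(max_iter)) {
    skew &lt;- mean((x - mean(x))^3) / sd(x)^3          # (approximate) moment skewness
    if (abs(skew) &lt;= threshold) break
    cut &lt;- if (skew &gt; 0) quantile(x, 0.99) else quantile(x, 0.01)
    x   &lt;- if (skew &gt; 0) x[x &lt;= cut] else x[x &gt;= cut]
  }
  x
}
</code></pre> <p>The 3-standard-deviation flagging described in the question would be a separate pass over the original data.</p>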
<p>I am using R for data analysis. R provides a <code>cor</code> function for calculating the correlation. <a href="http://www.statmethods.net/stats/correlations.html">This function</a> provides three different approaches/algorithms for estimating the correlation, namely Pearson, Spearman and Kendall. When should I use each of these methods? What factors determine which method should be used?</p>
74,047
<p>In R, I manipulate time series that can have <strong>revisions</strong>. I generally use the package 'xts', but it doesn't support revisions.</p> <p>Is there a package that deals with that?</p> <p>Alternatively, is there a best practice for storing and manipulating revisions?</p>
36,687
<p><img src="http://i.stack.imgur.com/qTI9v.jpg" alt="enter image description here"></p> <p>How can I calculate the mean of the first schema? I don't have the individual values of the sample. Std. Deviation (B) is 0.6441.</p>
74,048
<p>This is an ill-posed problem I'm faced with at work.</p> <p>I have observations $X_i$, $i = 1,2,3,...N &gt; M$. I need to compute $S = \sum_{i=1}^M X_i$. Unfortunately, $M$ is random and extremely noisy. It turns out I have some rough "goodness" measures, $w_i$, for each of the $X_i$. That is, $w_i$ low means I don't trust $X_i$ very much; $w_i$ high means I trust $X_i$ a lot. </p> <p>Moreover, "generally" as $i$ gets large, the $X_i$ get small. (There's a lot of dependence between $M$ and the $X_i$.)</p> <p>In this application, it turns out that the <em>average</em> of the $X_i$ can very reliably (i.e. with relatively small standard deviation) be computed by: $A = \frac{\sum_{i=1}^M w_i X_i}{\sum_{i=1}^M w_i}$. We know the standard deviation of $A$ is relatively small by running a number of independent trials and computing the sample standard deviation.</p> <p>But I similarly need a "good" estimate (i.e. small standard deviation) of the straight sum of the $X_i$, and the obvious approach, $M * A$, has far too much variance for our application.</p> <p>Understand this is pretty vague, but any thoughts on how to proceed?</p>
74,049
<p>Violin players were asked to rate 5 violins from least to most rich on a <code>[0 1]</code>, <code>0.05-increment</code> scale, <code>most always = 1</code> and <code>least always = 0</code>. They were then asked to set a "limit of acceptability," i.e., to set a point along the scale above which violins are considered acceptable in terms of richness. By setting <code>acceptable = 1</code> and <code>not acceptable = 0</code>, a binary variable is obtained for each player, in addition to their rating. So, as an example, for a player I have the variables <code>rating = [0.3 0.45 0 1 0.85]</code> and for a <code>limit = 0.45</code>, <code>acc = [0 1 0 1 1]</code>. I use Lin's concordance correlation coefficient to explore consistency within and between individuals for the case of ratings. However, this is not possible for the "acceptable" variable: when a player sets their <code>limit = 0</code>, then <code>acc = [1 1 1 1 1]</code> and hence the presence of zero variance becomes an issue. I am trying to figure out a good, interesting way to analyze the binary data, perhaps compare consistency in ratings with consistency in acceptability limits, or more in-depth relations between the two variables. I am not very experienced in statistics, so any suggestions and ideas are welcome. I primarily work with MATLAB.</p>
74,050
<p>I am computing SVD on a matrix which is the empirical version of $E[XY^{\top}]$ for some $X \in \mathbb{R}^{m \times 1}$ and $Y \in \mathbb{R}^{n \times 1}$.</p> <p>I am wondering if there are standard ways to preprocess $x_1,\ldots,x_l$ and $y_1,\ldots,y_l$ before doing that (other than subtracting the mean and dividing by standard deviation).</p> <p>This is related to the question here: <a href="http://stats.stackexchange.com/questions/12200/normalizing-variables-for-svd-pca">&quot;Normalizing&quot; variables for SVD / PCA</a>.</p>
162
<p>I have two models. First, $$z=\alpha_1x+\alpha_2+\epsilon$$ and second, $$z=\beta_1x+\beta_2y+\beta_3+\epsilon.$$</p> <p>What factors affect whether $\alpha_1\approx\beta_1$? Note that I don't care how accurately either model predicts $z$. I only care about how similar the coefficient of the $x$ term is in both models.</p> <p>Assume both $x$ and $y$ actually predict $z$ and don't interact.</p>
36,691
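<p>Assuming the second equation is the data-generating process, the standard omitted-variable algebra links the two slopes: $$\operatorname{plim}\hat\alpha_1 = \beta_1 + \beta_2\frac{\operatorname{Cov}(x,y)}{\operatorname{Var}(x)},$$ so $\alpha_1\approx\beta_1$ precisely when $\beta_2$ is small or $x$ and $y$ are (nearly) uncorrelated.</p>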
<p>We have two time series: $X_t$ and $R_t$, and a model saying that $R_{t+1} = (\mu(X_t) - \frac{1}{2}\sigma^2(X_t))\Delta T + \sigma(X_t) \sqrt{\Delta T} \epsilon_t$, where $\Delta T$ is given constant and $\epsilon_t$-s are independent normally distributed with zero mean and unit variance. Further we assume that the functions $\mu(x)$ and $\sigma(x)$ are linear for simplicity. I would like to use some standard method (MLE comes to my mind) to estimate parameters of functions $\mu(x)$ and $\sigma(x)$, but I am not sure how to do this. </p> <p>I would be grateful for detailed answers, because I am not really experienced with statistics.</p>
74,051
<p>I'm only a linguist, so my knowledge of statistics is very basic.</p> <p>I fitted a logistic regression model with R (with <code>lrm(formula, y=T, x=T)</code>), and when I use the option <code>validate(lrm)</code>, I get some statistics I don't really understand.</p> <pre><code>index.orig training test optimism index.corrected n Dxy 0.5984 0.6112 0.5461 0.0651 0.5333 40 R2 0.3258 0.3676 0.2929 0.0747 0.2511 40 Intercept 0.0000 0.0000 -0.0105 0.0105 -0.0105 40 Slope 1.0000 1.0000 0.8427 0.1573 0.8427 40 Emax 0.0000 0.0000 0.0399 0.0399 0.0399 40 D 0.2713 0.3176 0.2394 0.0782 0.1931 40 U -0.0177 -0.0177 0.0092 -0.0269 0.0092 40 Q 0.2890 0.3353 0.2302 0.1051 0.1839 40 B 0.1864 0.1772 0.1972 -0.0201 0.2064 40 g 1.4632 1.6642 1.3460 0.3182 1.1449 40 gp 0.2840 0.3011 0.2703 0.0308 0.2532 40 </code></pre> <p>I don't really understand most of that. I think <code>R2</code> and <code>Dxy</code> are supposed to be statistics of how good the predictors are, but I'm not sure how I should interpret the values, does the corrected <code>Dyx = 0.651</code> mean that there is a strong correlation, while the corrected <code>R2 = 0.0747</code> means that the correlation is very weak? I think the model is overfitted, but I'm not sure if I'm right.</p> <p>Also, the other statistics are totally strange to me. What are <code>Emax, D, U, Q, B, g</code>, and <code>gp</code>?</p>
1,722
<p>I am trying to implement a simple Gaussian process regression in Java. I took almost every step from the book <a href="http://www.GaussianProcess.org/gpml" rel="nofollow">http://www.GaussianProcess.org/gpml</a>.</p> <p>With my implementation of Algorithm 2.1 on page 19 I'm able to produce results, but:</p> <p>I see the strange behavior that all predicted data points where I do not have any targets tend towards zero. Does anyone recognize this behavior, or can explain where I might have made a mistake?</p> <p>In my plot the red line marks the predicted values and the black points are the targets.</p> <p>Here is the prediction and covariance Java code. The parameter array includes the length scale and noise terms, and I use the indexes of the arrays as the x inputs for the covariance function:</p> <pre><code>private double generateCOV_RADIAL(int i, int j) {
    double covar = Math.pow(parameter[1], 2.0) * Math.exp(-1.0 * (Math.pow(i - j, 2.0) / (2 * Math.pow(parameter[0], 2.0))));
    if (i == j)
        covar += Math.pow(parameter[2], 2);
    return covar;
}

public void predict(int predictFuture) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
    RealMatrix y = MatrixUtils.createColumnRealMatrix(this.targets);

    double mean = StatUtils.mean(this.targets);
    for (int i = 0; i &lt; this.targets.length; i++) {
        targets[i] -= mean;
    }

    RealMatrix K = MatrixUtils.createRealMatrix(cov);

    // identity matrix for I
    RealMatrix k_eye = MatrixUtils.createRealIdentityMatrix(cov.length);

    // cholesky(K + sigma_n^2 * I)
    CholeskyDecomposition L = null;
    try {
        L = new CholeskyDecompositionImpl(K.add(k_eye.scalarMultiply(Math.pow(parameter[2], 2))));
    } catch (NonSquareMatrixException e) {
        e.printStackTrace();
    } catch (NotSymmetricMatrixException e) {
        e.printStackTrace();
    } catch (NotPositiveDefiniteMatrixException e) {
        e.printStackTrace();
    }

    // inverse of L transpose for left division
    RealMatrix L_transpose_1 = new LUDecompositionImpl(L.getLT()).getSolver().getInverse();
    // inverse of L for left division
    RealMatrix L_1 = new LUDecompositionImpl(L.getL()).getSolver().getInverse();

    // alpha = L'\(L\y)
    RealMatrix alpha = L_transpose_1.multiply(L_1).multiply(y);

    double L_diag = 0.0;
    for (int i = 0; i &lt; L.getL().getColumnDimension(); i++) {
        L_diag += Math.log(L.getL().getEntry(i, i));
    }

    double logpyX = -y.transpose().multiply(alpha).scalarMultiply(0.5).getData()[0][0] - L_diag - predictFuture * Math.log(2 * Math.PI) * 0.5;

    double[] fstar = new double[targets.length + predictFuture];
    double[] V = new double[targets.length + predictFuture];

    for (int i = 0; i &lt; targets.length + predictFuture; i++) {
        double[] kstar = new double[targets.length];
        for (int j = 0; j &lt; targets.length; j++) {
            double covar = (Double) covMethod.invoke(this, j, i);
            kstar[j] = covar;
        }

        // f* = k_*^T * alpha
        fstar[i] = MatrixUtils.createColumnRealMatrix(kstar).transpose().multiply(alpha).getData()[0][0];
        fstar[i] += mean;

        // v = L\k_*
        RealMatrix v = L_1.multiply(MatrixUtils.createColumnRealMatrix(kstar));

        // V[fstar] = k(x_*,x_*) - v^T*v
        double covar = (Double) covMethod.invoke(this, i, i);
        V[i] = covar - v.transpose().multiply(v).getData()[0][0] + Math.pow(parameter[2], 2);
    }

    this.predicted_mean = fstar;
    this.predicted_variance = V;
}
</code></pre> <p>Thank you</p> <p><img src="http://i.stack.imgur.com/hfjBL.jpg" alt="GP Example"></p>
74,052
<p>I want to create an equation that involves two independent variables, "duration" and "data_transferred", and one dependent variable, "meeting_quality".</p> <p>This is what I want:</p> <p>meeting_quality should be good if more data is transferred in a shorter duration.</p> <p>meeting_quality should be bad if less data is transferred in a longer duration.</p> <p>What equation would express the statement above? Thanks in advance.</p> <p>In fact, I want to use the values from the above equation in a regression analysis. I would really be grateful for any further advice.</p>
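<p>Purely as an illustration of the kind of formalization being asked for (this is an assumption on my part, not an established definition), one simple candidate with the stated property, i.e. quality increasing with data transferred and decreasing with duration, is the ratio</p> <p>$$\text{meeting\_quality} = \frac{\text{data\_transferred}}{\text{duration}},$$</p> <p>or its logarithm if a less skewed scale is preferred for the regression.</p>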
74,053
<p>I have a spreadsheet where the content is organized like this:</p> <pre> content  | category1 | category2 | category3 | ...
_________|___________|___________|___________| ...
content 1|     x     |           |     x     | ...
content 2|           |     x     |           | ...
...
</pre> <p>How can I present this information more effectively? For example, is there a JavaScript library (or something similar) I could use, where one could choose to show only the rows associated with certain categories, assign a category to a row, etc.? Thanks in advance.</p>
36,694
<p>In <code>R</code>, I would like to give multiline names to some data sets in <code>boxplot</code>, like this:</p> <pre><code>boxplot(rnorm(10), rnorm(10, mean=2), names=c("Normal", "Shifted\n*")) </code></pre> <p>Here, the names seem to be aligned with their bottom lines, which causes the text to overlap with the axis. How can I have multiline names which simply extend to the bottom instead of upwards?</p>
36,695
<p>I wanted to create a bubble chart, i.e. a scatter plot with a separate variable indicating the diameter of each bubble. Is there any way to then use each bubble as a pie chart? I would really like to do this in MS Excel, but I just don't know whether it is possible. </p> <hr> <p>I have another problem with the code proposed earlier in the post. Now that the first problem appears to be solved, there is a problem in the line </p> <p><code>Set rngRow = Range(ThisWorkbook.Names("PieChartValues").RefersTo)</code></p> <p>I tried to replace PieChartValues with the range from the first row to the last row containing the pie chart values (the values from which the pie charts are generated), like "A1:G12"; all the data are in that range. The tab of the sheet I am working in is called "B" (without quotes). Should I write </p> <p><code>Set rngRow = Range(ThisWorkbook.Names("B").RefersTo)</code>?</p> <hr> <p>I tried to implement the code from a link proposed earlier in VBA; this is it:</p> <pre><code>Sub PieMarkers()

    Dim chtMarker As Chart
    Dim chtMain As Chart
    Dim intPoint As Integer
    Dim rngRow As Range
    Dim lngPointIndex As Long

    Application.ScreenUpdating = False
    Set chtMarker = ActiveSheet.ChartObjects("chtMarker").Chart
    Set chtMain = ActiveSheet.ChartObjects("chtMain").Chart
    Set chtMain = ActiveSheet.ChartObjects("chtMain").Chart

    Set rngRow = Range(ThisWorkbook.Names("PieChartValues").RefersTo)
    For Each rngRow In Range("PieChartValues").Rows
        chtMarker.SeriesCollection(1).Values = rngRow
        chtMarker.Parent.CopyPicture xlScreen, xlPicture
        lngPointIndex = lngPointIndex + 1
        chtMain.SeriesCollection(1).Points(lngPointIndex).Paste
    Next

    lngPointIndex = 0
    Application.ScreenUpdating = True

End Sub
</code></pre> <p>When I execute the code I get an error pointing to the line <code>Set chtMarker = ActiveSheet.ChartObjects("chtMarker").Chart</code>.</p> <p>Does anybody have an idea why? I have a pie chart object named exactly chtMarker, so the object should be there.</p>
36,696
<p>What is the PDF of the difference of two i.i.d. Laplace-distributed random variables?</p> <p>I know that the difference of two i.i.d. Normal variables is still Normally distributed. Since the properties of the Laplace distribution are similar to those of the Normal distribution, I am guessing that the difference is also Laplace distributed.</p> <p>I have tried to solve this using Mathematica, but neither the <code>PDF</code> nor the <code>Integrate</code> function gives me a correct answer. Thank you!</p>
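<p>For reference, with both variables taken as i.i.d. Laplace$(0, b)$ (a zero location parameter is an assumption made here for simplicity), the density to be computed is the convolution</p> <p>$$f_{X-Y}(z) = \int_{-\infty}^{\infty} \frac{1}{2b} e^{-|x|/b} \cdot \frac{1}{2b} e^{-|x - z|/b} \, dx,$$</p> <p>which has to be evaluated piecewise over the regions where the absolute values change sign.</p>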
74,054
<p>I am comparing energy expenditure outputs from two devices that track physical activity in a free-living setting over 10 days. Doubly labeled water is the accepted criterion measurement for this, but 1) this technique only gives total energy expenditure, and 2) I did not use it in my study...</p> <p>Both of the devices I used have previously been validated against DLW. These devices give a lot more information than DLW, namely time spent at specific PA thresholds and the number of 'bouts' at each threshold. </p> <p>I would like to assess the level of agreement between the two devices. I know that Bland-Altman plots are commonly used to assess agreement between measuring devices, but I am struggling to decide which test to use without a reference/criterion measurement. I would guess that a basic correlation would provide some information, but I was wondering if anybody could offer some more information and/or guidance for a more sophisticated statistical analysis. Maybe a regression / sum of squares approach would be appropriate?</p> <p>Any help would be much appreciated.</p>
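<p>Since Bland-Altman plots are mentioned above, here is a minimal R sketch of one; <code>device1</code> and <code>device2</code> are placeholder vectors for the two devices' energy expenditure values, matched by participant:</p> <pre><code># device1, device2: numeric vectors of energy expenditure, one value per participant
d    &lt;- device1 - device2              # pairwise differences
m    &lt;- (device1 + device2) / 2        # pairwise means
bias &lt;- mean(d)
loa  &lt;- bias + c(-1.96, 1.96) * sd(d)  # 95% limits of agreement

plot(m, d, xlab = "Mean of the two devices", ylab = "Difference (device 1 - device 2)")
abline(h = c(bias, loa), lty = c(1, 2, 2))
</code></pre>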
74,055
<p>I've used chi-square analysis in school, but was absent for the lesson, and I never quite got the hang of it. I've been successful with it in the past, but now I'm trying to use percentage values, and I'm not getting the answers I'm expecting. I'm doing a survey and comparing my findings with national averages, but when I use percentages (such as the percent of participants between the ages of 21 and 29) I get understandably tiny numbers (2.3% is 0.023). If I'm using percentages for the expected value, should it matter?</p> <p>Ex: Chi² for 21–29 age group is $(.084 - 0.1642)^2 / 0.1642 = .039187$</p> <p>The total chi-square value is 0.1368, which is much smaller than the 0.5% chi squared table value for 8 degrees of freedom, 17.535, but the numbers seem way off. Do I have to find the whole numbers?</p>
74,056
<p>Suppose we have two variables A={20 values} and B={20 values}, and we want to measure the correlation between them. Let's assume that the first n values are highly correlated but the remaining m values are not. The overall correlation between A and B may then not represent the importance of the relationship. Imagine that there is only weak correlation in some values, but several values show really good correspondence between A and B. Traditional correlation coefficients will not capture such a relationship. Is there any measure or statistical method that can detect such a relationship between A and B?</p>
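<p>A small R sketch of the scenario described, with made-up numbers (the first $n = 10$ pairs strongly related, the last $m = 10$ pairs unrelated), just to make the concern concrete:</p> <pre><code>set.seed(7)
n &lt;- 10; m &lt;- 10
A1 &lt;- rnorm(n); B1 &lt;- A1 + rnorm(n, sd = 0.1)   # strongly related pairs
A2 &lt;- rnorm(m); B2 &lt;- rnorm(m)                  # unrelated pairs
A  &lt;- c(A1, A2); B  &lt;- c(B1, B2)

cor(A, B)      # overall coefficient, diluted by the unrelated half
cor(A1, B1)    # the strong relationship hiding in the first n pairs
</code></pre>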
36,700
<p>Do you know of any study where researchers have used Statistical Parametric Mapping for <strong>spatial data only</strong>? I found a lot of studies that used it for time-series analysis, but does this analysis work for data that don't share a time dimension? </p>
74,057
<p>I have a time series $x_t$. If I use the transformation $u_t = \log(x_t) - \log(x_{t-1})$, my new time series $u_t$ has the properties of white noise (i.e. it looks random). Is there any practical interpretation of $u_t$?</p> <p>Thanks for your help.</p>
74,058
<p>I am unclear about how to interpret the value of the Standard Error of the Mean (SEM) directly. For example, when a mean is reported as 5.00 ± 0.50 SEM, how do you directly relate the 0.50 to 5.00?</p> <p>To quickly run through the basic theory concerning the standard error: </p> <ol> <li>The standard deviation (SD) is a measure of dispersion around the mean </li> <li>The SEM is the SD of the sampling distribution for the sample mean</li> <li>This sampling distribution is derived from the means of an infinite number of samples from a statistical population and is normally distributed according to the Central Limit Theorem</li> <li>In a normal distribution, 68.3% of (randomly selected) values fall within ±1 SD, 95.4% within ±2 SD and 99.7% within ±3 SD</li> <li>The SEM decreases with increasing sample size</li> </ol> <p>QUESTION 1: Given points 2) and 4) is it correct to interpret the SEM in the same way as SD as a descriptive statistic? That is, for a sample with mean 5.00 and SEM 0.50, is it correct to conclude the true population mean lies between 4.50 and 5.50 with probability 68.3%?</p> <p>QUESTION 2: For a given statistical population being sampled, does the sampling distribution change with sample size (given point 5)? i.e. should point 3) be: "...derived from the means of an infinite number of samples of a given size from a statistical population..."? </p> <p>QUESTION 3: Since the SEM is not calculated directly but estimated from the SD of a sample, what effect does departure from a normal distribution of the sample have on calculation of the SEM? Put another way, if a sample has for example a highly skewed distribution, calculating the SD is inappropriate (it is a property of a normal distribution only), so can the SEM still be estimated reliably from the SD?</p>
48,370
<p>I've been working with spatial autocorrelation for a while and now I'm trying to move from more traditional estimators such as Moran's I or Geary's C to the new APLE estimator. I read Li's papers on APLE and also the reference on R-project. But I am still missing something very basic, as I still didn't understand something: the results of the APLE statistic are said to have a closed form but I didn't figure which one it is.</p> <p>For instance, while Moran's I can gom from -1 to 1, I couldn't figure which are the possible boundaries of APLE statistics. More interesting, I've been running tests in my older data, using the spdep package in R, and it seems to me that APLE can go up to 2. Any one has any hint on that? Please, any insights here would be very much appreciated.</p>
48,374
<p>My question might be easy for most of you to answer. I am starting to learn statistics and coding in R, so my questions are at a basic level.</p> <p>I have a dataset with two groups and replicates. The data are split by group, so I have 24 samples in group 1 and 20 samples in group 2. Each set has 4 replicates, hence I have 6 sets in group 1 and 5 sets in group 2. I have assigned indices to the sets (1-11) to make the permutation easier. What I want to do now is a routine permutation analysis to obtain the test statistic. I am using a non-parametric method with resampling with replacement. </p> <p>My problem with the R coding is that I have to pool the data together and then resample the group labels. When I try to do this, <strong>I have to make sure I maintain the sample size for the respective groups (that is, after resampling the group labels, my new dataset should still contain 6 sets (24 samples) in group 1 and 5 sets (20 samples) in group 2).</strong> I am unable to achieve the latter. </p> <p>I tried something like this in my code:</p> <pre><code>n_resamples &lt;- 100

permdat &lt;- function(dat) {
  for (i in 6:nc) {                          # nc (number of data columns) is assumed defined elsewhere
    for (j in 1:n_resamples) {
      inds &lt;- unique(dat$ind)                # the 11 set indices
      cc   &lt;- dat$grp[match(inds, dat$ind)]  # one group label per set
      pcc  &lt;- sample(cc, replace = TRUE)     # resample the set labels with replacement
      pdat &lt;- dat
      for (k in seq_along(inds)) {
        pdat$grp[pdat$ind == inds[k]] &lt;- pcc[k]  # give every sample in a set its resampled label
      }
      # (pdat is not yet stored or used to compute a test statistic)
    }
  }
}
</code></pre> <p>This does not produce the desired result. Kindly help with the R code. </p> <p>Thank you all for your help. :)</p> <p>Regards, Ap</p>
74,059
<p>I have a logit model and am trying to understand and compare the predicted and observed values generated by the model. Let's say the data set has 100 values; I generate all the predicted probabilities and then find the actual (observed) probabilities from the data set.</p> <p>If I'm comparing the predicted vs observed values, I'm thinking there are two ways to do it. One is to do it value by value, while the second would be to group by the 'predicted probabilities.'</p> <pre><code>Method 1:
x_value  pred_val  obs_val
100      0.30      0.34
102      0.33      0.36
104      0.35      0.37
106      0.40      0.40
...
</code></pre> <p>I'm also thinking there has to be some way to aggregate these values. So I'm thinking of aggregating all x values where the predicted probability is between 10 and 20%, then finding the average predicted value for that range, followed by the observed value for that range.</p> <pre><code>Method 2:
Pred_probs      pred_val  obs_val
10 to 20% vals  0.10      0.11
21 to 30% vals  0.12      0.16
31 to 50% vals  0.15      0.30
</code></pre> <p>What I'm wondering is:</p> <ol> <li><p>When there are a large number of data points, what use is having a list of the predicted and observed values for any given value of x?</p></li> <li><p>Does it ever make sense to do something as identified in 'Method 2'?</p></li> </ol>
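<p>For what it's worth, Method 2 can be written in a few lines of R; here <code>pred</code> is assumed to be the vector of predicted probabilities from the logit model (e.g. from <code>predict(fit, type = "response")</code>), <code>y</code> the observed 0/1 outcomes, and the bin edges are arbitrary:</p> <pre><code># pred: fitted probabilities; y: observed 0/1 outcomes
bins &lt;- cut(pred, breaks = seq(0, 1, by = 0.1), include.lowest = TRUE)

data.frame(
  bin       = levels(bins),
  mean_pred = tapply(pred, bins, mean),   # average predicted probability per bin
  mean_obs  = tapply(y,    bins, mean),   # observed proportion of 1s per bin
  n         = as.vector(table(bins))
)
</code></pre>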
74,060
<p>I have a lot of data (gigs) that may be useful in predicting equity prices. I can import these as a series of features (columns) in a table where the companies are rows. I have time series information too.</p> <p>I have some machine learning experience but no experience as a trader.</p> <p>Is there some software or platform where I could easily import my data and it could backtest/forward test my data to see if it's useful?</p> <p>I understand that any machine learning system that's out there in public won't outperform the market enough to cover brokerage fees, but given my data, there's a chance it would. So I probably don't need state of the art in machine learning but I'd like to find a solution where I don't spend the next 6 months learning the stock market. I'd rather spend that time getting feedback on, and iterating on the input data, because as a data-oriented developer, that's where I can add value.</p> <p>Any help, much appreciated.</p>
44,147
<p>I read that the k-means algorithm only converges to a local minimum and not to a global minimum. Why is this? I can logically see how initialization could affect the final clustering, and that there is a possibility of sub-optimal clustering, but I did not find anything that mathematically proves it.</p> <p>Also, why is k-means an iterative process? Can't we just partially differentiate the objective function w.r.t. the centroids and set it to zero to find the centroids that minimize this function? Why do we have to use gradient descent to reach the minimum step by step?</p>
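<p>For reference, the objective function referred to here is the usual within-cluster sum of squares,</p> <p>$$J(\mu_1,\ldots,\mu_K; C_1,\ldots,C_K) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2,$$</p> <p>which is minimized jointly over the centroids $\mu_k$ and the discrete cluster assignments $C_k$.</p>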
44,377
<p>Could anyone suggest the best method to predict the time gap between two events?</p> <p>For example, given that diabetics are at higher risk of developing hypertension, I would like to predict the time gap until patients develop hypertension after being diagnosed with diabetes. </p> <p>I will have both time dependent and independent covariates. Does this require a longitudinal approach? </p>
74,061
<p>I have been successfully using DBNs for classification on MNIST and some other tasks.</p> <p>I was thinking of doing dimensionality reduction before training in order to boost performance. However, when I tried using the output of PCA to train a deep belief network, the results were quite bad.</p> <p>I understand that DBNs learn features from the image and that PCA might project the data into a space where things do not look as similar anymore, but I was not expecting such a bad result.</p> <p>Any ideas why this could be? I was especially surprised because this paper has used it successfully: <a href="http://cs.stanford.edu/people/ang/papers/nips07-sparsedeepbeliefnetworkv2.pdf" rel="nofollow">paper here</a></p> <p>Thank you!</p>
74,062
<p>I am a little bit confused about the machine learning outcomes on my dataset. I would be grateful if anyone could enlighten me on this:</p> <p>When features are statistically different between two groups, shouldn't a machine learning model be able to predict the right class with reasonable accuracy? The data consist of 9 features, 8 of which were significantly different between the groups (two-tailed independent t-test). There are 71 observations in total (30 and 41). However, the classifiers were only able to produce around 65% classification accuracy (10-fold cross-validation). I have used SVM with an RBF kernel, kNN, Random Forest, and Naive Bayes. My question is: is this normal? Is it possible to get higher classification accuracy even when the features are not significantly different between the groups? Thanks.</p>
36,705
<p>Suppose I have a big online company, and many of my customers churned (i.e. they were paying, and then stopped). My goal is to understand why each of them churned.</p> <p>First I identify the complete set of reasons for churning, $H_1,\ldots,H_n$. E.g. "the website is confusing", "the customer lost interest", etc. Then by observation I identify a set of customer behaviors which are correlated with the churn rate, $O_1,\ldots,O_m$. Each of them is directly measurable, e.g. the number of days he logged in, or the amount of time he spent at my website the day he signed up. Assume that computing $\Pr[O_j=\cdot]$ is easy, and that all of my customers are mutually independent.</p> <p>I know that each $H_i$ is related to a subset of the $O_j$'s. E.g. if a churned customer logged in $d$ days, then the probability that he lost interest is $f(d)$, with $f$ a decreasing function. (One of my problems is accurately computing this function $f$.)</p> <p>My goal is to compute, for each churned customer, and for each $i$, the probability that he churned because of $H_i$, i.e. $$ \Pr[H_i] = \sum_j \Pr[H_i\ |\ O_j=x_j] \Pr[O_j=x_j] $$ where $x_j$ is the value of $O_j$ for the given customer.</p> <p>Now, suppose I send a survey to all churned customers, asking for their reasons, but certainly only a few of them answer it. I want then to update all the functions $f_{ij}(\cdot) = \Pr[H_i\ |\ O_j=\cdot]$ based on the new evidence, so that I can compute more accurately $\Pr[H_i]$ for all the churned customers which haven't answered the survey.</p> <p>My question is how can I do this exactly? and how can I get "good" estimates for the original $f_{ij}$'s? Is my approach correct?</p> <p>EDIT: Now I realize that the formula I gave for $\Pr[H_i]$ is obviously wrong. The $O_j$'s don't form a partition. I think I should compute $\Pr[H_i\ |\ (O_1,\ldots,O_m)=(x_1,\ldots,x_m)]$ directly, since the $O_j$'s are not even pairwise independent.</p>
74,063
<p>In calculating the pseudoinverse of a matrix $A$ of size (m,n), I need to choose a tolerance threshold for the singular values (below which they are treated as zero). I'm trying to understand how I should pick this. MATLAB's default is to use $norm(A)*max(m,n)*eps$, where the norm is the 2-norm and eps is the machine epsilon. Any ideas as to why this is the default value?</p> <p>Moreover, there are cases in which $A$ is only of interest in comparison to other quantities. For example, I'd like to calculate the eigenvalues of $C + BB^TA^{-0.5}$, where $A,B,C$ are matrices but $B$ can have a very large second dimension. In this example, how should we choose the tolerance? </p>
33,239
<p>I have a dataset that includes variables about customer income levels. The income was collected in binned fashion (<code>Which range describes your income? 0-25k, 25k-50k,...</code>). My question is how best to use this for modeling with the <code>glmnet</code> and <code>gbm</code> packages in <code>R</code>.</p> <p>I have looked at the <code>grouped</code> package in <code>R</code>, but it seems to do everything (coarsening and regression) for you. Is there a package that converts binned data back to continuous data for use with other algorithms?</p> <p>EDIT: The current method I'm using is to convert the bins to the mid-point of the range (<code>0-25k -&gt; 12500</code>), then use <code>ifelse()</code> statements to code a few variables that convey the ordering of the levels:</p> <pre><code>incOver25k &lt;- ifelse(df1$income &gt;= 25000, 1, 0)
incOver50k &lt;- ifelse(df1$income &gt;= 50000, 1, 0)
</code></pre> <p>I then use these flags instead of using <code>model.matrix()</code>. </p> <p>I was curious whether there are any better ideas.</p>
74,064
<p>I am interested in fitting a Poisson/negative binomial distribution to estimate the number of times a phenomenon happens within a period, let's just say 10 years. I can count the events from monthly reports, but unfortunately some reports are missing. So for one sample I might have 120 observational slots, but for some others I might have 30. The event can still happen even when it is not observed. </p> <p>The pattern of missing slots is random (i.e. not correlated between samples), and it can result in anything from a nearly complete observational record to a very decimated one.</p> <p>How can I cope with this?</p>
74,065
<p>I am using the <code>flexmix</code> package to estimate latent class multinomial logit models in R. In choice theory, there can be variables associated with the alternative (generic) or that vary with the agent (alternative-specific).</p> <p>The <code>nnet</code> package that underlies <code>FLXMRmultinom</code> can't accommodate generic variables. So far, I haven't seen that the <code>FLXMRcondlogit</code> can handle alternative-specific ones.</p> <p>The most flexible package for MNL models is <code>mlogit</code>. Has anyone seen an implementation of this for <code>flexmix</code>?</p>
36,709
<p>I need to predict, with a simple model, whether a bank customer is still active.</p> <p>I have some information such as the date of the last meeting with the customer, but some customers never had any meeting with their advisor.</p> <pre><code>Customer_Id | Last_meeting_date (in days from now) | Gender | ...
A           | 115                                  | F      | ...
B           | NA                                   | F      | ...
</code></pre> <p>Is it possible to use a special value/flag like 1000 to "indicate" to the tree that we don't have any date?</p> <p>In my mind, a decision tree isn't linear like regression, so it can handle such an exception with a split like "if Last_meeting_date = 1000 then ...", but I'm not sure about that.</p>
36,710
<p>I have one categorical dichotomous dependent variable (yes/no - retention of newly learned words) and two independent variables, both categorical. One independent variable is "delay between exposure and test" and has two levels (short time delay/week time delay). The other IV is "word type" and relates to the types of words children were exposed to - object label, colour label etc.</p> <p>I have run a binary logistic regression on my data (440 participants - 44 per condition of word type and time delay) using the "Enter" method and defined both IVs as categorical using the categorical tab. The initial outputs - Block 0 "Variables not in the Equation", Block 1 Omnibus Tests of Model Coefficients and the Hosmer and Lemeshow Test - all suggest that the model is significant, and the Block 1 Classification Table suggests there's been an 11% improvement in predicting the DV (when compared to the Block 0 classification table). Yet the Block 1 "Variables in the Equation" table shows no main effect of either of the IVs nor an interaction effect - and not by a long way (time delay p=0.37; word type p=0.37; word type*time delay p=0.401). I don't know how to interpret this, as all the examples I've seen in my book and online show at least one significant main effect when the model is significant. It also seems somewhat illogical for SPSS to report that the model is significant but that neither of the IVs has a significant effect! Also, my data are quite clear and it seems very likely that there is a main effect of time delay: retention of all word types except one falls from around 65% in the short delay to around 33% after one week. </p> <p>I have tried changing the indicator from first to last (i.e. from the short delay to the week delay) for the time delay variable and, concerningly, the results change - I thought that when the IV was dichotomous it didn't matter which of the categories was used as the control group (indicator). Even more of a concern, it produces a main effect of word type rather than time delay.</p> <p>I have also used dummy coding and effectively treated each of my word type categories as a different IV, and this produces completely different results. Again, this is a concern, as I would expect very similar results since it should effectively be running the same data.</p> <p>Can anyone help?</p>
33,244
<p>I need help with the following question:</p> <p>Consider $m$ observations $(y_1, n_1), \ldots, (y_m, n_m)$, where $y_i \sim \mathrm{Bin}(n_i, \theta_i)$ are binomial variables. Assume that $\theta_i \sim w_1\mathrm{Beta}(\alpha_1, \beta_1) + w_2\mathrm{Beta}(\alpha_2, \beta_2)$, i.e. the prior is a mixture of two Beta distributions $(w_1 + w_2 = 1)$. </p> <ul> <li>Derive a Laplace approximation of the likelihood and of each mixture component of the prior. </li> <li>Derive the empirical Bayes likelihood of the data by integrating out $\theta_i$ using the Laplace approximation, leaving the hyper-parameters $(w_j, \alpha_j, \beta_j)$, $j = 1, 2$, as unknowns. </li> <li>Derive the EM algorithm to estimate the hyper-parameters (you may also use a mixture-of-Gaussians prior to approximate the mixture-of-Betas prior).</li> </ul>
48,384
<p>I have a huge dataset (millions of rows, thousands of columns), and <code>glmnet</code> Lasso-regularised regression is too slow.</p> <p>I am wondering what other libraries implement regularised linear model estimation extremely efficiently. I don't necessarily need a distributed solution (it can estimate on one CPU), and it is fine for me to load all the data into RAM (so an <code>R</code> solution may be fine, if one exists that is somehow faster than <code>glmnet</code>). It also does not necessarily need to be Lasso or Elastic Net.</p> <p>I can use Java, Python or R.</p> <p>My data are pretty messy: some columns are sparse, some have heavy autocorrelation, etc. </p>
33,249
<p>I have a data set in which I want to examine a hypothesis, and network analysis should probably be able to confirm or reject my theory.</p> <p>I have a list of products and a group of people who pass the products to each other. Anyone can sell a product to another person at any price he wants (hopefully someone will buy it).</p> <p>So, imagine a chain of product transfers between the people. Besides the sale price, a seller also takes a percentage of the sale made by the next person who re-sells the product. And also, if I am the first person ever to sell a product, I take a percentage of every subsequent sale, forever.</p> <p>Example:</p> <p>A -> B -> C</p> <p>Person A takes the money from the sale to person B. Person B takes the money from the sale to person C, and person A takes a percentage of this sale as both the previous owner and the creator of the chain. If person C sells the product to someone else, only person B takes the previous-owner percentage, but person A still takes the creator's percentage.</p> <p>I believe that there are people who try to exploit the system and create smaller or bigger circles to take advantage of these percentages on subsequent sales while "hiding" themselves.</p> <p>Is there any way to identify those people?</p>
74,066