question (string, lengths 37 to 38.8k)
group_id (int64, 0 to 74.5k)
<p>I have been running a multiple moderator analysis in a pretty simple model.</p> <p>X consists of Google search queries normalized to 0-100, Y is the number of new car registrations in one country, and there is one moderator.</p> <p>After checking the dependent variable Y for normality, I found that it is skewed to the left and not normally distributed.</p> <p>Here are my two questions:</p> <ol> <li>Can I just go on with a moderator analysis via multiple regression?</li> <li>There is an outlier in the dependent variable that could be causing the violation of normality. Must I exclude this observation, or can I keep it?</li> </ol>
36,799
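<p>A minimal R sketch of the moderated-regression setup described in the question above, using hypothetical names (<code>y</code> = registrations, <code>x</code> = search interest, <code>m</code> = the moderator, <code>dat</code> = the data frame); this is only an illustration of the model form, not a recommendation:</p>

<pre><code>## hypothetical variable names; dat holds y, x and the moderator m
fit = lm(y ~ x * m, data = dat)   # x*m expands to x + m + x:m
summary(fit)
plot(fit, which = 2)              # normal QQ plot of the residuals
</code></pre>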
<p>I would like to evaluate the goodness-of-fit of the following (<em>Pareto</em>-like) distribution: $$ f(r) = \sigma \centerdot r^{-\rho} $$ The function estimates the population of cities given the rank $r$ in a popularity ranking.</p> <p>I have not estimated the parameters ($\sigma$ and $\rho$) from the sample. However, I am unsure how to apply <em>Kolmogorov-Smirnov</em> (or similar tests) because I do not know the <em>CDF</em> of $f(r)$.</p> <p>How can I solve this problem?</p>
74,130
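<p>For the Pareto-like density in the question above, one rough way to obtain a CDF is to normalise $f(r)$ over the rank range (the constant $\sigma$ cancels) and pass the resulting function to a KS test. An R sketch under assumed values for $\rho$ and the maximum rank (both hypothetical):</p>

<pre><code>rho  = 1.2                        # assumed exponent
rmax = 500                        # assumed largest rank
pzipf = function(q) {             # CDF from normalising r^(-rho) on [1, rmax]
  dens = function(u) u^(-rho)
  integrate(dens, 1, q)$value / integrate(dens, 1, rmax)$value
}
ranks = sample(1:rmax, 100, replace = TRUE)   # placeholder data
ks.test(ranks, Vectorize(pzipf))  # ranks are discrete, so expect a ties warning
</code></pre>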
<p>I am curious to know exactly what the (possible) differences are between inductive and deductive statistical inference in applied statistics.</p> <p>Suggestions for good resources to learn their differences, pros, and cons properly are greatly appreciated.</p>
36,801
<p>I have several markets across the US where a marketing program was launched, and I want to compare the mean weekly unit sales before and after the launch. I'm using the 10 weeks prior to launch as my baseline and the 10 weeks post-launch as my result. At the end I will use a paired t-test to compare the change relative to the control markets (not in the program).</p> <p>However, my question is for the interim. Management wants to see updated results every week.</p> <ol> <li>Is it valid to recompute statistical significance each week while the program is still running?</li> <li>Which test is appropriate? The samples are linked, so it should be a paired t-test, but that requires complete pairs of observations, which I don't have until the end.</li> </ol>
74,131
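<p>For reference, the paired comparison of pre/post means per market that the question above describes looks like this in R (all numbers hypothetical, one entry per market):</p>

<pre><code>pre  = c(120, 95, 130, 110)   # mean weekly units, 10 weeks before launch
post = c(128, 99, 141, 115)   # mean weekly units in the post-launch window
t.test(post, pre, paired = TRUE)
</code></pre>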
<p>Its been a while since I did any serious statistics. I have been reading about contingency tables recently and it seems like they may offer a solution to my problem. There are people on here that know more about statistics than what I can ever expect to know, so rather than trying to "discover" things by myself, (and wasting time in the process), I decided to come here first, explain what I'm trying to do and ask the gurus here first, if it makes any sense, and second, if there is a better (and probably more simple/robust) way of achieving what I'm trying to do (highly likely I suspect).</p> <p>Here is an overview of what I am trying to model:</p> <p>I have a process which at any time, can be in one of N states. I have identified 4 other 'factors' that I believe are predictors of the state in which the system will be. The 'factors' are a mixture of categorical and real variables.</p> <p>I am thinking of creating a multidimensional contingency table like this:</p> <pre><code>Factor1, # Dimension 1 Factor2, # Dimension 2 ... Factor4, State1 | State2 | State3 | ... | StateN | </code></pre> <p>where the value in a cell is the count of the number of times that the system has been in that state given the 'levels' of the four factors.</p> <p>I am hoping to use this table then to build a probabilistic model (maybe using a suitable logistic function?) to be able to answer questions of the form:</p> <p>What is the probability of the system changing from state <em>i</em> to state <em>j</em>, given Factor1, .. Factor4 are at a certain specified level?</p> <p>Am I able to use a contingency table to do this?</p> <p>Last but not the least, I don't know if contingency tables are part of non parametric statistics - but I hope there are no strong assumptions of normality of the variables etc. If there are any strong assumptions required for the use of a contingency table, I will be grateful if someone could alert me of the fact, and hopefully suggest another method that can allow me to model the probabilities in the manner I described above.</p> <p>PS: If I have fudged up any of the terms in my question, please advise so I can edit my question to make it more clear. I know that the use of 'factors' and 'levels' in my question may be a tad confusing, but I could not think of any better labels.</p>
36,803
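<p>A sketch of the kind of probabilistic model described in the question above, assuming the counts are arranged with one row per combination of factor levels and state (the data frame and all names here are hypothetical):</p>

<pre><code>library(nnet)   # multinom()
## transitions: data frame with columns state, factor1..factor4, count
fit = multinom(state ~ factor1 + factor2 + factor3 + factor4,
               data = transitions, weights = count)
## predicted state probabilities for a specified combination of factors
predict(fit, newdata = data.frame(factor1 = "A", factor2 = 1.5,
                                  factor3 = "low", factor4 = 0.2),
        type = "probs")
</code></pre>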
<p>Say I have a random vector $Y\sim N(X\beta,\Sigma)$ and $\Sigma\neq\sigma^2 I$. That is, the elements of $Y$ (given $X\beta$) are correlated.</p> <p>The natural estimator of $\beta$ is $(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}Y$, and $\text{var}(\hat{\beta})=(X'\Sigma^{-1}X)^{-1}$.</p> <p>In a design context, the experimenter can fiddle with the design, which will result in a different $X$ and $\Sigma$ and thus a different $\text{var}(\hat{\beta})$. To choose an optimal design, I see that people often try to minimize the determinant of $(X'\Sigma^{-1} X)^{-1}$; what is the intuition behind this?</p> <p>Why not, say, minimize the sum of its elements?</p>
74,132
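<p>As a concrete illustration of the criterion being asked about, a small R sketch that computes $\det\left((X'\Sigma^{-1}X)^{-1}\right)$ for two toy designs (the matrices are placeholders, not part of the question):</p>

<pre><code>d_crit = function(X, Sigma) det(solve(t(X) %*% solve(Sigma) %*% X))
X1 = cbind(1, c(-1, 0, 1))     # toy design 1
X2 = cbind(1, c(-1, -1, 1))    # toy design 2
Sigma = diag(3)                # placeholder error covariance
c(design1 = d_crit(X1, Sigma), design2 = d_crit(X2, Sigma))
</code></pre>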
<p>My apologies if this has been answered here. It's my first time here.</p> <p>I am a developer by trade. I am not really into this thing, but I was asked to do some data simulation using SPSS. But I am not sure what significance level to use. I was given a small data set of 24 cases. This is medical research so I assume I have to use .01.</p> <p>I just don't know if it is proper to change the default level of .05 in SPSS given I have only a small data set. Somebody please shed some light.</p>
36,804
<p>I asked a question earlier in the forum about auto.arima: <a href="http://stats.stackexchange.com/questions/68261/performance-evaluation-of-auto-arima-in-r-and-ucm-on-one-dataset">Performance evaluation of auto.arima in R and UCM on one dataset</a>. auto.arima produced a strange forecast; upon looking further at the code, I did not find anything wrong in my R code (see below). This seems a very straightforward problem. If auto.arima does not fit a simple, straightforward dataset, I would be very cautious about using this function to fit more complicated datasets. I would encourage using other tools/functions and verifying the forecast.</p> <pre><code>library(fma)       ## provides the 'eggs' series
library(forecast)  ## auto.arima(), forecast()

plot(eggs)

## Hold out 10 data points - 1984 thru 1993
egg_price = ts(eggs, start = 1900, end = 1983)

## Fit arima model
fit &lt;- auto.arima(egg_price)
fcast &lt;- forecast(fit, h = 10)
plot(fcast)
</code></pre>
49,926
<p>Many of the questions I've posted on SE in the last month have been aimed at helping me solve this particular problem. The questions have all been answered, but I still can't come up with a solution. So, I figured that I should just ask the problem I'm trying to solve directly.</p> <p>Let $X_n \sim F_n$, where $F_n = (1-(1-F_{n-1})^c)^c$, $F_0 = x$, $c\geq 2$ (integer), and every $F_n$ is a cdf over $(0,1)$.</p> <p>I want to prove that $\mathbb{E}X_n$ decreases with $n$ for all $c$ (or even, for any particular $c$)! I can show that $F_n$ converges to a Dirac mass at the unique solution $x_c$ of $x_c = (1-(1-x_c)^c)^c$. For $c=2$, $x_2 = (3-\sqrt{5})/2 \approx .38$. When looking at a plot of the cdfs for increasing $n$ with the same $c$, all the cdfs cross at $x_c$. The value of $F_n(x)$ decreases for values of $x$ less than $x_c$ and increases for values of $x$ greater than $x_c$ (as $n$ increases), converging to a vertical line at $x_c$.</p> <p>Below is a plot of $\mathbb{E}X_n$ for $n = 1$ to $40$ for $c = 2$ to $7$. It is of course a discrete plot, but I have the lines joined for ease of viewing. To generate this plot, I used NIntegrate in Mathematica, though I needed to apply it to $1-F^{-1}_n$, as for some reason Mathematica couldn't generate answers for high values of $n$ with the original function. The two should be equivalent, as per Young's theorem, $\int_0^1F(x)\,dx = \int_0^1 1-F^{-1}(x)\,dx$. In my case, $F^{-1}_n(x) = 1-(1-(F^{-1}_{n-1}(x))^{\frac{1}{c}})^{\frac{1}{c}}$, $F^{-1}_0(x) = x$.</p> <p><img src="http://i.stack.imgur.com/GDFU6.png" alt="enter image description here"></p> <p>As you can see, $\mathbb{E}X_n$ moves very quickly to within a minute distance of its fixed point $x_c$. As $c$ increases, the fixed point decreases (eventually it will go to 0).</p> <p>So, it certainly SEEMS to be true that $\mathbb{E}X_n$ decreases with $n$ for all $c$. But I can't prove it. Can anyone help me out? (Again, I'd be somewhat happy with even just a single $c$.) And if you can't, but you have insight as to why this particular problem may be unsolvable, please share that insight as well.</p>
49,160
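<p>A small R sketch that reproduces the numerical behaviour described in the question above (it only checks the claim, it does not prove it), using the stated recursion $F_0(x)=x$, $F_n=(1-(1-F_{n-1})^c)^c$ and $\mathbb{E}X_n=\int_0^1(1-F_n(x))\,dx$:</p>

<pre><code>Fn  = function(x, n, c) { for (i in seq_len(n)) x = (1 - (1 - x)^c)^c; x }
EXn = function(n, c) integrate(function(x) 1 - Fn(x, n, c), 0, 1)$value
round(sapply(1:10, EXn, c = 2), 4)  # appears to decrease towards (3 - sqrt(5))/2
</code></pre>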
<p>I've been studying both discrete- and continuous-time Markov chains under stationarity assumption. Now I'm trying to move to non-stationary Markov chains. I did some google search and checked both Ross and Karlin on stochastic processes, but couldn't find anything. I'm hoping that someone can provide me with some references on this topic. Much appreciated.</p>
74,133
<p>When teaching regression, I used to do an exercise where students would try to guess where the line of best fit is on a scatterplot, and get the sums of squares. They'd move the line around, see how the slope and intercept changed, and see how the sum of squared residuals changed.</p> <p>It shows that the line of best fit really does minimize the sums of squares, and it's fun, because you have a little competition to see who can get the lowest sum of squares. </p> <p>(It also teaches a little about how iterative approaches work.) </p> <p>This relied on a horrible, horrible feature of Excel - that you could click on a graph and drag a point, and it would update the data. (Not that we need a reason to dislike Excel for data analysis).</p> <p>I've a vague memory of seeing a java (possibly) app, many years ago that did this, but I can't find it now. Is there something else out there?</p>
74,134
<p>I have this statement, but I want to be able to add a probability statement to it, like "I'm 87% sure.."</p> <p>Here is the data I have</p> <blockquote> <p>I'm 100.00% sure that grpn will go down the next day, because it's happened 3 of the last 3 times.<br> Min move of 0.66%<br> Max move of 16.54%<br> Avg move of 8.74% </p> </blockquote> <p>I want to say (obviously replacing X, Y, and Z)</p> <blockquote> <p>I'm 100.00% sure that grpn will go down the next day, because it's happened 3 times. I'm X% sure it will move at least .66% and Y% sure it will move 8.7% and Z% sure it will move 16.5%.</p> </blockquote>
36,808
<p>I am using a relevance vector machine as implemented in the kernlab package in R, trained on a dataset with 360 continuous variables (features) and 60 examples (also continuous, so it's a relevance vector regression).</p> <p>I have several datasets with equivalent dimensions from different subjects. It works fine for most of the subjects, but with one particular dataset I get these strange results:</p> <p>When using leave-one-out cross-validation (so I train the RVM and then try to predict one observation that was left out of the training), most of the predicted values are just around the mean of the example values. So I really don't get good predictions, just values slightly different from the mean.</p> <p>It seems like the RVM is not working at all; when I plot the fitted values against the actual values, I see the same pattern: predictions around the mean. So the RVM is not even able to predict the values it was trained on (for the other datasets I get correlations of around .9 between fitted and actual values).</p> <p>It seems that I can at least improve the fit (so that the RVM is at least able to predict the values it was trained on) by transforming the dependent variable (the example values), for example by taking its square root.</p> <p>So this is the output for the untransformed dependent variable:</p> <p>Relevance Vector Machine object of class "rvm" Problem type: regression </p> <pre><code>Linear (vanilla) kernel function.
Number of Relevance Vectors : 5
Variance : 1407.006
Training error : 1383.534902093
</code></pre> <p>And this is the output if I first transform the dependent variable by taking the square root:</p> <p>Relevance Vector Machine object of class "rvm" Problem type: regression </p> <pre><code>Linear (vanilla) kernel function.
Number of Relevance Vectors : 55
Variance : 1.711355
Training error : 0.89601609
</code></pre> <p>How is it that the RVM results change so dramatically, just by transforming the dependent variable? And what is going wrong when an RVM just predicts values around the mean of the dependent variable (even for the values and observations it was trained on)?</p>
74,135
<p>When preparing a summary document for policy-makers, it's fairly common to include a graphic that represents how the optimal policy solution varies with two variables. It may be the result of a formal or informal optimisation analysis. Below is an example of such a graphic. The graphic partitions the chart area into contiguous areas, with each area indicating that combinations of (x,y) within that area all share an optimal policy solution.</p> <p>What's the proper name for such data visualisation? And is there an easy way in Matlab (or even Excel) to generate such graphics relatively automatically from data, without having to just draw every component manually as a custom shape (because if I was going to do that, I'd just use Illustrator)?</p> <p>For now, please ignore the "CALM" / "SEGREGATE" labels and arrows - I'm interested in the partitioning of the chart into separate areas, and the distinct shading and labelling of each area. I'm not assuming this graph was generated from data: it's just to illustrate the chart style, nothing more. I want to generate a similar style of graph from data. As with the graph below, some of the boundaries may be non-linear.</p> <p><img src="http://i.stack.imgur.com/2LwvN.jpg" alt="enter image description here"></p>
74,136
<p>I am new to R. I am building a predictive model with the gbm package. My problem is that I retrieve different results for data taken from the data frame that was used to build the model and for a separate data frame with the same values.</p> <p>I randomly divide my data into two sets; the training set is loaded into <code>head</code>:</p> <blockquote> <p>head &lt;- read.csv(...)</p> </blockquote> <p>I build a model with gbm:</p> <blockquote> <p>fit1000x3 &lt;- gbm(V1 ~ V2+V3+V4+V5+V6+V7+V8+V9+V10+V11, data=head, n.trees=1000, distribution="gaussian", interaction.depth=3, bag.fraction=0.5, train.fraction=1.0, shrinkage=0.1, keep.data=TRUE)</p> </blockquote> <p>When I create a data frame with values equal to head[1,]:</p> <blockquote> <p>xxx &lt;- data.frame(V1=...)</p> </blockquote> <p>I receive different values for:</p> <blockquote> <p>predict(fit1000x3, newdata=head[1,], n.trees=100)</p> </blockquote> <p>and</p> <blockquote> <p>predict(fit1000x3, newdata=xxx, n.trees=100)</p> </blockquote> <p>Here is the series of commands I have run:</p> <pre>
> head &lt;- read.csv(...)
> fit1000x3 &lt;- gbm(V1 ~ V2+V3+V4+V5+V6+V7+V8+V9+V10+V11, data=head, n.trees=1000,
      distribution="gaussian", interaction.depth=3, bag.fraction=0.5,
      train.fraction=1.0, shrinkage=0.1, keep.data=TRUE)
Iter   TrainDeviance   ValidDeviance   StepSize   Improve
   1          0.1707            -nan     0.1000    0.0152
   2          0.1581            -nan     0.1000    0.0122
   3          0.1478            -nan     0.1000    0.0100
   4          0.1395            -nan     0.1000    0.0079
   5          0.1326            -nan     0.1000    0.0067
   6          0.1267            -nan     0.1000    0.0056
   7          0.1211            -nan     0.1000    0.0052
   8          0.1168            -nan     0.1000    0.0039
   9          0.1133            -nan     0.1000    0.0032
  10          0.1103            -nan     0.1000    0.0027
 100          0.0773            -nan     0.1000   -0.0002
 200          0.0734            -nan     0.1000   -0.0002
 300          0.0714            -nan     0.1000   -0.0002
 400          0.0695            -nan     0.1000   -0.0002
 500          0.0681            -nan     0.1000   -0.0002
 600          0.0672            -nan     0.1000   -0.0002
 700          0.0663            -nan     0.1000   -0.0002
 800          0.0655            -nan     0.1000   -0.0002
 900          0.0648            -nan     0.1000   -0.0001
1000          0.0643            -nan     0.1000   -0.0001

> predict(fit1000x3, newdata=head[1,], n.trees=100)
[1] 0.1420456

> head[1,]
  V1   V2                            V3  V4 V5   V6   V7            V8       V9
1  0 0.35 m01xrfn2 Effective resolution 5.1 Nu null null niceCharacter unitName
   V10    V11
1 null nextag

> xxx &lt;- data.frame(V1=0, V2=0.35, V3="m01xrfn2 Effective resolution", V4="5.1", V5="Nu",
      V6="null", V7="null", V8="niceCharacter", V9="unitName", V10="null", V11="nextag")
> xxx
  V1   V2                            V3  V4 V5   V6   V7            V8       V9
1  0 0.35 m01xrfn2 Effective resolution 5.1 Nu null null niceCharacter unitName
   V10    V11
1 null nextag
> head[1,]
  V1   V2                            V3  V4 V5   V6   V7            V8       V9
1  0 0.35 m01xrfn2 Effective resolution 5.1 Nu null null niceCharacter unitName
   V10    V11
1 null nextag

> predict(fit1000x3, newdata=xxx, n.trees=100)
[1] 0.2068787

> str(head[1,])
'data.frame':   1 obs. of  11 variables:
 $ V1 : int 0
 $ V2 : num 0.35
 $ V3 : Factor w/ 113 levels "m01t_ Contains",..: 4
 $ V4 : Factor w/ 884 levels ".","0","01","02",..: 503
 $ V5 : Factor w/ 11 levels "aN","aNu","aU",..: 4
 $ V6 : Factor w/ 4 levels "null","propertyAlias",..: 1
 $ V7 : Factor w/ 9 levels "attach","block",..: 6
 $ V8 : Factor w/ 8 levels "attach","block",..: 5
 $ V9 : Factor w/ 4 levels "null","propertyAlias",..: 4
 $ V10: Factor w/ 2 levels "null","undef": 1
 $ V11: Factor w/ 368 levels "101reviews","123football",..: 223

> str(xxx)
'data.frame':   1 obs. of  11 variables:
 $ V1 : num 0
 $ V2 : num 0.35
 $ V3 : Factor w/ 1 level "m01xrfn2 Effective resolution": 1
 $ V4 : Factor w/ 1 level "5.1": 1
 $ V5 : Factor w/ 1 level "Nu": 1
 $ V6 : Factor w/ 1 level "null": 1
 $ V7 : Factor w/ 1 level "null": 1
 $ V8 : Factor w/ 1 level "niceCharacter": 1
 $ V9 : Factor w/ 1 level "unitName": 1
 $ V10: Factor w/ 1 level "null": 1
 $ V11: Factor w/ 1 level "nextag": 1
</pre>
36,809
<p>I am trying to build an index that summarizes health care quality in different departments for a number of hospitals. I have selected a number of variables, each representing quality in a medical specialization.</p> <p>The weighting scheme is quite obvious to me. Since the index will be related to costs on an aggregated level, it makes sense to weight the different quality indicators according to the specializations' costs.</p> <p>I have another problem, however. Different indicators are measured on different scales. The usual solution is to Z-standardize them, but this introduces a problem: an indicator with irrelevant differences (say, an indicator where hospitals differ but the difference is deemed medically non-relevant) will have the same effect on the index as an indicator with medically relevant differences and the same costs. How should I handle this?</p> <p>To explain it further: say I have reoperations after hip surgery; it has a cost of 200 and has values between 0.1 and 0.2 with 0.15 as mean. All results are deemed very good here, and the difference is mostly related to sample size.</p> <p>I also have blood sugar levels that go from 100 to 200 with 150 as mean; this represents diabetes and has 200 as cost weighting too. This is, however, a medically relevant difference and not related to sample size.</p> <p>Using Z-standardization, the highest value in the hip surgery indicator will have about the same effect on the index component as the highest value in the blood sugar levels, while one is interesting and the other is not.</p> <p>Any ideas on how to handle this?</p>
74,137
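<p>For concreteness, the cost-weighted z-score index described in the question above can be written in a couple of lines of R (<code>ind</code> is a hypothetical hospitals-by-indicators matrix, <code>cost</code> the vector of cost weights; this only restates the construction, it does not address the relevance problem):</p>

<pre><code>z = scale(ind)                             # z-standardise each indicator (column)
index = as.vector(z %*% cost / sum(cost))  # cost-weighted average of z-scores
</code></pre>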
<p>I am working on a model which predicts a binomial variable. I have millions of records and hundreds of variables to sample from. I have millions of records from individuals for each of the past several years. Most of the variables have data specific to the individual and to the year. However, I do have a few variables that vary only by year and not across individuals. This means that if I have ten years' worth of data, such a variable takes only 10 unique values within those millions of records. Furthermore, I have multiple year-only variables that I would like to test to find out which ones give the best predictive results. Because there are so few different values within these variables, and they are extremely highly correlated, they give dramatically different predictive results when substituted in and out.</p> <p>Will random forest analysis allow me to substitute these sets of year variables in and out in order to come up with an average of models based on these different variables?</p> <p>If this is acceptable, can I, for instance in R, specify which variables not to substitute, and which variables to substitute while keeping only one of them at a time?</p>
35,031
<p>I am reading about influence diagrams, and I want to know how their utility function is calculated.</p> <p>In none of the examples listed in the literature am I able to find the formula for the utility function.</p>
74,138
<p>I am doing my dissertation involving a Cox model, and I would like to understand how to interpret the survival table at the mean of the covariates. How do I determine the survival function from the output?</p> <pre><code>Survival Table
                     At mean of covariates
Time   Baseline   Survival     SE    Cum Hazard
  0      .023       .991      .006      .009
  3      .033       .987      .007      .013
  5      .044       .982      .009      .018
  8      .058       .977      .011      .024
  9      .088       .965      .014      .036
 11      .107       .957      .017      .044
 12      .128       .949      .019      .052
 14      .173       .932      .024      .071
 24      .232       .910      .033      .095
 34      .326       .875      .048      .133
 45      .481       .821      .068      .197
 46      .668       .761      .088      .273
 49      .946       .679      .109      .387
 56     1.769       .485      .178      .723

Correlation Matrix of Regression Coefficients
                 Sex   Birthweight  MartenalAge  Breatsfeeding
Birthweight    -.151
MartenalAge    -.108     -.077
Breatsfeeding   .030      .222         .057
Immunisation    .240     -.085         .016         -.168

Covariate Means
                 Mean
Sex              .552
Birthweight      .129
MartenalAge     3.067
Breatsfeeding    .238
Immunisation     .124
</code></pre> <p>This is the output from my Cox model; it contains the above covariates. Immunisation and breastfeeding are significant, but I want to interpret the survival and hazard functions for young children from 0-60 months.</p>
25,389
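<p>For reference (this is standard Cox-model notation, not something printed in the SPSS output above): the Survival and Cum Hazard columns are evaluated at the covariate means $\bar x$, and the curve at other covariate values $x$ is usually obtained from them via</p> <p>$$S(t\mid x)=S(t\mid \bar x)^{\exp\{\beta'(x-\bar x)\}},\qquad H(t\mid x)=H(t\mid \bar x)\exp\{\beta'(x-\bar x)\}.$$</p>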
<p>I turn to this forum for advice with the following problem. If you could please shed some light on any aspect of this question I'd be very grateful.</p> <p><strong>Problem description:</strong><br> I'm trying to use an SVM to segment a grayscale image of a puncture in a polymer (original res. 1280x1024; can't post it, no reputation :)</p> <p>Now, I know this probably isn't the most conventional way to approach this problem, but I'd still like to try whether it is possible in any way.</p> <p><strong>My work so far:</strong><br> I think of the segmentation problem as follows: classify a given pixel based on its value and neighborhood pixel values, i.e. determine whether the pixel belongs to the foreground (puncture) or the background (anything other than the puncture).</p> <p>I labeled this image using GIMP for the SVM training purposes, i.e. I marked the location of the puncture so that each pixel is given a class (1 - puncture, -1 - background), and tried to extract some simple features:</p> <ol> <li>central pixel value + neighborhood pixel values, with varying size of neighborhood</li> <li>central pixel value + differences between the central pixel value and neighborhood pixel values</li> <li>central pixel value + 2D FFT spectrum of the neighborhood (amplitude and phase components)</li> <li>standard deviation of the neighborhood</li> </ol> <p>Note that I varied the neighborhood size from 3x3 = 9 dimensions to 11x11 = 121 dimensions. I couldn't go higher (I use MATLAB for this, and I'm getting out-of-memory errors). None of these were found sufficiently discriminatory. (I used PCA to inspect them, and calculated the between-class distance of centroids and their respective class covariance matrices.)</p> <p><strong>So, at last, to my question:</strong><br> Can you think of any useful features to use for this task? (I was thinking some measures of homogeneity of the neighborhood would be helpful, since the inside of the puncture is more or less uniform in brightness, but haven't found any. Also maybe some texture features would be helpful, but who knows.)</p> <p>Cheers, Jacob</p>
36,817
<pre><code>library(mvtnorm)
set.seed(1)
x &lt;- rmvnorm(2000, rep(0, 6), diag(c(5, rep(1,5))))
x &lt;- scale(x, center=T, scale=F)
pc &lt;- princomp(x)
biplot(pc)
</code></pre> <p><img src="http://i.stack.imgur.com/Jj6C7.png" alt="enter image description here"></p> <p>There are a bunch of red arrows plotted. What do they mean? I knew that the first arrow, labelled "Var1", should be pointing in the most varying direction of the data set (if we think of it as 2000 data points, each being a vector of size 6). I also read somewhere that the most varying direction should be the direction of the 1st eigenvector.</p> <p>However, reading the code of biplot in R, the line about the arrows is:</p> <pre><code>if(var.axes) arrows(0, 0, y[,1L] * 0.8, y[,2L] * 0.8, col = col[2L],
</code></pre> <p>where <code>y</code> is actually the loadings matrix, which is the eigenvector matrix. So it looks like the 1st arrow is actually pointing from <code>(0, 0)</code> to <code>(y[1, 1], y[1, 2])</code>. I understand that we are trying to plot a high-dimensional arrow onto a 2D plane. That's why we are taking the 1st and 2nd elements of the <code>y[1, ]</code> vector. However, what I don't understand is:</p> <p>Shouldn't the 1st eigenvector direction be the vector denoted by <code>y[, 1]</code>, instead of <code>y[1, ]</code>? (Again, here <code>y</code> is the eigenvector matrix, obtained by PCA or by eigendecomposition of <code>t(x) %*% x</code>.) That is, the eigenvectors should be column vectors, not those horizontal vectors.</p> <p>Even though we are plotting them on a 2D plane, shouldn't we draw the 1st direction from <code>(0, 0)</code> pointing to <code>(y[1, 1], y[2, 1])</code>?</p>
36,818
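<p>A quick way to look at what the arrows are built from, assuming the <code>pc</code> object from the code above:</p>

<pre><code>L = unclass(pc$loadings)   # rows = original variables, columns = eigenvectors
L[1, 1:2]                  # the pair used (up to scaling) for the "Var1" arrow
L[, 1]                     # the first eigenvector itself, a column
</code></pre>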
<p>I am conducting an ordinal logistic regression. I have an ordinal variable, let's call it Change, that expresses the change in a biological parameter between two time points 5 years apart. Its values are 0 (no change), 1 (small change), 2 (large change).</p> <p>I have several other variables (VarA, VarB, VarC, VarD) measured between the two time points. My intention is to perform an ordinal logistic regression to assess whether the magnitude of Change is more strongly associated with VarA or VarB. I'm really interested only in VarA and VarB, and I'm not trying to create a model. VarC and VarD are variables that I know <em>may</em> affect Change, but probably not very much, and in any case I'm not interested in them. I just want to know whether the association over the period of observation (5 years) was stronger for VarA or for VarB.</p> <p>Would it be wrong not to include VarC and VarD in the regression?</p>
74,139
<p>I'm a bit of a novice at maths and am trying to get my head around a problem.</p> <p>I have 3 independent variables which affect 1 dependent variable. I want to create a 4D model which will give me the 4th dimension when I give it an (x, y, z) triplet. </p> <p>I am programming in Java and already have a regression function which will take a set of independent variables and give me coefficients of those independent variables which best fit the data supplied. </p> <p>What I am trying to figure out is which independent variables to use.</p> <p>I have tried various cubic functions, with independent variables something like: {1 + x + y + z + xx + xy + xz + yy + yz + zz + xxx + xxy + xxz + yyy + yyx + yyz + zzz + zzy + zzx + xyz}</p> <p>Then when the resulting model looked a bit wrong I thought, ah maybe the x and y values don't mean anything when multiplied, so took out the variables where x and y were together. Now it still isn't right and I'm worried I'm just going about it in entirely the wrong way. Maybe there's an exponential in there somewhere?</p> <p>Is there some mathematical method to finding exactly which variables I should be using?</p> <p>Cheers!</p> <hr> <p>The data I already have is like this. It's to do with calculating final scores in a cricket game based on which batsmen are in and how far through the game we are, the final score is the dependent variable:</p> <p>X axis ranges from 0 to 10 inclusive (the order of the first batsman in). Y axis ranges from 0 to 10 inclusive (the order of the second batsman in). Z axis ranges from 0 to 19 inclusive (the over we are in, basically means how far through the game we are).</p> <p>The lower x and y are, the higher the final score will be, as the team have better batsmen still in. The higher the over (when x and y are the same), the higher the final score will be, because the batsmen have lasted longer and so should have their eye in.</p> <p>I guess it's "what I expect the dependent variable to be" which is the question. How does each parameter effect the final score. I can post some sample data if you want.</p> <p>I have calculated a data point for each (X, Y, Z) combination, so can't get more. I have data from all the cricket games, and each data point is the average final score of games where this situation has occurred. Some situations ((x, y, z) triples) are far more likely to occur (in more average games) and have been weighted in the regression function accordingly.</p>
37,613
<p>I was wondering what the relations and differences are between a pivotal statistic and a distribution-free statistic.</p> <ol> <li><p>From <a href="http://en.wikipedia.org/wiki/Pivotal_quantity" rel="nofollow">Wikipedia</a>:</p> <blockquote> <p>a pivotal quantity or pivot is a function of observations and unobservable parameters whose probability distribution does not depend on the unknown parameters <a href="http://en.wikipedia.org/wiki/Pivotal_quantity" rel="nofollow">1</a> (also referred to as nuisance parameters).</p> </blockquote></li> <li>If I understand correctly, a statistic T is said to be distribution free if the distribution of T(X) doesn't depend on the distribution of X. An example is the Kolmogorov-Smirnov test statistic.</li> </ol> <p>My understanding of their differences is: a pivotal statistic may still depend on the form of the distribution, but not on the value of its parameter, whereas a distribution-free statistic depends on neither the form nor the parameter of the distribution. Am I correct?</p> <p>My question comes from <a href="http://math.stackexchange.com/a/324653/1281">a reply at MSE.</a> I would also appreciate it if you could answer my question there.</p> <p>Thanks and regards!</p>
74,140
<p>Quick background:</p> <p>I am working on a political science project that involves analyzing the impact of different variables on the extent to which a candidate mentions other users when he or she tweets.</p> <p>One of these variables is whether the candidate answers the Political Courage Test (PCT). If he/she does, the value is 1. If they don't, it's 0.</p> <p>Another variable is the amount of money that the candidate raises over the course of his/her campaign.</p> <p>Someone more experienced crunched the data for me via a regression, and sent me the following results:</p> <pre><code>PCT
  Coefficient: 9.580    Standard error: 8.144

Amount of $$$ Raised
  Coefficient: 0.000    Standard error: 0.000

PCT * Amount of $$$ Raised
  Coefficient: 0.000    Standard error: 0.000
</code></pre> <p>I have basically no background in statistics, so I am at a loss as to how to interpret this outcome effectively.</p> <p>From what I can tell, neither the amount of money raised nor whether the candidate answered the PCT has much of an effect on the tweets, but I am confused about the third one (PCT * Money Raised), which I am told is an interaction. What exactly is that saying?</p> <p>Thank you in advance for your help.</p>
74,141
<p>I have some data gathered from a survey conducted within my city. All responses include an approximate geo location of where they were gathered (accurate to probably a couple of hundred yards, which is relatively small), and things like the respondent's age, sex, income range, number of dependents, etc. There are approx. 4000 responses.</p> <p>What I would like is to be able to generate what I guess you would call a model, so that given a geo point (or box) I could characterize the typical respondent from there (it doesn't have to be terribly rigorous, although some kind of formal confidence measurement would be nice).</p> <p>So, is the right thing to do simply to treat all the gathered attributes separately and say "Well, the age of your typical respondent in that area is m with stdev s, and their income range is ..., etc."?</p> <p>Or is there some better way to analyse the data together to get a better profile of the respondents?</p> <p>Some key phrases to Google would even help at this stage, because I'm a bit lost. I thought this might be "data fusion" but I don't think it is.</p>
74,142
<p>I am doubting myself about which analysis to run for the following: 18 participants were evaluated at 4 time points, with different conditions at each time. They were given scores (on a discrete visual analog scale) by 2 raters.</p> <p>The scores were calculated for a pair of participants; the pairs changed at each time point. I do know which participants comprise each pair.</p> <p>Is this a 2-way repeated measures ANOVA? Some variation of the Friedman test?</p>
37,931
<p>I have an experimental sample of about 1000 values. I need to generate a much larger sample for simulation. I can create samples like this:</p> <pre><code>library(ks)
set.seed(1)
x &lt;- rlnorm(1000)
y &lt;- rkde(fhat=kde(x=x, h=hpi(x)), n=10000, positive = TRUE)
z &lt;- sample(x, 10000, replace = TRUE)
par(mfrow=c(3,1))
hist(x, freq=F, breaks=100, col="red")
hist(y, freq=F, breaks=100, col="green")
hist(z, freq=F, breaks=100, col="blue")
</code></pre> <p>What are the fundamental limitations of using KDE or the bootstrap here? How else can I create such a sample?</p>
36,822
<p>I am required to use the Naive Bayes classifier to classify example 8, to see whether it is poisonous or not.</p> <p><img src="http://i.stack.imgur.com/SnscY.png" alt="enter image description here"></p> <p>I obtained the following results:</p> <p>p(x|Poisonous=Y) = 0.0267857 and</p> <p>p(x|Poisonous=N) = 0.0101989</p> <p>If I am given extra information at a later stage that there is a 0.05 chance of poisonous plants being found (hence a 0.95 chance of them not being found), how should I go about classifying example 8 based on the new data?</p> <p>Any help would be appreciated.</p>
36,823
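<p>If the likelihoods stated above are taken at face value, Bayes' rule simply multiplies each by its class prior (shown here only to make the arithmetic concrete):</p> <p>$$P(\text{Y}\mid x)\propto 0.0267857\times 0.05\approx 0.00134,\qquad P(\text{N}\mid x)\propto 0.0101989\times 0.95\approx 0.00969.$$</p> <p>Normalising the two products gives the posterior probabilities.</p>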
<p>This is probably a far too basic question for this board - but on the other hand, I know I'll get good answers. "Stats 101" is a metaphor, by the way. I'm asking for help with my work, not my homework!</p> <p>I am looking at aggregate financial data for hospitals. I have identified two hospital systems that accumulate unusually large operating surpluses (profits) compared to their peers - in the 8% to 12% range when the standard for a non-profit hospital is 3%. This amounts to hundreds of millions of dollars after expenses. I created a metric by dividing these profits by annual case-adjusted admissions and the results negate volume or mix of patient type as reasons for the difference. I've also looked at expenses and they are about the same as peer hospital, so low expenses is not an explanation, either. This suggests pricing as the remaining reason for the difference.</p> <p>Only aggregate data is available - I do not have case-level data. By simply ranking my list of 85 hospitals, the annual "profit per patient" for these two hospitals rises to the top of the list. The difference between these two hospitals is great enough that I am certain that the variance would be statistically significant if I ran the right test. I'd like to do that - show that it is highly unlikely that this is chance variation.</p> <p>Can you recommend the best test to run on these figures? By the way, I do not have access to SPSS or SAS through my employer, so I'd likely be trying this in Excel or possibly Access.</p>
38,185
<p>I wonder if <a href="http://en.wikipedia.org/wiki/Categorical_data" rel="nofollow">categorical data</a> by definition can only take finitely or countably infinitely many values? And no more i.e. not uncountably many values? </p> <p>Related question: is the distribution of a categorical variable always a discrete distribution or a continuous distribution?</p> <p>Thanks and regards!</p>
74,143
<p>I'm trying to implement an ordered probit model in pymc, and I'm stuck. The model is similar to <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0CCkQFjAA&amp;url=http://www.vision.caltech.edu/visipedia/papers/WelinderEtalNIPS10.pdf&amp;ei=fj17T-CvCoek8QS5--HTBA&amp;usg=AFQjCNEj4UZUHHVyjOQP7_RuWTKzjKSRdw" rel="nofollow">Welinder's "multidimensional wisdom of crowds"</a>, with coders (indexed by i) and documents (indexed by j). Coders assign codes to documents, but the coding process is noisy.</p> <p>We wish to estimate two things. First, $z_j$, the true, underlying value of each document along some latent dimension. Second, $\beta_i$, bias terms revealing how accurately each coder's assessments line up with the group average.</p> <p>This would be pretty easy if codes were on a continuous scale, but the data I have is ordinal. Codes fall in the range 1,2,...,5 .</p> <p>Formally, we can represent this as follows:</p> <p>$x_{ij} \sim Normal( \alpha_i + \beta_iz_j, 1)$</p> <p>$code_{ij} = cut(x_{ij}, w)$</p> <p>Where $cut$ is a cutoff function that assigns values based on the index of the highest cutpoints, $w$, exceeded by $x_{ij}$.</p> <p>So far, so good. The problem with this model is that the cutpoint function is deterministic, and codes are observed. But in pymc (and in other MCMC programs, e.g. JAGS), a deterministic node cannot also be observed. So this model can't be built directly in pymc.</p> <p>It seems that there's probably a way to treat $x$ as deterministic, and $codes$ as a random function of $x$. This would probably make $code$ involve a Categorical node. But I'm not sure how to specify the probability function, and I'm a little worried my whole approach may be off. Can anyone set me straight?</p> <p>Also -- long shot -- if there's anyone who codes in pymc, it'd be great to see source code for this. Ordinal probit/logit is a pretty standard model, but I can't find a pymc example anywhere online.</p>
36,825
<p>For a method calculating expected claims in insurance I have to assume a lognormal distribution. For testing I would use annual cumulated data. With a small sample capped at 20 years, my idea is to use disaggregated data - monthly, or individual claims. Now I have found that the sum of lognormal claims is not lognormally distributed. Are there any ideas to improve the power of the test?</p> <p>Edit: The general aim is to calculate the volatility of claims by maximum likelihood estimation. The test for the distribution is Kolmogorov-Smirnov. The problem is that I would have annual data with as few as 5 years of data, and a cap after 20 years.</p> <p>In pretesting with random samples from lognormal, normal and gamma distributions I get good results if the data is indeed lognormal, even with 5 years of data, but the test will only reject about 5% of the gamma-distributed samples with 5 years of data (in 10,000 samples).</p> <p><img src="http://i.stack.imgur.com/e5Le9.png" alt="Probability of rejecting H0 = sample is lognormally distributed"></p> <p>Here is the code for the simulation; I'd be grateful for comments if there is a problem with the way I set it up.</p> <pre><code>n = 5:20
Loops = 10000
GammaRes = LogNormRes = NormRes = matrix(rep(NA, length(n)*Loops), nrow = Loops)

for(j in 1:Loops)
{
  count = 0
  for(i in n)
  {
    count = count + 1
    GammaVerluste = rgamma(i, shape = 2)
    LogNormVerluste = rlnorm(i)
    NormVerluste = rnorm(i)

    GammaRes[j,count] = ks.test(GammaVerluste, "plnorm")$p.value
    LogNormRes[j,count] = ks.test(LogNormVerluste, "plnorm")$p.value
    NormRes[j,count] = ks.test(NormVerluste, "plnorm")$p.value
  }
}

Alpha = 0.01
DeclineGamma = DeclineNormal = DeclineLogNormal = rep(NA, length(n))
count = 0
for(i in 1:length(n))
{
  DeclineGamma[i] = sum(GammaRes[,i] &lt; Alpha)/Loops
  DeclineNormal[i] = sum(NormRes[,i] &lt; Alpha)/Loops
  DeclineLogNormal[i] = sum(LogNormRes[,i] &lt; Alpha)/Loops
}
</code></pre>
36,826
<p>What are hierarchical priors? </p> <p>How do they differ from the general concept of priors?</p>
36,827
<p>I'm currently storing the following things in my Database about user submitted content:</p> <ul> <li>Downloads: The total Download count of the Content</li> <li>Likes: A user can either like a content or do nothing</li> </ul> <p>How can I determine which Content has got the highest popularity by using these two numbers?</p> <p>I would try something like:</p> <p>$s = \frac{Downloads}{Likes}$</p> <p>and the Content with the lowest Value is the most popular.</p> <p>e.g.:</p> <ul> <li>$\frac{10000}{100} = 100$ not so good</li> <li>$\frac{10000}{1000} = 10$ lower value, better content</li> </ul> <hr> <p>Another approach could be adding both numbers together so:</p> <ul> <li>$10000 + 100 = 10100$ not so good</li> <li>$10000 + 1000 = 11000$ higher value, better content</li> </ul> <p>Would this work and is this "fair" in real life or should I do something different?</p>
36,828
<p>I need to test for correlation of 4 sets of weather parameters between 2 sites. I am not interested in interactions between the parameters. Because no weather parameter is independent from other weather parameters, if I was simply trying to determine if each parameter differed between the 2 sites, I would use a MANOVA followed by multiple-comparisons <em>t</em>-tests, and correct for family-wise error using a Bonferroni method. But since I want to see if the parameters are <em>correlated</em> between the sites, I'm not sure what to do. </p> <p>Is there an overarching test (like MANOVA) that should be applied prior to what amounts to multiple-comparison correlations? Or can I just do multiple Spearman's correlations and interpret from there?</p> <p><strong>EDIT</strong>: I suppose in the long run it doesn't really matter to me if the 2 parameters are statistically significantly correlated. What <em>will</em> matter is the strength of their correlations. That, from what I understand, is subjective (based on the field)...is this correct? In this respect, is there any multiple-comparison sort of thing I should be keeping in mind?</p>
74,144
<p>I have two questions related to cross-validation in LIBLINEAR.</p> <p>I have 1000 documents, from which I take 300 documents for training and the remaining 700 for classification. I train on the 300 documents with parameter -s 0 and two class labels. Then, for prediction, I feed the 700 documents one by one to the classifier with the -b 1 parameter to get the probability; if the probability is greater than a defined threshold, I mark that document as classified, and I feed another document until all 700 documents have been iterated over.</p> <p>1. Is it possible to calculate precision, recall, F-score, and accuracy after all 700 documents are classified based on the above method? If yes, then how? Please give me one example.</p> <p>2. Is it possible to get the number of TP and FP documents for a class label, i.e. to get the TP and FP documents of a label? For example, out of 700 documents, 300 are classified as Label1, 300 are classified as Label2, and 100 are unclassified. How do I then get the TP and FP documents among the 300 Label1 documents, and the same for the 300 Label2 documents? E.g. Label1: TP = Doc1, Doc2, Doc4, Doc5, ...; FP = Doc3, Doc6, Doc9, ..., etc.</p>
76
<p>So, I think that I have a decent grasp of the basics of frequentist probability and statistical analysis (and how badly it can be used). In a frequentist world, it makes sense to ask such a question as "is this distribution different from that distribution", because distributions are assumed to be real, objective and unchanging (for a given situation, at least), and so we can figure out how likely it is that one sample is drawn from a distribution shaped like another sample.</p> <p>In the Bayesian world view, we only care about what <em>we</em> expect to see, given our past experiences (I'm still a bit vague on this part, but I understand the concept of Bayesian updating). If that is so, how can a Bayesian say "this set of data is different from that set of data"?</p> <p>For the purposes of this question, I don't care about statistical significance, or similar, just how to quantify difference. I'm equally interested in parametric and non-parametric distributions.</p>
74,145
<p>I've been reading the <a href="http://en.wikipedia.org/wiki/Levene%27s_test" rel="nofollow">Wikipedia page</a> for Levene's test, and it cites the degrees of freedom as (k - 1, N - k), where k is the number of different groups to which the sampled cases belong, and N is the total number of cases in all groups. However, it does not explain why this is so. There is a very thorough answer <a href="http://stats.stackexchange.com/questions/16921/how-to-understand-degrees-of-freedom">here</a> which would suffice to answer this question in relation to the chi square goodness of fit. However, I have not been able to find a satisfactory answer to the question in relation to Levene's test.</p>
6,937
<p>I am particularly interested in hearing thoughts on what the logical next steps are in the research agenda of people who are interested in "the experimental approach to development economics" or in the evaluation of policy.</p> <p>Many people reject the notion that a randomized controlled trial (RCT) can estimate relevant policy parameters. They generally argue for a more "structural" approach. Is there a good middle ground, and will structuralism begin to take hold?</p>
48,513
<p>Akaike's model selection criterion is usually justified on the basis that the empirical risk of an ML estimator is a biased estimator of the true risk of the best estimator in the parametric family, say the family of linear regressors on an m-dimensional variable, $S_m$.</p> <p>On the other hand, this family, $S_m$, is known to have finite VC dimension ($VC = m+1$). Having finite VC dimension should guarantee that the empirical risk minimizer is asymptotically consistent (Vapnik, "An overview of statistical learning theory").</p> <p>What am I missing?</p> <p>Thanks, Jake</p>
36,839
<p>I am using R. I searched on Google and learnt that <code>kpss.test()</code>, <code>PP.test()</code>, and <code>adf.test()</code> are used to assess the stationarity of a time series.</p> <p>But I am not a statistician who can interpret their results:</p> <pre><code>&gt; PP.test(x)

        Phillips-Perron Unit Root Test

data:  x
Dickey-Fuller = -30.649, Truncation lag parameter = 7, p-value = 0.01

&gt; kpss.test(b$V1)

        KPSS Test for Level Stationarity

data:  b$V1
KPSS Level = 0.0333, Truncation lag parameter = 3, p-value = 0.1

Warning message:
In kpss.test(b$V1) : p-value greater than printed p-value

&gt; adf.test(x)

        Augmented Dickey-Fuller Test

data:  x
Dickey-Fuller = -9.6825, Lag order = 9, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(x) : p-value smaller than printed p-value
</code></pre> <p>I am dealing with thousands of time series; kindly tell me how to check quantitatively whether a time series is stationary.</p>
36,841
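<p>Since the question above mentions thousands of series, here is a rough R sketch of collecting the p-values programmatically (<code>series_list</code> is a hypothetical list of numeric vectors). Note that the null hypotheses differ: ADF and PP test for a unit root, while KPSS tests for stationarity.</p>

<pre><code>library(tseries)   # adf.test(), kpss.test(), pp.test()
res = t(sapply(series_list, function(s) c(
  adf_p  = adf.test(s)$p.value,     # H0: unit root (non-stationary)
  pp_p   = pp.test(s)$p.value,      # H0: unit root (non-stationary)
  kpss_p = kpss.test(s)$p.value)))  # H0: level stationarity
head(res)
</code></pre>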
<p>I have responses to a questionnaire item from a number of people, measured at equidistant timepoints. I wish to fit a growth mixture model (in R, using the LCMM package) to this data to find latent classes. My data looks something like this:</p> <pre><code>ID   item-response   timepoint
------------------------------
 1         3             1
 1         2             2
 1         2             3
 2         2             1
 2         3             2
 2         2             3
 2         1             4
 2         1             5
 2         3             6
 2         2             7
 2         2             8
 2         2             9
 2         1            10
 2         4            11
 2         2            12
 3         1             1
 3         1             2
 3         1             3
 3         1             4
 3         1             5
 .         .             .
 .         .             .
 .         .             .
</code></pre> <p>The item is one of 13 on a questionnaire on mood states. Responses are given on a Likert-scale (1 to 5).</p> <p>A plot of the response curves of the first four individuals looks like this: <img src="http://i.stack.imgur.com/BFVqM.png" alt="Response curves of the first four individuals"></p> <p>I am worried about the fact that the number of measurements per person is not the same. Is this a huge problem for growth mixture models or not so much?</p> <p><strong>[edit] included a column of timepoints</strong></p>
74,146
<p>I have web log analysis data (AWStats) from a university library website. I'm looking at the number of visits per month divided by the number of faculty plus student enrollment (visits per headcount). This shows a downward trend, along with strong seasonality. Also, the undergrad enrollment has gone up steadily the last few years, while the graduate enrollment has stayed flat.</p> <p>Therefore, I am fitting a regression with ARMA errors model, with the ratio of grad student headcount to undergrad headcount as an explanatory variable (since graduate students use the library more than undergrads). My interest is in explaining the downward trend, not forecasting. The time series plots for the response and explanatory variables look very similar, seasonal with spikes in the summer and a downward trend.</p> <p>I have taken regular and seasonal differences for both variables and fit a model with ARMA errors. The estimate for the ratio is significant.</p> <p>My question is, how can I estimate how much of the downward trend the regression variable explains? I don't think it explains all of it.</p> <p>The AIC without the regression term is -20.02, and with the regression term is -35.16. The estimate of the slope is 3.71.</p> <p>I'm more familiar with SAS, and I am using proc arima, but I could use R as well.</p> <p>I want to emphasize that the question is not prediction. As there are more options to gather information online, it is natural that there might be less library usage per person. The question is: at our particular institution, can part of the decline in per-person usage be explained by the fact that the proportion of grad students in the entire student enrollment falls as undergrad enrollment rises? We know, from survey data, that grad students use the library more. Then how much of the trend is attributable to that? 10%? 20%? 50%?</p> <p>I will try posting some graphs, output, etc. when I get a chance.</p>
74,147
<p>Let $X_{n}$ be an $\mathcal F_{n}$-martingale and let $B\in \mathcal B$.<br> Show that $T=\min\{n:X_{n}\in B\}$ is an $\mathcal F_{n}$-stopping time.<br> Here $\mathcal B$ is the Borel $\sigma$-algebra and the filtration is $\mathcal F_{n}=\sigma(X_{1},\dots,X_{n})$. Thanks for the help.</p>
74,148
<p>I was just wondering if someone could help me understand this derivation of the probability generating function for a Poisson distribution (I understand it until the last step):</p> <p>$$\pi(s)=\sum^{\infty}_{i=0}e^{-\lambda}\frac{\lambda^i}{i!}s^i$$ $$\pi(s)=e^{-\lambda}\sum^{\infty}_{i=0}\frac{e^{\lambda s}}{e^{\lambda s}}\frac{(\lambda s)^i}{i!}$$ $$= e^{-\lambda}e^{\lambda s} $$</p> <p>This is a reproduction from some lecture notes, but I'm not sure how it jumps from the second-to-last step to the last step.</p> <p>If someone could show me the intermediate steps I would be very grateful!</p>
36,844
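<p>For what it's worth, the step usually left implicit here is the exponential series $\sum_{i=0}^{\infty} z^{i}/i! = e^{z}$ applied with $z=\lambda s$:</p> <p>$$\pi(s)=e^{-\lambda}\sum^{\infty}_{i=0}\frac{(\lambda s)^i}{i!}=e^{-\lambda}e^{\lambda s}=e^{\lambda(s-1)}.$$</p>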
<p>I have a cycle that I filtered out of an original series using a Baxter deterministic filter. However, the cycle plot still has some noise, and I would like it to be more deterministic and follow a perfect sinusoid. I can run a trig regression of the cycle on $\sin(2\pi\cdot\text{index}/12)$, but I wouldn't know how to test the trig variables' significance, as the model is plagued by autocorrelation. I can add AR terms, but I am trying to find the perfect sine wave to fit the cycle without having to constantly guess its periodicity. Does anyone have a method for determining the best trig function variable without the constant trial and error?</p> <p>The cycle I have looks very similar to this one:</p> <p><img src="http://i.stack.imgur.com/ibOUm.png" alt="enter image description here"></p> <p>which has a natural cyclical pattern, but I want to model it to be more like this:</p> <p><img src="http://i.stack.imgur.com/hCZCZ.jpg" alt="enter image description here"></p> <p>Any comments would be highly appreciated.</p>
36,845
<p>Given that the number of users of an application was 70 in total, it's <a href="http://www.measuringusability.com/five-users.php" rel="nofollow">my understanding that research shows</a>:</p> <blockquote> <p>Five users is the number of users needed to detect approximately 85% of the problems in an interface, given that the probability a user would encounter a problem is about 31%.</p> </blockquote> <p>The thing is that, over time, it seems likely that a given user's ability to detect problems compared to other users would become more predictable. Is this true, and if not, why? If it is true, what would be the best formula for knowing when to switch from random selection to optimal selection?</p> <p><strong>Note:</strong> Please note that this question is only roughly understood by me, and it's very possible that there's a fundamental issue with the premise and/or gaps in the assumptions provided. If so, please comment and I'll attempt to address any concerns, questions, etc.</p>
74,149
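<p>For context, the quoted 85% figure appears to come from the usual problem-discovery formula $1-(1-p)^{n}$ with $p=0.31$ and $n=5$:</p> <p>$$1-(1-0.31)^{5}=1-0.69^{5}\approx 0.84.$$</p>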
<p>Is there any interesting problem in the area of "Document Image Analysis and Retrieval" which by nature needs an online/incremental clustering process? The problem may be in the context of "Logical Structure Analysis", or "Document Layout Analysis" to identify regions of interest in a scanned page, or any other related topic. What matters is that the considered problem naturally needs online/incremental clustering. Do you have any ideas or suggestions about such problems?</p> <p>Note: the considered document images are actually scanned administrative documents.</p>
36,848
<p>I want to test the influence of exchange rates on a price index and am struggling with the interpretation. My variables are I(1).</p> <p>First, I ran an OLS regression on the first-differenced variables, which indicated a negative short-term relation between FX and the PI. Then I tested for cointegration and constructed a VECM.</p> <p>My VECM suggests that there is a long-term equilibrium with a speed of adjustment of 50% per period, but no short-term effect.</p> <p>Both of my models are robust.</p> <p><strong>So, what is the implication of my VECM finding?</strong></p> <p>Does a long-term equilibrium mean that, in the long run, FX will not be able to influence the price index, since these variables always rebalance back to equilibrium?</p>
74,150
<p>Here is the problem (not homework):</p> <p>Let $U_1,\cdots,U_n$ be i.i.d. uniform$(-n,n)$ random variables. For $-n&lt;a&lt;b&lt;n$, we let $1_{U_i}(a,b)$ be the indicator function such that $1_{U_i}(a,b)=1$ if $U_i\in(a,b)$ and $0$ otherwise. What is the approximate distribution, for large $n$, of $U_1+\cdots+U_n$?</p> <p>I computed the characteristic function of $U_1+\cdots+U_n$, i.e., $\phi(t) = \left(\frac{\sin(nt)}{nt}\right)^n$, but I don't know how to get the final result. By the way, I have no idea how to use the hint in the problem. Please provide me with some hints or references. Thanks!</p>
74,151
<p>I was recently looking for ways to resample time series, in ways that</p> <ol> <li>Approximately preserve the auto-correlation of long memory processes.</li> <li>Preserve the domain of the observations (for instance a resampled time series of integers is still a time series of integers).</li> <li>May affect some scales only, if required.</li> </ol> <p>I came up with the following permutation scheme for a time series of length $2^N$:</p> <ul> <li>Bin the time series by pairs of consecutive observations (there are $2^{N-1}$ such bins). Flip each of them (<i>i.e.</i> index from <code>1:2</code> to <code>2:1</code>) independently with probability $1/2$.</li> <li>Bin the obtained time series by consecutive $4$ observations (there are $2^{N-2}$ such bins). Reverse each of them (<i>i.e.</i> index from <code>1:2:3:4</code> to <code>4:3:2:1</code>) independently with probability $1/2$.</li> <li>Repeat the procedure with bins of size $8$, $16$, ..., $2^{N-1}$, always reversing the bins with probability $1/2$.</li> </ul> <p>This design was purely empirical, and I am looking for work that may already have been published on this kind of permutation. I am also open to suggestions for other permutations or resampling schemes.</p>
36,850
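<p>For concreteness, a direct (unoptimised) R implementation of the scheme described above, assuming the series length is a power of two:</p>

<pre><code>permute_scales = function(x) {
  n = length(x)                        # assumed to be 2^N
  for (s in 2^(1:(log2(n) - 1))) {     # block sizes 2, 4, ..., 2^(N-1)
    for (start in seq(1, n, by = s)) {
      if (rbinom(1, 1, 0.5) == 1) {    # reverse this block with probability 1/2
        idx = start:(start + s - 1)
        x[idx] = rev(x[idx])
      }
    }
  }
  x
}
permute_scales(1:16)
</code></pre>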
<p>I have two variables that predict fraud behavior (the dependent variable). The independent variables are perception of fraud being wrong (1-5) and probability of being caught (1-5). The dependent variable is frequency of committing fraud in the last 5 years (never, once, 2-3 times, 4 times and more). Two questions:</p> <ol> <li>What kind of regression should I use? Ordinal?</li> <li>Theory predicts that the interaction of these two variables predicts fraud: for those who perceive it as wrong, the probability of being caught has a different effect than for those who do not. How should I enter this into the model?</li> </ol>
36,851
<p>I need to explain the concept of linear mixed models in an article targeted at a mainstream audience. Is there a way of communicating the gist of the concept in a sentence or two?</p>
36,852
<p>Consider approximating the following expectation: $$\mathbb{E}[h(x)] = \int h(x)\pi(x) dx$$</p> <p>Where $h(x)$ is an arbitrary function and $\pi(x)$ is a distribution for which the <strong>normalizing constant is not known</strong>. Also, assuming the above integral is highly variable and high dimensional the standard approach would be to use MCMC methods to sample points $\{x^{(i)}\}_{i=1}^N$ that are distributed according to $\pi(x)$ and return the sample average: </p> <p>$$ \mathbb{E}[h(x)] \approx \frac{1}{N} \sum_{i=1}^N \ h(x^{(i)}) $$</p> <p>My question is, if the function $h(x)$ happens to be highly variable as well, that is, $\pi(x)$ is very different from the optimal sampling distribution $q^*(x) = |h(x)|\pi(x)/Z$, is there a straightforward way to modify MCMC methods to improve the variance of the estimate? That is, how can MCMC (and related methods) take into account the variability (or sparsity, etc.) of $h(x)$? </p> <p>Also, I <a href="http://stats.stackexchange.com/questions/19456/optimal-importance-sampling-with-ratio-estimator">asked a related question</a> earlier about whether one could use Monte Carlo methods for sampling from $q^*(x)$ and use the weighted importance sampling estimator to compensate for the fact the normalizer is not known. The answer was basically that this is not a good idea, since it requires estimating the harmonic mean of $h(x)$ under $q^*(x)$ which, in practice, is likely to have infinite variance. So, at the least, let's put aside this approach for now.</p> <p>Edit: The keywords "variance reduction MCMC" are actually useful in finding methods that address this issue, I've found a few methods using control variates, antithetic variates, and some adaptive methods like zero variance Monte Carlo. Perhaps someone is able to comment on the viability of these methods or to add more to my list. Thanks.</p> <p>Edit: In response to Xi'an's answer I thought I would update the question.</p> <p>First of all, it appears that my <a href="http://stats.stackexchange.com/questions/19456/optimal-importance-sampling-with-ratio-estimator">first question</a> where I asked how MCMC could be used to sample from $q^*(x)$ in the case where the normalizer for $\pi(x)$ was known is addressed in the literature primarily as the problem of computing the normalization constant, or equivalently: model evidence, partition function, or energy function. </p> <p>It appears there are a number of different methods for doing this with MCMC and I found chapter 4 of Iain Murray's <a href="http://homepages.inf.ed.ac.uk/imurray2/pub/07thesis/murray_thesis_2007.pdf" rel="nofollow">PhD thesis</a> to be an excellent overview of the main ideas. Also, the very recent paper cited in Xian's answer details a method, MCIS, that offers some particular advantages over these existing methods. However, after reviewing many of these papers I would have to conclude none of these ideas have turned out to be as "straightforward" as I would have hoped.</p> <p>Regarding this question, the answer is essentially: no, there is no straightforward way to solve this problem other than estimating the normalizing constant for $\pi(x)$ separately. That is, if we let: $\hat\pi(x) = \pi(x)/Z$ denote our unnormalized distribution and $I = \int h(x)\hat\pi(x) dx$ denote the quantity of interest up to a normalizing constant, we can use any MCMC methods designed for estimating normalization constants to estimate $I$ and $Z$ separately and return the ratio $\frac{I}{Z}$. 
How this simple procedure relates to bridge sampling I'm still a bit unsure though.</p>
74,152
<p>I am working with a batch of about 1000 univariate time series in R. For every time series, I have to perform the following tasks before deciding upon a model, be it ARIMA, TAR or a Holt-Winters model:</p> <ol> <li>Trend detection and its type, i.e. whether the trend is deterministic or stochastic</li> <li>Seasonality detection, and then deciding whether it is additive or multiplicative</li> <li>Does the series need a transformation? If yes, then what kind of transformation is required, i.e. whether Box-Cox or logarithmic.</li> </ol> <p>Currently I have to visualize every series and then take a call. Are there any mathematical criteria available which can reduce this effort?</p> <p>Also, what are the other factors that I need to consider before deciding on which model to use?</p>
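<p>A rough sketch of partially automating these checks with the <code>forecast</code> package (these are heuristics only, and one possible set of function choices rather than a definitive recipe):</p> <pre><code>library(forecast)
x &lt;- my_series              # placeholder for one of the ~1000 series

lambda &lt;- BoxCox.lambda(x)  # suggested Box-Cox parameter (values near 0 suggest a log)
d      &lt;- ndiffs(x)         # regular differences suggested by unit-root tests
D      &lt;- nsdiffs(x)        # seasonal differences (requires frequency &gt; 1)

fit &lt;- auto.arima(x, lambda = lambda)   # or ets(x) for exponential-smoothing/Holt-Winters
</code></pre>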
74,153
<p>How can I calculate the uncertainty of a linear regression slope based on data uncertainty (possibly in Excel/Mathematica)?</p> <p>Example: <img src="http://i.stack.imgur.com/duJ8T.jpg" alt="Example plot"> Let's have data points (0,0), (1,2), (2,4), (3,6), (4,8), ... (8, 16), but each y value has an uncertainty of 4. Most functions I found would calculate the uncertainty as 0, as the points perfectly match the function y=2x. But, as shown in the picture, y=x/2 matches the points as well (within the uncertainty). It's an exaggerated example, but I hope it shows what I need.</p> <p>EDIT: If I try to explain a bit more: while every point in the example has a certain value of y, we pretend we don't know if it's true. For example the first point (0,0) could actually be (0,6) or (0,-6) or anything in between. I'm asking if there is an algorithm in any of the popular programs that takes this into account. In the example the points (0,6), (1,6.5), (2,7), (3,7.5), (4,8), ... (8, 10) still fall in the uncertainty range, so they might be the right points, and the line that connects those points has the equation y = x/2 + 6, while the equation we get from not factoring in the uncertainties is y = 2x + 0. So the uncertainty of k is 1.5 and of n is 6.</p> <p>TL;DR: In the picture, there is a line y=2x that's calculated using a least squares fit and it fits the data perfectly. I'm trying to find how much k and n in y=kx + n can change but still fit the data, if we know the uncertainty in the y values. In my example, the uncertainty of k is 1.5 and in n it's 6. In the image there is the 'best' fit line and a line that just barely fits the points.</p>
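<p>For what it is worth, a standard textbook result (under the assumptions that the y errors are independent, all of equal known size $\sigma_y$, and that ordinary least squares is the estimator) propagates the data uncertainty into the fitted slope $k$ and intercept $n$ as $$\sigma_k = \frac{\sigma_y}{\sqrt{\sum_i (x_i-\bar{x})^2}}, \qquad \sigma_n = \sigma_y\sqrt{\frac{\sum_i x_i^2}{N\sum_i (x_i-\bar{x})^2}}.$$ For the nine points $x=0,\dots,8$ with $\sigma_y=4$ this gives $\sigma_k = 4/\sqrt{60} \approx 0.52$ and $\sigma_n \approx 2.5$, which is smaller than the "extreme line that just barely fits" reasoning in the example; whether that is the quantity wanted depends on how the uncertainty of 4 is meant to be interpreted.</p>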
74,154
<p>I just realised that even though I know how to perform an independent samples t-test or a Mann-Whitney test, I am not sure how their results should be reported in a paper. I was given this study to read in preparation for a Research Methodology class but it does not report the "easy" tests, so I wonder.</p> <p>Edit in response to the comment:</p> <p>I mean reporting according to strict scientific guidelines. I suppose there is a rule, similar to how, e.g., when we report normally distributed variables we mention the mean and the SD. </p> <p>Edit number two :)</p> <p>I am sorry, I didn't realise I wasn't specific. My orientation is medical research, so I am primarily interested in knowing what is the best way to present data in papers that result from medical studies. The class I am taking right now is more general though (the article was from a study from the Law school), so it did not occur to me that this was a detail I should have mentioned in the first place. </p> <p>So let's assume I checked whether x_bubblenephrine differs between, say, a group of people who have Y-itis and a group of people who do not. Say that I got p > 0.05. Is there a "correct way" per se to report this? Or can I get away with "there was no difference between the two groups (p > 0.05)"? </p>
74,155
<p>As per Wikipedia, I understand that the t-distribution is the sampling distribution of the t-value when the samples are iid observations from a normally distributed population. However, I don't intuitively understand why that causes the shape of the t-distribution to change from fat-tailed to almost perfectly normal.</p> <p>I get that if you're sampling from a normal distribution then if you take a big sample it will resemble that distribution, but I don't get why it starts out with the fat-tailed shape it does.</p>
913
<p>Suppose I have a Netflix-style recommendation matrix, and I want to build a model that predicts potential future movie ratings for a given user. Using Simon Funk's approach, one would use stochastic gradient descent to minimize the Frobenius norm between the full matrix and the low-rank product of the user-factor and item-factor matrices, combined with an L2 regularization term. </p> <p>In practice, what do people do with the missing values from the recommendation matrix, which are the whole point of doing the calculation? My guess from reading Simon's blog post is that he ONLY uses the non-missing terms (which make up (say) ~1% of the recommendation matrix) to build a model (with some judicious choice of hyper-parameters and regularization) to predict the other 99% of the matrix. </p> <p>In practice, do you really skip all those values? Or do you infer as much as possible BEFORE doing stochastic gradient descent? What are some of the standard best practices for dealing with the missing values?</p>
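<p>For concreteness, here is a minimal R sketch (not Simon Funk's actual code; the data layout, names and hyper-parameters are made up) of what "only use the non-missing terms" looks like in practice: the gradient steps loop over the observed (user, item, rating) triples only, and the missing 99% is never touched during fitting; it is only filled in afterwards by the dot products of the learned factors.</p> <pre><code># ratings: data frame with integer columns user, item and a numeric column rating,
#          containing the observed entries only (assumed to exist already)
k       &lt;- 10      # number of latent factors
lrate   &lt;- 0.005   # learning rate
lambda  &lt;- 0.02    # L2 regularisation
n_users &lt;- max(ratings$user)
n_items &lt;- max(ratings$item)
U &lt;- matrix(rnorm(n_users * k, sd = 0.1), n_users, k)   # user factors
V &lt;- matrix(rnorm(n_items * k, sd = 0.1), n_items, k)   # item factors

for (epoch in 1:30) {
  for (r in sample(nrow(ratings))) {          # visit observed cells in random order
    u &lt;- ratings$user[r]; i &lt;- ratings$item[r]
    err  &lt;- ratings$rating[r] - sum(U[u, ] * V[i, ])
    uold &lt;- U[u, ]                            # keep old value for the item update
    U[u, ] &lt;- U[u, ] + lrate * (err * V[i, ] - lambda * U[u, ])
    V[i, ] &lt;- V[i, ] + lrate * (err * uold   - lambda * V[i, ])
  }
}
pred &lt;- U %*% t(V)   # predictions for every cell, including the 99% that was missing
</code></pre>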
74,156
<p>I'm working with a survey that uses a rolling data collection format (i.e., there are multiple waves of sampling and initial contacts). I'm trying to develop a model to predict how likely a sample member is to respond to the survey within 7 weeks from today. Predicting whether a respondent who is first contacted today will respond within 7 weeks is fairly straightforward - just a basic propensity model predicting response within seven weeks of initial contact. However, predicting whether a case that's been in the field for several weeks already will respond in the next 7 weeks is more difficult.</p> <p>My question is how do I take into account that the probability of response changes based on how long the case has been a nonrespondent (i.e., a respondent's initial probability of response may have been .75, but if they're still a nonrespondent after 4 weeks, they'll probably remain a nonrespondent)? I could just include the amount of time the case has been in the field as a variable in the propensity model, but I'm not sure if that's the appropriate way to handle it. It seems like this may be a situation for survival analysis, but my knowledge in that area is limited.</p> <p>Any suggestions of an approach, model, or previous research I should consider would be much appreciated. </p>
74,157
<p>Suppose we have $p$-dimensional vectors $Y_i$ which we model with $f_Y (y |\theta) = \sum \pi_k N(y | \mu_k, \Sigma_k)$ with $\theta$ being a catch-all for the model parameters (the number of components might be a finite known/unknown number or infinite as in Dirichlet process mixtures). The prior on $\pi$ will either be a uniform or stick-breaking prior with perhaps a hyperprior on the associated precision parameter. What are good default prior choices for the cluster components $(\mu_k, \Sigma_k)$? I have been using a conjugate Normal-inverse-Wishart, i.e. $$ (\mu_k, \Sigma_k) \sim \mathcal N(\mu_k |m, \Sigma_k / n_0) \mathcal W^{-1} (\Sigma_k | \Psi, \nu). $$ I then fix $\nu, n_0$ relatively small and either estimate $m$ and $\Psi$ empirically, or specify independent hyperpriors and estimate the key parameters of the hyperpriors empirically (from what I can tell, there is some evidence that people do this since it seems to be what is done in the vignettes for DPpackage). </p> <p>Ultimately, I'd like to set things up so that the prior is relatively uninformative (with at least some hope of adding prior knowledge in a systematic way), but there are a lot of parameters floating around and the individual influence of each one isn't always transparent. I've come across some papers on this issue that give guidance for choosing parameters empirically but they mainly focus on $p = 1$. Given that these models are pervasive in Bayesian nonparametrics, I figure guidelines must exist and that I just haven't found them.</p> <p>Ideally, I'd like answers here to be as specific as possible; in particular, I'm most interested in choosing the hyperparameters in the priors/hyperpriors. </p>
48,548
<p>Given a set of extracted data from different sources with different accuracies, how can I combine the accuracy of those who give the same output?</p> <p>Example :</p> <pre><code>Data from source A are 80% correct Data from source B are 85% correct Data from source C are 90% correct </code></pre> <p>If two of the sources give the same result (ResultA) and the third disagrees (ResultB) what's the probability of (A) being correct? This is not a homework question. I am a software developer and I don't have a clue about statistics and probability.</p> <p><strong>Update :</strong></p> <p>I've done an experiment using a random number generator</p> <p>Test 1 - 2 Possible outcomes (0/1) three methods (Acc: 0.5, 0.3, 0.1)</p> <pre><code>Samples : 100000000 Method A : 0,49993692 Method B : 0,30023622 Method C : 0,09994145 Method A+B : 0,794567779569577 Method B+C : 0,0455372643070089 Method C+A : 0,205615801945512 Method A+B+C : 0,0455215295368209 </code></pre> <p>Test 2 - 2 Possible outcomes (0/1) three methods (Acc: 0.8, 0.85, 0.9)</p> <pre><code>Samples : 100000000 Method A : 0,80003639 Method B : 0,8500426 Method C : 0,90005791 Method A+B : 0,715942797491352 Method B+C : 0,927408281972288 Method C+A : 0,864147967527417 Method A+B+C : 0,995137034088319 </code></pre> <p>That's the numbers I am looking for but I don't know how to calculate them...</p>
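<p><strong>Update 2:</strong> the simulated figures above appear to be reproduced exactly by a calculation that assumes the sources err independently and that only two outcomes are possible. If sources A and B agree on ResultA and source C says ResultB, then $$P(\text{ResultA is correct}) = \frac{p_A\,p_B\,(1-p_C)}{p_A\,p_B\,(1-p_C) + (1-p_A)(1-p_B)\,p_C}.$$ With $p_A=0.8$, $p_B=0.85$, $p_C=0.9$ this is $0.068/0.095 \approx 0.716$, matching "Method A+B", and the all-agree case $p_A p_B p_C\,/\,[p_A p_B p_C + (1-p_A)(1-p_B)(1-p_C)] \approx 0.995$ matches "Method A+B+C". With more than two possible outcomes the denominator changes, so these formulas would no longer apply as-is.</p>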
74,158
<p>I'm working on an ongoing data analysis project about a series of live educational seminars. Each of my data points represents one such event, and for each one I have a multitude of categorical variables, as well as a couple quantitative ones that are my desired response variables (total revenue and number of attendees).</p> <p>One trend I'm interested in looking at is how the frequency of these events affects my two response variables. Over the years, we have increased the frequency of the events and I'd like to determine whether or not it makes sense to continue doing so. I've created a couple of variables to help track this frequency:</p> <p><code>NEAREST.SEM</code> - the number of days between this event and the nearest one to it chronologically in either direction</p> <p><code>LAST.SEM</code> - the number of days between this event and the nearest one to it chronologically <em>before</em> it</p> <p><code>WEEKLY.SEMS</code> - the total number of events held during the 7-day period starting on Monday within which this event falls</p> <p>Depending on how I do the analysis, these three variables seem to have varying significance, but the one that seems to consistently come out on top is <code>NEAREST.SEM</code>, which I have found to be significant at the 0.01 level in one test and the 0.001 level in another. The other two variables are significant in predicting revenue but not number of attendees, which is not ideal since we are more interested in number of attendees. (The data for revenue is not representative of the total revenue for each event due to certain special offers for repeat customers that aren't taken into account there.)</p> <p>Increasing the frequency of events seems to decrease each event's individual performance, but has so far increased overall performance. I'd like to determine the "turning point" at which overall performance will either dip or level off. Unfortunately, this is going to be tough to predict because my best-fitted variable, <code>NEAREST.SEM</code>, isn't as good a representation of increased frequency. Note, for example, that it would look exactly the same whether 4 or 5 events were held per week--it would always have the value of 1 in such situations. In fact, any time that events are grouped in clusters of consecutive days, we'll always get 1 for them on this variable...</p> <p>One option would be to just use <code>WEEKLY.SEMS</code> as a predictor of revenue, which it is well correlated with, but as I said, we'd much rather do this analysis based on number of attendees, a better measure of an event's success.</p> <p>So I really have two questions here:</p> <ol> <li><p>Any suggestions on my dilemma of which variable to use and how to deal with the problems I laid out above?</p></li> <li><p>Once I decide on a predictor factor, how can I go about estimating the average decrease in revenue increasing to various frequencies will have? Should I run a multiple regression using all my variables and use the coefficient on the predictor factor? Or should I run a regression with just the one factor and my response and use that coefficient? 
Or is there a better test than regression to use?</p></li> </ol> <p>(By the way, I'm using R for my analysis and I'd appreciate any advice specifically tailored for that language.)</p> <p>UPDATE: I have tried creating two new measures, one that's the average distance in days of the nearest event on either side, and one that's the number of events within 3 days in both directions...neither of them had any significant correlation. I'm running out of ideas here...</p>
48,550
<p>I've fit a mixed linear model to some longitudinal data. I'm interested in the differences in patterns of decrease in the dependent variable according to group status, and my hypothesis particularly predicts a difference between the groups in trajectory of change between specific ages. The data show a significant interaction between group and the linear and quadratic effects of age, but I don't know if there is a way to assess this interaction for one part of the age range, or if I need to be able to do so in order to interpret my results as being supportive of my hypothesis.</p> <p>NB I'm using the nlme package in R.</p>
36,868
<p>I recently read a paper that ran a logistic regression and used a table like this to summarise the model:</p> <pre><code>data.frame(predictors = c("drat", "mpg"), "chi squared statistic" = c("x", "x"), "p-value" = c("x", "x")) predictors chi.squared.statistic p.value 1 drat x x 2 mpg x x </code></pre> <p>The data.frame presented the chi-squared statistic and p-value (the x's) for each predictor in the logistic regression model, and thus allowed the reader to see at a glance which predictors were most important.</p> <p>I have made this logistic regression model:</p> <pre><code>mtcars_log_reg &lt;- glm(vs ~ drat + mpg, mtcars, family = "binomial") </code></pre> <p>How can I fill in the x's for the above table using R code?</p>
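<p>One possible way to fill in such a table (a sketch of likelihood-ratio tests, which may or may not be what the cited paper actually used) is to drop each predictor in turn and record the change in deviance:</p> <pre><code>drop1(mtcars_log_reg, test = "Chisq")   # per-predictor LR chi-squared and Pr(&gt;Chi)

# sequential (Type I) tests are an alternative:
anova(mtcars_log_reg, test = "Chisq")
</code></pre> <p>The <code>LRT</code> (or <code>Deviance</code>) and <code>Pr(&gt;Chi)</code> columns of the output would then supply the x's in the data frame above.</p>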
74,159
<p>Actually, I thought the Gaussian process is a kind of Bayesian method, since I have read many tutorials in which the GP is presented in a Bayesian context, for example this <a href="http://see.stanford.edu/materials/aimlcs229/cs229-gp.pdf" rel="nofollow">tutorial</a>; just pay attention to page 10.</p> <p>Suppose the GP prior is $$\pmatrix{h\\ h^*} \sim N\left(0,\pmatrix{K(X,X)&amp;K(X,X^*)\\ K(X^*,X)&amp;K(X^*,X^*)}\right),$$ where $(h,X)$ is for the observed training data and $(h^*,X^*)$ for the test data to be predicted. The actually observed noisy output is $$Y=h+\epsilon,$$ where $\epsilon$ is the noise, $$\epsilon\sim N(0,\sigma^2I).$$ And now, as shown in the tutorial, we have $$\pmatrix{Y\\ Y^*}=\pmatrix{h\\ h^*}+\pmatrix{\epsilon\\ \epsilon^*}\sim N\left(0,\pmatrix{K(X,X)+\sigma^2I&amp;K(X,X^*)\\ K(X^*,X)&amp;K(X^*,X^*)+\sigma^2I}\right),$$ and finally, by conditioning on $Y$, we obtain $p(Y^*|Y)$, which is called the predictive distribution in some books or tutorials, but also called the posterior in others.</p> <p><strong>QUESTION</strong></p> <ol> <li><p>According to many tutorials, the predictive distribution $p(Y^*|Y)$ is derived by conditioning on $Y$. If this is correct, I don't understand why GP regression is Bayesian: nothing Bayesian is used in this conditional distribution derivation, right?</p></li> <li><p>However, I don't actually think the predictive distribution should be just the conditional distribution; I think it should be $$p(Y^*|Y)=\int p(Y^*|h^*)p(h^*|h)p(h|Y)dh$$ where, in the above formula, $p(h|Y)$ is the posterior, right?</p></li> </ol>
74,160
<p>I have two data sets for a set of subjects, with values for their baseline and follow-up visits. I would like to do a repeated measures test to see whether there is a significant difference between the two sets (baseline &amp; follow-up). I know I can do a simple paired t-test, but I need to adjust my values for covariates like age, etc.</p> <p>I would like to perform a GLM method (if possible) to see whether there is a significant difference between the two sets with the covariate adjustments. Please advise how I can do this in R.</p>
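<p>One possible sketch in R (variable names are placeholders, and this assumes the data start in wide format with one row per subject): reshape to long format and fit a mixed model with a random intercept per subject, so the visit effect is tested while adjusting for the covariates.</p> <pre><code>library(lme4)
long &lt;- reshape(wide, direction = "long",
                varying = c("baseline", "followup"), v.names = "value",
                timevar = "visit", times = c("baseline", "followup"),
                idvar = "subject")
fit &lt;- lmer(value ~ visit + age + (1 | subject), data = long)
summary(fit)   # the 'visit' coefficient is the adjusted baseline-followup difference
               # (p-values can be obtained via the lmerTest package if needed)
</code></pre>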
74,161
<p>I'm studying regression analysis but I'm struggling with really understanding how degrees of freedom are calculated. For example, if we have the simple scenario where $Y_i=\beta_0+\beta_1 X_i + \epsilon_i$ (and all the standard assumptions hold) then I read</p> <p>$\frac{1}{\sigma^2} \sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 \sim \chi^2_{1}$</p> <p>This seems reasonable when you make an argument like "$\hat{Y}_i$ has two parameters and so two degrees of freedom but $\bar{Y}$ takes one degree of freedom and so you're left with 1", but I guess I'm looking for an argument that's more theoretically grounded. Why does that summation have the same distribution as a standard normal squared?</p> <p>I was able to understand why $\frac{1}{\sigma^2} \sum_{i=1}^n (Y_i-\bar{Y})^2 \sim \chi^2_{n-1}$ by considering the sum of squares as a projection of the $\epsilon_i$ onto a space of dimension $n-1$. A proof for the above case that follows that kind of argument would be fantastic!</p>
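<p>One possible route, assuming (as is implicit when this claim is stated without a non-centrality parameter) that it is meant under $\beta_1 = 0$: since $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1\bar{X}$, we have $\hat{Y}_i - \bar{Y} = \hat{\beta}_1(X_i - \bar{X})$, so $$\frac{1}{\sigma^2}\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 = \frac{\hat{\beta}_1^2\sum_{i=1}^n(X_i-\bar{X})^2}{\sigma^2} = \left(\frac{\hat{\beta}_1}{\sigma\big/\sqrt{\sum_{i=1}^n(X_i-\bar{X})^2}}\right)^{2}.$$ Because $\hat{\beta}_1 \sim N\big(\beta_1,\ \sigma^2/\sum_i(X_i-\bar{X})^2\big)$, the quantity inside the square is standard normal when $\beta_1 = 0$, and its square is therefore $\chi^2_1$. Geometrically this is the same projection argument as for the $n-1$ case: the vector of fitted deviations $(\hat{Y}_i - \bar{Y})$ is the projection of the data onto the one-dimensional space spanned by $(X_i - \bar{X})$.</p>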
74,162
<p>I have a problem with logistic regression. I found out (<a href="http://www.uk.sagepub.com/burns/website%20material/Chapter%2024%20-%20Logistic%20regression.pdf" rel="nofollow">here</a>) that one of the assumptions of a logistic regression model is a minimum of, for example, 50 observations per predictor. But if I have created dummy variables in Stata using the "i." operator, does every dummy category count as a new predictor?</p> <p>Example: </p> <ul> <li>i.maritalstatus</li> <li>1=Ref.</li> <li>2</li> <li>3 ... </li> </ul> <p>Thank you very much. </p>
74,163
<p>I am an ML noob. I have the task of predicting click probability given user information like city, state, OS version, OS family, device, browser family, browser version, etc. I have been recommended to try logit, since logit seems to be what MS and Google are using too. I have some questions regarding logistic regression:</p> <p>Click vs. non-click is a very unbalanced outcome, and the simple glm predictions do not look good. How can I work through this?</p> <p>All the variables I have are categorical, and things like device and city can have very many levels. Also, the frequency of occurrence of some devices or some cities can be very low. So how do I deal with what I can only describe as a very random variety of categorical variables?</p> <p>One of the variables that we get is device id. This is a unique feature that can be translated to a user's identity. How can I make use of it in logit, or should it be used in a completely different model based on user identity?</p>
74,164
<p>My response variable is number of Fishing cat scats and I am using a zero-inflated poisson regression model to see the effect of the predictor variables on habitat use of Fishing cats. The predictor variables are Reed area, Vegetation Area and Agricultural area. </p> <p>Now, before using the GLM, I used scatterplots to see what the trends are between the response variable and each predictor variable based on my data and in it I saw that Fishing cats are negatively impacted by increase in agricultural area. However, the GLM is showing a positive correlation of Fishing cats with Agricultural area which is significant. However, when I take Agricultural area alone in the GLM, it shows an insignificant negative correlation. I do not know what to interpret out of this. I tried interactions also thinking that interaction with one of the predictors might have a positive effect on Agricultural area. Here is what it looks like:</p> <pre><code>Call: zeroinfl(formula = No_FC ~ Reed_area + Veg_area * Agril_area) Pearson residuals: Min 1Q Median 3Q Max -1.20907 -0.37197 -0.29263 -0.23930 6.97929 Count model coefficients (poisson with log link): Estimate Std. Error z value Pr(&gt;|z|) (Intercept) -2.64186 0.98692 -2.6769 0.0074315 ** Reed_area 1.88105 0.52355 3.5928 0.0003271 *** Veg_area 2.38728 0.64685 3.6906 0.0002237 *** Agril_area 2.05895 0.71096 2.8960 0.0037791 ** Veg_area:Agril_area -1.55241 0.86485 -1.7950 0.0726528 . Zero-inflation model coefficients (binomial with logit link): Estimate Std. Error z value Pr(&gt;|z|) (Intercept) -0.68988 1.82259 -0.3785 0.7050 Reed_area -0.72748 1.00656 -0.7227 0.4698 Veg_area 0.39630 1.34636 0.2944 0.7685 Agril_area 1.64785 1.15165 1.4309 0.1525 Veg_area:Agril_area 0.50427 1.30075 0.3877 0.6983 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Number of iterations in BFGS optimization: 17 Log-likelihood: -134.02 on 10 Df. </code></pre> <p>It shows that the interaction term is also negatively affecting Fishing cat habitat use. Here is what it looks like when I take Agricultural area alone.</p> <pre><code>fishing_cat.glm &lt;- zeroinfl(No_FC~Agril_area) &gt; summary(fishing_cat.glm) Call: zeroinfl(formula = No_FC ~ Agril_area) Pearson residuals: Min 1Q Median 3Q Max -0.68839 -0.43312 -0.31080 -0.20735 8.16459 Count model coefficients (poisson with log link): Estimate Std. Error z value Pr(&gt;|z|) (Intercept) 0.85358 0.18532 4.6059 4.107e-06 *** Agril_area -0.31194 0.26974 -1.1565 0.2475 Zero-inflation model coefficients (binomial with logit link): Estimate Std. Error z value Pr(&gt;|z|) (Intercept) 0.16657 0.34396 0.4843 0.628200 Agril_area 1.30806 0.40498 3.2299 0.001238 ** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 Number of iterations in BFGS optimization: 16 Log-likelihood: -146.93 on 4 Df </code></pre> <p>Could you kindly give me some direction as to how I can circumnavigate this problem?</p>
36,880
<p>I'm trying to find a test that will allow me to test the relationship between a categorical dependent variable and several independent variables that are both continuous (interval) and ordinal. </p> <p>If there is no such test, it would also makes theoretical sense for me to turn the variable "around". That is, I could use a statistical test (if such exists) that allows me to test the effect of a categorical independent variable on several dependent variables that are both continuous and ordinal.</p> <p>I hope this makes sense and would be grateful for any help.</p>
74,165
<p>I am calling the R mice routines from SPSS to do multiple imputation. My question is how to save the multiply imputed data sets as SPSS files for later analyses.<br> Any help would be greatly appreciated.</p> <p>DLuo</p>
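<p>A possible sketch on the R side (assuming the imputations live in a <code>mids</code> object called <code>imp</code>; the file names are made up): extract each completed data set with <code>mice::complete()</code> and write it out in SPSS format.</p> <pre><code>library(mice)
library(haven)    # write_sav(); foreign::write.foreign() is an alternative

for (m in 1:imp$m) {
  write_sav(complete(imp, m), paste0("imputed_", m, ".sav"))
}

# or keep all imputations in one long file (with .imp and .id indicator columns):
write_sav(complete(imp, action = "long"), "imputed_long.sav")
</code></pre>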
74,166
<p>I originally planned on path analysis utilizing multivariate multiple regression to test my hypothetical model - but I am not getting my sample size. I have looked at non-parametric regression techniques but - am not sure how I can develop my model using these techniques - or if a path model would even be useful at this point. I need a minimum N of 155 per my power analysis for my planned analyses - I currently have 45 and don't see much hope in increasing that number substantially within my somewhat limited timeframe ... so, Any suggestions?</p>
74,167
<p>I am trying to fit a multi-group latent growth curve model using censored data in Mplus. I have been able to fit a multi-group model using uncensored data, and a single group model using the censored data. Is there a way to combine these?</p> <p>When I have tried to combine these, the error message suggests using KNOWNCLASS with the TYPE=MIXTURE. Is this then a latent class or finite mixture model?</p>
74,168
<p>I have used Kendall's tau to examine whether there is a correlation between a number of categorical variables, as I have a small sample. However, I also want to test whether some variables might have a confounding effect on some of the relationships. Unfortunately, with SPSS you can only compute partial correlations using Pearson. </p> <p>I was wondering whether it makes sense to use Pearson to examine the effect of confounding variables? If not, what would you suggest?</p> <p>Thanks </p>
74,169
<p>I'm having issues forecasting a model of the following form.</p> <pre><code>y1 &lt;- tslm(data_ts~ season+t+I(t^2)+I(t^3)+0) </code></pre> <p>It fits my data very well, but I run into a problem when attempting to do this:</p> <pre><code>forecast(y1,h=72) </code></pre> <p>This is the error that R gives me.</p> <pre><code>"Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) : variable lengths differ (found for 't') In addition: Warning message: 'newdata' had 72 rows but variables found have 1000 rows" </code></pre> <p>As far as I can tell, this has something to do with using <code>tslm</code> and having the cubic function in it. If I just use <code>tslm(data_ts~season+trend)</code> everything works out fine, but I specifically need the model mentioned earlier. How can I forecast my model?</p>
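<p>A workaround that seems to address this (the error suggests that <code>t</code> is a length-1000 vector living in the workspace, which <code>forecast()</code> cannot extend to the 72 new periods) is to build the polynomial from <code>tslm</code>'s own <code>trend</code> variable, which the <code>forecast</code> package does know how to extend, assuming <code>trend</code> plays the same role as the <code>t</code> above:</p> <pre><code>library(forecast)
y1 &lt;- tslm(data_ts ~ season + trend + I(trend^2) + I(trend^3) + 0)
forecast(y1, h = 72)
</code></pre>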
74,170
<p>Can anyone report on their experience with an adaptive kernel density estimator?<br> (There are many synonyms: adaptive | variable | variable-width, KDE | histogram | interpolator ...)</p> <p><a href="http://en.wikipedia.org/wiki/Variable_kernel_density_estimation" rel="nofollow">Variable kernel density estimation</a> says "we vary the width of the kernel in different regions of the sample space. There are two methods ..." actually, more: neighbors within some radius, KNN nearest neighbors (K usually fixed), Kd trees, multigrid...<br> Of course no single method can do everything, but adaptive methods look attractive.<br> See for example the nice picture of an adaptive 2d mesh in <a href="http://en.wikipedia.org/wiki/Finite_element_method" rel="nofollow">Finite element method</a>.</p> <p>I'd like to hear what worked / what didn't work for real data, especially >= 100k scattered data points in 2d or 3d.</p> <p>Added 2 Nov: here's a plot of a "clumpy" density (piecewise x^2 * y^2), a nearest-neighbor estimate, and Gaussian KDE with Scott's factor. While one (1) example doesn't prove anything, it does show that NN can fit sharp hills reasonably well (and, using KD trees, is fast in 2d, 3d ...) <img src="http://i.stack.imgur.com/ulkUd.png" alt="alt text"></p>
74,171
<p>I've got a model that I've developed in R, but also need to express in SAS. It's a double GLM, that is, I fit both the mean and (log-)variance as linear combinations of the predictors:</p> <p>$E(Y) = X_1'b_1$</p> <p>$\log V(Y) = X_2'b_2$</p> <p>where Y has a normal distribution, $X_1$ and $X_2$ are the vectors of independent variables, and $b_1$ and $b_2$ are the coefficients to be estimated. $X_1$ and $X_2$ can be the same, but need not be.</p> <p>I can fit this in R using gls() and the varComb and varIdent functions. I've also written a custom function that maximises the likelihood using optim/nlminb, and verified that it returns the same output as gls.</p> <p>I would now like to translate this into SAS. I know that I can use PROC MIXED:</p> <pre><code>proc mixed; class x2; model y = x1; repeated /group = x2; run; </code></pre> <p>However, this only gives me what I want if I have 1 variable in the /GROUP option. If I enter 2 or more variables, MIXED can only handle this by treating each individual combination of levels as a distinct group (that is, it takes the cartesian product). For example, if I have 2 variables in $X_2$, with 3 and 4 levels respectively, MIXED will fit 12 parameters for the variance. What I want is for the log-variance to be additive in the variables specified, ie 6 parameters.</p> <p>Is there a way of doing this in MIXED or any other proc? I could probably code something in NLP, but I'd really prefer not to.</p>
74,172
<p>Let random variables $X$ and $Y$ be independent Normal with distributions $N(\mu_{1},\sigma_{1}^2)$ and $N(\mu_{2},\sigma_{2}^{2})$. Show that the distribution of $(X,X+Y)$ is bivariate Normal with mean vector $(\mu_{1},\mu_{1}+\mu_{2})$ and covariance matrix</p> <p>$$ \left( \begin{array}{ccc} \sigma_{1}^2 &amp; \sigma_{1}^2 \\ \sigma_{1}^2 &amp;\sigma_{1}^2+\sigma_{2}^2 \\ \end{array} \right).$$</p> <p>Thanks .</p>
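<p>One standard route: write $\binom{X}{X+Y} = A\binom{X}{Y}$ with $A = \left(\begin{array}{cc} 1 &amp; 0 \\ 1 &amp; 1 \end{array}\right)$. Since $X$ and $Y$ are independent Normals, $(X,Y)^\top$ is bivariate Normal with mean $(\mu_1,\mu_2)^\top$ and covariance $\operatorname{diag}(\sigma_1^2,\sigma_2^2)$, and a linear transformation of a multivariate Normal vector is again multivariate Normal with mean $A\mu$ and covariance $A\Sigma A^\top$; multiplying this out gives exactly the stated mean vector and covariance matrix. (Alternatively, one can check directly that every linear combination $aX + b(X+Y) = (a+b)X + bY$ is univariate Normal, which characterizes joint Normality.)</p>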
74,173
<p>I am asked to draw a scatterplot and to compute a correlation coefficient for the following situation. A group of subjects are measured for a blood characteristic before and after surgery.</p> <p>Is it OK to correlate before-and-after data?</p> <p>I know that it is not OK to perform correlations on non independent data. I feel this is such a case--the two measurements are made on the same subjects--they should be correlated. </p> <p>I know that correlating data to the change over time is not OK--but that is obvious and it is not the case here.</p> <p>Also correlating two variables measured repeatedly on the same sample is a huge No. But again it is not my case.</p>
36,892
<p>Let $f_i(y)$ for $i = 1, \ldots, n$ be valid PDFs, and let $a_i \in (0, 1)$ be constants such that $\sum_{i=1}^n a_i= 1$.</p> <ol> <li>Show that the function $f(y) = \sum_{i=1}^n a_i\, f_i(y)$ is a valid PDF.</li> <li>If $E [Y_i] = \mu_i$ and $\text{Var}(Y_i) = \sigma^2_i$, show that<br> (i) $E[Y] = \sum_{i=1}^n a_i\, \mu_i$, <br> (ii) $E[Y^2_i] = \mu^2_i + \sigma^2_i$.</li> </ol> <p>I understand the basic requirements for PDFs but for whatever reason I am drawing a total blank.</p>
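<p>A possible starting point (taking $Y$ to have density $f$ and $Y_i$ density $f_i$, which seems to be the intended reading): non-negativity is immediate since each $a_i f_i(y) \ge 0$, and $\int f(y)\,dy = \sum_{i=1}^n a_i \int f_i(y)\,dy = \sum_{i=1}^n a_i = 1$, which is all a valid PDF requires. For 2(i), the same interchange of sum and integral gives $E[Y] = \int y \sum_i a_i f_i(y)\,dy = \sum_i a_i \mu_i$, and 2(ii) is just the variance identity $\sigma_i^2 = E[Y_i^2] - \mu_i^2$ rearranged.</p>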
31,273
<p>I have a <code>SPSS</code> Output for a logistic regression. This output reports two measure for the model fit, <code>Cox &amp; Snell</code> and <code>Nagelkerke</code>.</p> <p>So as a rule of thumb, which of these R² measures would you report as the model fit?</p> <p>Or, which of these fit indices is the one that is usually reported in journals?</p> <hr> <p>Some Background: The regression tries to predict the presence or absence of some bird (capercaillie) from some environmental variables (e.g., steepness, vegetation cover, ...). Unfortunately, the bird did not appear very often (35 hits to 468 misses) so the regression performs rather poorly. Cox &amp; Snell is .09, Nagelkerke, .23.<br> The subject is environmental sciences or ecology.</p>
49,889
<p>I'm willing to apply machine learning with <code>R</code> (I will start with random forests, then maybe have a look at NNs) on some data, but I don't know where to start, probably because I don't know which words to put on my problem and what to google for.</p> <p>My data consists of a set of events of type A, each of which contains both some specific variables and a (variable) number of elements of type B with their own variables.</p> <p>A typical example of such data would be horse racing: each race has its own parameters along with a list of horses and their own parameters.</p> <p>Now, of course the training has to be done on each element of type A independently, so tutorials using basic <code>iris</code> data won't work (or at least I don't understand how to apply them on events of type A instead of elements of type B).</p> <p>How should I organize my data set or feed it to <code>randomForest</code>? Or which keywords should I use to find relevant documentation on this kind of topic? (I tried "grouped data" without much success…)</p> <p>NB: For a start I can discard the common variables of each A event, if needed. But still every B element has to be considered equal to other B elements <em>inside</em> a single A event, and independently from other A events.</p> <p><strong>Update:</strong> I've found a workaround which may work in my particular situation (still to be tested, my DB needs reorganization). The workaround is to consider the parameters of the A events as parameters of each B element, so the problem simply becomes a set of B elements. However, I'm not satisfied with this solution, and anyway I'm not sure it would be applicable to other similar problems, so the question is still open.</p>
36,893
<p>I came across a paper related to Bayesian decision theory where the absolute loss function is introduced somewhat straightforwardly. This is part of the result:</p> <p>$$\frac{\partial}{\partial a}\int_{-\infty}^a (a-\theta)\, f(\theta|y) \, \mathrm{d}\theta = \int_{-\infty}^a f(\theta|y) \, \mathrm{d}\theta = \Pr(\theta\leq a \mid y)$$</p> <p>Since the upper limit of the integral depends on $a$, differentiation under the integral sign cannot be applied directly. I have been trying to adapt the Leibniz rule to reach this result, but I think the solution might be much simpler.</p>
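<p>For what it is worth, the step seems to follow from the general Leibniz rule with a variable upper limit: writing $g(a,\theta) = (a-\theta)f(\theta|y)$, $$\frac{\partial}{\partial a}\int_{-\infty}^{a} g(a,\theta)\,\mathrm{d}\theta = g(a,a) + \int_{-\infty}^{a}\frac{\partial g(a,\theta)}{\partial a}\,\mathrm{d}\theta = (a-a)f(a|y) + \int_{-\infty}^{a} f(\theta|y)\,\mathrm{d}\theta,$$ and the boundary term vanishes because the integrand is zero at $\theta = a$, leaving exactly $\Pr(\theta \le a \mid y)$.</p>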
45,212
<p>In a one-way ANOVA F test, when F = 37.45, df1 = 5 and df2 = 40, what is the p-value? I tried several software packages, and the result is &lt;0.0001. I know it sounds weird that I need such a small probability, but I really need it for a publication. I would greatly appreciate it if anyone can help with this issue. Please let me know the exact number if you can calculate it.</p> <p>thanks</p>
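<p>If an exact figure (up to floating-point precision) is really required, one way to compute the upper tail probability, in R for example, is:</p> <pre><code>pf(37.45, df1 = 5, df2 = 40, lower.tail = FALSE)
</code></pre> <p>The result is far below 0.0001, which is why most packages simply report it as "&lt;0.0001"; many style guides suggest reporting such values as p &lt; 0.001 rather than as an exact tiny number.</p>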
36,894
<p>I have two separate time series, indexed by time in nanoseconds. They both measure the same thing but because they come about in two completely different ways, the number of observations they each give are not only irregularly spaced but are also different in number and don't align. Even the time is on different clocks so even the small subset of indices which match are not necessarily referring to exactly the same time.</p> <p>When I just look at points for each every millisecond (anything from 0 to upwards of 30 observations in different millisecond buckets), match the values to timestamp, and plot all such coinciding observations the time series look identical (this is over 200,000 points). Yet cross correlating them gives very bad values, like <code>0.03</code> or <code>0.16</code>. And if I zoom in on these at sets of 20 or so consecutive points I begin to see all the misalignment generating this. I can look at zoomed out plots and actual values to ascertain myself that they are "generally speaking" measuring the same thing and hovering similary, but I'm not sure how to do this with time series tools or a proper statistic. If the index sets were of the same length then I would consider smoothing over less granular buckets, but because they are not, it's a bit harder to come up with something to translate "the zoomed out graphs look the same" into a number. What are some things I can do in this scenario?</p>
36,895
<p>Let $X_1, X_2...X_n$ be iid with $f(x,\theta)=\dfrac{2x}{\theta^2}$ and $0&lt;x\leq\theta$. Find $c$ such that $\mathbb{E}(c\hat{\theta})=\theta$ where $\hat{\theta}$ denotes MLE of $\theta$.</p> <p>What I have tried: I found the MLE of $f(x;\theta)$ to be $\max\{X_1,X_2\cdots X_n\}$ (which aligns with the answer at the back) but now I am stuck at this question. The answer given is $\dfrac{2n+1}{2n}$.</p> <p>I would have proceeded as: $$\begin{align*} \mathbb{E}(c\hat{\theta})&amp;=\theta\\ \int_0^\theta c \,\dfrac{2x}{y^2}\,dx &amp;=\theta \quad (y = \max\{x_1,x_2,\cdots,x_n\})\\ \dfrac{1}{y^2}\int_0^\theta c \cdot 2x \,dx &amp;=\theta \end{align*}$$ But continuing this way gives me an answer far off from the one in the book (I don't have a term of n to begin with).</p> <p>Help! Hints only please, no complete solutions.</p>
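<p><strong>Edit:</strong> after more thought, the missing $n$ seems to come from the fact that the expectation $\mathbb{E}(c\hat{\theta})$ has to be taken with respect to the sampling distribution of $Y=\max\{X_1,\dots,X_n\}$, not the density of a single observation. Since $P(Y\leq y) = \left(y^2/\theta^2\right)^n$, the density of the maximum is $f_Y(y) = 2n\,y^{2n-1}/\theta^{2n}$ on $(0,\theta]$; computing $\mathbb{E}(Y)=\int_0^\theta y\, f_Y(y)\,dy$ and then solving $c\,\mathbb{E}(Y)=\theta$ produces the factor involving $n$.</p>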
74,174
<p>Bear with me as I try to word this question well (I'm a mathematical modeler, but not a statistics guru).</p> <p>We want to assess the response patterns of a binary timeseries. The data are from people answering questions for information they have not previously seen. The scenario is that answers are either correct <em>or</em> incorrect, and they have to get an answer correct twice in a row before it is "retired." Since people have not previously seen the information, the "perfect" pattern is <code>0-1-1</code>, meaning the person answered incorrectly, then was presented the correct answer/information, then retained that for the next two attempts.</p> <p>I'm trying to assess people's "trend" and am seeking a pointer to how this might be possible. It's not enough to collect responses, since two people having 3 correct responses and 3 incorrect responses might reflect very different "trends." To wit:</p> <ul> <li>Person A: <code>0-0-1-0-1-1</code></li> <li>Person B: <code>1-0-1-0-1-0</code></li> </ul> <p>In the first case, the person experiences a trend toward understanding. In the second, the person is alternating with no trend toward understanding. Ideally, we'd come up with some number normalized to assess some distance from the perfect <code>0-1-1</code> pattern. For instance, <code>1-0-1-1</code> would be more distant than <code>0-0-1-1</code>, but closer than <code>0-0-0-1-1</code>. Even a distance from 0, 1, or some other arbitrary point would be good if it was internally consistent.</p> <p>My first thought would be to look for something in signal analysis– assess the periodicity of the signal of responses. However, I was wondering if the statistics community has some manner to assess a binary sequence like this. Of course, I'm not looking for an easy answer (since there might not be one) but pointers in the right direction would be helpful.</p> <h2>Edit to address @gung's questions:</h2> <p>I don't expect that the timeseries will be more than 10 to 20 attempts max. Mean number of attempts per question seems to be about 2.8 for well written questions. The series will be different for each user's response to each question, with thousands of users answering generally on the order of 10 questions each (i.e somewhere between 1 and 100, depending on how long they participate). </p> <p>I'm looking to rank individual users per question and per group of questions, but also try to rank questions by groups of users (i.e. this question was poor because many users flipped back and forth between 0 and 1).</p>
74,175
<p>I work on a website that gets around 150,000 unique visitors a month.</p> <p>I am proposing to sample one in 1,000 people visiting the site with a pop-up survey as described in this <a href="http://stats.stackexchange.com/questions/39319/bayesian-user-survey-with-a-credible-interval">question and answer about calculating credible intervals for survey data</a>. When we ran this survey in the past, giving it to all our users for a week, about 1 in 5 answered the question and 4 in 5 dismissed the pop-up without answering.</p> <p>My Program Manager and my Director want to know what the justification for sampling 1 in 1,000 is (I was hoping this would minimize user annoyance by sampling at a low rate, and if the rate is low I would not need to set "supercookies" to keep track of who has taken the survey already, I'd just survey them again if they use the site so much.) They also question why, in the previous question and answer, this issue of size of the population the sample is from does not figure into the calculation of the credible interval.</p> <p>(How) do sampling rate (1% sampled vs. 1 per thousand) and completion rate affect the quantification of how certain we are about our results and conclusions? Or is it just the number of samples that matter without regard to knowing the total population size?</p>
74,176
<p>Is anyone aware of good data anonymization software? Or perhaps a package for R that does data anonymization? Obviously not expecting uncrackable anonymization - just want to make it difficult. </p>
74,177
<p>I’m reading a paper and really struggling with one appendix. Basically they derive conditional expectation of a multivariate normal, conditioning on absolute values. </p> <p>Let $$\boldsymbol y = \begin{bmatrix} \boldsymbol y_{a}^{\top} \\ \boldsymbol y_{b}^{\top} \end{bmatrix} $$</p> <p>$$\boldsymbol y \sim \mathcal{N}({\boldsymbol 0},{\Sigma_{y}})$$</p> <p>where $$ \Sigma_{y} = \begin{bmatrix} \Sigma_{aa} &amp; \Sigma_{ab}\\ \Sigma_{ba} &amp; \Sigma_{bb} \end{bmatrix}$$</p> <p>it is known that $E[y_a|y_b]=\Sigma_{ab}\Sigma_{bb}^{-1} \boldsymbol y_b $</p> <p>The bit I'm struggling with is:</p> <p>Let</p> <p>$\boldsymbol y_a=y_1$ and $\boldsymbol y_b^\top=(y_2,y_3) .$ We want to calculate the conditional expectation: $ E[y_1|y_2 = l_2, |y_3|=l_3]. $ Define $f(\boldsymbol y_b) = e^{-1/2 \boldsymbol{ y_{b}^{\top}\Sigma_{bb}^{-1}y_{b}}} $ and let $\boldsymbol l_b = (l_2,l_3)$</p> <p>according to the paper it is easy to show that:</p> <p>$ E[y_1|y_2 = l_2, |y_3|=l_3] = \boldsymbol{\Sigma_{ab} \Sigma_{bb}^{-1} \{ i_{11}^{(2,2)}-\tanh[l_{b}^{\top} i_{11}^{(2,2)} \Sigma_{bb}^{-1}i_{22}^{(2,2)} l_{b}]i_{22}^{(2,2)}l_{b}\}} $</p> <p>where $i_{ij}^{(m,n)}$ is the index matrix.</p> <p>Then they continue with a even more complicated conditional expectation which I would also like to understand. The link the paper: <a href="http://web.mit.edu/wangj/Public/Publication/Wang94.pdf" rel="nofollow">http://web.mit.edu/wangj/Public/Publication/Wang94.pdf</a> page 163.</p> <p>I would be extremely grateful if somebody would be able to help me with this.</p> <p>Many thanks. </p>
74,178
<p>Trying to understand the solution given to this homework problem:</p> <p>Define random variables $X$ and $Y_n$ where $n=1,2,\ldots$ with probability mass functions:</p> <p>$$ f_X(x)=\begin{cases} \frac{1}{2} &amp;\mbox{if } x = -1 \\ \frac{1}{2} &amp;\mbox{if } x = 1 \\ 0 &amp;\mbox{otherwise} \end{cases} \quad\text{and}\quad f_{Y_n}(y)=\begin{cases} \frac{1}{2}-\frac{1}{n+1} &amp;\mbox{if } y = -1 \\ \frac{1}{2}+\frac{1}{n+1} &amp;\mbox{if } y = 1 \\ 0 &amp;\mbox{otherwise} \end{cases} $$</p> <p>Need to show whether $Y_n$ converges to $X$ in probability.</p> <p>From this I can define the probability space $\Omega=([0,1],U)$ and express the random variables as functions of indicator variables as such:</p> <p>$X = 1_{\omega &gt; \frac{1}{2}} - 1_{\omega &lt; \frac{1}{2}}$ and $Y_n = 1_{\omega &lt; \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega &gt; \frac{1}{2}+\frac{1}{n+1}}$</p> <p>And from the definition of convergence in probability, we need to determine whether $P\{|Y_n-X|&gt;\epsilon\}$ converges to zero, which can be written as:</p> <p>$P\{|1_{\omega &lt; \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega &gt; \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega &gt; \frac{1}{2}} + 1_{\omega &lt; \frac{1}{2}}| &gt; \epsilon \}\;\;(1)$</p> <p>Now it's easy to see that $\epsilon &lt; 2$ is needed for this event to occur at all, but the solution given states that:</p> <p>$P\{|Y_n-X|&gt;\epsilon\} = 1 - \frac{1}{n+1} \;\; (2)$</p> <p>Thus $Y_n$ does not converge in probability to $X$.</p> <p>My problem is that I don't see the reasoning between (1) and (2). Can anyone shed some insight into the intermediate steps/reasoning required to make this step?</p>
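<p>For what it is worth, the intermediate step appears to be a plain case analysis on $\omega$ for the construction above (and any $0 &lt; \epsilon &lt; 2$): for $\omega &lt; \frac{1}{2}$ we have $X=-1$ but $Y_n=+1$, so $|Y_n-X|=2&gt;\epsilon$; for $\frac{1}{2} &lt; \omega &lt; \frac{1}{2}+\frac{1}{n+1}$ both variables equal $+1$, so the difference is $0$; and for $\omega &gt; \frac{1}{2}+\frac{1}{n+1}$ we have $X=+1$ but $Y_n=-1$, again giving $2$. Hence $$P\{|Y_n-X|&gt;\epsilon\} = P\left(\omega &lt; \tfrac{1}{2}\right) + P\left(\omega &gt; \tfrac{1}{2}+\tfrac{1}{n+1}\right) = \tfrac{1}{2} + \tfrac{1}{2} - \tfrac{1}{n+1} = 1 - \tfrac{1}{n+1},$$ which tends to $1$ rather than $0$, so under this particular coupling $Y_n$ does not converge to $X$ in probability.</p>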
74,179
<p>I have situation in which I compare many genomic regions between two or more cell lines (CL). Each region is covered by n probes measuring the level of methylation (continuous variable). The coverage (value of n = sample size per region, generally varies between 4 and 20) is different for different regions (array property, property of the base pair sequence) . However, for 1 region n is the same for all CLs.</p> <p>For example (n=2, A &amp; B)</p> <p>region 1 has 5 measurements in each cell line.</p> <pre><code>region 1, CL A: 23 9 80 62 31 region 1, CL B: -98 -65 -19 -95 -23 </code></pre> <p>region 2 has 10 measurements within the region (bigger region and/or better coverage)</p> <pre><code>region 2, CL A: 66 7 31 89 100 81 63 93 33 0 region 2, CL B: 17 -50 -89 -46 -52 -80 -7 -26 -62 -26 </code></pre> <p>The data has one additional property: the measurements WITHIN one cell line &amp; region are correlated, e.g. if one of the measurements (one genomic locus) gives a high methylation value, the adjacent locus is more probable to also be high. I have illustrated this by the sign of the numbers in the demo data. (There is no repeated measurement involved.)</p> <p>The number of regions of interest concerns several thousand regions.</p> <p>I want to do 2 things</p> <ol> <li>Test for each region whether A differs from B. Currently I use a Mann Whitney U test for this if n=2 (A,B) or Kruskal Wallis if n>2 (A,B,C,...) . I have been told that because of the correlation, the assumption of independence of the measurements (=samples) fails and I should use a permutation test. Is the approach as implemented in the "coin" package in R (<a href="http://cran.r-project.org/web/packages/coin/vignettes/coin.pdf" rel="nofollow">http://cran.r-project.org/web/packages/coin/vignettes/coin.pdf</a>) applicable to this problem (conditional counterpart of unconditional tests).</li> <li>Correct for multiple testing. As my sample sizes vary between the regions I am at a loss here. The p-values do not seem to be comparable because of this, so application of default FWER based correction might not be possible, right?</li> </ol> <p>I would very much appreciate any insight. Thanks a lot.</p>
61
<p>I'm trying to fit linear mixed models to 3 different DVs (so three models). I understand that REML gives less biased variance estimates. As I'm more interested in the fixed effects, I use ML for the initial stepwise model reduction based on AIC values, and use REML to fit my final (reduced) models. </p> <p>However, if I got that right, REML ignores the fixed part when fitting the model, right? And since 2 of my 3 models have only very little random variance, I'm confused about whether I should stick to ML estimates throughout. What is your opinion on this? Am I right in my understanding of REML vs. ML?</p>
74,180
<p>I am building a model with a highly significant interaction. This interaction was one of our main hypotheses. It is clear, however, that the form that the interaction takes does not represent a meaningful change across levels of either variable (one of which is <strong>time</strong>). The interaction is of 2 quadratic terms (and all lower order terms). Models with fewer terms do not fit the data well, and higher order terms do not improve fit. With the quadratic interaction, we see the predicted trends for different levels of covariates begin together for early time, diverge slightly for middle timings, and then reconverge for late time values. At most, the differences in the ranges of our data are not clinically meaningful. And the convergence at the end of the time range indicates that subjects end up in the same place anyway. Our best explanation for the divergence in the middle time is a change in selection criteria for entry into the study (not well documented, just a possibility). </p> <p>We have categorized the 2 predictors and tested that interaction, so as not to force a functional form. The predicted values for this show some trends similar to the quadratic interaction initially modeled, but are not as rigid or uniform. Additionally, we tested the interaction in a smaller, independently collected sample and found nothing indicating an interaction. </p> <p>In short, my question is what more do we need to officially write off this interaction? The trouble I see is that this was pretty much <strong>what we wanted to test</strong>, and we found something statistically significant, yet it appears not to be the greatest fit for the data, nor to offer a clinically important interpretation.</p>
74,181
<p>Which components should use for plotting in a PCA analysis? Should it be component 1 versus component 2, or any combination that shows clustering is okay to use?</p> <p>Also, I have seen that in a few cases the axis labels mention the variance that is shown (e.g. it says "Principal component 1 (Var. 58.09%)"). Does this mean that in these plots only a part of the corresponding axis is shown (i.e. is it zoomed-in)?</p>
74,182
<p>I am wondering if there is a way of calculating the following:</p> <p>I have a bag with 5 balls numbered from 1 through 5 that are going to be drawn. Due to different weights, the balls have different probabilities of being drawn:</p> <p>$P(Ball1) = 0.4$<br> $P(Ball2) = 0.3$<br> $P(Ball3) = 0.1$<br> $P(Ball4) = 0.1$<br> $P(Ball5) = 0.1$ </p> <p>There will be three draws. After each draw the balls are put back into the bag:</p> <ul> <li>Draw 1: Three balls</li> <li>Draw 2: Three balls</li> <li>Draw 3: Two balls </li> </ul> <p>Now, is it possible to calculate the probability that each ball will have been chosen at least once along the three draws?</p> <p>Disclaimer: I am not sure if the title is totally correct, but I think that my problem is a variant of <a href="http://stats.stackexchange.com/questions/5520/">Expected number of uniques in a non-uniformly distributed population</a> - and therefore the similar title. Correct the title as you think it may be more correct.</p> <p>My question is how to calculate this: $Pr(Ball1 &gt; 0, Ball2 &gt;0, Ball3 &gt; 0, Ball4 &gt;0, Ball5 &gt; 0)$ given 3 + 3 + 2 balls drawn.</p> <p><em>Edit</em>: The balls that are drawn (3 + 3 + 2) are always drawn at once which would imply that the Fisher's noncentral hypergeometric distribution is what I am looking for. Note that if the drawing would happen ball by ball, Wallenius' noncentral hypergeometric distribution, would be the distribution of choice.</p>
34,224
<p>Can anyone suggest where to obtain the results of the 10,000 coin flips (i.e., all 10,000 heads and tails) performed by John Kerrich during WWII?</p>
34,226