<p>I have a table in which multiple linear regression results are provided. If I have the unstandardized coefficient and standard error for each independent variable, is it possible to calculate the standardized coefficient (Beta) and the coefficient of determination ($R^2$) from these data?</p> <p>My friend provided $R^2=0.41$ for these data, but I doubt the results are reliable. I have included the table below; would you please compute Beta and R-squared for me to compare with the original table?</p> <p><img src="http://i.stack.imgur.com/LJGFb.jpg" alt="enter image description here"></p> <p>The descriptive statistics are in the following table:</p> <p><img src="http://i.stack.imgur.com/re6g2.jpg" alt="enter image description here"></p>
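<p><em>A minimal sketch of the Beta conversion, assuming the descriptive-statistics table supplies the standard deviation of each predictor and of the dependent variable (the numbers below are hypothetical placeholders, not values from the table):</em></p> <pre><code># standardized coefficient = unstandardized coefficient * sd(x) / sd(y)
b_unstd &lt;- 0.25      # hypothetical unstandardized coefficient from the table
sd_x    &lt;- 1.8       # hypothetical sd of that predictor
sd_y    &lt;- 3.2       # hypothetical sd of the dependent variable
beta_std &lt;- b_unstd * sd_x / sd_y
beta_std
</code></pre> <p><em>With several (possibly correlated) predictors, $R^2$ generally cannot be recovered from the coefficient table and standard deviations alone; it requires additional information such as the residual or total sum of squares.</em></p>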
49,927
<p>Let's say I have 7 urns filled with random numbers of colourful marbles. An example dataset is as below:</p> <pre><code>data &lt;- matrix (c(5,3,4,4,4,2,1,1,1,1,1,2,2,1,2,2,1,1,2,1,4,1,1,2,4,1,3,1,7,1,1,2,1,3,3), ncol = 5);
rownames(data) &lt;- as.character(seq(1,7));
colnames(data) &lt;- c("red", "blue", "yellow", "green", "pink");
</code></pre> <p>I assume the count-based colour distribution to be multinomial. I want to test whether the contents of Urn 1 follow the colour distribution of Urns 2-7. </p> <p>I get the MLEs for Urns 2-7 as:</p> <pre><code>p_sample &lt;- colSums(data[2:7,])/ sum(colSums(data[2:7,]))
</code></pre> <p><strong>1)</strong> When I run a $\chi^2$ test as below, R displays a warning message since the expected counts (EC) are &lt;5. </p> <pre><code>obs &lt;- data["1",]
chisq.test(obs, p = p_sample)

# Chi-squared test for given probabilities
# data:  obs
# X-squared = 8.0578, df = 4, p-value = 0.08948
#
# Warning message:
# In chisq.test(obs, p = p_sample) :
#   Chi-squared approximation may be incorrect
</code></pre> <p>One of the answers to this <a href="http://stats.stackexchange.com/questions/14226/given-the-power-of-computers-these-days-is-there-ever-a-reason-to-do-a-chi-squa">question</a> states that the $\chi^2$ test would nevertheless return accurate results as long as ECs exceed 1.0, provided a very simple $\frac{N-1}{N}$ correction is applied to the test statistic. Is the correction implemented simply like this: $\chi^2 = \left(\sum_i \frac{(O_i - E_i)^2}{E_i}\right) \times \frac{N-1}{N}$ ?</p> <p>This <a href="https://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/notes/chisqr_assumptions" rel="nofollow">link</a> suggests so, but I am not sure about its reliability:</p> <blockquote> <p>If one has the regular Pearson chi-square (e.g., in the output from statistical software), it can be converted to the 'N - 1' chi-square as follows:</p> <pre><code> 'N -1' chi-square = Pearson chi-square x (N -1) / N
</code></pre> </blockquote> <p><strong>2)</strong> As an alternative to $\chi^2$, since the counts are low, I ran Fisher's exact test and got the results below. Is my call to <code>fisher.test</code> below correct? The result I get makes me think: "No". I am confused, since the R help document only refers to use cases for contingency tables.</p> <pre><code>fisher.test(obs, sum(obs)*p_sample)

# Fisher's Exact Test for Count Data
#
# data:  obs and sum(obs) * p_sample
# p-value = 1
# alternative hypothesis: two.sided
</code></pre>
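<p><em>A minimal sketch of the $(N-1)/N$ correction described above, applied to the statistic returned by <code>chisq.test</code>; this only rescales the statistic and recomputes the p-value against the same $\chi^2$ reference distribution, and is not a claim that it is the only valid small-sample remedy:</em></p> <pre><code>obs &lt;- data["1", ]
N   &lt;- sum(obs)                                    # total count in urn 1
res &lt;- chisq.test(obs, p = p_sample)
stat_adj &lt;- unname(res$statistic) * (N - 1) / N    # 'N - 1' chi-square
p_adj    &lt;- pchisq(stat_adj, df = res$parameter, lower.tail = FALSE)
c(statistic = stat_adj, p.value = p_adj)
</code></pre>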
74,183
<p>Apologies if this is a very basic question.</p> <p>If we have data that are not normally distributed (e.g. skewed, Shapiro-Wilk test is significant) and we resort to rank-based methods (e.g. Wilcoxon Signed Rank test), then do we need to be concerned with outliers?</p> <p>Imagine, for example, we plot the data using a boxplot and a minority of data points are marked as outliers. Should we transform those points? Or remove them? It seems to me that many textbooks talk about dealing with outliers, but only because they exert a major influence on the parameters such as mean and standard deviation. However, when we use a rank-based test they will already be 'transformed' to be the next value in the rank, and would therefore not exert a major influence on the test. I have not seen this stated explicitly in a statistics book so far, so I thought I would ask the question here.</p> <p>Do we need to worry about outliers when using rank-based tests?</p>
279
<p>I'm taking a class on R and I cannot get the professor's code to work. I am trying to fit a simple linear model and I run this code:</p> <blockquote> <p>ozone&lt;-read.table("<a href="http://www.ats.ucla.edu/stat/r/faq/ozone.csv" rel="nofollow">http://www.ats.ucla.edu/stat/r/faq/ozone.csv</a>", sep=",", header=T)</p> <p>fit = lm(ozone~.,data=ozone)</p> <p>summary(fit)</p> </blockquote> <p>This keeps giving me the following error:</p> <blockquote> <p>Error in model.frame.default(formula = ozone ~ ., data = ozone, drop.unused.levels = TRUE) : invalid type (list) for variable 'ozone'</p> </blockquote> <p>It's really depressing, as these are the first two lines of code in his lecture notes. I have also found several other forum posts on this topic (it's even listed as a common R mistake), but I am too...special to figure out how to change it.</p> <p>I tried reading the data in with as.numeric and as a data.frame, which is what most other threads suggested, but neither worked.</p>
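<p><em>For context, the usual cause of this particular error is that the data frame itself is called <code>ozone</code>: if <code>lm()</code> cannot find a column named <code>ozone</code> inside <code>data</code>, it looks in the calling environment, finds the whole data frame (a list), and fails. A minimal sketch of one way to diagnose and avoid this, assuming the CSV really does contain an ozone column (if <code>names(oz)</code> shows a different spelling, use that name in the formula):</em></p> <pre><code>oz &lt;- read.table("http://www.ats.ucla.edu/stat/r/faq/ozone.csv",
                 sep = ",", header = TRUE)    # give the data frame a different name
names(oz)                                     # check what the columns are actually called
fit &lt;- lm(ozone ~ ., data = oz)               # 'ozone' now refers to the column, not the data frame
summary(fit)
</code></pre>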
74,184
<p>I am seeking a statistic measuring an estimate's reliability or stability as an alternative to the coefficient of variation (CV), also known as the relative standard error. The CV is the standard error of an estimate (proportion, mean, regression coefficient, etc.) divided by the estimate itself, usually expressed as a percentage. For example, if a survey finds 15% unemployment with a 6% standard error, the CV is .06/.15 = .4 = 40%. </p> <p>Some US government agencies flag or suppress as unreliable any estimate with a CV over a certain threshold such as 30% or 50%. But this standard can be arbitrary (for example, 85% employment would have a much lower CV of .06/.85 = 7%) and has other limitations (such as when the estimate is zero).</p> <p>Can anyone suggest an alternative measure of stability or reliability?</p>
74,185
<p>I use the <code>escalc()</code> function from the <code>metafor</code> package to calculate various effect sizes or outcome measures (and the corresponding sampling variances) that are commonly used in meta-analyses.</p> <p>Most articles provide tables of <em>mean</em> and <em>standard deviation</em>, which can easily be used by <code>escalc()</code>.</p> <pre><code># for example:
# group A == mean=7; sd=1.8; n=13
# group B == mean=3.5; sd=3; n=179
escalc(m1i=7, sd1i=1.8, n1i=13, m2i=3.5, sd2i=3, n2i=179, measure="MD")

      yi     vi
1 3.5000 0.2995
</code></pre> <p>...unfortunately, some articles provide tables consisting of <em>means</em> and <em>confidence intervals</em>.</p> <p><strong>Is there any way to compute the effect size using confidence intervals instead of standard deviations?</strong></p> <pre><code># for example
# group A == mean=19.25; CI=17.1-20.1; n=28
# group B == mean=8; CI=6.8-9.2; n=72
</code></pre> <p><strong>P.S.</strong> or, if not from the CI, then maybe from the <em>range</em> (probably impossible).</p>
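<p><em>A minimal sketch of the usual back-calculation, assuming the reported intervals are symmetric two-sided 95% confidence intervals for the mean based on the t distribution (for normal-based intervals, replace <code>qt(0.975, n - 1)</code> with <code>qnorm(0.975)</code>; note that the group A interval above is not symmetric about its reported mean, so the approximation may not apply directly there):</em></p> <pre><code>library(metafor)

sd_from_ci &lt;- function(lower, upper, n) {
  # half-width = t_crit * sd / sqrt(n)  =&gt;  sd = half-width * sqrt(n) / t_crit
  half &lt;- (upper - lower) / 2
  half * sqrt(n) / qt(0.975, df = n - 1)
}

sd1 &lt;- sd_from_ci(17.1, 20.1, 28)   # group A
sd2 &lt;- sd_from_ci(6.8,  9.2,  72)   # group B
escalc(m1i = 19.25, sd1i = sd1, n1i = 28,
       m2i = 8,     sd2i = sd2, n2i = 72, measure = "MD")
</code></pre>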
36,904
<p>I want to determine if there's a difference in mean p-values between two groups. In order to do this I perform a Wilcoxon rank-sum test (the data are not normally distributed). So far, so good. Finally, I want to calculate the corresponding effect size. Unfortunately, R does not provide this. It also does not provide a z value with which the effect size can easily be calculated using: effect size = z / sqrt(N)</p> <p>Here is some sample R code:</p> <pre><code>a=rep(0:1,each=20) #grouping variable
b=c(rnorm(20, .03,.01), rnorm(20, .02, .009)) #vector of p-values
d=cbind(a,b)
test = wilcox.test(b ~ a, data = d) #perform Wilcoxon rank-sum test
test
</code></pre> <p>Does anybody know how to obtain the effect size?</p>
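<p><em>A minimal sketch of one common workaround: recover $z$ from the p-value of the normal-approximation test (hence <code>exact = FALSE, correct = FALSE</code>, so the two-sided p-value really is a standard-normal tail area), then use $r = |z|/\sqrt{N}$. This only reproduces $z$ under that approximation; it is not the only way to obtain it.</em></p> <pre><code>test &lt;- wilcox.test(b ~ a, data = data.frame(a, b), exact = FALSE, correct = FALSE)
z &lt;- qnorm(test$p.value / 2)       # two-sided normal-approximation p-value back to a z score
r &lt;- abs(z) / sqrt(length(b))      # effect size r = z / sqrt(N), here N = 40
c(z = z, r = r)
</code></pre>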
74,186
<p>Suppose we have a joint distribution on the vector $[\mathbf{x}, y]$: $$ p([y, \mathbf{x}] ) = \mathcal{N}\left(\begin{pmatrix} y \\ \mathbf{x}\end{pmatrix}| 0, \begin{pmatrix} k&amp; \mathbf{v} \\ \mathbf{v}^T &amp; K\end{pmatrix}\right), $$ where $\mathbf{x} \in \mathbb{R}^N$, $y \in \mathbb{R}$. We also know the distribution of $\mathbf{x}$ conditioned on some data $D$: $$ q(\mathbf{x}| D) = \mathcal{N} (\mu, \sigma^2 I). $$ What does the analytical expression for $p(y| D)$ look like (i.e. what is the simplest way to handle this integral)?: $$ p(y| D) = \int p(y| \mathbf{x}) q(\mathbf{x}| D) d \mathbf{x} = ? $$</p> <p>I have tried to solve this, but everything I obtain looks like a monster, which is a problem because I need to use this expression later in my research.</p>
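<p><em>For reference, a sketch of the standard route (stated, not derived in full, reading $\mathbf{v}$ as the cross-covariance $\operatorname{Cov}(y,\mathbf{x})$): from the joint Gaussian, $p(y|\mathbf{x}) = \mathcal{N}(y \mid \mathbf{v} K^{-1}\mathbf{x},\; k - \mathbf{v} K^{-1}\mathbf{v}^T)$, so $y$ depends on $\mathbf{x}$ through a linear-Gaussian model. Applying the identity $\int \mathcal{N}(y \mid A\mathbf{x}, s)\,\mathcal{N}(\mathbf{x} \mid \mu, \Sigma)\, d\mathbf{x} = \mathcal{N}(y \mid A\mu,\; s + A\Sigma A^T)$ with $A=\mathbf{v} K^{-1}$ and $\Sigma = \sigma^2 I$ gives</em> $$ p(y|D) = \mathcal{N}\left(y \mid \mathbf{v} K^{-1}\mu,\; k - \mathbf{v} K^{-1}\mathbf{v}^T + \sigma^2\, \mathbf{v} K^{-1} K^{-1}\mathbf{v}^T\right). $$</p>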
74,187
<p>I know that linear regression can be thought as <em>"the line that is vertically closest to all the points"</em>:</p> <p><img src="http://i.stack.imgur.com/IXibx.png" alt="enter image description here"></p> <p>But there is another way to see it, by visualizing the column space, as <em>"the projection at the space spanned by the columns of the coefficient matrix"</em>: </p> <p><img src="http://i.stack.imgur.com/txk3G.png" alt="enter image description here"></p> <p>My question is: in these two interpretations, what happens when we use the penalized linear regression, like <strong>ridge regression</strong> and <strong>LASSO</strong>? What happens with the line in the first interpretation? And what happens with the projection in the second interpretation?</p> <p><strong>UPDATE:</strong> @JohnSmith in the comments brought up the fact that the penalty occurs in the space of the coefficients. Can we come up with an interpretation in this space also?</p>
74,188
<p>I'm trying to build a model that describes a payment process and the distribution of payments over time. I believe that the time of payment follows a <a href="http://en.wikipedia.org/wiki/L%C3%A9vy_distribution" rel="nofollow">Lévy distribution</a> with probability density function:</p> <p>$ f(x,c)=\sqrt{\frac{c}{2\pi}}~~\frac{e^{ -\frac{c}{2x}}} {x^{3/2}} $</p> <p>This distribution depends on the parameter <em>c</em>, which defines its shape. My task is to build a model that explains the dependence of this parameter on some explanatory variables. I'm trying a linear dependence $c = \sum_i \beta_i x_i$. </p> <p>This is an example of a generalized linear model and is implemented in the <a href="http://cran.r-project.org/web/packages/VGAM/index.html" rel="nofollow">VGAM package</a> in R. The problem is that the sample for building this model contains data only from some initial period, and this period differs between groups of cases. Because of that, I cannot simply run the VGAM model on these data, as the result would be incorrect, significantly exaggerating the probability of early payments.</p> <p>One possible solution I can think of is to change the likelihood function from which the parameter is estimated. If we have information only up to time <em>t</em>, and since the cumulative distribution function of the Lévy distribution is:</p> <p>$ F(x,c)=\textrm{erfc}\left(\sqrt{c/2x}\right) $</p> <p>the density truncated to times up to <em>t</em> is $ f_1(x,c,t)= \frac{f(x,c)}{F(t,c)} $ (where $f(x,c), F(t,c)$ are defined as above). This new density can be used to estimate the regression parameters by maximum likelihood. But can this be done in R using the VGAM package, the usual <strong>glm</strong> function, or some other package? Or are there better approaches to my problem? I'm interested in an implementation in R.</p> <p>Thank you in advance for any help!</p>
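<p><em>A minimal sketch of the "modify the likelihood" idea, maximizing the truncated log-likelihood directly with <code>optim()</code> rather than through VGAM. Everything below the function definitions is hypothetical toy data just to make the sketch runnable, and it uses a log link $c_i = \exp(X_i\beta)$ to keep $c$ positive, whereas the question states an identity link:</em></p> <pre><code># Levy density and CDF; erfc(z) = 2*pnorm(-z*sqrt(2)), so F(t,c) = 2*pnorm(-sqrt(c/t))
dlevy &lt;- function(x, c) sqrt(c / (2 * pi)) * exp(-c / (2 * x)) / x^1.5
plevy &lt;- function(t, c) 2 * pnorm(-sqrt(c / t))

# negative log-likelihood of the truncated density f(x,c)/F(tmax,c)
negloglik &lt;- function(beta, x, tmax, X) {
  c_i &lt;- exp(X %*% beta)                       # assumption: log link, not identity
  -sum(log(dlevy(x, c_i)) - log(plevy(tmax, c_i)))
}

# hypothetical toy data: 50 cases, one covariate, observation cutoff at t = 5
set.seed(1)
X    &lt;- cbind(1, runif(50))
x    &lt;- runif(50, 0.1, 5)                      # payment times within the observed window
tmax &lt;- rep(5, 50)                             # per-case cutoff
fit  &lt;- optim(c(0, 0), negloglik, x = x, tmax = tmax, X = X)
fit$par
</code></pre>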
38,194
<p>I have been learning about Gaussian kernel SVMs recently. I have to choose the parameter $\epsilon$ for the Gaussian kernel, $$k(x,y)=e^{-\frac{\|x-y\|_2^2}{2\epsilon}}.$$</p> <p>I have tried to find the answer in the literature. I found some papers that analyze the parameters, like '<a href="http://www.ncbi.nlm.nih.gov/pubmed/20221922" rel="nofollow">A User's Guide to Support Vector Machines</a>' (<a href="http://pyml.sourceforge.net/doc/howto.pdf" rel="nofollow">PDF</a>). I really want to analyze this myself step by step, but I have other work to finish. I also found the paper '<a href="http://www.ncbi.nlm.nih.gov/pubmed/14690712" rel="nofollow">Practical Selection of SVM Parameters and Noise Estimation for SVM Regression</a>' (<a href="http://www.svms.org/parameters/CherkasskyMa2004.pdf" rel="nofollow">PDF</a>). </p>
74,189
<p>I am working on time series analysis. The book I am using ("Statistics for Long Memory Processes" by Jan Beran) uses a number of measures of process memory, and one of them is to graph the sample mean variance against the sample size (in other words, for every n, the variance of the mean of $X_n$ is graphed).</p> <p>There is a graph of the famous Nile low water level dataset (page 21) and I have not been able to replicate it with the same dataset. I tried, for every n, to take many (100k) random samples, calculate the average of each of them, and graph their variance against n. My graph is significantly different; most importantly, while the log-log graph for me is fairly continuous and has a slope of 1, the graph in the book has a much different slope.</p> <p>Am I going the wrong way about this?</p> <p>EDIT: As per the request So here is the code:</p> <pre><code>import random
from numpy import mean, var

sampleCount = 100000

def sampleMeanVariance(inputArray, sampleSizes):
    for sampleSize in sampleSizes:
        mean_set = []
        for i in range(sampleCount):
            interarrivalTimesSample = random.sample(inputArray, sampleSize)
            mean_set.append(mean(interarrivalTimesSample))
        print sampleSize, var(mean_set)
</code></pre> <p>Here is <a href="http://postimg.org/image/kgpl3wjrx/" rel="nofollow">my plot</a>.</p> <p>Here is the <a href="http://books.google.de/books?id=jdzDYWtfPC0C&amp;lpg=PP1&amp;dq=Statistics%20for%20long%20range%20processes%20nile%20river&amp;hl=de&amp;pg=PA21#v=onepage&amp;q&amp;f=false" rel="nofollow">book plot</a> (pg 21):</p>
36,907
<p>In a normal distribution, the <a href="http://en.wikipedia.org/wiki/68-95-99.7_rule">68-95-99.7 rule</a> imparts standard deviation a lot of meaning. But what would standard deviation mean in a non-normal distribution (multimodal or skewed)? Would all data values still fall within 3 standard deviations? Do we have rules like the 68-95-99.7 one for non-normal distributions?</p>
20,350
<p>I am using panel data and I would like to determine whether I can use the Random Effects (RE) model instead of Fixed Effects (FE) to estimate one coefficient of interest. When I use the Hausman test comparing FE and RE, I have to reject the null hypothesis (meaning that the RE model is not OK). However, the difference between the coefficient of interest estimated by FE and RE is not statistically significant. So my question is: can I justify using the RE model based on this fact alone? After all, the null hypothesis of the Hausman test must be rejected only because the estimates of some other covariates (control variables) differ significantly between the RE and FE approaches. But in my case, these variables are not of interest and I do not need consistent estimates for them.</p>
36,910
<p><a href="http://journals.ametsoc.org/doi/abs/10.1175/JCLI4253.1" rel="nofollow">Perkins et al. (2007)</a> introduce a "skill score" for measuring climate model output against observations. The score basically consists of measuring the overlap between probability density functions of the model (m), and the observations (o); for some variable (eg. maximum daily temperature). It is calculated as </p> <p>$$S_{score} = \int^\infty_{-\infty} min[pdf(m),\ pdf(o)]$$</p> <p>I'm trying to wrap my head around bayesian conditional distributions, and not getting far. This seems related, in that it's some measure of the likelihood of the model being a good estimate of the observations. However, I can't figure out if it's equivalent or not.</p> <p>Given $P(m|o) = \frac{P(o|m)P(m)}{P(o)}$, is it correct that $P(m)=\int^\infty_{-\infty} pdf(m)=1$, and the same for the obs? Or am I missing something big here?</p>
48,610
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="http://stats.stackexchange.com/questions/14500/how-can-a-regression-be-significant-yet-all-predictors-be-non-significant">How can a regression be significant yet all predictors be non-significant?</a> </p> </blockquote> <p>X and Y are not correlated (-.01); however, when I place X in a multiple regression predicting Y, alongside three other (related) variables, X and two other variables are significant predictors of Y. Note that the two other variables are significantly correlated with Y.</p> <p>How should I interpret these findings? X predicts unique variance in Y, but since these are not correlated, it is somehow difficult to interpret. </p> <p>I know of opposite cases (i.e., two variables are correlated but regression is not significant) and those are relatively simpler to understand from a theoretical and statistical perspective. </p>
49,368
<p>Apologies for the almost text-book like question. </p> <p>I have a 2x2 design with fixed categories and a continuous response variable.</p> <p>If the variances are equal between groups (Bartlett test) and the residuals are normally distributed (Shapiro test), then I can do a standard ANOVA. </p> <p>Otherwise: </p> <ol> <li><p>Try transforming the data (e.g: arcsin(sqrt), or log(), or even rank()). If the transformed data have homoscedastic and normally distributed residuals, do a normal ANOVA. </p></li> <li><p>One option: Kruskal test (tells you whether any means differ between groups) followed by many pairs of wilcox tests (to identify which means differ). If all are significant, all factors (and interactions) are significant. </p></li> <li><p>Another option: Use the bootstrap approach (permuting residuals) outlined here: <a href="http://stats.stackexchange.com/questions/12151/is-there-an-equivalent-to-kruskal-wallis-one-way-test-for-a-two-way-model">Is there an equivalent to Kruskal Wallis one-way test for a two-way model?</a></p></li> </ol> <p>Is this correct?</p>
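<p><em>A minimal sketch of the assumption checks described above for the 2x2 design, with hypothetical variable names (<code>y</code>, factors <code>A</code> and <code>B</code> in a data frame <code>d</code>):</em></p> <pre><code>fit &lt;- aov(y ~ A * B, data = d)
shapiro.test(residuals(fit))                     # normality of residuals
bartlett.test(y ~ interaction(A, B), data = d)   # equal variances across the 4 cells
summary(fit)                                     # standard two-way ANOVA if both checks pass
</code></pre>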
34,294
<p>Intervention analysis in the Box-Jenkins framework corresponds to time-series regression with ARMA errors if the noise is stationary, or ARIMA errors if the noise is non-stationary. </p> <p>For seasonal time series data with an increasing trend, the noise model can be expressed as</p> <p>$$ N_t = \frac{\Theta(B)}{(1-B)(1-B^{12})\Phi(B)} \eta_t $$ </p> <p>If there are a step intervention $S_t$ (0 before the intervention and 1 after) and a pulse intervention $P_t$ (1 at the intervention and 0 elsewhere), the model can be expressed as</p> <p>$$ Y_t=\beta_1S_t+\beta_2P_t+\frac{\Theta(B)}{(1-B)(1-B^{12})\Phi(B)} \eta_t $$</p> <p>Also, because there may be different responses to the interventions, say a gradual change in level modeled by $\frac{\omega S_t}{1-\delta B}$ or a decaying response $\frac{\omega P_t}{1-\delta B}$: </p> <p>$$ Y_t=\frac{\omega S_t}{1-\delta B}+\frac{\omega P_t}{1-\delta B}+\frac{\Theta(B)}{(1-B)(1-B^{12})\Phi(B)} \eta_t $$</p> <p>Therefore my question is:</p> <p>if the data are a seasonal time series, does that mean in practice that we need to apply the differences $(1-B)(1-B^{12})S_t$ and $(1-B)(1-B^{12})P_t$ along with $(1-B)(1-B^{12})Y_t$ when considering those interventions?</p> <p>Thanks and Regards</p>
74,190
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="http://stats.stackexchange.com/questions/35510/why-does-including-latitude-and-longitude-in-a-gam-account-for-spatial-autocorre">Why does including latitude and longitude in a GAM account for spatial autocorrelation?</a> </p> </blockquote> <p>I am interested in the effect of a predictor vector $X_i$ on a binary outcome $Y_i$, with corresponding spatial location $s_i$, so I've considered the spatial random effects model, </p> <p>\begin{equation} logit(P(Y_i=1)) = X_i \beta + Z(s_i) \end{equation}</p> <p>where $Z(s_i)$ is a stationary mean zero random process whose covariance structure is known up to a number of parameters $\theta$. It appears that this model is usually fit using Bayesian sampling algorithms, which can take a very long time to run when $n$ is large, since you have to sample from the joint posterior distribution of $\beta, \theta$ and the random effects. To circumvent the computational complexity and the expert tuning required for the MCMC, I've instead fit the model </p> <p>\begin{equation} logit(P(Y_i=1)) = X_i \beta + f(s_i) \end{equation}</p> <p>where $f(s_i)$ is estimated by a 2-D spline, which can be fit with the gam package in R in a matter of seconds. I have found in simulation studies that this model is effective at removing the spatial autocorrelation from the residuals and gives reasonable estimates for $\beta$. This leaves me wondering why this approach is not prominent in the literature. Why do users often opt for the computationally intensive sampling algorithms? Is there some drawback or something systematically lacking in this approach? </p>
49,903
<p>Based on real data (e.g. spot and futures prices of an index) if two series are correlated in the long run (e.g. strong positive significant correlation) it does not mean that they are cointegrated.</p> <p>What if two series are cointegrated: can we infer that they are also correlated in the long run? Can we find a case with real data that two series are cointegrated but they are not correlated in the long run?</p>
74,191
<p>I am trying to simulate a process of selection without replacement. The process is one in which the system places a set of items in a specific order, and then the user selects N items in whatever order they want. The order of the items will vary depending on an algorithm. </p> <p>The probability that each item will be picked by the user varies based on: the item, the position, and which draw the user is on (1st, 2nd, etc.). </p> <p>The full probability space is: 1 = Pr(Draw any item) + Pr(Draw no items and end selection)</p> <pre><code>+--------+----------+
| Item   | Position |
+--------+----------+
| Item A | 1        |
| Item B | 2        |
| Item C | 3        |
| Item D | 4        |
+--------+----------+
</code></pre> <p>My challenge is trying to figure out how to generate a conditional probability table that can be used in a simulation. I have one that is Pr(Click | Item, Position, Draw Number). But what I need is Pr(Click | Item, Position, Draw Number, {History of draws})</p> <pre><code>+--------+----------+------+-------------+
| Item   | Position | Draw | Probability |
+--------+----------+------+-------------+
| Item A | 1        | 1    | 0.33        |
| Item B | 2        | 1    | 0.25        |
| Item C | 3        | 1    | 0.2         |
| Item D | 4        | 1    | 0.14        |
| Item A | 1        | 2    | 0.1         |
| Item B | 2        | 2    | 0.3         |
| Item C | 3        | 2    | 0.18        |
| Item D | 4        | 2    | 0.05        |
+--------+----------+------+-------------+
</code></pre> <p>This table fails to take into account that the process is a draw without replacement. The probabilities for draw 2 do not take into account which item was selected in the first draw.</p> <p>If I simulate a series of draws, and simulate the user on Draw 1 selecting Item B, is there a way to modify the probabilities of Items A, C, and D for draw 2 to account for the fact that Item B is no longer eligible?</p> <p>I think there is an application of Bayes' theorem here, but I'm struggling with how to adapt Bayes' formula to account for the number of conditionals.</p>
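<p><em>A minimal sketch of one common modeling assumption (not necessarily how real users behave): set the drawn item's probability to zero and redistribute its mass proportionally over the remaining outcomes, including the implicit "end selection" outcome. The numbers are the draw-2 values from the table above:</em></p> <pre><code>p2 &lt;- c(A = 0.10, B = 0.30, C = 0.18, D = 0.05)   # Pr(click | item, position, draw 2)
p2 &lt;- c(p2, stop = 1 - sum(p2))                   # complete the probability space

p2["B"] &lt;- 0                                      # item B was taken on draw 1
p2 &lt;- p2 / sum(p2)                                # renormalize: Pr(. | draw 2, B already drawn)
round(p2, 3)
</code></pre> <p><em>This is just conditioning on the event "the next outcome is not B"; whether proportional redistribution is realistic for actual users is an assumption that would need checking against data.</em></p>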
74,192
<p>I understand how the best split is chosen in a random forest for numerical predictors (features). </p> <p>Numerical predictors are sorted; then, for every value, the Gini impurity or entropy is calculated, and the threshold that gives the best split is chosen. But how is the best split chosen for a categorical predictor, given that there is no specific ordering?</p>
49,622
<p>I am trying to fit a linear model (<code>lme4::lmer()</code>) to my data in R. I would like to look at a number of things, including "<strong>scrambling</strong>" of visual stimuli and "<strong>intensity</strong>" of the emotions portrayed therein. These things are stored in the "<strong>scrambling</strong>" and "<strong>intensity</strong>" columns of my dataframe.</p> <p>To ease your comprehension, you may see a plot of my data in <a href="http://stats.stackexchange.com/questions/76134/determining-best-approximator-based-on-repeated-measurements">this other thread</a>.</p> <p>I have been told that linear model results can be compromised if category names are parsed as integers instead of strings by accident. But since these measures (<strong>scrambling</strong> and <strong>intensity</strong>) are kind-of quantitative, I am thinking it may be better to leave them as integers - or maybe even use both approaches separately.</p> <p>I am, however, unsure how my interpretation of the results should vary depending on whether my category IDs are passed as strings or ints.</p> <p>Could anyone explain this to me?</p> <p>Also, would this differentiation still hold when I use <code>stats::aov()</code> on the same data?</p>
74,193
<p>I am frequentist by training and practice, but I'd like to learn more about Bayesian statistics. I know the basics, but I would be at a loss if I had to, for example, replace my normal ANOVA hypothesis testing approach with a Bayesian alternative.</p> <p>What book would you recommend to learn practical Bayesian approaches? Preferably using R.</p>
36,918
<p>I have a dataset (data1.csv) that contains some missing data coded as <strong>.</strong> (missing at random). I am creating a subset from this dataset (d1) such that only complete observations are retained in d2. I am using the <strong>ftable</strong> and <strong>as.data.frame</strong> functions and creating a column <strong>p</strong> that represents the percentage of each combination.</p> <pre><code>d1 = read.csv("C:/Users/....../Data1.csv",header=T)
d1[d1=='.'] &lt;- NA
d2=na.omit(d1)
d3= ftable(d2)
d4=as.data.frame(d3)
d4$p
</code></pre> <p>The <strong>ftable</strong> and <strong>as.data.frame</strong> functions work fine, but the problem is that I still see the missing-data code in my results (d4). I thought I had gotten rid of it when I did </p> <pre><code>d1[d1=='.'] &lt;- NA
d2=na.omit(d1)
</code></pre> <p>So I need help getting rid of the missing values and building the frequency table with complete observations only.</p>
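<p><em>A sketch of two things that commonly cause this, assuming the "." codes were read in as factor levels: read them as <code>NA</code> directly with <code>na.strings</code>, and drop the now-unused factor levels before tabulating (otherwise <code>ftable</code> still creates cells for the "." level even after the rows are removed):</em></p> <pre><code>d1 &lt;- read.csv("C:/Users/....../Data1.csv", header = TRUE, na.strings = ".")
d2 &lt;- droplevels(na.omit(d1))   # remove incomplete rows and the unused factor levels
d3 &lt;- ftable(d2)
d4 &lt;- as.data.frame(d3)
</code></pre>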
74,194
<p>Consider 10 values that follow a standard normal distribution. What would you expect to be the lowest value?</p> <p>I tried to simulate this problem in R. I basically simulated 100,000 sets of 10 standard normal values and took the mean of each set's lowest value.</p> <pre><code>&gt; mean(replicate(100000,min(rnorm(10))))
[1] -1.536875
</code></pre> <p>This corresponds to a probability of getting a lower value of </p> <pre><code>&gt; pnorm(-1.536875)
[1] 0.06216196
</code></pre> <p>I tried to reach these values analytically but I really have no idea how to approach this.</p> <p>I thought about it for quite some time now and also tried to look it up. But I can't find a solution to this simple problem. Probably I'm just overlooking something obvious. Can someone help me?</p>
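<p><em>For the analytic side, a sketch using the density of the minimum of $n$ iid draws, $f_{(1)}(x) = n\,[1-\Phi(x)]^{n-1}\phi(x)$, and numerical integration for its expectation:</em></p> <pre><code>n &lt;- 10
f_min &lt;- function(x) n * (1 - pnorm(x))^(n - 1) * dnorm(x)   # density of the minimum
integrate(function(x) x * f_min(x), lower = -Inf, upper = Inf)
# approximately -1.5388, close to the simulated -1.536875
</code></pre>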
36,920
<p>Training data with $p$ =11 predictors and $n$ =165 with 4-class problem was cross-validated (5 times repeated 10-fold CV) using the sparse LDA (aka SDA) using <code>caret</code> package. This model is a regularized version of LDA with two tuning parameters: <em>lasso</em> using $\ell_{1}$ penalty and ridge using the $\ell_{2}$ penalty. The former will eliminate unimportant predictors and hence provide <em>feature selection</em>, a desired effect, while ridge will shrink the discriminant coefficients towards zero.<br> In <code>caret</code> one can tune over the no of predictors to retain <em>instead</em> of defined values for $\ell_{1}$ penalty. The ridge can also be tuned in the model and given the name <code>lambda</code> in the figure below. When looking to the documentation of <code>sparseLDA</code> package, I found that ridge or <code>lambda</code> has a default value of 1e-6. I couldn't find any clue about how wide the tune range could be.<br> Anyway, empirical values were passed for <code>lambda</code> as in the code below between (0 to 100).<img src="http://i.stack.imgur.com/GrStW.png" alt="enter image description here"> </p> <pre><code>ctrl &lt;- trainControl(method = "repeatedcv", repeats = 5, number = 10, verbose = TRUE, classProbs = TRUE) sparseLDAGridx &lt;- expand.grid(.NumVars = c(1:11), .lambda = c(0, 0.01, .1, 1, 10, 100)) set.seed(1) # to have reproducible results spldaFitvacRedx &lt;- train(Class ~ ., data = training, method = "sparseLDA", tuneGrid = sparseLDAGridx, trControl = ctrl, metric = "Accuracy", # not needed it is so by default importance=TRUE, preProc = c("center", "scale")) </code></pre> <p>As shown in the figure, the best ridge was 100 (I am sure it can take on higher values had been passed), and the important predictors are 8. I rant out of ways to know which 8 predictors of the 11 they were. For example: </p> <p>I ran this code on <code>$finalModel</code> I got this: </p> <pre><code>&gt; spldaFitvacRedx$finalModel Call: sda.default(x = x, y = y, lambda = param$lambda, stop = -param$NumVars, importance = TRUE) lambda = 100 stop = 8 variables classes = H, P, R, Q Top 5 predictors (out of 11): IL12A, EBI3, IL12RB1, IL23R, IL12B </code></pre> <p>If I run the <code>$bestTune</code>:</p> <pre><code>&gt; spldaFitvacRedx$bestTune NumVars lambda 48 8 100 </code></pre> <p>If I run <code>varImp()</code>: </p> <pre><code>&gt; varImp(spldaFitvacRedx) ROC curve variable importance variables are sorted by average importance across the classes H P R Q IL12RB1 100.00 100.00 100.00 100.00 IL23A 100.00 0.00 99.05 100.00 IL12RB2 100.00 100.00 100.00 100.00 IL12B 100.00 85.71 97.14 100.00 IL23R 100.00 100.00 100.00 100.00 EBI3 100.00 96.43 98.10 100.00 IL8 100.00 100.00 91.43 100.00 IL6ST 100.00 100.00 99.05 100.00 IL17A 100.00 73.48 97.14 100.00 IL12A 100.00 42.86 100.00 100.00 IL27RA 99.29 92.86 98.10 99.29 </code></pre> <p>The last output is puzzling, I didn't ask for ROC, so that's must be an irrelevant output. The 11 predictors if were really sorted as said in the output across the 4 classes then IL23A cannot be at any rate the second one while more average values other predictors have (e.g., IL12RB2 and IL23R). </p> <p><strong>Questions:</strong> </p> <ol> <li>How to interpret this figure which refers to more important number of predictors as the ridge would be increased. In other words, why do more important predictors appear as we increase the ridge penalty? </li> <li>What is our clue to the range of ridge values to be tuned? what is the highest limit? 
</li> <li>In <code>caret</code> package, how can one know <em>which are</em> the most important predictors here? </li> </ol> <p><strong>Note:</strong><br> The 11 predictors are gene expression data of 11 genes. They are by nature correlated $ r $ was not above 0.9. </p> <p><strong>Update</strong><br> According to the answer below, I couldn't get 8 but rather all of the 11 predictors so what to do now? really puzzled.</p> <pre><code>set.seed(1) # important to have reproducible results SDAobj &lt;- train(Class ~ ., data = training, method = "sparseLDA", tuneGrid = data.frame(NumVars = 8, lambda = 100), preProc = c("center", "scale"), trControl = trainControl(method = "cv")) &gt; SDAobj$finalModel$xNames[SDAobj$finalModel$varIndex] [1] "IL8" "IL17A" "IL23A" "IL23R" "EBI3" [6] "IL6ST" "IL12A" "IL12RB2" "IL12B" "IL12RB1" [11] "IL27RA" &gt; SDAobj$finalModel$varIndex [1] 1 2 3 4 5 6 7 8 9 10 11 </code></pre> <p>I tried on iris data, the same problem it returned all the 4 variables instead of 3, no feature selection was obtained: </p> <pre><code>data(iris) set.seed(1) obj &lt;- train(iris[,-5], iris$Species, method = "sparseLDA", tuneGrid = data.frame(NumVars = 3, lambda = 1), preProc = c("center", "scale"), trControl = trainControl(method = "cv")) &gt; obj$finalModel$xNames[obj$finalModel$varIndex] [1] "Sepal.Length" "Sepal.Width" "Petal.Length" [4] "Petal.Width" </code></pre> <p>Now trying the <code>Sonar</code> data, it was successful (10 were selected out of 60 predictors): </p> <pre><code>library(mlbench) data(Sonar) set.seed(1) obj &lt;- train(Class~., data = Sonar, method = "sparseLDA", tuneGrid = data.frame(NumVars = 10, lambda = 1), preProc = c("center", "scale"), trControl = trainControl(method = "cv")) &gt; obj$finalModel$xNames[obj$finalModel$varIndex] [1] "V4" "V11" "V12" "V21" "V22" "V36" "V44" "V45" [9] "V49" "V52" </code></pre> <p><strong>Question:</strong><br> My data and <code>iris</code> are more than 2 classes, <code>mdrr</code> and <code>Sonar</code> are 2-class problems. Most likely the problem is there, can you please help to fix this phenomenon? really appreciate that. </p>
74,195
<p>I'm reading the GPML book and in <a href="http://www.gaussianprocess.org/gpml/chapters/RW2.pdf" rel="nofollow">Chapter 2 (page 15)</a>, it tells how to do regression using Gaussian Process(GP), but I'm having a hard time figuring how it works.</p> <p>In Bayesian inference for parametric models, we first choose a prior on the model parameters $\theta$, that is $p(\theta)$; second, given the training data $D$, we compute the likelihood $p(D|\theta)$; and finally we have the posterior of $\theta$ as $p(\theta|D)$, which will be used in the <em>predictive distribution</em> $$p(y^*|x^*,D)=\int p(y^*|x^*,\theta)p(\theta|D)d\theta$$, and the above is what we do in Bayesian inference for parametric models, right?</p> <p>Well, as said in the book, GP is non-parametric, and so far as I understand it, after specifying the <em>mean function</em> $m(x)$ and the <em>covariance function</em> $k(x,x')$, we have a GP over function $f$, $$f \sim GP(m,k)$$, and this is the <strong>prior</strong> of $f$. Now I have a <strong>noise-free</strong> training data set $$D=\{(x_1,f_1),...,(x_n,f_n)\}$$, I thought I should compute the <strong>likelihood</strong> $p(D|f)$ and then the <strong>posterior</strong> $p(f|D)$, and finally use the posterior to make predictions.</p> <p>HOWEVER, that's not what the book does! I mean, after specifying the prior $p(f)$, it doesn't compute the likelihood and posterior, but just go straight forward to the predictive prediction.</p> <p>Question:</p> <p>1) Why not compute the likelihood and posterior? Just because GP is non-parametric, so we don't do that?</p> <p>2) As what is done in the book (page 15~16), it derives the <strong>predictive distribution</strong> via the joint distribution of training data set $\textbf f$ and test data set $\textbf f^*$, which is termed as <strong>joint prior</strong>. Alright, this confuses me badly, why joint them together?</p> <p>3) I saw some articles call $f$ the <strong>latent</strong> variable, why?</p>
36,921
<p>I'm trying to differentiate two groups of patients using various machine learning algorithms, including support-vector machines (SVM). </p> <p>As far as the details of the analysis go, I would like to train the sample on a separate group and cross-validate on another. </p> <p>The problem is that patients are different in some categorical variables (gender for example) and continuous variables (age for example) none of which are of interest. In regression analysis using generalized linear models, it is easy to factor out nuisance variables. I'm wondering whether there is a way in machine learning as general, and SVM in particular to factor out the effect of nuisance variable. In some papers I have seen that authors include nuisance variable to somehow normalize them. </p>
36,922
<p>I have a collection of training documents with publication dates, where each document is labeled as belonging (or not) to some topic T. I want to train a model that will predict for a new document (with publication date) whether or not it belongs to T, where the publication date might be in the past or in the future. Assume that I have decomposed each training document's text into a set of features (e.g., TF-IDF of words or n-grams) suitable for analysis by an appropriate binary classification algorithm provided by a library like Weka (for instance, multinomial naive Bayes, random forests, or SVM). The concept to be learned exhibits multiple seasonality; i.e., the prior probability that an arbitrary document published on a given date belongs to T depends heavily on when the date falls in a 4-year cycle (due to elections), where it falls in an annual cycle (due to holidays), and on the day of the week.</p> <p>My research indicates that classification algorithms generally assume (as part of their statistical models) that training data is randomly sampled from the same pool of data that the model will ultimately be applied to. When the distribution of classes in the training data differs substantially from the known distribution in the wild, this leads to the so-called "class imbalance" problem. There are ways of compensating for this, including over-sampling underrepresented classes, under-sampling overrepresented classes, and using cost-sensitive classification. This allows a model creator to implicitly specify the prior probability that a new document will be positively classified, but importantly (and unfortunately for my purposes), this prior probability is assumed to be equal for all new documents.</p> <p>I require more flexibility in my model. Because of the concept's seasonality, when classifying a new document, the model must explicitly take the publication date into account when determining the prior probability that the document belongs to T, and when the model calculates the posterior probability of belonging to T in light of the document's features, this prior probability should be properly accounted for. I am looking for a classifier implementation that either (1) bakes sophisticated regression of prior probabilities based on dates into the classifier, or (2) can be extended with a user-specified regression function that takes a date as input and gives the prior probability as output.</p> <p>I am most familiar with the Weka library, but am open to using other tools if they are appropriate to the job. What is the most straightforward way of accomplishing this task?</p>
15,738
<p><strong>Question</strong></p> <p>In some bank, the time it takes from the moment a customer arrives until a clerk is available is distributed normally with $\mu=15,\sigma=2$</p> <p>a. What's the probability for the next client to wait more than 18 minutes?</p> <p>b. What's the prob. that the average waiting time of the next 100 customers will be greater than 15.1 minutes (assuming that the waiting times are independent)</p> <p>c. The bank management hired a new manager for customer service and he claims that since he arrived at this job, the time it takes to be treated by a clerk decreased. To check his claim he measured the waiting time of 30 random customers and got an average of 14.3 minutes. Assuming that the s.d. hasn't changed - Check if his claim is true, with confidence level of a=0.1.</p> <p><strong>My answer</strong></p> <p>a. P(X>18)=(normalizing) P(Z>1.5)=0.0668</p> <p>b. $P(\bar X&gt;15.1)=P(Z&gt;0.5)=0.3085$</p> <p>c. $H_0:\mu=15, H_A:\mu&lt;15$ so $CI=(-\infty,15-Z_{0.99} \frac{2}{\sqrt{30}}]=(-\infty,14.15]$ but 14.13&lt;14.15 so $H_0$ is true and not rejected. </p> <p>Also calculating the p-value: $P_{H_0}(\bar X&lt;14.3)=P(Z&lt;-1.91)=0.0281&gt;0.01$, therefore $H_0$ is not rejected.</p> <p>Since this is the first time I've been doing it on my own, I'd love it if someone could take a look and confirm that it's correct (or not?) </p>
74,196
<p>I am looking for a working algorithm for finding the optimal kernel bandwidth for density estimation. I need to write my own program in Pascal instead of using R or Matlab. So far, all the algorithms I have found have failed. For example, this one looks simple and promising:</p> <p><a href="http://www.di.ubi.pt/~lfbaa/entnetsPubs/bandwidth.pdf" rel="nofollow">http://www.di.ubi.pt/~lfbaa/entnetsPubs/bandwidth.pdf</a></p> <p>I wrote the following program for it:</p> <pre><code>program test;

{$mode objfpc}{$H+}

uses Math; // needed for power(); stdev_p() is my own function, defined elsewhere

function hopt(data: array of Double): Double;
var
  h0, h1, k4, s: Double;
  i, j, n: Integer;
begin
  n := Length(data);
  //Silverman's value to start with
  //stdev_p() simply calculates the population standard deviation
  h0 := 1.06*stdev_p(data)*power(n,-0.2);
  while True do
  begin
    //start of formula-17
    k4 := 0;
    for i := 0 to n - 1 do
    begin
      for j := 0 to i - 1 do
      begin
        s := sqr(data[i] - data[j]);
        k4 += (sqr(s-6*h0*h0)-24*power(h0,4))*exp(-s/4/h0/h0);
      end;
    end;
    k4 := 3*n*h0+k4/2/power(h0,3);
    h1 := power(4*n*power(h0,6)/k4,0.2);
    //end of formula-17
    h1 := (h0+h1)/2;
    if abs(h1-h0) &lt; 0.0001 then Break;
    h0 := h1;
  end;
  Result := h1;
end;

var
  a: array of Double;
begin
  SetLength(a, 6);
  a[0] := -2.1; a[1] := -1.3; a[2] := -0.4;
  a[3] := 1.9;  a[4] := 5.1;  a[5] := 6.2;
  WriteLn('hopt=', hopt(a));
end.
</code></pre> <p>Using this sample to test the above function: -2.1, -1.3, -0.4, 1.9, 5.1, 6.2</p> <p>The result it gives is 3.716, which is much worse than the initial Silverman value of 2.335. Using R's sm::hsj() on this data set, the optimal h value is around 1.9, which is rather close to Wikipedia's 1.5.</p> <p>I have also tried the cross-validation method suggested here:</p> <p><a href="http://www3.stat.sinica.edu.tw/statistica/oldpdf/A6n18.pdf?origin=publication_detail" rel="nofollow">http://www3.stat.sinica.edu.tw/statistica/oldpdf/A6n18.pdf?origin=publication_detail</a></p> <p>which does not converge at all.</p> <p>Could anyone explain a simple way, be it cross-validation or a plug-in method, using a small data set, rather than a math formula or languages such as R or Matlab?</p> <p>Thanks a lot!</p>
19,101
<p>I found that the probability of an event occurring is an algebraic function of all the probabilities that I want to find;</p> <p>$$P(v_1,v_2,v_3,...,v_n)=p_{collected}$$</p> <p>For small $n$, it would be easy to solve for all $v$ as a system of equations. However, as $n$ grows large (in the order of hundreds to several thousand), analyzing the data in this manner becomes infeasible. </p> <p>The majority of the variables may be zero (or very, very close) and are negligible. The function is not a polynomial so it might be hard to solve using basic linear algebra.</p> <p>Is there a way to estimate what these variables could be?</p>
36,927
<p>I have been having a hard time deciding which statistical test to choose for a dataset. The more I read on the web, the more confused I get, since there are frequently different opinions on how to choose the right test.</p> <p>To that end, when in doubt, I apply one parametric and one non-parametric test, for example a one-way ANOVA and a Kruskal-Wallis, or a two-sample t-test and a Mann-Whitney, hoping that both tests give me the same output (generally $p &lt; 0.05$). If they do, I am done; if not, then I need to work harder.</p> <p>Is there some well-recognized site out there that provides some kind of decision-support tree for choosing statistical tests?</p> <p>Is there some tool that checks <em>as much as possible</em> the assumptions of a statistical test on a given dataset before applying it? For example, for one-way ANOVA it could check for normality and variance homogeneity automatically!</p> <p>I think such a site or tool would help a lot, but probably I am asking for too much ...</p> <p>Thanks</p>
74,197
<p>After doing k-means clustering on a set of observations, I would like to construct a discriminant function so as to classify new observations into the categories I found after k-means. Is this at all a good idea? What should I be careful with?</p>
36,928
<p>These two functions exist in R but I don't know how they differ. It seems that they only return the same p-values when calling <code>wilcox.test</code> with <code>correct=FALSE</code>, and <code>wilcox_test</code> (in the coin package) with <code>distribution="asymptotic"</code>. For other settings they return different p-values. Also, <code>wilcox.test</code> always returns W=0 for my dataset, regardless of its parameter settings: </p> <p><code>x = c(1, 1, 1, 3, 3, 3, 3)</code> and <code>y = c(4, 4, 6, 7, 7, 8, 10)</code></p> <p>Also, when I try using different tools other than R (some available online, others as Excel add-ons), sometimes they report different p-values.</p> <p>So how can I know which tool is giving the "correct" p-value? </p> <p>Is there a "correct" p-value, or if a few tools give a p-value &lt; 0.05 should I be happy? (Sometimes these tools do not offer as many parametrization possibilities as R.)</p> <p>What am I missing here? </p> <p>Thanks</p>
37,492
<p>Suppose I am doing some experimental procedure on two treatment groups. The procedure has several stages, each of which may fail. Failure at any stage halts the experiment. If all stages are passed then there is some useful result.</p> <p>Although I'm primarily interested in the final result, the treatments <strong>might</strong> also entail different failure rates along the way. I'd like to quantify this, and since we're looking at simple counts it seems like a chi square or Fisher exact test would be appropriate.</p> <p>If I want to use such a test as it were <em>recursively</em>, to the groups passing each stage, do I need to apply some correction for multiple comparisons?</p> <p>That is, supposing the groups progressed like this:</p> <pre><code>         Group_A   Group_B
Start    100       100
Stage_1  90        95
Stage_2  80        85
Stage_3  60        75
Stage_4  55        30
Results  ...       ...
</code></pre> <p>Does it make sense to do a sequence of 2x2 tests of the form:</p> <pre><code>           Group_A       Group_B
Passed_N   X             Y
Failed_N   Started_N-X   Started_N-Y
</code></pre> <p>I feel like I should just <em>know</em> the answer, but I can't figure out whether this counts as doing repeated tests or not. The populations are <em>somewhat</em> distinct each time, but heavily overlapping.</p> <p>Also, would it make a difference if I had physical reasons to suppose that only stage 4 should be at all affected by the treatments? Could I just choose to ignore any differences in passage through the other stages in that case?</p> <p>(Feel free also to post answers like "ZOMG, don't use that sort of test here, use XXXX, in manner YYYY, for reasons ZZZZ.")</p>
74,198
<p>I have a dataset that has both continuous and categorical data. I am analyzing by using PCA and am wondering if it is fine to include the categorical variables as a part of the analysis. My understanding is that PCA can only be applied to continuous variables. Is that correct? If it cannot be used for categorical data, what alternatives exist for their analysis? </p>
48,631
<p>There are 8 people playing poker.</p> <p>So, the odds of winning the entire round = 1/8</p> <p>2 rounds are played, and Bill wins both rounds.</p> <p>What are the odds this was random? (Hypothesis test?)</p> <p>NullH = Bill has no added skill. (Got lucky)</p> <p>AltH = Bill has skill.</p> <p>p = .13 = 1/8</p> <p>q = .87 = 7/8</p> <p>n = 2</p> <p>SD = sqrt(pq/n) = .23</p> <p>actual (p-hat) = 1</p> <p>z = 3.74</p> <p>p-value = 0%</p> <p>Conclusion: Odds of winning 2 out of 2 rounds randomly is unlikely.</p> <p>Reject null hypothesis. Bill has skill.</p> <p>Is this right? Thanks!!</p>
36,934
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="http://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning">The Two Cultures: statistics vs. machine learning?</a> </p> </blockquote> <p>What is the difference between data mining and statistical analysis?</p> <p>For some background, my statistical education has been, I think, rather traditional. A specific question is posited, research is designed, and data are collected and analyzed to offer some insight on that question. As a result, I've always been skeptical of what I considered "data dredging"--looking for patterns in a large dataset and using these patterns to draw conclusions. I tend to associate the latter with data-mining and have always considered this somewhat unprincipled--along with things like algorithmic variable selection routines.</p> <p>Nonetheless, there is a large and growing literature on data mining. Often I see this label referring to specific techniques--clustering, tree-based classification, etc. Yet, at least from my perspective, these techniques can be "set loose" on a set of data or used in a structured way to address a question. I'd call the former data mining and the latter statistical analysis.</p> <p>I work in academic administration and have been asked to do some "data mining" to identify issues and opportunities. Consistent with my background, my first questions were--what do you want to learn and what are the things that you think contribute to issue. From their response, it was clear that me and the person asking the question had different ideas on the nature and value of data mining.</p>
48,439
<p>I have a ranking of books that is based on the number of sales, and I want to improve it to include how many countries the book was sold in and the number of libraries that carry it.</p> <pre><code>Book rating 1 = # sales
Book rating 2 = # sales + # countries + # libraries
</code></pre> <p>Can I say that Book rating 2 is "better" than Book rating 1? What does "better" mean here? More comprehensive, with more than one dimension?</p> <p>If I have a list of books that were ranked using these two equations, can I compare the two ratings to see if there is a difference between them, using a t-test for example? Or is there a different test to use?</p> <pre><code>Example - equation1 - equation2
Book 1  - 5000      - 5050
Book 2  - 300       - 320
Book 3  - 90        - 99
</code></pre> <p>Finally, is forming a new equation and comparing it to existing ones called modeling?</p> <p>Many thanks </p>
74,199
<p>Let $f\left(x\right)$ be the probability density function of the random variable $X$. What is the joint probability distribution of $f_{X,Y}\left(x,y\right)$ if $Y=X$? </p> <p>Thanks for any helpful answer.</p>
74,200
<p>I have data for multiple people, where each person performs and is graded on a task an arbitrary number of times each year across multiple years. E.g. - </p> <pre><code>         year 1      year 2   year 3
Person1  1 1 3 2 4   2 3 3    5 2 3 4 1
Person2  2 3 1       7 9 1 2  3 3 1
Person3  ...
</code></pre> <p>So in year 1, person 1 does the task 5 times and gets scores 1 1 3 2 4. I take the mean for each person in each year: </p> <pre><code>         year 1  year 2  year 3
Person1  2.2     2.7     3
Person2  2       4.75    2.3
Person3  ...
</code></pre> <p>How do I test for a significant difference between people in the means for each year? I assume that I could do a repeated measures ANOVA on the means, but this ignores the number of measurements per person per year. For example, for person 1 in year1, there are 5 measurements compared to 3 for person2 in year1, making the former measurement more certain. In general, some people can have far more measurements than other people across all years. Thanks.</p>
36,938
<p>Having looked at multiple online sources, I can't seem to get a straight answer. Could someone please clarify for me if ordinal data is sufficient to use for the WSRT and if not, is the sign test an appropriate alternative? Finally, this is for my dissertation project at university and so if any references/literature could be included in answers it would be much appreciated as I need to justify my choice of test either way and so far have only found answers from websites (which I can't reference!)</p>
36,939
<p>Let's say the following are data on airplane accident deaths relative to total travelers.</p> <pre><code>Country Sky    Total deaths    Total individual travelers
A              30              10,000
B              60              15,000
C              3000            10,000,000
</code></pre> <p>Is it possible to calculate the probability of an air accident for any traveler in A, B, or C skies by simply dividing total deaths by total travelers? If some countries have few flights and others have a very high number of flights, can this affect the estimates? </p>
74,201
<p>Can somebody provide an intuitive explanation of the difference between correlation and the correlation coefficient? While learning the weights of a neural network, I want to show how close the estimated weights are to the known true weights. For this, I was thinking of using a correlation measure: if they are correlated, then the value will be close to one. But I am not sure whether it should be the correlation or the correlation coefficient. </p> <p>Also, what is the difference between the two, formula-wise and in terms of physical meaning? Thank you</p>
74,202
<p>I need to be <em>neat</em> in measuring the success rate of a treatment. It is anyway pretty high. But as it is all about ecology, multiplying experiments is difficult.</p> <p>I have treated $N = 20$ individuals; $18$ succeeded. This is a $\tau = .9$ success rate.</p> <p>I used the <em>Jeffreys interval</em> to get the uncertainty about this rate (<a href="http://www.jstor.org/discover/10.2307/2676784?uid=3739328&amp;uid=2&amp;uid=4&amp;sid=21104010444151" rel="nofollow">Brown 2013</a>), which seems then to be $CI_{95\%} = [.716,.979]$.</p> <p>Now, to be really neat, I'd like to take it a little further, taking into account the fact that, in reality, $N = N_1 + N_2 + N_3 + N_4$ with all $N_i = 5$, since the treatments were given on 4 different dates.</p> <p>How do I take this replication into account? How do I integrate a random factor into a proportion estimate?</p> <p>I am aware it might be too much for not much, and that the raw data for $N = 20$ will be pretty convincing anyway. :)</p> <p>[EDIT:]</p> <p>A narrower question seems to arise from your comments: how do I estimate, in the first place, whether or not the date $i$ is likely to have an impact on the success rate? Put another way: how do I assess the null hypothesis $H_0: \tau_1 = \tau_2 = \dots$ with these (rather sparse) data?</p> <ul> <li>If this hypothesis were validated by the data, then I would use this $CI_{95\%}$ and everything would be fine.</li> <li>If it were invalidated by the data, then how much wider should the interval be? (It would then represent the confidence interval of an "overall success rate $\tau$" (?))</li> </ul> <p>(once again, GLMs do seem to be the immediate answer but I'm far below their asymptotic assumptions..)</p> <p>(and once again, at the end of the day, $N$ might actually turn out to be so low that <em>doing statistics</em> is not a relevant option anymore ;)</p>
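<p><em>For reference, a sketch of how the Jeffreys 95% interval quoted above can be reproduced: it is the central 95% region of a Beta$(x + 1/2,\ n - x + 1/2)$ posterior, which follows from the Jeffreys Beta$(1/2, 1/2)$ prior on the success probability:</em></p> <pre><code>x &lt;- 18; n &lt;- 20
qbeta(c(0.025, 0.975), x + 0.5, n - x + 0.5)   # roughly [0.72, 0.98]
</code></pre>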
74,203
<p>I have fitted a Poisson regression to my claim frequency.</p> <p>I have obtained the following result:</p> <pre><code>             Estimate Std. Error z value Pr(&gt;|z|)
(Intercept) -19.95861 1139.33678  -0.018   0.9860
make         -0.10534    0.04116  -2.559   0.0105 *
agevehO       0.05983    0.08580   0.697   0.4856
area1        20.68177 1139.33677   0.018   0.9855
area2        20.85866 1139.33677   0.018   0.9854
area3        20.76927 1139.33676   0.018   0.9855
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Null deviance: 657.49  on 583  degrees of freedom
Residual deviance: 150.62  on 578  degrees of freedom
AIC: 1096.
</code></pre> <p>My predictor area has four levels (area 1, 2, 3, 4) and ageveh has two (old and new); however, make also has four levels (make 1, 2, 3, 4), so why is the output not showing its other three levels? I am confused about how to interpret this result. Also, my deviance table showed the following:</p> <pre><code>        DF   DEVIANCERESID   DF    RESIDDEV   PR(&gt;CHI)
make    1    13.61           582   643.88     0.0002251 ***
ageveh  1    9.80            581   634.08     0.0017460 **
area    3    483.47          578   150.62     &lt; 2.2e-16 ***
</code></pre> <p>Based on this, can I conclude that make, ageveh and area are statistically significant in explaining my claim frequency? Thanks</p>
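<p><em>A likely explanation worth checking (an assumption, since the model-fitting code is not shown): <code>make</code> appears as a single slope because it entered the model as a numeric variable, whereas <code>area</code> entered as a factor and therefore gets one coefficient per non-reference level. A sketch of refitting with <code>make</code> declared as a factor, using hypothetical names <code>dat</code> and <code>nclaims</code> for the data frame and response:</em></p> <pre><code>dat$make &lt;- factor(dat$make)                   # treat make as categorical, not numeric
fit &lt;- glm(nclaims ~ make + ageveh + area,
           family = poisson, data = dat)
summary(fit)     # now shows make2, make3, make4 against the reference level make1
</code></pre>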
74,204
<p>I have two normal distributions, and I want to test whether they have the same standard deviation, I really don't care about the mean.</p> <p>My idea is: de-mean both of them and then use <a href="http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test" rel="nofollow">Kolmogorov-Smirnov</a> to test if the distributions are different, if they are then standard deviations also should be different. </p> <p>I am wondering if I am missing anything, and if there is a better way to do this.</p>
74,205
<p>The general disjunction rule for events $A_1$ and $A_2$ is $$P(A_1 \vee A_2) = P(A_1) + P(A_2) - P(A_1 \wedge A_2).$$ What about when there are $n$ events? What is $P(\bigvee_i^n A_i)$ where $A_i$ is the $i$th event?</p>
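<p><em>For reference, the general inclusion–exclusion formula (a standard identity, stated here without proof) extends the two-event rule above to $n$ events:</em> $$ P\Big(\bigvee_{i=1}^n A_i\Big) = \sum_i P(A_i) - \sum_{i &lt; j} P(A_i \wedge A_j) + \sum_{i &lt; j &lt; k} P(A_i \wedge A_j \wedge A_k) - \cdots + (-1)^{n+1} P(A_1 \wedge \cdots \wedge A_n). $$</p>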
74,206
<p><img src="http://i.stack.imgur.com/agsGZ.png" alt="enter image description here"> <img src="http://i.stack.imgur.com/fgfjT.png" alt="enter image description here"></p> <p>I took the residual of a historical stock price $\hat e_t=r_t-\hat \mu_t$, where $r_t$ is the return of a stock and ran ACF and PACF. From the ACF I think that the residual does not follow AR or MA process, and the PACF shows slight MA. But I am new to this and I am not sure if my interpretation is correct.</p> <p>What do you think?</p> <p>The Ljung Box test gave Q-statistic of 87.5597</p> <p><img src="http://i.stack.imgur.com/AJi7W.png" alt="enter image description here"></p> <p>which rejects the null that the autocorrelation coefficients are all zero. I used 40 lags here to be consistent with the ACF and PACF. </p> <p>Does this contradict or confirm your intuition from visually inspecting ACF and PACF?</p>
74,207
<p>I am interested in stochastically modeling whether the market is likely to go on in the same direction (trend), or reverse and head back. This is all for intraday purposes, a next-1-2-ticks kind of strategy with 30 sec - 3 min holding times. How can I attack this problem? Where do I start? </p>
74,208
<p>I am reading some books about hypothesis testing, but I am not sure if my following reasoning makes sense:</p> <p>Assume I have a gaussian random variable $X \sim N(\mu, \sigma)$ with $\sigma=1$. Now I obtain 5 iid samples, $x_1, \cdots, x_5$.</p> <p>I want to check if $\mu&lt;0$. So I set up the null hypothesis and alternative hypothesis to be $H_0: \mu=0$ and $H_1: \mu&lt;0$</p> <p>Therefore, for each sample $x_i$, I can compute the p-value = $P(X_i\le x_i)$, and denote it by $\alpha$. Therefore, I have $(1-\alpha)$ confidence to reject $H_0$. Also, by using the rule $(X\le x_i)$ to reject $H_0$, I have a type-I error equal to $\alpha$.</p> <p>Now based on 5 samples, I have 5 p-values $\alpha_i$, ($i=1, \cdots, 5$). Therefore, I have $P(X_1\le x_1, \cdots, X_5\le x_5) = \alpha_1 \times \alpha_2 \times \cdots \times \alpha_5$. Therefore, using the decision rule that $(X_1\le x_1, \cdots, X_5\le x_5)$ to reject $H_0$, I have a type-I error equal to $\alpha_1 \times \cdots \times \alpha_5$. And therefore, I have $(1-\alpha_1 \times \cdots \times \alpha_5)$ confidence to reject $H_0$.</p> <p>Basically, I want to use the 5 $x_i$'s for future testing. Next time I obtain 5 samples, I'll compare them to the $x_i$. And I want to see how much confidence this decision rule gives me. It seems to me that textbooks usually compute the p-value for one sample $x_i$. I am basically trying to compute the "p-value" for 5 samples. </p> <p>Is the above reasoning correct?</p>
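<p>While reading around I also came across Fisher's method for combining independent p-values, which works with $-2\sum_{i=1}^{5}\log \alpha_i \sim \chi^2_{10}$ under $H_0$ rather than with the raw product $\alpha_1 \times \cdots \times \alpha_5$. Is that what I should be using here instead of my product rule?</p>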
74,209
<p>Or more so "will it be"? <a href="http://en.wikipedia.org/wiki/Big_data">Big Data</a> makes statistics and relevant knowledge all the more important but seems to underplay Sampling Theory. </p> <p>I've seen this hype around 'Big Data' and can't help wonder that "why" would I want to analyze <strong>everything</strong>? Wasn't there a reason for "Sampling Theory" to be designed/implemented/invented/discovered? I don't get the point of analyzing the entire 'population' of the dataset. Just because you can do it doesn't mean you should (Stupidity is a privilege but you shouldn't abuse it :)</p> <p>So my question is this: Is it statistically relevant to analyze the entire data set? The best you could do would be to minimize error if you did sampling. But is the cost of minimizing that error really worth it? Is the "value of information" really worth the effort, time cost etc. that goes in analyzing big data over massively parallel computers?</p> <p>Even if one analyzes the entire population, the outcome would still be at best a guess with a higher probability of being right. Probably a bit higher than sampling (or would it be a lot more?) Would the insight gained from analyzing the population vs analyzing the sample differ widely? </p> <p>Or should we accept it as "times have changed"? Sampling as an activity could become less important given enough computational power :)</p> <p>Note: I'm not trying to start a debate but looking for an answer to understand the why big data does what it does (i.e. analyze everything) and disregard the theory of sampling (or it doesn't?)</p>
74,210
<p>If I have two different percentages and I want to know the change, I would simply use the percentage change formula... but what if I want to compare two percentages that come from different totals? Ex: August - there were 539 right answers out of 743 = 72.5%; September - there were 498 right answers out of 820 = 60.7%.</p> <p>How can I do a monthly comparison of change between those 2 percentages?</p> <p>It was suggested to me that I use the LCD, but those numbers could be massive if I have a larger set of numbers, like in the thousands.</p> <p>So, should I analyze the months separately, or what should I do?</p> <p>**I want to do a comparative analysis on the students who took the exam in August vs the students in September. LCD = lowest common denominator. Can this be done?</p>
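<p>In case it helps to make this concrete, one comparison I have considered is a two-sample test of equal proportions (here in R), though I am not sure it is the right tool for what I want:</p> <pre><code>## August: 539 correct out of 743; September: 498 correct out of 820
prop.test(x = c(539, 498), n = c(743, 820))
</code></pre>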
74,211
<p>I am trying to mine product-usage sequences for multiple users of an online gaming site. I have found the R package <a href="http://cran.r-project.org/web/packages/arulesSequences/index.html" rel="nofollow">arulesSequences</a> but am not sure how to fit it to my problem. The data format would be tuples similar to those used in <a href="http://cran.r-project.org/web/packages/arulesSequences/index.html" rel="nofollow">arulesSequences</a>, but instead of just mentioning products A and B in transactions, I would like to mention the quantities in which those products (games of different types) were bought (played, in my case). </p> <p>Example:</p> <p>sequence1 (for uid=1)</p> <pre><code>Date  UID  Game1     Game2   Game3   Game4
Jan1  1    125times  0times  0times  0times
Jan2  1    0times    1time   0times  0times
</code></pre> <p>Each user would have such a sequence. However, I see that the arulesSequences function cspade only allows operating on boolean types, e.g. whether each game was actually played on that date:</p> <p>Example:</p> <p>sequence1 (for uid=1)</p> <pre><code>Date  UID
Jan1  1    Game1  Game2
Jan2  1    Game1
Jan3  1    Game2  Game3
</code></pre> <p>My goal is to determine rules like "if a user plays Game3 on that date, it causes them to play much more of Game2 one week later". </p>
74,212
<p>Has the idea of a <a href="http://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution" rel="nofollow">maximum entropy probability distribution</a> been explored for function spaces, and if so what are some key papers, books, or terms to look for?</p> <p>For $\mathbb{R}^n$ (and discrete spaces), the problem appears to be well studied - one maximizes the quantity, $$-\int f(x) \, \log f(x) \, dx,$$ over the set of feasible candidate densities $f$, where the integral is taken with respect to the standard Lebesgue measure in $\mathbb{R}^n$ or counting measure in discrete spaces. There seems to be much literature about this problem, which goes under names such as "non-informative priors", "maximum entropy distributions", "Jeffreys priors", and the like.</p> <p>However, I've found little on this topic in the infinite dimensional (function space) setting. Can the concept of maximum entropy priors be generalized to function spaces, or is the idea of entropy fundamentally incompatible with infinite dimensional spaces?</p> <p>Note: this thread wasn't getting answers here so I <a href="http://mathoverflow.net/questions/109671/maximum-entropy-priors-in-infinite-dimensional-spaces" rel="nofollow">reposted at mathoverflow</a>.</p>
74,213
<p>Let $p$ be some probability distribution with a density $f$.</p> <p>$p$ is defined over $\Omega$.</p> <p>Is it true that for any $\omega \in \Omega$, $p(\omega) = 0$?</p> <p>If not, what are the "minimal conditions" under which $p(\omega) = 0$ for any $\omega$ (the space $\Omega$ is actually a subset of $\mathbb{R}^d$ for some $d$)?</p>
36,953
<p>I am currently completing my dissertation. My study is cross-cultural and looks at predictors and inhibitors of adoption of technology in two countries (Thailand and Australia). I have a hypothesised model with IVs (Ease of Use, Usefulness, Need for Interaction, Risk, and Social Influence) directly linked to a single DV (intention to use). Both models are exactly the same, as are the IVs and DVs (and related items), and the sample sizes are similar. </p> <p>I have run regression analysis on both the Thai and Australian samples individually. I have the regression coefficient outputs with significance etc. What I am trying to find out now is how to best test the following question (or something similar): "<em>Social influence (IV) will have a stronger relationship in Thailand with intention to use (DV) m-banking than in Australia</em>". </p> <p>Is this the best way to test whether individual constructs fit better in one country than another? I want to test each individual construct to find out which has a more significant relationship between that IV and DV. </p> <p>I apologise if this question has been answered already somewhere on the site or sounds very simplistic. I am using SPSS v19.0 btw. Thanks in advance!</p>
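<p>One formula I have come across for comparing a single coefficient across two independent samples is $$z=\frac{b_{\text{Thai}}-b_{\text{Aus}}}{\sqrt{SE_{\text{Thai}}^2+SE_{\text{Aus}}^2}},$$ where $b$ and $SE$ are the unstandardized coefficient and its standard error from each country's regression. Would that be appropriate here, or should I instead be pooling the two samples and testing a country-by-social-influence interaction term?</p>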
74,214
<p>I've come across a term called "statistical efficiency of the median" in a paper and couldn't find any definition in the paper. From my search online, I found that this might mean the relative efficiency of the median compared to the mean.</p> <p>Can anyone shed some light on where I can look for clues?</p>
13
<p>We're working with some logistic regressions and we have realized that the average estimated probability always equals the proportion of ones in the sample; that is, the average of the fitted values equals the average of the sample.</p> <p>Can anybody explain the reason to me, or give me a reference where I can find this demonstration?</p>
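<p>The closest I have come to an argument myself is the score equation for a model that contains an intercept, though I am not sure this is the standard demonstration: the log-likelihood is $$\ell(\beta)=\sum_{i=1}^n\left[y_i x_i^\top\beta-\log\!\left(1+e^{x_i^\top\beta}\right)\right],$$ and setting its gradient to zero at the MLE gives $$\sum_{i=1}^n\left(y_i-\hat p_i\right)x_i=0 .$$ The component of this equation corresponding to the constant column of $x_i$ reads $\sum_i (y_i - \hat p_i) = 0$, i.e. the average fitted probability equals the sample proportion of ones. Is that the whole story?</p>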
49,757
<p>Again this question may be simple for you, but it is an important aspect of my classification problem. Let's say I have 5 attributes, which are:</p> <pre><code>- previous_value_1
- previous_value_2
- previous_value_3
- previous_value_4
- previous_value_5
</code></pre> <p>These attributes are generated by independent events, but I want to combine them for my classifiers, therefore I need a way or statistical method to reach that goal. </p> <p>These values are actually samples indicating whether a process is improving or getting worse. Therefore, taking their average is meaningless; I need them to generate a function and take its derivative. But I do not know the statistical counterpart of that operation, or maybe a simpler way to do this. To sum up, I need a way to combine these attribute values into one, and that new attribute should indicate whether the process is going up or down. </p> <p>I hope I managed to make enough details clear to get an answer. </p> <p>Also, many thanks in advance.</p>
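<p>The most concrete thing I have come up with so far is to fit a least-squares line through the five values and use its slope as the new attribute (positive = going up, negative = going down), but I do not know whether this is statistically sound. A sketch in R with made-up numbers:</p> <pre><code>## previous_value_1 ... previous_value_5 for one record (hypothetical values)
vals  &lt;- c(3.1, 3.4, 3.3, 3.9, 4.2)
trend &lt;- coef(lm(vals ~ seq_along(vals)))[2]   # slope of the fitted line
trend                                          # &gt; 0 suggests the process is improving
</code></pre>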
74,215
<p>Following my earlier question <a href="http://stats.stackexchange.com/questions/16524/multidimensional-vs-unidimensional-measure">here</a>, is there a quick way (using Excel or SPSS) to ascertain/calculate the reliability of composite scores? </p> <p>Reliability in this case is for me to say confidently (i.e. the ordinary, not statistical, meaning of this word!) that the composite score is consistently measuring the concept. </p>
48,665
<p>I am currently generating data by simulating a model of a chemical system under different conditions (temperature) over time. In each simulation, the starting structure being modeled is exactly the same - only the temperature is different. The system is allowed to propagate over time, and the length and number of observations in each simulation are identical. I would like to compare mean values, e.g. distances between two atoms, under the different conditions. I have two questions:</p> <ol> <li><p>Should I regard the two simulations I have (high and low temperature) as paired data? How would an analogous human study be treated (e.g. comparing the behaviour of a single human participant during a 1-hour period under condition 1 and another 1-hour period, after an extensive washout period, under condition 2)? </p></li> <li><p>Since I effectively have two time series, what are the implications of the distance I want to measure being autocorrelated in some way? </p></li> </ol>
74,216
<p>I used my training dataset to fit clusters using the kmeans function:</p> <pre><code>fit &lt;- kmeans(ca.data, 2); </code></pre> <p>How can I use the fit object to predict cluster membership in a new dataset?</p> <p>Thanks</p>
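<p>The only workaround I have found so far is to assign each new row to the nearest fitted centre by squared Euclidean distance; a sketch, assuming <code>new.data</code> has the same columns (and scaling) as <code>ca.data</code>:</p> <pre><code>closest.cluster &lt;- function(x, centers) {
  which.min(colSums((t(centers) - x)^2))   # squared Euclidean distance to each centre
}
pred &lt;- apply(new.data, 1, closest.cluster, centers = fit$centers)
</code></pre> <p>Is this equivalent to what kmeans itself would do, or is there a built-in way I am missing?</p>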
74,217
<p>I have 30 variables and am trying to select the best model. I have run the following methods on a 'large' data set (having removed a smaller test set): </p> <ul> <li>OLS, </li> <li>best subset selection, </li> <li>stepwise selection, </li> <li>ridge regression, </li> <li>LASSO, </li> <li>PCR and </li> <li>PLS. </li> </ul> <p>All outliers were removed from both data sets. None of the variables/response have been transformed in any way prior to running the above methods and there is little/no collinearity between variables. </p> <p>I ran each model (for OLS I ran the entire 30 variable model) and computed the MSE and variance for each. They differ only by 0.001 in MSE (best = PLS, worst = stepwise) and the variance (between the best – stepwise and the worst – ridge). </p> <p><strong>How do I now choose the best model?</strong> I'm pretty stuck! </p> <p>One idea I had is to cross validate the MSE and var on the test set, but I'm unsure about how to write this code in R. </p> <p>I'm using code similar to this <a href="http://cbio.ensmp.fr/~jvert/svn/tutorials/practical/linearregression/linearregression.R" rel="nofollow">website</a>'s. I'm not sure that will solve the problem though. I'm using <code>summary(model - y.test)^2</code> at the moment. </p>
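<p>For the cross-validation on the held-out test set, the only code I have so far is along these lines (object names are hypothetical; <code>y</code> is the response column in <code>test.data</code>, and <code>lambda.chosen</code> is whatever penalty I settled on):</p> <pre><code>## OLS / best-subset / stepwise-type fits (ordinary lm objects)
pred.ols &lt;- predict(fit.ols, newdata = test.data)
mse.ols  &lt;- mean((test.data$y - pred.ols)^2)

## glmnet-style fits (ridge / LASSO) expect a model matrix rather than a data frame
x.test     &lt;- model.matrix(y ~ ., data = test.data)[, -1]
pred.lasso &lt;- predict(fit.lasso, newx = x.test, s = lambda.chosen)
mse.lasso  &lt;- mean((test.data$y - pred.lasso)^2)
</code></pre> <p>Is comparing these test-set MSEs a sensible way to break the near-tie between the methods?</p>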
34,587
<p>I'm a first year statistics graduate student taking a course in regression. In the previous chapter we covered, we discussed partial F-tests for deciding whether to include a predictor variable. In the current chapter (which we just finished), we covered six model selection criteria. I was expecting these two concepts to be linked together at some point, but there's nothing in the book about it. Does anyone know what is the relationship between partial F-tests and model selection? To me, it looks like partial F tests should be considered a model selection criterion.</p>
74,218
<p>This may be a silly question, but I'm not seeing a clear answer in any of the usual sources. I'm preparing to build a Bayesian model to fit with BUGS/JAGS, currently working through the model logic in plate notation.</p> <p>I have a few kinds of observed variables, and several latent variables. I know that a Bayesian model has to be a DAG. What other conceptual constraints are there on the network? In particular, can an observed variable be the parent of a latent variable in a Bayesian model?</p> <p>Thanks, and happy fourth of July to all the Americans out there.</p>
36,960
<p>Is a p-value from a traditional significance test the same as the false alarm value in the Bayesian rule? And/or is it "close enough" to give correct results when used that way?</p> <p>The definitions of the two terms seem to be talking about the same things, but I know it's easy to be tripped up by subtleties. Wikipedia <a href="http://en.wikipedia.org/wiki/P-value" rel="nofollow">says</a> that a p-value is "the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true."</p> <p><a href="http://psych.fullerton.edu/mbirnbaum/bayes/BayesCalc.htm" rel="nofollow">This page</a> says the false error rate P(D|H') in the Bayesian rule is the probability of observing D if H' is true.</p> <p>In both cases, you are talking about the probability of seeing data that says H when the reality is H'.</p> <p>However, I see two possible problems with assuming that the two are equivalent: D in the definition of P(D|H') seems to refer to a single datum, while the definition of a p-value seems to refer to a range of values ("at least as extreme as"); and I'm not quite smart enough to figure out whether H' is equivalent to the null hypothesis. In all the simple Bayesian examples I've worked through it certainly seems to be, but I haven't yet found a definitive statement.</p> <p>I also haven't found a definitive statement of how p-value and the false alarm value are related if they aren't the same, given that they're both saying at least loosely analogous things about data and hypotheses.</p>
74,219
<p>Assume that I have a variable whose distribution is skewed positively to a very high degree, such that taking the log will not be sufficient in order to bring it within the range of skewness for a normal distribution. What are my options at this point? What can I do to transform the variable to a normal distribution?</p>
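<p>For concreteness, the only additional option I know of beyond the log is the Box-Cox power family, e.g. via MASS in R (this assumes the variable is strictly positive; <code>y</code> is a hypothetical stand-in for my skewed variable):</p> <pre><code>library(MASS)
bc     &lt;- boxcox(y ~ 1, data = data.frame(y = y))   # profile log-likelihood over lambda
lambda &lt;- bc$x[which.max(bc$y)]                     # lambda with the highest likelihood
y.new  &lt;- if (abs(lambda) &lt; 1e-6) log(y) else (y^lambda - 1) / lambda
</code></pre> <p>Would something like this be appropriate, or are there better-suited transformations (or methods that avoid transforming at all) for extreme skew?</p>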
74,220
<p>I observe draws of some random variable $Y$ over time where $Y_{t} = aY_{t-1} + \epsilon_{t}$.</p> <p>$\epsilon \sim N(0, 1/\rho_\epsilon)$ and $a$ is an unknown parameter with prior distribution $a \sim N(\mu_0, \Sigma_0)$.</p> <p>Since both the noise and the prior are normal, after we observe $Y_t$, the posterior of $a$ is also normal and follows an updating process: $$ \mu_t = (\mu_{t-1}+\Sigma_{t-1}\rho_{\epsilon} Y_{t-1}Y_t)/(1+\Sigma_{t-1}\rho_{\epsilon}Y_{t-1}^2)\\ \Sigma_t = \Sigma_{t-1}/(1+\Sigma_{t-1}\rho_{\epsilon}Y_{t-1}^2) $$</p> <p>Moving two periods into the future, it's easy to see that: $$ Y_{t+2}= a(\underbrace{a Y_t + \epsilon_{t+1}}_{Y_{t+1}}) + \epsilon_{t+2} $$</p> <p>Given that I'm at time $t$, I'm looking to evaluate the variance of $Y_{t+2}$.</p> <p>Since $\epsilon_{t+1}$ and $\epsilon_{t+2}$ affect future draws of $Y$, they're independent of our current beliefs on the distribution of $a$. The variance is then: $$ var(Y_{t+2}) = var(a Y_{t+1}) + 1/\rho_{\epsilon} $$</p> <p>Is there a way to calculate the first variance term conditional on being at time $t$? I suppose I could take the covariance of $a$ and $Y_{t+1}$ to obtain the joint distribution and then, if I wanted to solve this numerically, do the integration for the variance.</p>
74,221
<p>I have a problem that I don't think I've met before. I have N observations of the variables v1 and v2 and I assume that there is a function f such that v2 = f(v1).</p> <p>I want to know if f is 'statistically' monotone (increasing or decreasing) and if it is 'statistically' convex or concave. By 'statistical' I mean that my observations may include error terms, so you may encounter pairs of observations that show a local trend opposite to the global one.</p> <p>Should I simply compute f' and f''? (derivatives of f)</p> <p>If you have some thoughts on this I'd be glad to read them. Thanks,</p> <p>Arthur</p> <p>(btw I use Stata)</p>
36,962
<p>Let $(X,Y)$ have the mixed discrete-continuous pdf given by:</p> <p>$$f(x,y)= \begin{cases} \frac{y^{a+x-1}e^{-2y}}{\Gamma(a) x!}\ y&gt;0;x=0,1,2,\ldots \\ 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{elsewhere} \end{cases}$$</p> <p>Could you please help me show that this nasty pdf integrates/sums to 1 over the support of $(X,Y)$?</p> <p>I initially tried to integrate out $Y$ by completing the gamma distribution with parameters <strong>shape</strong> $=a+x$ and <strong>scale</strong> $=1/2$, and then sum over $X$, but that led to a mess. </p> <p>Is there another way to go here? Thank you.</p>
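<p>Retrying that route more carefully, this is where I get to (I am not certain that swapping the sum and the integral is justified, and the last step leans on the negative binomial series): $$\int_0^\infty \frac{y^{a+x-1}e^{-2y}}{\Gamma(a)\,x!}\,dy=\frac{\Gamma(a+x)}{\Gamma(a)\,x!\,2^{a+x}},$$ and then $$\sum_{x=0}^\infty \frac{\Gamma(a+x)}{\Gamma(a)\,x!\,2^{a+x}}=\left(\tfrac12\right)^{a}\sum_{x=0}^\infty\binom{a+x-1}{x}\left(\tfrac12\right)^{x}=\left(\tfrac12\right)^{a}\left(1-\tfrac12\right)^{-a}=1 .$$ Is this the intended argument, or is there a slicker way?</p>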
74,222
<p>I used the following R code to fit a probit model:</p> <pre><code>p1 &lt;- glm(natijeh ~ ., family=binomial(probit), data=data1)
stepwise(p1, direction='backward/forward', criterion='BIC')
</code></pre> <p>I want to know what <code>stepwise</code> and <code>backward/forward</code> do exactly, and how the variables are selected.</p>
74,223
<p>This is a homework question. Can you guys give me some hints?</p> <p>Let $U_{(1)}&lt;\cdots&lt;U_{(n)}$ be the order statistics of a sample of size $n$ from a Uniform$(0,1)$ population. Show that $F^{-1}(U_{(1)})&lt;\cdots&lt;F^{-1}(U_{(n)})$ are distributed as the order statistics of a sample of size $n$ from a population with density $f$. </p> <p>Attempt:</p> <p>Let $U=(U_{(1)},\ldots,U_{(n)})$, and $V=(F^{-1}(U_{(1)}),\ldots,F^{-1}(U_{(n)}))=F^{-1}(U)$. I know that the joint pdf of the order statistics is: $f_{X_{(1)},\ldots,X_{(n)}}(x_1,\ldots,x_n)=n!\prod_{i=1}^n f_X(x_i)$. So I thought I could use the Jacobian method or something:</p> <p>$\begin{align*}f_V(\mathbf{v})&amp;=f_U(F(\mathbf{v}))|J_{F^{-1}}(F(\mathbf{v}))|\\ &amp;= n!\prod_{i=1}^n F(v_i)|J_{F^{-1}}(F(\mathbf{v}))|\end{align*}$</p> <p>But I have no idea what the Jacobian could be, and the $F(v_i)$ doesn't seem right either. Any ideas?</p>
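<p>An alternative route I have been considering, which avoids the Jacobian entirely: since $F^{-1}$ is nondecreasing, it preserves ordering, so $\big(F^{-1}(U_{(1)}),\ldots,F^{-1}(U_{(n)})\big)$ is just the ordered version of $\big(F^{-1}(U_1),\ldots,F^{-1}(U_n)\big)$; and each $F^{-1}(U_i)$ has cdf $F$ by the probability integral transform, $P\big(F^{-1}(U_i)\le t\big)=P\big(U_i\le F(t)\big)=F(t)$, so the $F^{-1}(U_i)$ form an iid sample from $f$. Would that count as a complete argument, or do I still need the change-of-variables computation?</p>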
74,224
<p>I have difficulty understanding how self-organizing maps (SOM) do dimensionality reduction. Can anybody provide a useful explanation?</p> <p>Suppose we have 20 training data points in 50 dimensions. Let's say I have specified a 3 by 3 SOM (a lattice with 9 points); I embed my manifold (the 3 by 3 lattice) in the 50-D space, and after the training process each data point is mapped to one of the 9 points (nodes) of the manifold. Now, my embedded manifold (the 3 by 3 SOM) lives in 50-D space. So how come I'm back to 2 dimensions? I mean, where is the non-linear projection?</p>
36,966
<p>Been reading through C. R. Rao's (1975) Simultaneous Estimation of Parameters in Different Linear Models and Applications to Biometric Problems, Biometrics, Vol. 31, No. 2 (Jun., 1975), pp. 545-554.</p> <p>On p. 550 he notes "that if V (the correlation matrix for the current observations) is unknown then it cannot be completely determined by the "y" observations alone. However if V has suitable structure it may be possible to estimate it. Such problems will be considered in a later paper." Does anyone know what this later paper he is referring to is? The author is pretty prolific, so it is going to be virtually impossible to go through every paper he wrote subsequently (I've tried a few from the years afterwards (that I can get hold of) and so far found nothing.)</p> <p>On p. 551 he notes "that the computation of the prediction error of (2.39) is complicated"; again, is he simply noting that the calculation requires care because there are more parameters, or that additional derivations/calculations are required? It seems like it should be option 1, but it just seems odd for him to mention it.</p> <p>What confuses me is that the Bayesian formulation and the Mixed Estimator (Theil and Goldberger 1961) are equivalent. Under the mixed estimator you take your prior data and estimate the parameters Beta (mean_prior) and variance (sigma_prior) (these are effectively your priors) and estimate your new observations using OLS with the priors as a stochastic restriction. There is no mention of problems estimating the correlation matrix V (of the current observations). Indeed I had hoped to estimate V by calculating the correlation matrix generated by regressing the current observations on the parameters. However Rao's comment seems to make sense: in the model the current parameters depend on their priors, and so estimation of the current correlation matrix must take this into account!</p> <p>Happy to post the paper up (so long as it doesn't contravene any rules); I really hope to get an answer to this as I am trying to apply precisely these techniques currently.</p>
36,967
<p>I have a dataset containing $p$ variables (or columns) denoted by $X_i$ for $i=1,...,p$. I am trying to cluster this dataset using a <a href="http://en.wikipedia.org/wiki/Self-organizing_map" rel="nofollow">Self-Organizing Map</a>. There are 3 main variables within these $p$ variables, say $X_1$, $X_2$ and $X_3$. The rest of the variables (i.e. $p-3$ of them) can be obtained by applying some functions to $X_1$, $X_2$ and $X_3$. In other words, by observing $X_1$, $X_2$ and $X_3$, I can actually recover $X_i$ for $i=4,5,\ldots,p.$ Now my question: </p> <p>Q: Shall I apply the Self-Organizing Map on the whole dataset containing all the $p$ variables (i.e. $X_i$ for $i=1,...,p$) or would it be better to just consider $X_1$, $X_2$ and $X_3$ when using the Self-Organizing Map? Is there any reference that can answer this question?</p>
36,968
<p>I would like to ask the difference between the normal distribution and the multinomial distribution because I don't know when to use each of them. I know the normal distribution is used for continuous probability, and the multinomial distribution is used for probabilities of <em>K</em> kinds of categories.</p> <p>Can anyone give me some examples of each to make me understand them more clearly? Thanks.</p>
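<p>To make my confusion concrete, these are the sorts of toy examples I have in mind (in R), though I am not sure I have matched the right model to the right kind of data:</p> <pre><code>## a continuous measurement, e.g. adult height in cm -&gt; normal model
heights &lt;- rnorm(5, mean = 170, sd = 10)

## counts of 10 items falling into K = 3 categories -&gt; multinomial model
counts &lt;- rmultinom(1, size = 10, prob = c(0.2, 0.3, 0.5))
</code></pre>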
36,969
<p>At first I thought the order didn't matter, but then I read about the Gram-Schmidt orthogonalization process for calculating multiple regression coefficients, and now I'm having second thoughts.</p> <p>According to the Gram-Schmidt process, the later an explanatory variable is indexed among the other variables, the smaller its residual vector is, because preceding variables' residual vectors are subtracted from it. As a result, the explanatory variable's regression coefficient is also smaller. </p> <p>If that's true, then the residual vector of the variable in question would be larger if it were indexed earlier, since fewer residual vectors would be subtracted from it. This means that the regression coefficient would be larger too. </p> <p>Ok, so I've been asked to clarify my question. So I've posted screenshots from the text that got me confused in the first place. Ok, here goes.</p> <p>My understanding is that there are <em>at least</em> two options to calculate the regression coefficients. The first option is denoted (3.6) in the screenshot below. </p> <p><img src="http://i.stack.imgur.com/vVlAl.png" alt="The first way"></p> <p>Here is the second option (I had to use multiple screenshots).</p> <p><img src="http://i.stack.imgur.com/PC6vp.png" alt="The second way"></p> <p><img src="http://i.stack.imgur.com/GO2cv.png" alt="enter image description here"> <img src="http://i.stack.imgur.com/sSkCF.png" alt="enter image description here"></p> <p>Unless I am misreading something (which is definitely possible), it seems that order matters in the second option. Does it matter in the first option? Why or why not? Or is my frame of reference so messed up that this isn't even a valid question? Also, is this all somehow related to Type I Sums of Squares vs Type II Sums of Squares? </p> <p>Thanks so much in advance, I am so confused!</p>
74,225
<p>Both the bootstrap and jackknife methods can be used to estimate the bias and standard error of an estimate, and the mechanisms of the two resampling methods are not hugely different: sampling with replacement vs. leaving out one observation at a time. However, the jackknife is not as popular as the bootstrap in research and practice. Is there any obvious advantage to using the bootstrap instead of the jackknife? Many thanks in advance</p>
74,226
<p>I have 2 daily time-series, each 6 years long. While noisy, they are both clearly periodic (with a frequency of ~1 year), but appear to be out of phase. I would like to estimate the phase difference between these time-series.</p> <p>I've considered fitting curves of the form $a\sin(\frac{2\pi}{365}t - b)$ to each time-series and just comparing the two different values for b, but I suspect there are more elegant (and rigourous!) methods for doing this (perhaps using Fourier transforms?). I would also prefer to have some kind of idea of the uncertainty in my phase difference estimate, if possible.</p> <p>Any help would be much appreciated!</p> <p><strong>Update</strong>: </p> <p><img src="http://imgur.com/v2gWn.jpg" alt="Plot of the two time-series"></p> <p>The shaded regions are 95% CIs.</p> <p>Sample crosscorrelation between the two time-series: <img src="http://i.stack.imgur.com/b249A.jpg" alt="Sample crosscorrelation between the two time-series"></p>
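<p>For the sinusoid-fitting idea, one concrete version I have sketched is ordinary regression on sine and cosine terms, which would also give standard errors for the phase; in R (with <code>t</code> in days and <code>y</code> one of the series):</p> <pre><code>## a*sin(2*pi*t/365 - b) = beta1*sin(2*pi*t/365) + beta2*cos(2*pi*t/365)
## with beta1 = a*cos(b) and beta2 = -a*sin(b)
w     &lt;- 2 * pi / 365
fit   &lt;- lm(y ~ sin(w * t) + cos(w * t))
b1    &lt;- coef(fit)[2]
b2    &lt;- coef(fit)[3]
phase &lt;- atan2(-b2, b1)        # estimate of b, in radians
</code></pre> <p>Doing this for each series and differencing the two phase estimates would give the phase difference, but I would still like to know whether a cross-correlation or Fourier-based approach is more appropriate, and how best to get an uncertainty on the difference.</p>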
36,970
<p>Disclaimer: I know absolutely nothing about statistics. I've had trouble searching for answers to my question, as I don't have much knowledge about the terminology of statistics.</p> <p>I'm currently trying to plot a graph with two sets of values that are widely different. This doesn't really matter, but I'm doing this in Python with the matplotlib library.</p> <p>One of my sets of values is a company's stock price over several days. My second set of data has much, much smaller values, but I'd like to be able to compare both lines side by side. I'm more interested in the magnitude of the changes than in the actual values.</p> <p>For the moment, the only idea I've had is the following:</p> <ul> <li><p>Average the first values.</p></li> <li><p>Average the second values.</p></li> <li><p>Divide the first average by the second one, as to find a coefficient.</p></li> <li><p>Divide every single value in the first set of data by this coefficient.</p></li> </ul> <p>Now, this <em>looks</em> fine, but I don't know anything about statistics, so is this correct? If it isn't, what's a better way to do it?</p>
37,452
<p>Plotting a <strong>glm</strong> binomial model is reasonably simple with the <strong>predict</strong> function. I'm having trouble creating a similar plot for a <strong>glmer</strong> model; predict doesn't work: </p> <pre><code>id &lt;- factor(rep(1:20, 3)) age &lt;- rep(sample(20:50, 20, replace=T), 3) age &lt;- age + c(rep(0, 20), rep(3, 20), rep(6, 20)) score &lt;- rbinom(60, 15, 1-age/max(age)) dfx &lt;- data.frame(id, age, score) library(lme4) glmerb &lt;- glmer(cbind(score, 15-score) ~ age + (1|id), dfx, family=binomial) ndf &lt;- expand.grid(age=10:60) #for extensibility, usually also have factors ndf$fit &lt;- predict(glmerb, ndf, type="response") *Error in UseMethod("predict") : no applicable method for 'predict' applied to an object of class "mer"* </code></pre> <ol> <li>How can I produce the desired plot?</li> <li>While I'm at it, what other plots would be useful for this kind of model for either diagnostic, presentation or glam purposes?</li> </ol>
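<p>The only workaround I have found so far is to build the population-level (fixed-effects only) predictions by hand from the design matrix; I am not sure this is the recommended approach for a <strong>mer</strong> object:</p> <pre><code>## fixed-effects-only fitted probabilities for the new data frame ndf
X       &lt;- model.matrix(~ age, data = ndf)          # must match the fixed-effects formula
ndf$fit &lt;- as.vector(plogis(X %*% fixef(glmerb)))   # inverse-logit of the linear predictor
plot(fit ~ age, data = ndf, type = "l", ylab = "P(success)")
</code></pre>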
36,972
<p>I'm a part of a volunteer organization that organizes a bunch of events. For each event, members need to submit an application in order to attend the event. A lot of people want to go to these events and we only have a limited number of spots available, so the selection process is usually pretty competitive. The application selections (accept/reject) for each event are usually handled by a different group of people, so it can be somewhat challenging to ensure that the selection process is as "fair" as possible across the board. I happen to have access to all of the relevant data (who applied to which events, who was accepted/rejected, some stats on the applicant, etc.), and I'm trying to assess whether the current acceptance policy is "fair" or not.</p> <p>In a perfect world, each applicant would be accepted the same proportion of the time. Obviously in real life this may not necessarily be the case. For example, if 5 people apply to 10 events each, and are accepted to 5, 7, 6, 6, 5 of them, the system appears to be mostly fair, since the difference between the acceptance proportions isn't that big. On the other hand, if the acceptance numbers are 2, 3, 9, 8, 3, the system is obviously unfair since a few people are being selected repeatedly at the expense of the others. What's the most meaningful way to quantify this in terms of actual statistics?</p> <p>Also, I have access to some data which might influence the acceptance rate of a particular individual, for example the number of articles written by them in the past year. It makes sense that someone who has written more articles should be accepted to more events. Also, if someone has, for example, only applied to 1 or 2 events ever, it makes sense for them to have a higher acceptance rate than someone who has applied to dozens.</p> <p>So in short, I'm trying to come up with a way to figure out whether the current distribution of acceptance proportions is correlated with the variables with which I would expect it to be correlated, as opposed to being based solely on luck, and if it does turn out to be based solely on luck, whether the proportions are roughly uniform across the population or some people are repeatedly being favored at the expense of others.</p> <p>Can someone guide me in the right direction here?</p>
36,974
<p>Generating n random variables whose summation will be 1. [<em>I got the answer.</em>]</p> <p><strong>EDIT</strong></p> <p>In a genetic algorithm, we have to maintain a population. Say I have two individuals <strong>a</strong> and <strong>b</strong>. Every individual consists of $n$ pairs ($x_i, \theta_i$), where $ 0 \leq i &lt; n$. A fitness function evaluates the fitness $f$ of every individual. <strong>The constraint for every individual is $\Sigma\theta_i \approx 1$ ($0.95 \leq \Sigma\theta_i &lt; 1.05$ would suffice).</strong> The $\theta_i$ associated with individual <strong>a</strong> will be adapted by some function (which I haven't figured out yet) of $d(a, b)$ &amp; $\Delta f$. The $\theta_i$ will be adaptive (I guess by something like a covariance matrix). So if I increase the value of some $\theta_i$, the values of some $\theta_j$ have to be decreased to maintain the summation $\Sigma\theta_i \approx 1$. So I am seeking suggestions on how the $\theta_i$ can be adapted based on $d(a, b)$ &amp; $\Delta f$.</p>
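<p>The simplest repair step I can think of for keeping the constraint is to perturb and then renormalize, but I do not know whether this interacts badly with the adaptation by $d(a, b)$ and $\Delta f$; a sketch in R (the step size <code>sigma.step</code> is a placeholder):</p> <pre><code>theta.new &lt;- theta + rnorm(length(theta), mean = 0, sd = sigma.step)  # mutation step
theta.new &lt;- pmax(theta.new, 0)                                       # keep weights non-negative
theta.new &lt;- theta.new / sum(theta.new)                               # restore sum(theta) = 1
</code></pre>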
74,227
<p>I have bivariate data from which I have generated thousands of bootstrapped estimates within each of two conditions (pink &amp; blue):</p> <p><img src="http://i.stack.imgur.com/EnBqf.png" alt="bivariate data"></p> <p>I'd like to determine whether these conditions' bivariate distributions have different central tendencies. </p> <p>If I were dealing with univariate data, I'd compute, within each point, the .025 and .975 quantiles of the bootstrapped estimates for that point to construct a 95% confidence interval then compare the intervals of the conditions. Indeed, that's what the lines represent in the above graphic. However, I feel that comparing the conditions on each dimension separately ignores the fundamental bivariate nature of the data, yet I don't know what the appropriate procedure is for bivariate data.</p> <p>Note that any suggested solution should rely on just the bootstrapped estimates and not the raw data. This is because the estimates in this particular case actually come from rather complicated models that attempt to take into account and remove differences between the conditions that are present in the raw data.</p>
36,978
<p>If one wanted to use Kernel Regression in a Bayesian Framework, any ideas on how one would go about it? </p> <p><a href="http://en.wikipedia.org/wiki/Kernel_regression" rel="nofollow">Kernel Regression</a></p>
74,228
<p>I've been playing around with backpropagation, trying to see if I can find a solution to the XOR problem using a 2-2-1 network. Based on my simulations and calculations, a solution is not possible without implementing a bias for every neuron. Is this correct?</p>
74,229
<p>I'm using rb-libsvm and the RBF kernel to make classifications. <code>svm.predict(measurements)</code> returns either -1.0 or 1.0. Is there a way to get a confidence for this classification? I am interested in throwing out low-confidence classifications and tweaking precision/recall.</p>
74,230
<p>(Ross [2009], p. 162) The current in a semiconductor diode is often measured by the Shockley equation $I = I_0(e^{aV}-1)$, where $V$ is the voltage across the diode, $I_0$ is the reverse current, $a$ is a constant, and $I$ is the resulting diode current. Find $E(I)$ if $a = 5$, $I_0 = 10^{-6}$, and $V$ is uniformly distributed over $[1, 3]$. Answer: <img src="http://i.stack.imgur.com/IZYkC.png" alt="enter image description here"></p> <p>My question is: how is the "1/2" calculated? $E[X] = (a+b)/2$, which means $(1+3)/2 = 2$, not $1/2$. I need help please; thanks in advance.</p>
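<p>The only reading I can come up with is that the $1/2$ is not a mean at all but the density of $V$: for $V$ uniform on $[1,3]$, $f_V(v)=1/(3-1)=1/2$, so $$E(I)=\int_1^3 I_0\big(e^{5v}-1\big)\,f_V(v)\,dv=\frac{1}{2}\int_1^3 I_0\big(e^{5v}-1\big)\,dv .$$ Is that the right interpretation?</p>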
74,231
<p>I was reading this <a href="http://arxiv.org/pdf/1206.4762.pdf" rel="nofollow">article</a>, where the author says that Maximum Likelihood (ML) estimates are asymptotically normal if the log-likelihood is asymptotically quadratic.</p> <p>I have heard or read other times about the likelihood being asymptotically quadratic (under conditions), but I have never read any proof of this. Is anybody aware of such a proof?</p> <p>It would be great to see a proof that shows also the rate of convergence $(\sqrt{n}?)$ of the log-likelihood to a quadratic function.</p>
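<p>The heuristic version I have seen sketched (without regularity conditions or an explicit rate) is a second-order Taylor expansion of the log-likelihood about the MLE: since $\ell_n'(\hat\theta)=0$, $$\ell_n(\theta)\approx \ell_n(\hat\theta)+\tfrac12(\theta-\hat\theta)^\top \ell_n''(\hat\theta)\,(\theta-\hat\theta),$$ and with $-\ell_n''(\hat\theta)/n \to I(\theta_0)$ the normalized likelihood looks like a Gaussian with covariance $\big(nI(\theta_0)\big)^{-1}$, which is where the $\sqrt{n}$ scaling would come from. What I am really after is a reference that makes this rigorous and spells out the rate of convergence.</p>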
36,980
<p><strong>Question:</strong> I want to test a medicine and I have two groups of people with baseline blood pressure, one I give medicine A and the other one medicine B. After 6 months, I measure their blood pressure.</p> <p>Now there are two options:</p> <ul> <li>I measure the difference of the mean change. </li> <li>I forget about the baseline and just measure the difference of the final values.</li> </ul> <p>When and why do you use method one or method two? </p> <p>And what would happen in case you work with the lowest p-value of each test to reject the null-hypothesis that both tests are equal?</p>
36,981
<p>$\newcommand{\Var}{\mathrm{Var}}$ Consider $Z_i$ as a binary random variable with $\mathrm{Pr}[Z_i = 1] = \pi$. Also, consider $Y_i$ as:</p> <p>$Y_i|Z_i = 0 \sim \mathrm{Poisson} (\lambda_0) $</p> <p>$Y_i|Z_i = 1 \sim \mathrm{Poisson} (\lambda_1) $</p> <p>My question is how we can find $\Var(Y_i)$.</p> <p>Here is what I think I should do, but I've not had any success till now:</p> <p>$\Var(Y_i) = E(\Var(Y_i|Z_i)) + \Var(E(Y_i|Z_i)) = E(\pi\cdot\lambda_1 + (1 - \pi)\cdot\lambda_0) + \Var(\pi\cdot\lambda_1 + (1 - \pi)\cdot\lambda_0) = \pi\cdot\lambda_1 + (1 - \pi)\cdot\lambda_0 + 0 $</p> <p>What I've got above is different from what my instructor has in his lecture notes. He has: $\Var(Y_i) = \pi\cdot\lambda_1 + (1 - \pi)\cdot\lambda_0 + (\lambda_1 - \lambda_0)^2\cdot\pi\cdot(1 - \pi)$</p> <p>I appreciate your help.</p>
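<p>Is the resolution simply that $E(Y_i \mid Z_i)$ is itself a random variable in $Z_i$, so that the second term is not zero? Writing it out: $$E(Y_i \mid Z_i)=\lambda_1 Z_i+\lambda_0(1-Z_i)=\lambda_0+(\lambda_1-\lambda_0)Z_i,$$ and hence $$\mathrm{Var}\big(E(Y_i \mid Z_i)\big)=(\lambda_1-\lambda_0)^2\,\mathrm{Var}(Z_i)=(\lambda_1-\lambda_0)^2\,\pi(1-\pi),$$ which would give exactly the extra term in the lecture notes. Is that the correct reading?</p>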
74,232
<p>In RapidMiner, I created a predictive model (SVM) with kernel type = polynomial, C = 10, and obtained 80.77% accuracy using cross-validation. When compared to the hold-out set, my accuracy on the test set was 71.54472%, a difference of 9.23%. My questions are: 1) Is my SVM model overfit? 2) I created other models like k-NN, decision trees, Naive Bayes etc., but they all gave accuracy of less than 70%. Will their difference against the hold-out set be less than 9.23%? </p>
36,986
<p>Using wikipedia I found a way to calculate the probability mass function resulting from the sum of two Poisson random variables. However, I think that the approach I have is wrong.</p> <p>Let $X_1, X_2$ be two independent Poisson random variables with mean $\lambda_1, \lambda_2$, and $S_2 = a_1 X_1+a_2 X_2$, where the $a_1$ and $a_2$ are constants, then the probability-generating function of $S_2$ is given by $$ G_{S_2}(z) = \operatorname{E}(z^{S_2})= \operatorname{E}(z^{a_1 X_1+a_2 X_2}) = G_{X_1}(z^{a_1})G_{X_2}(z^{a_2}). $$ Now, using the fact that the probability-generating function for a Poisson random variable is $G_{X_i}(z) = \textrm{e}^{\lambda_i(z - 1)}$, we can write the probability-generating function of the sum of the two independent Poisson random variables as $$ \begin{aligned} G_{S_2}(z) &amp;= \textrm{e}^{\lambda_1(z^{a_1} - 1)}\textrm{e}^{\lambda_2(z^{a_2} - 1)} \\ &amp;= \textrm{e}^{\lambda_1(z^{a_1} - 1)+\lambda_2(z^{a_2} - 1)}. \end{aligned} $$ It seems that the probability mass function of $S_2$ is recovered by taking derivatives of $G_{S_2}(z)$, $\operatorname{Pr}(S_2 = k) = \frac{G_{S_2}^{(k)}(0)}{k!}$, where $G_{S_2}^{(k)} = \frac{d^k G_{S_2}(z)}{ d z^k}$.</p> <p>Is this correct? I have the feeling I cannot just take the derivative to obtain the probability mass function, because of the constants $a_1$ and $a_2$. Is this right? Is there an alternative approach?</p> <p>If this is correct can I now obtain an approximation of the cumulative distribution by truncating the infinite sum over all k?</p>
74,233
<p>I want to fit a Cox PH model with random effect (Gamma frailty). Here is an example with 'kidney catheter' data set:</p> <pre><code> &gt; library(survival) &gt; data(kidney) &gt; fit &lt;- coxph(Surv(time, status)~ age + sex + disease + frailty(id), kidney) &gt; fit Call: coxph(formula = Surv(time, status) ~ age + sex + disease + frailty(id), data = kidney) coef se(coef) se2 Chisq DF p age 0.00318 0.0111 0.0111 0.08 1 7.8e-01 sex -1.48314 0.3582 0.3582 17.14 1 3.5e-05 diseaseGN 0.08796 0.4064 0.4064 0.05 1 8.3e-01 diseaseAN 0.35079 0.3997 0.3997 0.77 1 3.8e-01 diseasePKD -1.43111 0.6311 0.6311 5.14 1 2.3e-02 frailty(id) 0.00 0 9.3e-01 Iterations: 6 outer, 28 Newton-Raphson Variance of random effect= 5e-07 I-likelihood = -179.1 Degrees of freedom for terms= 1 1 3 0 Likelihood ratio test=17.6 on 5 df, p=0.00342 n= 76 &gt; fit$history[[1]]$theta [1] 5e-09 </code></pre> <p>Does the <code>fit$history[[1]]$theta</code> returns the variance of random effect? Why is the value different?</p>
74,234
<p>I ran some experiments first. Afterwards, I looked for the parameter $p_{max}$ for which I can claim, at a significance level of exactly 99%, that the chance a matrix has the ESP-property is $p_{max}$ or lower. Is this correct on a more philosophical basis?</p> <p>I'm doing research in computer science, and have a test to determine whether a random matrix holds the ESP-property or not.</p> <p>If I run this Bernoulli experiment repeatedly, I get a binomial distribution of matrices holding or not holding the ESP-property, from which I estimate the parameter $p$. This chance $p$ of not holding the ESP-property is really low (like 0.001%). So what I do is find the chance $p_{max}$ that a matrix holds the ESP-property, with a significance level of exactly 0.99.</p> <p>Finding this value is no problem. However, are there good reasons this is a bad approach to the problem? Because basically, it is a more advanced approach to this: <a href="http://stats.stackexchange.com/questions/24203/is-it-ever-good-to-increase-significance-level">Is it ever good to increase significance level?</a></p> <p>The difference in this case is that I keep the significance level constant, while looking for a parameter $p$ which best explains my experimental results.</p>
74,235
<p>I wrote a script to do some analysis and it was working fine until I tried to implement a while loop to find the number of clusters appropriate for k-means. For some reason it keeps saying that an argument is of length zero, but there shouldn't be any. I'm running this remotely, and it works fine locally.</p> <pre><code>freq_1 &lt;- NULL
freq_alignment &lt;- NULL

for (res in point_reference) {
  point &lt;- paste(as.character(res), "_output.txt", sep = "")
  point_file &lt;- file(point, "r")
  point2 &lt;- read.table(point_file)
  point2 &lt;- as.data.frame(point2)

  k &lt;- 2
  check &lt;- 0.5
  while (check &lt; 0.75) {
    k &lt;- k + 1
    kcluster &lt;- kmeans(point2, k)
    # proportion of the total sum of squares that is between clusters
    check &lt;- kcluster$betweenss/(kcluster$tot.withins+kcluster$betweenss)
  }

  config &lt;- kcluster$cluster
  frames &lt;- length(config)
  freq &lt;- as.data.frame(table(config))
  freq_1 &lt;- cbind(freq_1, freq)
  freq_alignment &lt;- cbind(freq_alignment, kcluster$cluster)
  close(point_file)
}
</code></pre> <p>point_reference is a list of numbers (2, 3, 4, etc.) corresponding to which file to load, and the files themselves load fine. My goal is to find the k that corresponds to 75% of the total SS coming from between clusters. Only the loop is wrong... if I replace it with just clustering with k = 5, it works fine. The exact error is:</p> <p>Error in while (check &lt; 0.75) { : argument is of length zero</p> <p>Again, I'm doing this remotely on a cluster, and it works on my desktop R64. All files were produced on the cluster. I hope you guys can help! Thanks in advance.</p>
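<p>One thing I have started to wonder about: on the remote machine, does the kmeans object even contain the <code>betweenss</code> and <code>tot.withinss</code> components? I am not sure, but if the R there is old enough that kmeans returns fewer components, a missing one would make <code>check</code> length zero. A minimal check, plus a fallback that only relies on <code>withinss</code>, which I believe has always been returned:</p> <pre><code>kcluster &lt;- kmeans(point2, 3)
names(kcluster)   # does this list "betweenss" and "tot.withinss" on the cluster?

## fallback: compute the between-SS proportion from withinss and the raw data
tot.within &lt;- sum(kcluster$withinss)
totss      &lt;- sum(scale(point2, scale = FALSE)^2)   # total sum of squares about the mean
check      &lt;- (totss - tot.within) / totss
</code></pre>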
36,989
<p>Let's say I have a set of scalar values $V$ that I sampled from a set $S$. I want to test whether a given value $X$ could be a member of $S$ or not.</p> <p>I understand that if the values in $V$ are distributed normally, then we can find the mean and standard deviation, after which you can determine the probability of occurrence for any particular value based on how many standard deviations your value differs from the mean.</p> <p>However, what happens if the values are not distributed normally? For example, what if you have 2 peaks in your data?</p> <p>Example: If you sample the ratio of pelvis width to femur length over a large number of adults, you will have 2 peaks, one for men and one for women. Now, my question is, given a particular ratio, how can we determine the probability that the subject was human? (i.e. probability that the ratio is part of our set $S$.)</p>
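<p>The only concrete idea I have for the non-normal (e.g. bimodal) case is a kernel density estimate of the sampled values, evaluated at the new value, though I am not sure how to turn such a density into a proper "membership" probability; in R:</p> <pre><code>d   &lt;- density(v)                      # v = the sampled values from S
f.x &lt;- approx(d$x, d$y, xout = X)$y    # estimated density at the new value X
</code></pre> <p>Is thresholding this estimated density a reasonable way to decide whether $X$ could plausibly belong to $S$, or is there a more principled approach?</p>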
74,236