question | group_id |
---|---|
<p>Given are $Z_1, Z_2$ i.i.d. standard normal.</p>
<p>Find</p>
<p>$P[Z_1 < t < Z_2]$</p>
<p>I am having difficulty working out how I should split the condition.</p>
<p>Is $P[Z_1 < t < Z_2] = P[Z_1 < t, t < Z_2]$ or is some additional condition required?</p>
<h2>My question</h2>
<p>How should I approach this problem?</p>
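<p>For intuition, here is a quick numerical check in R (a sketch only; it assumes the event is simply the joint event $\{Z_1 < t\} \cap \{t < Z_2\}$, which factors by independence):</p>
<pre><code>set.seed(1)
t <- 0.7
z1 <- rnorm(1e6)
z2 <- rnorm(1e6)
mean(z1 < t & z2 > t)        # simulated P[Z1 < t < Z2]
pnorm(t) * (1 - pnorm(t))    # closed form under independence
</code></pre>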
| 74,289 |
<p>I found an example for manually calculating the hazard ratio when there are two groups.</p>
<pre><code>d2 <- c(2, 2, 1, 2, 2, 0, 0, 4, 0, 2, 2, 0, 1, 0, 1, 1, 1)
d <- c(2, 2, 1, 2, 2, 3, 1, 4, 1, 2, 2, 1, 1, 1, 1, 2, 2)
r2 <- c(21, 19, 17, 16, 14, 12, 12, 12, 8, 8, 6, 4, 4, 3, 3, 2, 1)
r1 <- c(21, 21, 21, 21, 21, 21, 17, 16, 15, 13, 13, 12, 11, 11, 10, 7, 6)
l <- log(r2/r1)
d1 <- d - d2
dd <- cbind(d2, d1)
summary(glm(dd ~ offset(l), family = binomial))
</code></pre>
<p>where $d$ is the total number of deaths, $d_i$ is the number of deaths in group $i$, and $r_j$ is the number at risk in group $j$. I understand how this works.</p>
<p>I would like to do a similar exercise when there are not two groups of people, but only one vector of continuous data (e.g. expression of a particular gene) and no grouping factor. How can it be done in this scenario?</p>
| 74,290 |
<p>I am reading the code for a Bayesian clustering method. Both the prior and the likelihood are normally distributed; if I understood correctly, such cases are called "conjugate priors".</p>
<p>My question is about calculating the posterior mean and variance. It is implemented in the code as follows:</p>
<pre><code>d = nrow (data.i)##number of attributes
n = ncol (data.i)##number of replication for each attribute
Smatrix = matrix (1, nrow=d, ncol=d)##correlation of attributes
Imatrix = Smatrix - 1
diag (Imatrix) = 1
prior.precision = solve (sdWICluster^2 * Smatrix + sdTSampling^2 * Imatrix) #inverting the prior correlation matrix (prior.precision)
prior.mean = cluster.mean # mean of each clusters
sample.precision = sdResidual^(-2) * Imatrix
sample.mean = apply (data.i, 1, mean)#mean for each cluster
post.cov = solve (as.matrix(prior.precision + n*sample.precision)) # posterior covariance matrix
post.mean = as.vector (post.cov %*% (prior.precision %*% prior.mean + n*sample.precision %*% sample.mean)) # posterior of the mean
</code></pre>
<p>It seems the code has used the following formulas:</p>
<p>$\mu_{po} = C_{po}\times((\mu_{pr} \times \tau_{pr})+(n\times\tau_{li}\times\mu_{li}))$</p>
<p>$C_{po} = \tau_{pr} + n\times\tau_{li}$</p>
<p>As I said, this is the conjugate case of normal distributions with unknown mean but known variance; however, it does not seem to fit the formula I have from <a href="http://www.people.fas.harvard.edu/~plam/teaching/methods/conjugacy/conjugacy_print.pdf" rel="nofollow">here</a> (or maybe it does and my eye is just not able to catch it). I would appreciate it if someone could comment on the code and the formula.</p>
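<p>For comparison, here is a minimal univariate sketch of the textbook known-variance conjugate update (posterior precision = prior precision + n * likelihood precision); whether the multivariate code above matches it is exactly what is being asked, so this is only a point of reference:</p>
<pre><code>prior.mean <- 0                      # prior N(0, 2^2) on the unknown mean
prior.prec <- 1 / 2^2
sigma      <- 1                      # known sampling sd
lik.prec   <- 1 / sigma^2
y <- rnorm(20, 1.5, sigma)
n <- length(y)
post.var  <- 1 / (prior.prec + n * lik.prec)
post.mean <- post.var * (prior.prec * prior.mean + n * lik.prec * mean(y))
c(post.mean, post.var)
</code></pre>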
| 74,291 |
<p>I am plotting a density estimate for misclassification rate of some classifier using the standard <code>plot</code> and <code>lines</code> functions. Even though I've set <code>xlim=c(0.32,0.38)</code> and <code>ylim=c(0,100)</code> within the <code>plot</code> command, the x-limits and the y-limits are a bit wider than I'd like. How can I get the bottom left of the plot to have coordinates (0.32,0) and the top right of the plot to have coordinates (0.38, 100)?</p>
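<p>A hedged sketch of what is probably going on: by default base R pads both axis ranges by about 4% (par settings xaxs = "r", yaxs = "r"), and setting them to "i" makes the plot box end exactly at the requested limits. Here dens and dens2 stand for the density objects being plotted (assumed names):</p>
<pre><code>plot(dens, xlim = c(0.32, 0.38), ylim = c(0, 100), xaxs = "i", yaxs = "i")
lines(dens2)   # a second, assumed density object
</code></pre>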
| 74,292 |
<p>The structural form of the linear <a href="http://en.wikipedia.org/wiki/Simultaneous_equations_model" rel="nofollow">simultaneous equations model</a> can
be written as</p>
<p>$
\mathbf{y}_{i}^{\prime}\Gamma+\mathbf{x}_{i}^{\prime}\mathbf{B}=\epsilon_{i}^{\prime}
$</p>
<p>which can be written in the reduced form model as</p>
<p>$
\mathbf{y}_{i}^{\prime} =-\mathbf{x}_{i}^{\prime}\mathbf{B}\Gamma^{-1}+\epsilon_{i}^{\prime}\Gamma^{-1}\\
=\mathbf{x}_{i}^{\prime}\Pi+\mathbf{v}_{i}^{\prime}.
$</p>
<p>The reduced form model can be estimated with different estimation methods, and if the model is identified the structural coefficients $\widehat{\mathbf{B}}$ can be obtained from the relation $\widehat{\mathbf{B}}=-\widehat{\Pi}\widehat{\Gamma}$ (since $\Pi=-\mathbf{B}\Gamma^{-1}$).</p>
<p>I wonder how to derive the variance-covariance matrix of $\widehat{\mathbf{B}}$. I'd highly appreciate it if you could point me to any reference. Thanks in advance for your help and time.</p>
| 1,034 |
<p>I know there are methods to calculate a confidence interval for a proportion to keep the limits within (0, 1); however, a quick Google search led me only to the standard calculation: $\hat{p} \pm 1.96*\sqrt\frac{\hat{p}(1-\hat{p})}{N}$. I also believe there is a way to calculate the exact confidence interval using the binomial distribution (example R code would be nice). I know I can use the prop.test function to get the interval but I'm interested in working through the calculation.</p>
<p>Sample situations (N = number of trials, x = number of success):</p>
<pre><code>N=40, x=40
N=40, x=39
N=20, x=0
N=20, x=1
</code></pre>
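<p>Since example R code was requested, here is a hedged sketch of the Clopper-Pearson "exact" interval for the situations above, both through its beta-quantile form and via the built-in binom.test (the function name exact_ci is made up for this sketch):</p>
<pre><code>exact_ci <- function(x, N, conf = 0.95) {
  a <- 1 - conf
  lower <- if (x == 0) 0 else qbeta(a / 2, x, N - x + 1)
  upper <- if (x == N) 1 else qbeta(1 - a / 2, x + 1, N - x)
  c(lower = lower, upper = upper)
}
exact_ci(x = 40, N = 40)
exact_ci(x = 39, N = 40)
exact_ci(x = 0,  N = 20)
exact_ci(x = 1,  N = 20)
binom.test(39, 40)$conf.int   # built-in Clopper-Pearson interval for comparison
</code></pre>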
| 39,847 |
<p>Okay, I have given this a go but I think it is totally wrong. Can someone please correct this for me? To understand the MA(q) model I really need to plug some numbers into it, as I'm finding these time series formulas epically confusing; it is never explained what any of it actually is in the real world.</p>
<pre><code> Model : y = c + u + .5 et-1 + .25 et-2 + .125 et -3
Ya t Yf t mean U e t
Actual Y at time t Forecast Y at time t 1.69% error at time t (et 0 is made up)
y = c + u + .5 et-1 + .25 et-2 + .125 et -3 Yf t - Mean u
0 1.69% 0.03
1 0.63% 1.69% -1.06%
2 1.13% 1.69% -0.56%
3 1.36% 1.69% -0.33%
4 0.97% 1.69% -0.72%
5 1.11% 1.69% -0.58%
6 0.88% 1.69% -0.81%
7 0.90% 1.69% -0.79%
8 1.06% 1.69% -0.63%
9 1.38% 1.69% -0.31%
10 0.96% 1.69% -0.73%
11 1.29% 1.69% -0.40%
12 1.39% 1.69% -0.30%
13 0.98% 1.69% -0.71%
14 1.42% 1.69% -0.27%
15 1.21% 1.69% -0.48%
16 1.04% 1.69% -0.65%
17 1.45% 1.69% -0.24%
18 1.11% 1.69% -0.58%
19 1.92% 1.69% 0.23%
20 1.89% 1.69% 0.20%
21 2.00% 1.69% 0.31%
22 1.40% 1.69% -0.29%
23 2.15% 1.69% 0.46%
24 2.11% 1.69% 0.42%
25 1.76% 1.69% 0.07%
26 2.55% 1.69% 0.86%
27 1.80% 1.69% 0.11%
28 2.23% 1.69% 0.54%
29 2.39% 1.69% 0.71%
30 1.55% 1.69% -0.14%
31 2.67% 1.69% 0.99%
32 2.50% 1.69% 0.81%
33 1.88% 1.69% 0.20%
34 2.18% 1.69% 0.49%
35 2.42% 1.69% 0.73%
36 1.75% 1.69% 0.06%
37 2.54% 1.69% 0.85%
38 2.03% 1.69% 0.34%
39 2.08% 1.69% 0.39%
40 2.58% 1.69% 0.89%
41 1.71% 1.69% 0.02%
42 1.56% 1.69% -0.13%
43 1.98% 1.69% 0.29%
44 1.85% 1.69% 0.16%
45 1.66% 1.69% -0.03%
46 2.19% 1.69% 0.50%
47 1.52% 1.69% -0.17%
48 1.75% 1.69% 0.06%
49 1.92% 1.69% 0.23%
</code></pre>
<pre><code>FORECAST 50 1.80% 1.69% 0.11%
FORECAST 51 1.81% 1.69% 0.12%
FORECAST 52 1.81% 1.69% 0.12%
FORECAST 53 1.79% 1.69% 0.10%
FORECAST 54 1.79% 1.69% 0.10%
FORECAST 55 1.78% 1.69% 0.09%
FORECAST 56 1.77% 1.69% 0.08%
FORECAST 57 1.76% 1.69% 0.07%
FORECAST 58 1.76% 1.69% 0.07%
FORECAST 59 1.75% 1.69% 0.06%
FORECAST 60 1.75% 1.69% 0.06%
FORECAST 61 1.74% 1.69% 0.05%
</code></pre>
<p>I have tried to do this up to the step before estimating the first error term for t = 0 and the parameters of the model, but it does not look right. I hope you can follow this; it is hard to get the formatting right. It just looks like it ends up being too similar to the mean to be useful. I want something which tries to predict the random patterns seen in the time series; for example, you can see that when a point goes up, it is very rarely higher at the very next point.</p>
<p>The formula for the forecasting is basically to take the mean and multiply it by the last 3 error terms and multiply by the coefficients that I just made up.</p>
<p>As a side note I really find the notations and explanation about this to be very confusing.</p>
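<p>For concreteness, here is a hedged R illustration with arima.sim/arima using the same made-up MA(3) coefficients (the scaling and mean are arbitrary choices to mimic the percentages above); it shows why the hand-rolled forecasts hug the mean: MA(q) forecasts collapse back to the mean after q steps.</p>
<pre><code>set.seed(1)
y <- 0.0169 + 0.005 * arima.sim(model = list(ma = c(0.5, 0.25, 0.125)), n = 200)
fit <- arima(y, order = c(0, 0, 3))   # estimates the mean and the three MA coefficients
predict(fit, n.ahead = 6)$pred        # forecasts revert to the mean after 3 steps
</code></pre>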
<p>Thanks.</p>
| 31,305 |
<p>I am running a Monte-Carlo simulation and I sample from various normal distributions. I was just wondering, is there a way by which I can increase the probability of selecting a point from the tails (i.e., $[-5\sigma,-3\sigma]$ and $[3\sigma,5\sigma]$) as compared to the probability associated with the interval $(-3\sigma, 3\sigma)$?</p>
<p>Edit:</p>
<p><img src="http://i.stack.imgur.com/UzAEY.jpg" alt="enter image description here"></p>
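<p>If the goal is to spend more simulation effort in the tails while still estimating expectations under the same normal distribution, the usual trick is importance sampling; a minimal R sketch (the 3-sigma-wide proposal is an arbitrary choice, not a recommendation):</p>
<pre><code>set.seed(1)
sigma <- 1
n <- 1e4
x <- rnorm(n, 0, 3 * sigma)                       # draw from a wider proposal
w <- dnorm(x, 0, sigma) / dnorm(x, 0, 3 * sigma)  # importance weights back to the target
sum(w * (abs(x) > 3 * sigma)) / n                 # e.g. estimate of P(|X| > 3*sigma)
</code></pre>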
| 74,293 |
<p>I am using logistic regression to predict y given x1 and x2:</p>
<pre><code>z = B0 + B1 * x1 + B2 * x2
y = e^z / (e^z + 1)
</code></pre>
<p>How is logistic regression supposed to handle cases in which my variables have very different scales? Do people ever build logistic regression models with higher-order coefficients for variables? I'm imagining something like this (for two variables):</p>
<pre><code>z = B0 + B1 * x1 + B2 * x1^2 + B3 * x2 + B4 * x2^2
</code></pre>
<p>Alternatively, is the right answer to simply normalize, standardize or rescale the x1 and x2 values before using logistic regression?</p>
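<p>A hedged sketch of both options with R's glm (a data frame d with columns y, x1, x2 is assumed); note that rescaling only changes the scale of the coefficients, not the fitted probabilities, whereas adding squared terms changes the model itself:</p>
<pre><code>d$z1 <- as.numeric(scale(d$x1))   # centred and scaled copies
d$z2 <- as.numeric(scale(d$x2))
fit_linear <- glm(y ~ z1 + z2, data = d, family = binomial)
fit_quad   <- glm(y ~ z1 + I(z1^2) + z2 + I(z2^2), data = d, family = binomial)
</code></pre>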
| 74,294 |
<p>An online module I am studying states that one should <strong>never</strong> use Pearson correlation with proportion data. Why not?</p>
<p>Or, if it is sometimes OK or always OK, why?</p>
| 74,295 |
<p>I have data from a set of sensors, and would like to measure similarity between two samples of data ignoring local fluctuations.</p>
<p>My problem is, my sensors are not regularly placed : some are quite alone, others are many in the same area, there is no regular pattern in their locations.</p>
<p>I want to measure similarity in terms of large scale, local fluctuations (even of high amplitude) are much less important as long as they are balanced by other nearby fluctuations.</p>
<p>I thought about computing average values of a point and its neighbours, and making my indicator the sum of absolute values of differences between the local averages of my two samples, but I have no idea whether it's a good approach or not.</p>
| 74,296 |
<p>In <a href="http://surveyanalysis.org/wiki/Multiple_Comparisons_%28Post_Hoc_Testing%29" rel="nofollow">http://surveyanalysis.org/wiki/Multiple_Comparisons_(Post_Hoc_Testing)</a></p>
<p>It states "For example, if we have a p-value of 0.05 and we conclude it is significant the probability of a false discovery is, by definition, 0.05."</p>
<p>My question: I always thought a false discovery is a Type I error, whose rate is equal to the chosen significance level in most tests, whereas the p-value is a value calculated from the sample. Indeed Wikipedia states "The p-value should not be confused with the significance level α in the Neyman–Pearson approach or the Type I error rate [false positive rate]"</p>
<p>So why does the statistician in the link claim that the Type I error rate is also the p-value?</p>
| 74,297 |
<p>I just used the standard formula to determine the sample size needed to match the mean of a population with an error margin of 3 percentage points and with 90% probability.
I now would like to check that the mean of my sample is actually within the 3% margin of error of the mean in the population.
To do so I am taking the population and appending the sample, identifying observations in the sample with a dummy called sample. Then I run the following regression:
reg var1 sample
And I am testing whether the coefficient for the dummy variable sample is less than 3 percentage points. To do so I am using a one-sided t test.</p>
<p>But this is only OK when the coefficient of the dummy variable "sample" has the "right" sign. I would like to test the joint hypothesis that the coefficient of the dummy variable sample is greater than -3 and lower than 3.
Does anyone have any insight on how to do this test?
Thanks in advance.</p>
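<p>One common framing is two one-sided tests (TOST): reject "coefficient &lt;= -3" and "coefficient &gt;= 3" separately, and conclude the coefficient lies in (-3, 3) only if both reject. A hedged R sketch of the same regression (the logic carries over to Stata, but no Stata syntax is shown here; d is the stacked data set and sample a 0/1 dummy, both assumed names):</p>
<pre><code>fit <- lm(var1 ~ sample, data = d)
b   <- coef(fit)["sample"]
se  <- sqrt(vcov(fit)["sample", "sample"])
df  <- df.residual(fit)
p_lo <- pt((b + 3) / se, df, lower.tail = FALSE)  # H0: beta <= -3 vs H1: beta > -3
p_hi <- pt((b - 3) / se, df, lower.tail = TRUE)   # H0: beta >= +3 vs H1: beta < +3
max(p_lo, p_hi)                                   # both must be below alpha
</code></pre>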
| 49,338 |
<p>I know correlation does not imply causation. I have read it nth time. (i.e. weight does not cause height etc. etc.)</p>
<p>However, to find the effect of a moderator variable on X-Y relationship, a regression model is used such as the GLM in SPSS to test for interaction or multiple regression. See <a href="http://stats.stackexchange.com/questions/18693/significant-interaction-between-covariate-and-factor-in-spss-glm">here</a>.</p>
<p><strong>My Question: If the X-Y relationship is a correlation, then why is a cause-effect model used in this instance?</strong></p>
<p>As far as I understand, it makes little sense to classify a variable as independent or dependent in a correlation analysis.</p>
<p>I apologise if this seems like a 'silly' question. In my past life, I had often told my students that there are no silly questions; just questions!</p>
| 37,371 |
<p>In <a href="http://books.google.co.uk/books?id=6-Y8OL3sW14C&pg=PA102&lpg=PA102&dq=deviance%20of%20a%20%22bernoulli%22%20model?%20goodness%20of%20fit&source=bl&ots=kb3FoEbN30&sig=gJ28_j0CN8-WsFoYp4gDrL2eooA&hl=en&sa=X&ei=SsFhUe3YL-Wa0QWTo4CwBg&ved=0CDsQ6AEwAQ#v=onepage&q=deviance%20of%20a%20%22bernoulli%22%20model?%20goodness%20of%20fit&f=false" rel="nofollow">Ordinal Data Modelling</a> by Johson & Albert, page 102-103:</p>
<blockquote>
<p>For Bernoulli observations [...] the asymptotic chi-squared
distribution of the deviance statistic may not pertain. Indeed, for
the linear logistic models with Bernoulli observations, the deviance
function can be expressed solely as a function of the MLE of the
regression parameter, <strong>which demonstrates the futility of using this
statistic to measure goodness of fit for such data</strong>.</p>
</blockquote>
<p>Could someone kindly explain what this means, please? </p>
| 74,298 |
<p>I would really appreciate some advice.</p>
<p>I have two sequential measurements of severity of illness $(s_1, s_2)$ that are incompletely observed along with an outcome measure ($y$). I am trying to estimate if the change in the measurement $(d = s_2 - s_1)$ adds anything to a model based on the second measurement $s_2$ alone. </p>
<p>There is an extra complication in that the severity of illness measurement is a score derived by categorising 10 continuous measurements of physiology ($x_1,x_2...x_{10}$), and weighting them. This is helpful because it is the standard clinical method in which to report severity, and it handles variables that are not always linearly associated with survival (e.g. very high or very low blood pressure is a bad thing --- you're best off in the middle).</p>
<p>I have been trying to use multiple imputation assuming MAR in Stata 12.</p>
<p>The final model would therefore be $y=\alpha + \beta_1s_2+\beta_2d+e$.</p>
<p>However, I think there are several different imputation strategies and I am not sure which is correct.</p>
<ol>
<li>I use the raw physiology in the imputation model, and then derive the weights afterwards.</li>
<li>I derive the individual weights ($x_1,x_2...x_{10}$) first, and then use these in the imputation model. </li>
<li>I use the aggregate score in the imputation model. </li>
</ol>
<p>I am aware that using a transformation of a variable in the final model after the imputation (e.g. $x^2$ when you only included $x$ in the imputation) biases the effect of $x^2$ to zero. I am not sure if this is also the case with a categorisation and summation? In which case, I assume (2) would be the best option. Otherwise (1) seems best to me. I have already discounted (3) because it seems to lose a great deal of the richness of the data.</p>
<p>Thanks in advance. </p>
| 74,299 |
<p>This is a questions I got for homework in <em>test planning and variance analysis</em>. I apologize in advance for the way my professor phrases things :D
I would love to hear hints or tips on how to approach this.
So here goes:</p>
<blockquote>
<p>A statistician decides to perform the next a-parametric test to examine the $H_0$ of variance Analysis. First he will calculate the test statistic in the following way:
Calculating the ranks of all observations with no affiliation to a group, and calculating F statistics of the anova test on the ranks instead of on the observations.
Then, he will calculate the p-value using a permutation test, since under $H_0$ there is the same chance for all the permutations of ranks when dividing into groups. </p>
<ol>
<li><p>What is the SST calculated on the ranks of observations? Show that the SST does not depend on the observations, only on the number of observations.</p></li>
<li><p>Prove that the p-value of this test will be equal to the p-value that is calculated with permutation on the Kruskal-Wallis test statistic. Clue: Show that the test statistic of Kruskal-Wallis is SSB/(SST/(N-1)) when SSB and SST are calculated on the ranks of observation and N is the total number of observations.</p></li>
</ol>
</blockquote>
<p>This is what I've got for (1). Not sure about it but it makes sense to me.. What I don't get is how to show that SST does not depend on the observations, because obviously we need them in order to calculate the SST.</p>
<p>$k$ - number of groups<br>
$n_i$ - number of observations in group $i$<br>
$\bar{d}$ - mean of ranks of all groups<br>
$\bar{d}_i$ - mean of ranks in group $i$ </p>
<p>$$\operatorname{SSB} = \sum_{i=1}^k n_i(\bar{d}_i-\bar{d})^2$$</p>
<p>$$\operatorname{SSW} = \sum_{i=1}^k \sum_{j=1}^{n_i} (d_{ij}-\bar{d}_i)^2$$</p>
<p>$$\operatorname{SST} = \sum_{i=1}^k \sum_{j=1}^{n_i} (d_{ij}-\bar{d})^2$$</p>
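<p>Regarding (1): because the ranks of $N$ distinct observations are always just a permutation of $1,\dots,N$, the SST computed on ranks is the fixed number $N(N^2-1)/12$, whatever the data were. A quick numerical check in R (assuming no ties):</p>
<pre><code>set.seed(1)
N <- 30
obs <- rnorm(N)                 # any observations at all
d <- rank(obs)
sum((d - mean(d))^2)            # SST on the ranks
N * (N^2 - 1) / 12              # the same number, independent of obs
</code></pre>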
<p>And actually when I think about it, I'm not so sure what is difference between the test that is described in this question and Kruskal Wallis.</p>
| 37,119 |
<p>In this <a href="http://tamino.wordpress.com/2013/04/06/sampling-rate/" rel="nofollow">blog article</a>
an example is given of how one can use the DFT to detect frequencies much higher than the sample rate.
In the comments sections I asked how it was done, since DFT normally requires evenly sampled data. The author of the blog responded concisely with: </p>
<blockquote>
<p>One can still define the DFT as proportional to</p>
<p>$$ X(\nu) = {1 \over N} \sum x_n e^{-i 2 \pi \nu t_n} $$ </p>
</blockquote>
<p>How would one do that? I am asking for advice to do exactly that.
I can replicate the example if I use the data compensated method of Ferraz-Mello <a href="http://tamino.wordpress.com/2013/04/06/sampling-rate/" rel="nofollow">1</a> but the blog author says that normally DFT will work using that hint.</p>
<p><a href="http://tamino.wordpress.com/2013/04/06/sampling-rate/" rel="nofollow">1</a> <a href="http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1981AJ.....86..619F&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf" rel="nofollow">http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1981AJ.....86..619F&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf</a></p>
| 74,300 |
<p>I feel like every question I've asked on CrossValidated has led back to looking at the number of observations I have per variable. I understand that there are many rules of thumb out there depending on your field, your expected effect size, etc. These come back to recognizing that too few observations can lead to overfitting.</p>
<p>I understand how overfitting can be a problem when looking at a regression model with a single predictor. Having only two observations leads to a perfect answer, whereas using least squares to solve an overdetermined model leads to generalization. However, I have trouble making sense of how overfitting can still be a problem if you have 20 predictors and, say, 31 observations. It seems like you have mitigated the problem with 10 extra observations, but I suspect I am missing something in how least squares solves overdetermined systems.</p>
<p>What I assume follows if it is true that a system is overfitted is that the relationships between predictors explained by the betas also do not hold.</p>
<p>Finally, if overfitting is a problem by having not enough observations, can this be solved by using forward stepwise regression? Or is there a good possibility of missing significant predictors due to eventually reaching a point where there are too many predictors and not enough observations?</p>
<p>To recap:</p>
<ol>
<li><p>Can someone explain why the case where the number of predictors is $N-1$ where $N$ is decently large (say, $N > 10$) can still overfit? Why is it not that you need a minimum number of observations <strong>regardless</strong> of the number of predictors?</p></li>
<li><p>Can overfitting be solved by using forward stepwise regression?</p></li>
</ol>
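<p>A small simulation may make point 1 concrete: with 20 pure-noise predictors and 31 observations, ordinary least squares still "explains" a large share of the in-sample variance, yet predicts new data no better than chance (a hedged sketch):</p>
<pre><code>set.seed(1)
n <- 31; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)                               # unrelated to X by construction
fit <- lm(y ~ X)
summary(fit)$r.squared                      # often around 2/3 here, despite no true signal
Xnew <- matrix(rnorm(n * p), n, p)          # fresh data from the same (null) model
ynew <- rnorm(n)
cor(ynew, cbind(1, Xnew) %*% coef(fit))^2   # out-of-sample R^2: close to zero
</code></pre>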
| 37,123 |
<h1>Problem</h1>
<p>I want to estimate the probability of a user choosing a car or transit as a transport mode. Following is the output of the simple logistic regression:</p>
<p><img src="http://i.stack.imgur.com/9qQZC.jpg" alt="enter image description here"></p>
<p>I want to estimate the probability of choosing car (auto) if INVT for auto = 3, INVT for transit = 7, OVT for auto = 1, OVT for transit = 2.5, OPC for auto =2 and transit fare = 3.</p>
<h1>What I tried</h1>
<p>For the probability of choosing car when the user owns a car and work in suburbs, I am using INVT for auto, OVT for auto, OPC for auto and transit fare along with AO as 1 and DW as 0.</p>
<blockquote>
<p>The probability = Pr = exp(1.45 - (0.00897*3) -(0.0308*1) -
(0.0115*2) -(0.00708*3) +(0.77 * 1) - (0.561*0))/(1+exp(1.45 -
(0.00897*3) -(0.0308*1) - (0.0115*2) -(0.00708*3) +(0.77 * 1) -
(0.561*0)))</p>
<p>Pr = 0.893</p>
</blockquote>
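<p>As a quick arithmetic check, the same number can be obtained with R's inverse logit (this only verifies the computation, not whether the transit attributes should also enter):</p>
<pre><code>eta <- 1.45 - 0.00897*3 - 0.0308*1 - 0.0115*2 - 0.00708*3 + 0.77*1 - 0.561*0
plogis(eta)   # about 0.89, matching the hand calculation above
</code></pre>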
<h1>Question</h1>
<p>Is this the right approach to find this probability? Or should I also include the INVT, OVT and OPC for transit as well?</p>
| 74,301 |
<p>Context: I am working on a decision tree classifier, trying to classify businesses as to whether they are likely to have an event occur (default) in the next 90 days.</p>
<p>One input I get is whether, and when, the business is scheduled to have a related event (let's call it "warning"). A business may have no warning scheduled, or it may have a warning scheduled some number of days in the future (e.g., we learn today that a warning has been scheduled for 15 days hence). If a warning is scheduled further out, we may presume that the business has more time to cope and fix it, so it is "worse" to have 15 days till warning, than it is to have 85 days till warning.</p>
<p>I'm looking for a way I can massage/transform this data into a single, real-valued feature for use in a decision tree, that 1. preserves the natural ordering that intuitively exists here, and 2. won't make for too ugly of statistics?</p>
<p>Possibilities:</p>
<ol>
<li>Just use the number of days until scheduled warning as a feature. Pros: preserves order (except for no warning) and natural interpretability. Cons: when "no warning" exists, "zero" is a particularly poor value to use, since it violates the ordering.</li>
<li>Use "+infinity" or some arbitrarily large number of days when "no warning" exists, but otherwise use number of days. Pros: preserves order and interpretability (except for no warning). Cons: most values are going to be in the 1-90 range, so choosing a representative value for "no warning" skews the data.</li>
<li>Use "zero" for no warning, and transform the number of days till warning with the inverse. Pros: smooth, well-ordered over the domain, output range of 0,1, "no warning" is interpreted as "zero" effect. Cons: loses native interpretability of values, unsuitable for other (linear?) models.</li>
<li>Use two separate features, one a binary for warning, one the days till warning. Pros: properly separates the concerns (?). Cons: still doesn't deal with the question of what to put in for missing value in days till warning.</li>
</ol>
<p>If, as I suspect, this is something of a rookie problem, any pointers for standard approaches to this kind of problem are welcome, including "RTFM" responses, though specific sources, technique names, or search terms would be quite welcome.</p>
| 74,302 |
<p>I am working with matrices, of the size N*M, where each cell corresponds to the Pearson's correlation between two time series. I want to threshold each matrix such that it would retain only significant correlations. Now, my problem is that the time series are very large (>1500 data points) and thus rather weak correlations turn out to be (very) significant. I am already considering only correlations whose p val survives correction for multiple comparison (to correct for the number of tests done within each matrix), however this still leaves quite a few weak correlations in each matrix. I prefer not to choose a threshold arbitrarily, so I was wondering if anyone could direct me to established methods for determining statistical thresholds when working with very large time series </p>
<p>Many thanks for your help! </p>
| 74,303 |
<p>Let's assume we have a training set with $y \in \mathbb{R}$. Thus all the data is between $y_{min}$ and $y_{max}$. If we built a decision tree model it cannot return $y_{pred}$ outside the given range (using any combination of input features). Thus decision tree cannot extrapolate in terms of predicted values. Can a neural net regression model extrapolate and return $y_{pred}$ values outside the $y$ range in a training set? Does it depend on the activation function or not?</p>
<p>Below is my attempt to answer this question.</p>
<p>The output neuron of the model is just $\sum \Theta_ia_i$, where $\Theta_i$ - weight of i-th neuron on the previous hidden layer, and $a_i$ - value of activation function of that neuron. If we use logistic function then $a \in (-1;1)$. Thus maximum possible $y_{pred} = \sum \Theta_i$, assuming that all $a$ reach their maximum value around 1. But if we will use linear activation function, which doesn't have restrictions on output values of $a$ ($a \in \mathbb{R}$) the model will return $y_{pred} \in \mathbb{R}$, which can be ouside $y$ range of the training set.</p>
<p>Is my line of reasoning correct or there are some mistakes?</p>
| 570 |
<p>I was wondering what the following OLS scenario would imply: a variable is endogenous (i.e. correlated with the error term) yet is statistically significant.</p>
<p>Alternatively, what if in, once again OLS, the variable is exogenous yet statistically insignificant (although the overall regression is indeed statistically significant - by say an F test).</p>
<p>Thanks!</p>
| 37,130 |
<p>I found several similar questions with no answer. But I'm gonna give it a shot anyway. </p>
<p>I'm trying to find assets (stocks) which are co-integrated. Since the Engle-Granger test may be prone to spurious regression, several articles suggest that you use Johansen's test as well to confirm long-term co-integration. Even though I've not taken any course in time series analysis yet, the Engle-Granger test was not too difficult to understand. Johansen's test, on the other hand, was slightly more challenging, so I'm trying to understand how to interpret the results you're given in Matlab.</p>
<p>So when I run the test for an nx2 vector in Matlab I get a 1x2 matrix as a result with the entries true or false. So the question is when I can reject or not reject the hypothesis.</p>
<pre><code>************************
Results Summary (Test 1)

Data:
Effective sample size: 13
Model: H1
Lags: 0
Statistic: trace
Significance level: 0.05

r  h  stat     cValue    pValue   eigVal
0  1  23.6646  15.4948   0.0031   0.7486
1  1   5.7169   3.8415   0.0170   0.3558
</code></pre>
<p>h = </p>
<pre><code> r0 r1
t1 true true
</code></pre>
<p>pValue = </p>
<pre><code> r0 r1
t1 0.0031234 0.016977
</code></pre>
<p>stat = </p>
<pre><code> r0 r1
t1 23.665 5.7169
</code></pre>
<p>cValue = </p>
<pre><code> r0 r1
t1 15.495 3.8415
</code></pre>
<p>mles = </p>
<pre><code> r0 r1
t1 [1x1 struct] [1x1 struct]
</code></pre>
| 47,705 |
<p>I've been using igraph and statnet to do most of my network analysis, but I am trying to replicate results that I use MPNet (an expansion to <a href="http://sna.unimelb.edu.au/PNet" rel="nofollow">PNet</a>) in order to perform some simulations on a multi-level network. I'm hoping someone has experience with PNet or MPNet and can offer a suggestion.</p>
<p>I've run a few simulations of small networks, and the output for any specific simulation doesn't really lend itself to being visualized easily in igraph and statnet outside of manually inputting the networks and visual attributes of each vertex. This is problematic for me because I want to simulate large-ish (n>20 in each level) networks. </p>
<p><em>However,</em> it does appear that the output is intended to be visualized by the way it is written (see below), but I'm unsure what program it's intended for and cannot find much existing MPNet (or PNet) help online. </p>
<p><strong>My question is this:</strong> is there a way to use MPNet output for a specific simulation to visualize the network that is being simulated without manually entering in each network and visual attributes into some package?
Or in other words, did MPNet designers have a destination in mind with their output?</p>
<hr>
<p>The following picture is a quick MSPaint rendition of this 10 vertex multi-level simulated network (because we all need a little MSPaint in our lives):
<img src="http://i.stack.imgur.com/Lts6N.jpg" alt="enter image description here"></p>
<p>I have four output text files for a specific simulation (sorry in advance for the huge amount of output):</p>
<p>Network A: </p>
<pre><code> *vertices 5
1 "" box ic Blue bc Black
2 "" box ic Blue bc Black
3 "" box ic Blue bc Black
4 "" box ic Blue bc Black
5 "" box ic Blue bc Black
*matrix
0 1 1 1 1
1 0 0 0 0
1 0 0 1 1
1 0 1 0 1
1 0 1 1 0
Density 0.7000
Degree Distribution(s)
Mean degree 1.4000
Stddev 1.9105
Skewness 0.8547
Clustering coefficent(s)
Global 0.8000
</code></pre>
<p>Network B:</p>
<pre><code>*vertices 5
1 "" ellipse ic Red bc Black
2 "" ellipse ic Red bc Black
3 "" ellipse ic Red bc Black
4 "" ellipse ic Red bc Black
5 "" ellipse ic Red bc Black
*matrix
0 0 1 0 0
0 0 1 0 1
1 1 0 1 1
0 0 1 0 0
0 1 1 0 0
Density 0.5000
Degree Distribution(s)
Mean degree 1.0000
Stddev 1.6583
Skewness 1.2718
Clustering coefficent(s)
Global 0.3750
</code></pre>
<p>Network X:</p>
<pre><code>*vertices 10 5
1 "" box ic Blue bc Black
2 "" box ic Blue bc Black
3 "" box ic Blue bc Black
4 "" box ic Blue bc Black
5 "" box ic Blue bc Black
6 "" ellipse ic Red bc Black
7 "" ellipse ic Red bc Black
8 "" ellipse ic Red bc Black
9 "" ellipse ic Red bc Black
10 "" ellipse ic Red bc Black
*edges
1 6
1 9
3 7
3 8
3 9
3 10
4 6
4 7
4 8
5 8
5 9
Density 0.4400
Degree Distribution(s)
Mean degree A 2.2000
Stddev A 1.4832
Skewness A -0.2648
Mean degree B 2.2000
Stddev B 0.8367
Skewness B -0.2459
Clustering coefficent
Global 0.2667
</code></pre>
<p>Network M:</p>
<pre><code>*vertices 10
1 "" box ic Blue bc Black
2 "" box ic Blue bc Black
3 "" box ic Blue bc Black
4 "" box ic Blue bc Black
5 "" box ic Blue bc Black
6 "" ellipse ic Red bc Black
7 "" ellipse ic Red bc Black
8 "" ellipse ic Red bc Black
9 "" ellipse ic Red bc Black
10 "" ellipse ic Red bc Black
*edges
1 2 1 c Blue
1 3 1 c Blue
1 4 1 c Blue
1 5 1 c Blue
3 4 1 c Blue
3 5 1 c Blue
4 5 1 c Blue
6 8 1 c Red
7 8 1 c Red
7 10 1 c Red
8 9 1 c Red
8 10 1 c Red
1 6 1 c Black
1 9 1 c Black
3 7 1 c Black
3 8 1 c Black
3 9 1 c Black
3 10 1 c Black
4 6 1 c Black
4 7 1 c Black
4 8 1 c Black
5 8 1 c Black
5 9 1 c Black
</code></pre>
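<p>Not certain, but the vertex/edge listings above look like Pajek's .net format, which both Pajek itself and igraph can read; a hedged R sketch (the file name is hypothetical, standing for one of the text files above saved to disk):</p>
<pre><code>library(igraph)
g <- read_graph("network_M.net", format = "pajek")   # read.graph() in older igraph versions
plot(g)
</code></pre>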
<p>*Also, if someone has enough reputation and feels the need to add the tags PNet and/or MPNet to this post so that it could be more specific, I would appreciate it.</p>
| 74,304 |
<p>I have a set of data that represents periodic readings. The data shows an upward trend but I need to test for a statistical difference from zero. I believe I should use a two-tailed t test, but what should I use for the second set of data? Zeros, or the starting value?</p>
<pre><code>0.2245
0.243
0.2312
0.1795
0.1923
0.17
0.2025
0.2059
0.2394
0.205
0.2201
0.2261
0.1817
0.2143
0.2126
0.237
0.1984
0.228
0.2292
0.2236
0.2096
0.2258
0.2155
</code></pre>
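<p>For what it's worth, a one-sample test of the mean against a fixed value needs no second data set; in R the two-sided test against zero would look like the sketch below (whether zero or the starting value is the right reference is a separate question):</p>
<pre><code>x <- c(0.2245, 0.243, 0.2312, 0.1795, 0.1923, 0.17, 0.2025, 0.2059,
       0.2394, 0.205, 0.2201, 0.2261, 0.1817, 0.2143, 0.2126, 0.237,
       0.1984, 0.228, 0.2292, 0.2236, 0.2096, 0.2258, 0.2155)
t.test(x, mu = 0)   # mu is the hypothesised mean; no second sample required
</code></pre>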
| 48,875 |
<p>is it possible for an algorithm A to have a better precision but worse recall (or better recall but worse precision) than another algorithm B?</p>
<p>Although I know that precision and recall are different things, it seems to me that if algorithm A has a better precision (or recall) than algorithm B then algorithm A will also have a better recall (or precision) than B.</p>
<p>Thanks
Ahmet</p>
| 46,203 |
<p>Suppose I have some kind of distribution which gives the probability of observing a given value (assume that this distribution is <strong>not</strong> normal).</p>
<p>Now, I have a certain observation and I want to calculate how likely it is to observe a value that is equally or more extreme under this distribution. So, I am thinking of doing this empirically by turning my observation into a quantile and calculating the probability empirically under the distribution of: quantile_distribution < quantile_observation and quantile_distribution > abs(1 - quantile_observation)</p>
<p>I'm not sure whether that is a valid way to do it, but for the second part I'm going to assume it is.</p>
<p>Now, here's where it gets a bit more complicated:</p>
<p>My observation has some sort of measurement error associated with it. I can model this error with a probability distribution (this time it is roughly normal). Given this spread, how would I calculate a p-value as above (or would it be a distribution of p-values, which I have never seen)?</p>
<p>I've been hacking something together where I take the p-value of observing the mean and the 95% CI values under the first distribution and then just take the maximum and report that, but obviously there has to be a better way to do it</p>
| 37,136 |
<p>How can I find the standard deviation of a categorical distribution, where the elements have non-numerical attributes (e.g. colors)?</p>
<p>For example, I have a bag of marbles with $n$ colors. There's an infinite number of marbles in the bag, and are biased towards a certain color with a probability of $\frac{x}n, x>1$. From the bag, I pick $m$ marbles and get the probability distribution with respect to their colors. </p>
<p>From this distribution, by picking $q$ colors with the highest probabilities, I want to convince others with a confidence interval $k$% (e.g. 95%), that one of the colors I chose is the one that the marbles are biased towards.</p>
<p>In this scenario, what are some of the analysis techniques that can be used to find $q$, given $x,n,k,$ and $m$?</p>
| 74,305 |
<p>I have recently interviewed for a statistical analysis job and was asked a question about why linear least squares regression fails when the data is heteroskedastic. The correct answer to this question, according to the interviewers, is that heteroskedastic data means that the equation of the regression line produced by least squares regression is an unbiased estimator of the true relationship, but that it is NOT efficient, essentially because the part of the dataset where the variance is smaller than average is effectively underweighted. </p>
<p>My question is <strong>which textbooks could I use to find more detail about this topic, and other similar topics at this level</strong>, e.g. </p>
<ul>
<li>the relationship between data being normally distributed and least-squares linear regression being the maximum likelihood estimator for the straight line fit </li>
</ul>
<p>[I have a degree in mathematics but with minimal statistics background & understand general probability concepts such as the central limit theorem, random variables, etc, and I know high school level statistics up to the British A-level S4 statistics, however I lack a certain level of statistics knowledge and don't know what I don't know or where to find out more... ]</p>
| 37,138 |
<p>I am stuck in understanding how the output of the convolution operation is obtained. Can somebody please show the steps? The question is: an image is processed by applying a 3*3 mean filter. What is the impulse response?</p>
<p>The formula is $g(m,n) = \sum_{\alpha = 0}^{M-1} \sum_{\beta=0}^{N-1} f(\alpha,\beta)h(m-\alpha,n-\beta)$</p>
<p>The image function $f$ =</p>
<pre><code> 0 0 0 0 0
0 1 1 1 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
h = 1 1 1
1 1 1
1 1 1
</code></pre>
<p>The result of the operation (obtained in Matlab) is g(m,n) = </p>
<pre><code>0 0 0 0 0 0 0
0 1 2 3 2 1 0
0 2 4 6 4 2 0
0 3 6 9 6 3 0
0 2 4 6 4 2 0
0 1 2 3 2 1 0
0 0 0 0 0 0 0
</code></pre>
<p>I cannot understand the steps how to do it in paper, can somebody please show the few steps how to go about it ?</p>
<p>Based on the reply and the formula, I have worked out one example, but the answers don't match with the formula and the other technique. For convolution at pixel f(0,0) the answer = 0. But I am getting 0 using the formula and 1 using the drag method. The cell to which this result will be mapped is g(0,0) and I have put the answer there. </p>
<p>For convolution of pixel element at f(1,1) = 1 also, when I evaluated I am getting answer 2 instead of 4. The result should go to g(2,2). But how do I get values for g(5,0) since the original image f has no fifth row!! How to calculate values for fifth and sixth rows?
Please let me know what is wrong in my understanding. Is my coordinate system incorrect? I shall really appreciate it.
<img src="http://i.stack.imgur.com/FKado.jpg" alt="enter image description here"> </p>
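<p>To make the indexing concrete, here is a small hedged R sketch that evaluates the double sum above by brute force over the full 7 x 7 output (indices start at 1 in R rather than 0, but the arithmetic is the same):</p>
<pre><code>f <- matrix(0, 5, 5); f[2:4, 2:4] <- 1        # the image above
h <- matrix(1, 3, 3)                          # the 3 x 3 filter
full_conv2 <- function(f, h) {
  M <- nrow(f) + nrow(h) - 1; N <- ncol(f) + ncol(h) - 1
  g <- matrix(0, M, N)
  for (m in 1:M) for (n in 1:N)
    for (a in 1:nrow(f)) for (b in 1:ncol(f)) {
      i <- m - a + 1; j <- n - b + 1          # index into h, i.e. h(m - alpha, n - beta)
      if (i >= 1 && i <= nrow(h) && j >= 1 && j <= ncol(h))
        g[m, n] <- g[m, n] + f[a, b] * h[i, j]
    }
  g
}
full_conv2(f, h)                              # reproduces the 7 x 7 result above
</code></pre>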
| 74,306 |
<p>Imagine I have a time series for an animal population and a time series for a climatic variable during the same time period and at the same location. Unfortunately the data are observational (i.e., no experimental manipulation). Now imagine there is a decreasing linear trend in both variables (and no seasonality).</p>
<p>I know before hand that these variables will be correlated because they share a time trend (e.g.,<a href="http://stats.stackexchange.com/a/8037/38125">http://stats.stackexchange.com/a/8037/38125</a>). But is there anything more I can infer from this data? </p>
<p>I am looking for a rather low-level conceptual explanation as to whether this is possible and why.</p>
<p>I have 2 thoughts on how this might be accomplished, but haven't figured out on my own whether either would be valid (a rough sketch of both is below):
1) Regress population against both time and the climate variable, or
2) "Detrend" both 'population' and 'climate variable' and regress their residuals against each other</p>
<p>I can provide more detail if necessary but wanted to keep this original post brief.
Thanks for your help! </p>
| 37,141 |
<p>I was wondering if anyone could help me on how to interpret the coefficient from an analysis I have carried out in R (<code>survival</code> package). </p>
<p>The data is right censored, the dependent variable (time to event) was modelled under the gaussian distribution and the independent variable is a categorical variable (3 categories). </p>
<p>my output is as follows:</p>
<pre><code>coeff se p-value
-0.107 0.048 2.7x10-2
</code></pre>
<p>Can anyone tell me how to interpret this coefficient under the following model?</p>
| 37,142 |
<p>I need some help
My project aims to develop algorithms for spatial temporal analysis of Flickr, Twitter and Foursquare databases to detect any kind of significant changes, named as “Event” in real time. Event can be defined as any anomalous user activity, which happens at a time or within a specific period of time. For this, different clustering methods should be implemented and the best fit has to be selected. The detected events will be visualized for further exploration.</p>
<p>This information will be integrated with some other VGI sources to provide a series of Volunteered Geographic services.</p>
<p>Could you please suggest me which clustering algorithm is good for this project? And also please suggest me some books and study material...</p>
| 49,929 |
<p>I obtained data at typical time points under two conditions, in this shape:</p>
<pre><code>weight condition time
0.1307857 Transf 1
0.1926429 Transf 2
0.2734286 Transf 3
0.4403571 Transf 4
0.6037143 Transf 5
0.9036429 Transf 6
1.5454286 Transf 7
0.1370714 Unt 1
0.2005000 Unt 2
0.2973571 Unt 3
0.4592143 Unt 4
0.8336429 Unt 5
1.3099286 Unt 6
2.1470000 Unt 7
</code></pre>
<p>I am using the package <code>nlme</code> in <code>R</code> to test the difference between the two conditions in time</p>
<pre><code>fm1 <- lme(weight ~ condition, random = ~ 1 | time, data = data_new)
anova.lme(fm1, adjustSigma = F)
</code></pre>
<p>I got this:</p>
<pre><code>numDF denDF F-value p-value
(Intercept) 1 6 8.421439 0.0273
condition 1 6 4.208786 0.0861
</code></pre>
<p>Questions:</p>
<ol>
<li><p>Am I doing things correctly with this procedure to test the differences between the two conditions in time?</p></li>
<li><p>If yes, which is my p-value, the intercept one or the condition?</p></li>
<li><p>If this is not the proper test, what are the alternatives?</p></li>
</ol>
| 74,307 |
<p>I have four samples $x_1, x_2$ and $y_1, y_2$ with $n_{x1} \neq n_{x2} \neq n_{y1} \neq n_{y2} $. I calculated, using a Wilcoxon rank sum test, that $x_1$ is significantly different to $x_2$ and $y_1$ significantly different to $y_2$. </p>
<p>However, I would like to test whether the difference $x_1 - x_2$ differs significantly to $y_1 - y_2$ but I have no idea how do to that given the unequal sample sizes.</p>
<p>Any ideas or suggestions would be really appreciated. </p>
| 74,308 |
<p>I perform a permutation test multiple times on different datasets, each time I am only concerned about significant $p$ values. To reduce computation time would it be correct to introduce this kind of stopping rule: After a certain number of $N$ permutations to check whether $p$ is greater than a particular value. So, for example if $p>0.1$ after $N$=200 permutations then the lower bound of 95% confidence interval would be greater than $0.05$. Therefore, calculations could be stopped as a true $p$ is not significant. Just want to make sure I am doing it right. Thank you.</p>
| 74,309 |
<p>I have read other posts on conducting Factor analysis (FA) with dichotomous variables and although it appears clear that FA done in the default way is not appropriate, I am still unclear about a few things. </p>
<p>I have 7 variables that are yes/no responses. They are assessing people's functioning (e.g., working yes/no, involvement with the justice system yes/no, etc.). I want to combine these responses to have two variables that represent domains of functioning (e.g., occupational and social). I suspect three of them will be occupational and 4 of them social. I would like to do something analogous to a FA to confirm this.</p>
<p>I only have access to SPSS (and am only familiar with this package) so am looking for advice on how to do this with SPSS. </p>
<p>Do I do a FA with tetrachoric correlations instead of Pearsons (how is this done?)? Or can I do Latent Trait Analysis in SPSS?</p>
<p>I am specifically looking for references with step-by-step instructions. </p>
| 74,310 |
<p>I have a time series data set. I can decompose it and get the trend, but I would like to put confidence ranges around the trend (past), not the forecasted component. The decompose function also doesn't handle NAs very well, so is there another way to define the trend with data that has NAs? I have been trying to use Holt-Winters and ARIMA but neither seems to be able to do this.</p>
| 74,311 |
<p>I have a data set and I have to fit this data set with a stable distribution. The problem is that the stable distributions are known analytically only in the form of the characteristic function (Fourier transform). How can I do this?</p>
| 74,312 |
<p>I have two sets of data. The one is daily temperature data and another is 16-day vegetation observation data (value of Jan 1, Jan 17 etc), throughout a year. But I need daily values of vegetation observation data. How to obtain daily value from 16-day vegetation data?</p>
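<p>One simple option, sketched in R, is to interpolate the 16-day values onto a daily grid (doy_16 and veg_16 are assumed names for the day-of-year and vegetation values of the 16-day product; whether linear interpolation is appropriate for this kind of data is a separate question):</p>
<pre><code>daily <- approx(x = doy_16, y = veg_16, xout = 1:365, rule = 2)$y   # linear interpolation
# spline(doy_16, veg_16, xout = 1:365)$y would give a smoother alternative
</code></pre>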
| 74,313 |
<p>I came across an article where the authors did a Principal Component Analysis on gene expression data, and found out the genes that are most correlated to the 1st principal component, and they used that gene list for further analysis.
Can somebody tell me how to find out entities (genes in this case) that are most correlated to the 1st principal component?</p>
<p>Here's <a href="http://www.biomedcentral.com/1471-2164/10/135" rel="nofollow">the link to the original free article</a>, and this is how they've calculated it:</p>
<blockquote>
<p>Results of the gene set enrichment analysis of the genes most correlated to the 1st principal component. The correlations of the genes with the 1st principal component were transformed to SDs from the mean, then genes with values > 1.5 (positive correlation) or < -1.5 (negative correlation) were selected.</p>
</blockquote>
<p>I can make a toy example of 6 samples and 20 gene matrix and do the PCA the following way but how to proceed next:</p>
<pre><code>rm(list=ls())
set.seed(12345)
my.mat <- matrix(rnorm(120,0,0.5),nrow=6,byrow=TRUE)
rownames(my.mat) <- paste("s",1:6,sep="")
colnames(my.mat) <- paste("g",1:20,sep="")
head(my.mat)
#Ensure that input data is Z-transformed
pca.object <- prcomp(my.mat,center=TRUE,scale.=TRUE)
summary(pca.object)
par(mfrow=c(1,2))
plot(pca.object)
biplot(pca.object)
#The Rotation
pca.object$rotation
</code></pre>
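<p>Continuing the toy example, one reading of the quoted procedure (an interpretation, not necessarily exactly what the paper's authors did) is:</p>
<pre><code>pc1 <- pca.object$x[, 1]                       # sample scores on the 1st PC
gene.cor <- cor(my.mat, pc1)                   # correlation of each gene with PC1
gene.sd  <- (gene.cor - mean(gene.cor)) / sd(gene.cor)   # "SDs from the mean"
colnames(my.mat)[gene.sd >  1.5]               # positively correlated genes
colnames(my.mat)[gene.sd < -1.5]               # negatively correlated genes
</code></pre>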
| 74,314 |
<p>The only thing I understood about fuzzy neurons is that the neuron's activation function is replaced with some operation used in fuzzy. Other than that, I didn't understand much from science paper I found. Also, it is hard to just Google it because of it's name, which results in listing some fuzzy or ANN or hybrid systems, which is not what I'm looking for.</p>
<p>Is there anything more to fuzzy neurons than I have described above?</p>
<p>Also, I'm curious, <strong>when and how to use it</strong>?</p>
<p>And is there <strong>any other change in neural network structure or calculation that needs implementing</strong> if I'm going to use fuzzy neuron. </p>
<p>If someone has a good material on this subject, please link. I really have had a hard time finding anything on this subject. </p>
<p>EDIT: Title of the paper I've mentioned: Combining neural networks and fuzzy logic for applications in char. recognition. By Anne Magaly de Paula Canuto, University of Kent. I can't find the link to it anymore; I found it last year. But this is the title. The fuzzy neuron is mentioned in section 1.4.</p>
| 74,315 |
<p>As title, I have a correlation matrix available with cor() function </p>
<pre><code>corMatrix <- cor(mydata)
# corMatrix is a 98 by 98 matrix storing positive and negative correlation values
</code></pre>
<p>I want to assess which correlation values are significant among all the correlations. A permutation test has been suggested for this.</p>
<p>How do I perform a permutation test on the correlation matrix?
How do I obtain p-values for each estimated correlation value?
Any package suggestions? An answer with working code would be much appreciated.</p>
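<p>A minimal permutation-test sketch (not from any particular package; the psych package's corr.test is one alternative for ordinary-test p-values). It permutes one variable to break the association, which is one common but not the only way to set this up:</p>
<pre><code>perm_cor_p <- function(x, y, B = 999) {
  obs  <- cor(x, y)
  perm <- replicate(B, cor(x, sample(y)))
  (sum(abs(perm) >= abs(obs)) + 1) / (B + 1)   # two-sided permutation p-value
}
p <- ncol(mydata)
pmat <- matrix(NA, p, p)
for (i in 1:(p - 1)) for (j in (i + 1):p)
  pmat[i, j] <- pmat[j, i] <- perm_cor_p(mydata[, i], mydata[, j])
# pmat holds a p-value for every off-diagonal entry of corMatrix;
# some multiplicity adjustment (e.g. p.adjust) is still needed afterwards
</code></pre>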
| 74,316 |
<p>One particular assumption is that $e_{i} \sim N(0,\sigma^{2})$ but I was wondering why we use a QQ plot to test this assumption since we have $n$ iid distributions to look at. When we use a QQ plot, aren't we looking for the normality of all the residuals as opposed to one particular $e_{i}$? I am confused about this assumption. Does the QQ plot somehow imply that each $e_{i}$ is normally distributed?</p>
| 74,317 |
<p>Reposted from Math.SE:</p>
<p>Continuing from <a href="https://math.stackexchange.com/questions/593512/what-is-a-function-satisfying-these-constraints">this question</a>. Given two random variables $X$ and $Y$ where $X \sim \operatorname{Beta}(a, b)$ and $Y \sim \operatorname{Beta}(c, d)$, I'm looking for a random variable $Z$ with a distribution supported on $[0, 1]$ that satisfies the following constraints:</p>
<ol>
<li>The pdf of $Z$ has reflection symmetry: $f(x;a, b, c, d)=f(1-x;d, c, b, a)$</li>
<li>If $\mathrm{E}[X] > \frac{1}{2}$, then $\mathrm{E}[Z] > \mathrm{E}[Y]$</li>
<li>If $\mathrm{E}[X] = \frac{1}{2}$, then $\mathrm{E}[Z] = \mathrm{E}[Y]$</li>
<li>If $\mathrm{E}[X] < \frac{1}{2}$, then $\mathrm{E}[Z] < \mathrm{E}[Y]$</li>
<li>Increasing the expectation of $X$ or $Y$ must never decrease the expectation of $Z$.</li>
</ol>
<p>Is an answer to this question even possible? The answer to my first question does not seem to solve the problem (the expectations are OK, but the support constraint is violated).</p>
| 74,318 |
<p>Here is the data I have:</p>
<ul>
<li>Response variable : It contains proportions and it takes discrete values 0, 0.2, 0.4, 0.6, 0.8, 1. But there are 109 possible discrete values</li>
<li>Predictor variable.1: Discrete and ordinal. It contains these values 10, 20, 30, 40.</li>
<li>Predictor variable.2: Discrete and non-ordinal. It contains these values 'a', 'b', 'c'.</li>
</ul>
<p>Neither normality (checked with Kolmogorov-Smirnov and by looking at a qqplot) nor homoscedasticity (checked with Fligner and by looking at a plot) are respected.</p>
<p>Which model should I use in order to infer whether any of the two predictor variable influence my response variable?</p>
<p>What about a logistic regression? Would it work?</p>
| 74,319 |
<p>I am working on a machine learning experiment comparing the use of multiple different neural network classifiers by applying them on a large number of datasets, using stratified 10-fold cross-validation. I measure the performance as the average of the errors on the validation set (sometimes referred to as test set) of the 10-fold cross-validation procedure.</p>
<p>My question is, would it be ok to use this same validation set to do an early stopping of the training procedure? This early stopping would be performed by applying the trained model after each epoch to the validation set and measuring the performance, and if it declines for a number of successive learning epochs, the learning would be halted and we would take the epoch that produced the last good performance. This would be applied to all the different techniques, and across all the different datasets.</p>
<p>Is this ok? Or is it statistically inaccurate?</p>
| 6,992 |
<p>I have developed two new sorting techniques using the C language. I need to compare the performance of the two sorting techniques, to see which one is better than the other. To do this, I used different input sizes, from n = 500 to n = 2500.
For each data set that needs to be sorted, I run it 10 times and get the mean, standard deviation, and coefficient of variation. An example of the results is shown below:</p>
<pre><code>n min max mean SD CV
500 24.07 24.52 21.28 0.11 0.47
1000 52.41 52.83 52.67 0.13 0.25
</code></pre>
<p>Just to ask, can I run each experiment only 10 times or should I run more? If I run them 50 times, the results are very close and I obtain almost the same CV. The above results were obtained when I ran the first sorting technique.</p>
<p>Can I say that 10 times is OK to get the results?</p>
<p>I don't know how many samples I should use to test a particular data set. Can anyone suggest any good books on this kind of analysis?</p>
| 37,157 |
<p>I am using a standard version of logistic regression to fit my input variables to binary output variables.</p>
<p>However in my problem, the negative outputs (0s) far outnumber the positive outputs (1s). The ratio is 20:1. So when I train a classifier, it seems that even features that strongly suggest the possibility of a positive output still have very low (highly negative) values for their corresponding parameters. It seems to me that this happens because there are just too many negative examples pulling the parameters in their direction.</p>
<p>So I am wondering if I can add weights (say using 20 instead of 1) for the positive examples. Is this likely to benefit at all? And if so, how should I add the weights (in the equations below).</p>
<p>The cost function looks like the following:
$$J = (-1 / m) \cdot\sum_{i=1}^{m} \left[ y\cdot\log(h(x\cdot\theta)) + (1-y)\cdot\log(1 - h(x\cdot\theta)) \right]$$</p>
<p>The gradient of this cost function (wrt $\theta$) is:</p>
<p>$$\mathrm{grad} = ((h(x\cdot\theta) - y)' \cdot X)'$$</p>
<p>Here $m$ = number of test cases, $x$ = feature matrix, $y$ = output vector, $h$=sigmoid function, $\theta$ = parameters we are trying to learn.</p>
<p>Finally I run the gradient descent to find the lowest $J$ possible. The implementation seems to run correctly.</p>
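<p>One common modification (a sketch, not necessarily the best remedy for class imbalance) is to multiply each observation's contribution to the cost and gradient by a class weight, e.g. w = 20 for positives and 1 for negatives; a hedged R sketch:</p>
<pre><code># x: n x p feature matrix (including the intercept column), y in {0,1}, theta of length p
weighted_cost <- function(theta, x, y, w_pos = 20) {
  h <- plogis(x %*% theta)
  w <- ifelse(y == 1, w_pos, 1)
  -mean(w * (y * log(h) + (1 - y) * log(1 - h)))
}
weighted_grad <- function(theta, x, y, w_pos = 20) {
  h <- plogis(x %*% theta)
  w <- ifelse(y == 1, w_pos, 1)
  t(x) %*% (w * (h - y)) / nrow(x)
}
# R's glm(y ~ ., family = binomial, weights = w) applies the same kind of case weights
</code></pre>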
| 37,158 |
<ol>
<li><p>If the items to be "summed" or combined to create an overall index are collectively the underlying construct (i.e. I am trying to measure compliance to an intervention which has different components), wouldn't combining all the components for IRT violate the assumption of unidimensionality since they represent the intervention itself? If so, do you have a reference for this?</p></li>
<li><p>Are there any good references that say if Likert scores are summed into a scale and the Cronbach's alpha is relatively high (~0.8), this is appropriate to use as an index?</p></li>
<li><p>Are there any other tests of validity I should do other than Cronbach's alpha if all I have done is sum the responses?</p></li>
</ol>
<p>Thanks in advance, any advice is appreciated!</p>
| 74,320 |
<p>My memory is fuzzy on the advantages and disadvantages of various methods for detrending time-series data. I'm looking for a succinct summary of why and when one should or should not use the following:</p>
<ul>
<li>Differenced data</li>
<li>Log-differenced data</li>
<li>Error term, after regressing on <em>only</em> a linear or polynomial time series (e.g., 0,1,2,3,...,t)</li>
</ul>
| 74,321 |
<p>What is the best way of going about dealing with few instances in support vector regression, e.g. only approximately 40? Also - is there an optimal way of dealing with outliers in this case of few instances?</p>
| 37,160 |
<p>I have two models - one is including a categorical covariate as a fixed effect, the other includes it as a random effect:</p>
<pre><code>require(nlme)
set.seed(123)
n <- 100
k <- 5
cat <- as.factor(rep(1:k, n))
cat_i <- 1:k # intercept per category
x <- rep(1:n, each = k)
sigma <- 0.2
alpha <- 0.001
y <- cat_i[cat] + alpha * x + rnorm(n*k, 0, sigma)
plot(x, y)
m2 <- lm(y ~ cat + x)
summary(m2)
m3 <- lme(y ~ x, random = ~ 1|cat, na.action = na.omit)
summary(m3)
</code></pre>
<p>As you can see, both models <code>m2</code> and <code>m3</code> produce exactly the same coefficient estimate for x (including SE). Also the residual standard error is the same. The same result is produced when I simulate some missing data:</p>
<pre><code># simulate missing data
y[c(1:(n/2), (n*k-n/2):(n*k))] <- NA
m2 <- lm(y ~ cat + x)
summary(m2)
m3 <- lme(y ~ x, random = ~ 1|cat, na.action = na.omit)
summary(m3)
</code></pre>
<p>So can we say in general that adding effect as random will have the same impact on the other coefficients and the overall inference as adding it as fixed? If not, can you please provide a simple example (or change the provided one) when this fails?</p>
| 74,322 |
<p>I have always been taught that random effects only influence the variance (error), and that fixed effects only influence the mean. But I have found an example where random effects also influence the mean - the coefficient estimate:</p>
<pre><code>require(nlme)
set.seed(128)
n <- 100
k <- 5
cat <- as.factor(rep(1:k, each = n))
cat_i <- 1:k # intercept per category
x <- rep(1:n, k)
sigma <- 0.2
alpha <- 0.001
y <- cat_i[cat] + alpha * x + rnorm(n*k, 0, sigma)
plot(x, y)
# simulate missing data
y[c(1:(n/2), (n*k-n/2):(n*k))] <- NA
m1 <- lm(y ~ x)
summary(m1)
m2 <- lm(y ~ cat + x)
summary(m2)
m3 <- lme(y ~ x, random = ~ 1|cat, na.action = na.omit)
summary(m3)
</code></pre>
<p>You can see that the estimated coefficient for <code>x</code> from model <code>m1</code> is -0.013780, while from model <code>m3</code> it is 0.0011713 - both significantly different from zero.</p>
<p>Note that when I remove the line simulating missing data, the results are the same (it is full matrix).</p>
<p>Why is that?</p>
<p>PS: please note I am not a professional statistician, so if you are about to respond with a lot of math then please make also some simple summary for dummies :-)</p>
| 74,323 |
<p>I'm analyzing users' in-game data in order to model whether they're going to be paid users or not.</p>
<p>Here's my model:</p>
<pre><code>Logistic Regression Model
lrm(formula = becomePaid ~ x1 + x2 +
x3 + x4 + x5 + x6, data = sn, x = TRUE,
y = TRUE)
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 1e+05 LR chi2 1488.63 R2 0.147 C 0.774
0 99065 d.f. 6 g 1.141 Dxy 0.547
1 935 Pr(> chi2) <0.0001 gr 3.130 gamma 0.586
max |deriv| 8e-09 gp 0.011 tau-a 0.010
Brier 0.009
Coef S.E. Wald Z Pr(>|Z|)
Intercept -6.7910 0.0938 -72.36 <0.0001
x1 0.0756 0.0193 3.92 <0.0001
x2 0.0698 0.0091 7.64 <0.0001
x3 0.0020 0.0002 11.05 <0.0001
x4 0.0172 0.0057 3.03 0.0024
x5 0.0304 0.0045 6.82 <0.0001
x6 -0.0132 0.0042 -3.17 0.0015
</code></pre>
<p>And with my model, I created a couple of use cases such as:</p>
<pre><code> test1 test2
x1 8 9
x2 10 10
x3 250 250
x4 6 6
x5 2 2
x6 0 1
</code></pre>
<p>Then the probability that user test1 turns out to be a paid user is 0.07% and it is 0.84% for test2.</p>
<p>However, I want to calculate cumulative probabilities, such as for users whose x1 values are greater than 8, whose x2 values are between 10 and 20, and so on.</p>
<p>Is there any way to calculate this ?</p>
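<p>As a rough illustration (not necessarily what you are after), one way to get a number for "users with x1 > 8 and x2 between 10 and 20" is to average the model's fitted probabilities over the observed users in that region; here <code>fit</code> stands for the stored <code>lrm</code> object, which is not assigned a name in the output above:</p>
<pre><code>sub <- subset(sn, x1 > 8 & x2 >= 10 & x2 <= 20)   # region of interest
p <- plogis(predict(fit, newdata = sub))          # linear predictor -> probability
mean(p)                                           # average predicted probability in that region
</code></pre>
<p>Note that this is a model-based average over the observed covariate distribution in that region, not a single "cumulative probability" in the distributional sense.</p>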
<p>Thanks ! </p>
| 74,324 |
<p>I can think of different “types” of multiple testing when using linear models for example:</p>
<ol>
<li>Multiple inferences because we have several dependent variables</li>
<li>Multiple inferences because we have several independent variables</li>
<li>Looking at the data without making any test, then running a test on only the comparisons that look like they might yield a significant p-value.</li>
<li>Running multiple different tests on the same data. (try a LM, if it is not significant, try a GLM, if it is still not significant try a beta regression, etc.)</li>
</ol>
<p><a href="https://en.wikipedia.org/wiki/Multiple_comparisons" rel="nofollow">Wikipedia</a> says:</p>
<blockquote>
<p>[...] multiple testing problem occurs when one considers a set of statistical inferences simultaneously or <em>infers a subset of parameters selected based on the observed values</em>.</p>
</blockquote>
<p>Does the first part of what Wikipedia says encompass my first two points, and is the part that I put in italics (after the "or") equivalent to my third point? Is it correct that my point 4 has nothing to do with what we call multiple testing?</p>
<p>If my question is too blurry, I might rephrase it this way:</p>
<p><strong>When does an issue of multiple testing occur ? How would you categorize (if needed) the possible events of multiple testing ?</strong></p>
| 37,166 |
<p>I need to prove that $X$ follows a distribution $F$ with probability $1-p$ and a distribution $G$ with probability $p$ if, and only if, its distribution function is:</p>
<p>$(1-p)F + pG$</p>
<p>Can anyone give me some hints about how to prove this?</p>
<p><strong>EDIT2</strong>: this is my <em>second</em> take based on comments:</p>
<p>$$F_p(x)=\Pr(X\le x)=\int_{-\infty}^x dF_p(u)=(1-p)\int_{-\infty}^x dF(u)+p \int_{-\infty}^x dG(u) = (1-p)F(x)+pG(x)$$</p>
<p>but I am not sure if this is a valid proof</p>
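<p>For comparison, a sketch of the forward direction using an explicit latent indicator (here $A$ denotes the event that $X$ is drawn from $G$, which is one way to formalise "follows $G$ with probability $p$"):</p>
<p>$$F_X(x)=\Pr(X\le x)=\Pr(X\le x\mid A^c)\,\Pr(A^c)+\Pr(X\le x\mid A)\,\Pr(A)=(1-p)F(x)+pG(x).$$</p>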
| 74,325 |
<p>In statistics people gather a lot of data (ie heights of people or gene expression levels) to get some insight. Then in order to perform statistical analyses they try to fit their data to a theoretical distribution (ie Normal Distribution) by computing some parameters.</p>
<p>How do we know that our data follows such a distribution? If we were able to measure all objects in a population and draw a distribution we could probably get something different, a different shape than the theoretical distribution we thought of. Aren’t our calculations wrong then if we use a theoretical distribution?</p>
<p>Please correct me if I am wrong and tell me what you think of this. I think this is a very basic concept in statistics and I have to clarify it.</p>
| 74,326 |
<p>I have a 5 x 2 design where one of the levels of the first factor is a control and all others are experimental conditions. I'm interested in the interaction between the two factors, and especially in whether the interaction is present when each of the 4 experimental conditions is paired with the control condition.</p>
<p>I conduct 4 separate 2 x 2 ANOVAs where I pair each experimental condition with the control condition in the first factor. This seems to call for p-value adjustment to avoid the multiple-testing problem but what do I have to adjust? Are all p-values adjusted (the two main effects and the interaction)? Or do I only adjust the interaction p-values? Or separately for the main effects?</p>
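<p>Whichever set of p-values turns out to need adjusting, the mechanics in R are simple (the four interaction p-values below are hypothetical placeholders):</p>
<pre><code>p_interaction <- c(0.012, 0.030, 0.041, 0.20)   # interaction p-values from the 4 ANOVAs
p.adjust(p_interaction, method = "holm")        # or method = "bonferroni", "BH", ...
</code></pre>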
<p>Thanks a lot</p>
| 74,327 |
<p>The company where I work did a survey with a complex sample. Originally, the sample was stratified in 2 stages, but due to some problems we lost one of the stages (so let's consider the sample as stratified on only 1 variable).</p>
<p>Most of the questions are like "What do you think the company can do in respect of X?" followed by a list of options (some questions allow only 1 answer, some more than 1, and others have Likert-type response options).</p>
<p>My question is: How can I analyze these data? It would be nice if there were a test like the chi-square for complex samples (probably one exists, but I don't know it).</p>
<p>Any software is welcome, but I prefer R and SPSS.</p>
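<p>In R, a hedged sketch of what this could look like with the <code>survey</code> package (the column names <code>stratum</code>, <code>wt</code>, <code>q1</code> and <code>likert_item</code> are hypothetical placeholders for your own variables):</p>
<pre><code>library(survey)
des <- svydesign(ids = ~1, strata = ~stratum, weights = ~wt, data = mydata)
svytable(~q1, des)                          # weighted frequency table for one question
svychisq(~q1 + stratum, des)                # design-based (Rao-Scott) chi-square test
svymean(~likert_item, des, na.rm = TRUE)    # weighted mean of a Likert-type item
</code></pre>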
| 74,328 |
<p>I have data that involves 2 groups (equal sample size in each) and data for each group over 3 time points (they are actually 3 different monetary reward conditions). I want to investigate within group differences. All time/condition points are important, I don't have a "control" time point.</p>
<p>Any opinions on the following:</p>
<ol>
<li><p>If I am comfortable that sphericity is assumed by Mauchly's Test not being violated, would a multivariate Lambda F test statistic be more appropriate or a sphericity assumed estimate (within-condition estimate from SPSS)?</p></li>
<li><p>If I choose to go with the understanding that despite a test result saying it is OK, assuming sphericity may still be over-confident, any thoughts on potentially reporting ALL
Greenhouse-Geisser estimates regardless of Mauchly's or Lambda? I have read that this may reduce the chance of a Type-1 error without having to assume sphericity or equality of covariance matrices. Perhaps too overcautious? Or, is potentially adjusting df's more invasive than assuming sphericity?</p></li>
</ol>
| 48,906 |
<p>I am trying to determine why people ask if a model of a health care risk (for inpatient stay, or acquiring a disease or any outcome) is a population risk model or an individual risk model. This is in a clinical setting where the outcome is some medical event and covariates are personal health history and demographics. The risk here is then the prob(outcome) as determined by some binary classifier.</p>
<p>I understand there may be a different emphasis on the way quality of the model is assessed. For an individual risk model it is perhaps more important to have a small variance of an individual's score and this measure is not captured by AUC of the model or other traditional accuracy measures. Is that the main reason for the difference?</p>
<p>If the model is created with observations of people's health history so rows in model are people, then is this necessarily an individual risk model? If not why not?</p>
<p>Thanks</p>
| 37,174 |
<p>Following cardiac surgery, patients are encouraged to exercise regularly (assume that regular exercise is defined as exercising on 3 or more days per week). A physician suspects that patients exercise regularly immediately following cardiac surgery but tend to reduce, even stop exercising completely, over time. An investigation is planned to estimate the mean number of weeks that patients exercise regularly following cardiac surgery. Assume that the standard deviation (s.d) in the number of weeks cardiac patients exercise regularly following surgery is 6.3 weeks.</p>
<p>(a) If a sample of 40 cardiac patients is followed, and the number of weeks in which each patient exercised regularly is recorded, what is the probability that the sample mean will be no more than 1 week higher than the true mean?</p>
<p>Under the central limit theorem, </p>
<pre><code>standard error = s.d./sqrt(n) = 6.3/sqrt(40) = 0.99
X = true mean + 1
Z = (X - true mean) / 0.99 = (true mean + 1 - true mean) / 0.99 = 1/0.99
</code></pre>
<p>(b) Find the probability that the sample mean is at least two weeks less than the true mean for a sample of 40 cardiac patients.</p>
<pre><code>X = true mean - 2
Z = (X - true mean) / 0.99 = (true mean -2 - true mean) / 0.99 = -2/0.99
</code></pre>
<p>(c) If the sample is increased to 100 cardiac patients, what is the probability that the sample mean will be no more than 1 week higher than the true mean?</p>
<p>Same approach as in question (a), but with n = 100.
Can anyone help me check whether my answers are correct? Please correct and explain if wrong...</p>
| 37,175 |
<p>First, I need to prove that the distribution of a RV X, where X|lambda ~ Pois(lambda), and lambda ~ gamma(a, B), is a negative binomial. I know that it <em>is</em>, but why negative binomial instead of another 2 parameter distribution? How do I prove negative binomial is the best alternative in the circumstance of an overdispersed Poisson, basically?</p>
<p>Then, I have to calculate V(X). I'm wondering if V(X) will help prove that negative binomial is best to use as an alternative in an overdispersed Poisson??</p>
<p>I looked at <a href="https://stats.stackexchange.com/questions/37814/poisson-is-to-exponential-as-gamma-poisson-is-to-what/37884#37884?newreg=1c6cfdc8a1ce413489fc9274419b0b2a">this</a> post, and I desperately want to understand @probabilityislogic's math. I am new to notation associated with probability concepts (not <em>that</em> new, though), and I don't know how he/she jumps from line 2 to 3:</p>
<blockquote>
<p>$\lambda_i \sim \mathrm{Gamma}(\alpha,\beta)$<br>
On doing the integration/mixing over $\lambda_i$, you have:<br>
$Y_i(t_i)\mid\alpha,\beta \sim \mathrm{NegBin}(\alpha,p_i)$ where $p_i=\frac{t_i}{t_i+\beta}$</p>
</blockquote>
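<p>For reference, the integration step written out (this assumes the gamma is parameterised by rate $\beta$, which is what makes $p_i=t_i/(t_i+\beta)$ come out as quoted):</p>
<p>$$\Pr(Y=y)=\int_0^\infty \frac{(\lambda t)^y e^{-\lambda t}}{y!}\,\frac{\beta^\alpha}{\Gamma(\alpha)}\lambda^{\alpha-1}e^{-\beta\lambda}\,d\lambda
=\frac{\Gamma(y+\alpha)}{y!\,\Gamma(\alpha)}\left(\frac{t}{t+\beta}\right)^{y}\left(\frac{\beta}{t+\beta}\right)^{\alpha},$$
which is the negative binomial probability mass function with size $\alpha$ and $p=t/(t+\beta)$.</p>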
| 74,329 |
<p>How can I interpret the McLeod-Li test results below?</p>
<pre><code>> McLeod.Li.test(y = data, gof.lag = 12, plot = FALSE)$p.values
[1] 7.602866e-09 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
[7] 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
</code></pre>
<p>And also the fitting results below. In the fitting results, sometimes it says "false convergence" and sometimes "relative function convergence"; what does that mean? </p>
<pre><code> ***** ESTIMATION WITH ANALYTICAL GRADIENT *****
I INITIAL X(I) D(I)
1 2.654366e+00 1.000e+00
2 5.000000e-02 1.000e+00
3 5.000000e-02 1.000e+00
IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF
0 1 2.042e+02
1 4 2.041e+02 7.06e-05 2.05e-04 3.0e-03 1.3e+02 1.8e-02 1.30e-02
2 5 2.041e+02 8.08e-05 1.08e-04 2.9e-03 2.0e+00 1.8e-02 6.01e-03
3 6 2.041e+02 1.03e-04 1.03e-04 2.4e-03 2.0e+00 1.8e-02 4.82e-03
4 10 2.037e+02 1.89e-03 1.87e-03 7.1e-02 1.7e-01 3.7e-01 4.86e-03
5 12 2.029e+02 3.77e-03 4.42e-03 1.9e-01 1.9e+00 7.4e-01 5.03e-01
6 15 2.028e+02 6.83e-04 1.23e-03 6.3e-03 4.0e+00 2.6e-02 4.34e-02
7 16 2.028e+02 1.07e-04 1.29e-04 7.5e-03 2.0e+00 2.6e-02 9.17e-03
8 17 2.028e+02 1.47e-04 2.09e-04 1.7e-02 1.9e+00 5.2e-02 5.64e-03
9 18 2.027e+02 1.32e-04 1.68e-04 1.7e-02 1.8e+00 5.2e-02 1.41e-03
10 20 2.027e+02 1.59e-04 2.20e-04 4.3e-02 7.8e-01 1.2e-01 3.48e-04
11 21 2.027e+02 4.71e-05 6.10e-05 2.9e-02 0.0e+00 8.2e-02 6.10e-05
12 22 2.027e+02 1.29e-06 1.37e-06 5.2e-04 0.0e+00 1.8e-03 1.37e-06
13 23 2.027e+02 1.92e-07 5.61e-08 3.6e-04 0.0e+00 9.3e-04 5.61e-08
14 35 2.027e+02 -9.82e-16 1.27e-19 3.3e-15 8.0e+08 9.1e-15 2.68e-10
***** FALSE CONVERGENCE *****
FUNCTION 2.026871e+02 RELDX 3.290e-15
FUNC. EVALS 35 GRAD. EVALS 14
PRELDF 1.274e-19 NPRELDF 2.677e-10
I FINAL X(I) D(I) G(I)
1 1.240313e+00 1.000e+00 1.250e-03
2 1.234210e-01 1.000e+00 1.351e-05
3 4.425966e-01 1.000e+00 2.553e-03
</code></pre>
<p>Another results</p>
<pre><code>***** ESTIMATION WITH ANALYTICAL GRADIENT *****
I INITIAL X(I) D(I)
1 3.736575e+00 1.000e+00
2 5.000000e-02 1.000e+00
IT NF F RELDF PRELDF RELDX STPPAR D*STEP NPRELDF
0 1 2.061e+02
1 2 1.764e+02 1.44e-01 1.61e+00 1.3e-01 3.3e+02 1.0e+00 2.69e+02
2 5 1.229e+02 3.03e-01 1.99e-01 3.7e-01 7.8e-01 2.0e+00 1.12e+00
3 7 1.070e+02 1.30e-01 9.76e-02 1.3e-01 3.7e+00 4.0e-01 2.19e+02
4 9 6.528e+01 3.90e-01 3.24e-01 4.3e-01 8.1e+03 8.0e-01 5.81e+03
5 11 5.613e+01 1.40e-01 1.37e-01 1.1e-01 8.8e+00 1.6e-01 3.40e+05
6 13 5.456e+01 2.79e-02 2.88e-02 2.1e-02 3.2e+01 3.2e-02 4.02e+02
7 15 5.217e+01 4.39e-02 4.78e-02 4.3e-02 2.6e+00 6.4e-02 1.08e+02
8 17 5.161e+01 1.07e-02 1.94e-02 3.7e-02 2.6e+00 5.7e-02 6.86e+01
9 18 5.132e+01 5.52e-03 8.37e-03 3.4e-02 2.0e+00 5.7e-02 4.83e+01
10 19 5.129e+01 7.58e-04 4.77e-03 3.0e-02 2.0e+00 5.7e-02 1.14e+01
11 21 5.123e+01 1.02e-03 2.86e-03 1.1e-02 2.0e+00 2.7e-02 5.80e-01
12 22 5.121e+01 3.58e-04 5.08e-04 1.5e-02 2.0e+00 2.7e-02 2.63e-03
13 25 5.121e+01 3.94e-05 6.44e-05 1.3e-03 3.8e+00 2.4e-03 4.69e-02
14 26 5.121e+01 2.55e-05 2.88e-05 1.3e-03 3.1e+00 2.4e-03 1.50e-02
15 27 5.121e+01 2.39e-05 5.20e-05 2.3e-03 2.0e+00 4.8e-03 8.22e-03
16 28 5.121e+01 5.18e-05 1.27e-04 5.4e-03 1.9e+00 9.6e-03 2.78e-03
17 30 5.121e+01 6.65e-06 2.50e-05 1.2e-03 1.8e+00 3.0e-03 8.02e-05
18 31 5.121e+01 4.39e-06 6.27e-06 1.7e-03 1.3e+00 3.0e-03 6.91e-06
19 33 5.121e+01 5.81e-07 2.26e-06 5.6e-04 1.5e+00 1.1e-03 4.17e-06
20 34 5.121e+01 5.53e-07 5.82e-07 2.0e-04 0.0e+00 4.6e-04 5.82e-07
21 35 5.121e+01 3.22e-09 3.35e-09 4.8e-05 0.0e+00 8.6e-05 3.35e-09
22 36 5.121e+01 8.01e-12 8.10e-12 2.4e-06 0.0e+00 4.2e-06 8.10e-12
***** RELATIVE FUNCTION CONVERGENCE *****
FUNCTION 5.120700e+01 RELDX 2.408e-06
FUNC. EVALS 36 GRAD. EVALS 23
PRELDF 8.103e-12 NPRELDF 8.103e-12
I FINAL X(I) D(I) G(I)
1 2.113519e-01 1.000e+00 1.349e-05
2 8.757729e-01 1.000e+00 2.232e-06
</code></pre>
| 74,330 |
<p>I have a list of objects and their frequencies of occurrence in R</p>
<pre><code> Names <- c("a","b","c","d","e")
Freq <- c(12,45,67,100,1 ...)
</code></pre>
<p>I want to rank the prominence of these objects based on the frequency. Is there a way to fit a distribution to the frequencies and weight them to get ranks, instead of simply sorting them in descending order?</p>
| 74,331 |
<p>What is the difference between </p>
<p>$ \lim_{n \to \infty} \ \mathrm{E}_{\theta}(T_n(X)) = \theta$ </p>
<p>and </p>
<p>$ T_n(X) \xrightarrow{p} \theta \ $ for $\ n \xrightarrow{} \infty$ ?</p>
<p>(unbiasedness vs. consistency)</p>
<hr>
<p>Oops... there seems to be this question already :-(
<a href="http://stats.stackexchange.com/questions/31036/what-is-the-difference-between-a-consistent-estimator-and-an-unbiased-estimator?rq=1">Similar CrossValidated question</a> </p>
<p>Shall I delete this question?</p>
| 49,379 |
<p>I have been trying to estimate the MLE for my joint posterior. I'm using R and the package stats4. I have 14 parameters and two of them are $\geq 0$, which I did not know how to implement (I was producing NaN values because of the minus log posterior required by the mle function), so I just made the function return a very high value (1000) if either of these parameters was negative. Is this the right way to solve the problem? I was forced to change my prior each time (because the MLE told me that my prior estimates were way too high), and I find these nonnegative parameters going down to very low values (0.001 and 0.01), which did not seem right, ending up at each iteration way below my repeatedly suggested prior.</p>
<p>Also, since I didn't have the exact posterior due to the structure of the model, I tried to scale it such that the point estimate from the mle function, plugged into the log joint posterior, gives the value 0. Is this approximation okay for this function?</p>
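<p>For concreteness, a toy two-parameter sketch (not the real 14-parameter posterior) of two common ways to impose a nonnegativity constraint other than returning an arbitrary penalty value: box constraints via <code>L-BFGS-B</code>, or reparameterising the constrained parameter on the log scale.</p>
<pre><code>library(stats4)
set.seed(1)
dat <- rnorm(50, mean = 1, sd = 2)            # toy data standing in for the real problem
# toy negative log-posterior / log-likelihood with sigma constrained to be > 0
nlpost <- function(mu, sigma) -sum(dnorm(dat, mu, sigma, log = TRUE))
# option 1: box constraints instead of a large penalty value
fit1 <- mle(nlpost, start = list(mu = 0, sigma = 1),
            method = "L-BFGS-B", lower = c(mu = -Inf, sigma = 1e-6))
# option 2: optimise log(sigma) so the constraint disappears
nlpost_log <- function(mu, log_sigma) nlpost(mu, exp(log_sigma))
fit2 <- mle(nlpost_log, start = list(mu = 0, log_sigma = 0))
coef(fit1); exp(coef(fit2)["log_sigma"])
</code></pre>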
<p>Regards, Raxel. </p>
| 74,332 |
<p>Recently I attended a workshop on probability where I was asked the following question, but I did not find a way to solve it. A discussion of this question can help us learn something new.</p>
<p>We were told that we are guests on a game show and close to winning a great fortune. The quiz master asks us to choose one of three (closed) doors. She explains that behind one of them a million Euros awaits. Once you have fixed your choice, the quiz master opens one of the other doors and shows you that it hides only a goat. She gives you a final chance: you may either keep your door or switch to the remaining closed one.</p>
<blockquote>
<p>(i) Say door 3 is opened. How can we calculate the conditional probability that our door is the winning one, given that door 3 is a losing door, and its complement?</p>
<p>(ii) How do we calculate the unconditional probability that our door is the winning one, and its complement?</p>
</blockquote>
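<p>In case it helps the discussion, a small simulation sketch in R (it assumes our initial pick is door 1 and that the host opens a goat door uniformly at random when she has a choice):</p>
<pre><code>set.seed(1)
n <- 1e5
prize <- sample(1:3, n, replace = TRUE)          # door hiding the million
pick  <- 1                                       # our fixed initial choice
# host opens a goat door different from our pick
opened <- ifelse(prize == 1, sample(2:3, n, replace = TRUE),
                 ifelse(prize == 2, 3, 2))
mean(prize[opened == 3] == pick)   # (i) P(our door wins | door 3 opened), about 1/3
mean(prize[opened == 3] == 2)      #     complement: switching wins, about 2/3
mean(prize == pick)                # (ii) unconditional probability, about 1/3
</code></pre>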
<p>Thank you in advance.</p>
| 49,611 |
<p>Let $X_1, X_2, ..., X_n$ be a random iid sample from a population with mean $\theta$.</p>
<p>Now I am wondering about the intuition behind $E(X_1| \overline X ) = \overline X$, the sample mean.</p>
<p>If we just consider $X_1$ (or any $X_i$ for that matter) we have that $E(X_1) = \theta$ as the expected value of any random observation from a population will be the population mean.</p>
<p>Now given that we know $\overline X$ how come that changes what we expect to get for $X_1$? $X_1$ is still the same random observation from the population as before...but it seems that knowing the sample mean 'overrides' what we expect to get for an observation...is this correct?</p>
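<p>A sketch of the symmetry argument that is usually given for this identity, for reference:</p>
<p>$$n\bar X=E\Big(\sum_{i=1}^n X_i \,\Big|\, \bar X\Big)=\sum_{i=1}^n E(X_i\mid\bar X)=n\,E(X_1\mid\bar X),$$ since the $X_i$ are i.i.d. (hence exchangeable) and each conditional expectation is therefore the same; dividing by $n$ gives $E(X_1\mid\bar X)=\bar X$.</p>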
| 74,333 |
<p>There is a situation explained below where I intend to apply MLE.</p>
<p>The problem statement is that I am estimating a measure $X$.
This measure is obtained by the maximum likelihood estimation technique. </p>
<p>I will apply this measure to an AR(2) model to find the parameters of the AR model. The way the measure is applied and calculated is explained briefly. </p>
<p>Consider a linear autoregressive (AR) model, corrupted by white noise $\eta(t)$, whose parameters need to be found. </p>
<p>Question : What is the effect of noise on MLE?</p>
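<p>To make the question concrete, a small simulation sketch in R (the AR coefficients and noise level are arbitrary): adding white observation noise turns the AR(2) into an ARMA(2,2), so a plain AR(2) fit to the noisy series gives biased (typically attenuated) parameter estimates compared with the clean series.</p>
<pre><code>set.seed(1)
x <- arima.sim(model = list(ar = c(0.5, -0.3)), n = 500)   # latent AR(2) process
y <- x + rnorm(500, sd = 1)                                # observed series with white noise
arima(x, order = c(2, 0, 0), include.mean = FALSE)         # estimates close to (0.5, -0.3)
arima(y, order = c(2, 0, 0), include.mean = FALSE)         # estimates typically pulled toward zero
</code></pre>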
| 74,334 |
<p>I'm struggling with the mathematics behind linear regression. In the following lines I pasted the text from the book <a href="http://rads.stackoverflow.com/amzn/click/0387310738" rel="nofollow">Pattern Recognition and Machine Learning</a> (p. 46) where the author derives the regression function $\mathbb{E}_{t} [t | \mathbf{x}]$. I want to understand the procedure from equation (2) to the final result. Could somebody please provide me some useful pointers (and/or links) on which concepts from the calculus of variations I should study?</p>
<p>The average, expected, loss is given by</p>
<p>$$
\mathbb{E}[L] = \int \int L(t, y (\mathbf{x})) p (\mathbf{x}, t) \, d\mathbf{x} \, dt.
\tag{1}
$$</p>
<p>A common choice of loss function in linear regression is the squared loss given by $L (t, y(\mathbf{x})) = \{ y (\mathbf{x}) - t \}^{2}$. In this case, the expected loss can be written as</p>
<p>$$
\mathbb{E}[L] = \int \int \{ y (\mathbf{x}) - t \}^{2} p (\mathbf{x}, t) \, d\mathbf{x} \, dt.
\tag{2}
$$</p>
<p>Our goal is to choose $y (\mathbf{x})$ so as to minimize $\mathbb{E} [L]$. We can do this using the calculus of variations to give</p>
<p>$$
\dfrac{\delta \mathbb{E} [L]}{\delta y (\mathbf{x})} = 2 \int \{ y (\mathbf{x}) - t \} p (\mathbf{x}, t) \, dt = 0.
\tag{3}
$$</p>
<p>Solving for $y (\mathbf{x})$, and using the sum and product rules of probability, we obtain</p>
<p>$$
y (\mathbf{x}) = \dfrac{\int tp (\mathbf{x}, t) \, dt}{p (\mathbf{x})} = \int t p (t | \mathbf{x}) \, dt = \mathbb{E}_{t} [t | \mathbf{x}]
\tag{4}
$$</p>
| 74,335 |
<p>I have a list of parameters which correlate with 1-2 covariates that I want to control for.</p>
<p>Following normalization, I wanted to do comparisons between groups, correlation analysis and probably use some of them as features in a prediction model.</p>
<p>The problem is that I don't think I can use the normalized values following ANOVA for regression analysis (and so on), since they will be by default correlated as a result of the ANCOVA.</p>
<p>Do you agree (or not)? What else would you suggest? </p>
| 74,336 |
<p>Let us suppose that I have a number of features. I design pdfs for every feature and every class, some of them by smoothing a histogram of training samples, others just by introducing prior knowledge of how the feature should look.</p>
<p>Now I want to know which is the most likely class for a given observation, based on the pdfs. I can state the following:</p>
<p>$$P(p_i\in C_i | F_i) = \displaystyle\frac{P(F_i | p_i\in C_i ) P(p_i\in C_i)}{\displaystyle \sum_{C_j} P(F_i | p_i\in C_j ) P(p_i\in C_j)}$$</p>
<p>being $p_i$ a particular observation with feature $F_i$, $C_i$ the class I'm assigning to it.
What I've doing so far is just replacing $P(\cdot|\cdot)$ by its density function $\delta$ given that $P(\cdot|\cdot)$ is about $2\epsilon \delta(\cdot|\cdot) $ for a neighborhood $[F_i-\epsilon,F_i+\epsilon]$ and I can cancel out the $2\epsilon$, so I suppose it's just legitimate to directly take the value at the density function.</p>
<p>Now, there's one feature in particular in which I want to convey that I just don't care about its value in a very wide range, for one of the classes. It's an area, so say the area can be between zero and a large value. In other words, in this class we can find any area. In another class, there is a shorter dynamic range of the values the feature should take. The problem that I find is that the result of the formula before is always lower for the 'don't care' class because the uniform distribution extends far away and hence the value of the pdf is low at every point. My desired behavior would be that since I don't care much about the area, there should be a more fair competition among the two classes, and probably some other features in the classification process will make the final decision, but this way I just say that the problematic class is very unlikely for any input of this feature.</p>
<p>I could tune $P(p\in C_i)$ to be higher for the problematic class. However, in other features this is not what I expect. Any clues on how to handle this or pose this problem?</p>
| 74,337 |
<p>Below is a function I wrote to try and tune the $\lambda$ and $\alpha$ elastic net GLM implemented with <code>cv.glmnet</code>. I've noticed that the qualitative outcome (in terms of the alpha that yields the lowest prediction error score) changes depending on the random seed, and I suspect this is mainly because the random seed determines how the folds are partitioned. In one of my datasets, I have only 598 samples and am fitting a Cox regression. That is where this effect is strongest (and, indeed, where the error bars about the prediction errors on the $\lambda$ search grid are largest). When I fit a binomial regression to a larger data set with about 30K observations, I get less jumping around. The number of folds doesn't seem to matter much.</p>
<p>I'm thinking of just choosing $\alpha$ = 0.5 in such cases, because the absolute differences in the cross-validation error scores are often quite small, and elastic net has inferential properties that make it more desirable than LASSO or ridge on their own.</p>
<p>Is this the right way of thinking? Thanks.</p>
<pre><code>cvglmnet2a <- function(x, y, family, nfolds, seed) {
# note: cv.glmnet draws new folds for each of the three calls below, so the
# alpha values are not compared on identical fold assignments; passing a
# common 'foldid' vector to all three calls would make the comparison exact
set.seed(seed)
cvfit0 <- cv.glmnet(x, y, family = family, alpha = 0, nfolds = nfolds)
cvfitp5 <- cv.glmnet(x, y, family = family, alpha = 0.5, nfolds = nfolds)
cvfit1 <- cv.glmnet(x, y, family = family, alpha = 1, nfolds = nfolds)
plot(cvfit1); plot(cvfitp5); plot(cvfit0)
plot(log(cvfit1$lambda), cvfit1$cvm,
pch = 19, col = "red", xlab = "log(Lambda)", ylab = cvfit1$name)
points(log(cvfitp5$lambda), cvfitp5$cvm, pch = 19, col = "grey")
points(log(cvfit0$lambda), cvfit0$cvm, pch = 19, col = "blue")
legend("topleft", legend = c("alpha = 1", "alpha = 0.5", "alpha = 0"),
pch = 19, col = c("red", "grey", "blue"))
mins <- c(alpha1 = min(cvfit1$cvm),
alphap5 = min(cvfitp5$cvm),
alpha0 = min(cvfit0$cvm))
mins
}
</code></pre>
| 37,185 |
<p>In many econometric models, changes in the response variable are more difficult to achieve over certain intervals than over others. But I believe this is often not taken into account when estimating the model. </p>
<p>For example, suppose $Y_{st}$ represents the proportion of students in a certain school $s$ passing a standardized test in year $t$. Let $R_{st}$ be the academic resources available to students (e.g., books in the library), and $I_{st}$ represent the average parental income of the students. In this case $Y_{st} \in [0,1],$ and we would like to estimate the effect of $R_{st}$ on $Y_{st}.$</p>
<p>We could model this is as follows, </p>
<p>$Y_{st} = \alpha_{0} +\alpha_{1}R_{st} + \alpha_{2}I_{st} + \delta_{t}+ u_{st}$, where $u_{st}$ is an additive error term and $\delta_{t}$ are time dummies. In this context of pass rates, intuitively it is more difficult for a school to increase the pass rate from 95% to 100% than it is to go from 45% of students passing to 50% passing. Consequently, the effect of $R_{st}$ on $Y_{st}$ should be given less weight in the latter situation (45% to 50%) than in the former (95% to 100%). If we were comparing two schools in which the same increase in $R_{st}$ led to these two results, clearly the 95%-to-100% school invested more efficiently. </p>
<p>My idea is to use a multiplicative dummy variable with $R_{st}$, $\beta_{t}$, where $\beta_{t}$ takes on different values depending on the initial value of $Y_{st}.$ Is there a standard way to take this into consideration in the model? Are there other additional factors that could improve this model?</p>
| 74,338 |
<p>I have a list of numbers I need to group by similarity (values within a group differing by 1 from the next). For example, in the list [198, 202, 207, 218, 219, 220], 198 would be put into one list, 202 into another list, 207 into a list, and 218, 219, 220 into another list.</p>
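<p>In R (assuming the numbers are sorted, as in the example), one way this grouping rule can be expressed is to start a new group whenever the gap to the previous value exceeds 1:</p>
<pre><code>x <- c(198, 202, 207, 218, 219, 220)
# start a new group whenever the difference to the previous value is > 1
split(x, cumsum(c(TRUE, diff(x) > 1)))
# four groups: {198}, {202}, {207}, {218, 219, 220}
</code></pre>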
| 74,339 |
<p>I have been struggling for a couple of days with this simple OLS; can you help?</p>
<p>The outcome is years (of education) as a function of a predictor score, a very simple linear model. The residual plot does not look good at all, though. </p>
<p>Is it correct, based on the residual plot versus the outcome variable, to say that "if I predict the outcome to be 12, I most of the time overestimate the years of education"?</p>
<p>In blue is my fitted OLS line, in red a LOESS curve. What am I doing wrong, or how can I improve the OLS fit?</p>
<p><img src="http://i.stack.imgur.com/YNXxI.jpg" alt="enter image description here"></p>
<p>I have tried to both transform the predictor and the outcome, no luck. Is there something else you can suggest?</p>
<p>The specific problem I have is that I am making wrong predictions when the outcome is 12 years (see the table below; vertical are my predictions, horizontal the true values). How can I solve this issue? </p>
<p><img src="http://i.stack.imgur.com/IRIKi.png" alt="enter image description here"></p>
| 74,340 |
<p>This question is about a three-way interaction and whether it can be fitted without the lower-order two-way terms while keeping the main variables in the equation, unlike the other questions. In fact, the other answers suggest there is a possibility of doing so. I am not here to find the best solution, because I know it and have already included it in my question, but to know whether this is possible at all, regardless of whether it is preferable. Thank you, and please open my question for discussion. </p>
<p>The widely known regression equation for assessing the three-way interaction is</p>
<p>$$ Y= B_1 X+B_2 Z+B_3 W +B_4XZ+B_5XW+B_6ZW+B_7XZW+B_0 $$</p>
<p>All lower-order terms are included in the regression equation so that the $B_7$ coefficient represents the effect of the three-way interaction on Y. </p>
<p>Is there a possible way to skip the lower-order terms and include only the highest-order term, as in:</p>
<p>$$ Y= B_1 X+B_2 Z+B_3 W +B_4XZW+B_0 $$</p>
<p>And how many observations do I need to fit such an equation if X and Z are continuous variables and W is a dummy variable?</p>
<p>I will be thankful if anyone can provide me with any suggestions </p>
| 49,556 |
<p>Which statistical tools are best suited for this problem:</p>
<p>Team A goes and measures 3 magnitudes in 200 locations (let's imagine the volume of a room, the insulation coefficient and the average temperature outside the room). For each room they calculate the energy this room will consume to keep the desired temperature for a week.</p>
<p>So after the measurement campaign they have measured these 3 magnitudes in 200 different rooms (each room is only measured once). So the data looks like:</p>
<pre><code> (Room_volume, r_coefficient, T, estimated_energy)
</code></pre>
<p>Team A is assumed to be very good at getting their results. Now team B measures the same 200 rooms, and calculates all the estimated energies for each room.</p>
<p>How should we measure the performance of team B vs team A (where A is assumed to be accurate)? </p>
<p>Forgetting the physics, as it's just an illustrative example, how do you compare the accuracy of a measurement campaign measuring 3 values from which a 4th one can be calculated deterministically? Note that none of these values are distributed in any special way among the 200 samples (they don't look gaussian)</p>
<p>I started looking at the distribution of errors, so:</p>
<pre><code> {Estimated_energy_Ai - Estimated_energy_Bi} with i = 1..200
</code></pre>
<p>And seeing how many results are within a standard deviation. What other tools are appropriate?</p>
| 74,341 |
<p>I want to perform a repeated measures ANOVA in SPSS using the descriptive statistics. So my input is: </p>
<pre><code>Descriptive Statistics
Mean Std. Deviation N
M1MI 3,8000 1,03280 10
M1MA 5,3000 2,16282 10
M2MI 7,0000 1,88562 10
M2MA 2,2000 1,54919 10
M3MI 6,2000 1,03280 10
M3MA 4,2000 1,75119 10
M4MI 4,7000 ,67495 10
M4MA 4,9000 1,19722 10
</code></pre>
<p>Does anyone know how I have to adjust my syntax? </p>
<p>My design is as follows:</p>
<pre><code>Dependent variables: Progress1, Progress2, Progress3, Progress4
Within subject factor 2: M (I/A)
</code></pre>
| 37,190 |
<p>I am in a situation where I have to compute:</p>
<p>$$E(u(x_1)|\bar{X},S^2)$$</p>
<p>where $X_1$ is a normally distributed random variable and $u(.)$ some function. I know that by the student's theorem the sample mean and the sample variance are independent and moreover that $\frac{(n-1)S^2}{\sigma^2}\sim \chi^2 (n-1)$. </p>
<p>Can I simplify the expectation with the information I possess? Is perhaps the bivariate normal distribution of use here?</p>
<p>Thanks.</p>
<p><strong>EDIT</strong>: Yes, an iid sample on $X$ is assumed here, hence the subscript $1$ on $u(x_1)$. If the expectation cannot be simplified, what is the conditional distribution of $X_1$ given $\bar{X}$ and $S^2$?</p>
| 74,342 |
<p>I have a data set with ~80 records, with ~8 features. I want to predict one of the features in future records. The feature is numeric and discrete. It ranges between -30 up to 140 with steps of 5. Until now I wanted to predict another feature which is boolean, so I used logistic regression.
Which method should I use here? Maybe some kind of particle filter?</p>
<p>Thanks!</p>
| 20,432 |
<p>I am looking at a semivariogram. I know it shows me the relationship between distance and semi-variance. I also know that at the end of the range the distance no longer autocorrelates. What I am wondering is, what does the semi-variance tell me at the point where the distance no longer autocorrelates (at the sill)?
So in my case, at about 900 m the distance no longer autocorrelates. The sill at this value is about 4300. What does this value tell me?</p>
| 74,343 |
<p>I'm trying to estimate how many people visited the farmers market once, twice, thrice, etc. in a given time period, using sampled data. We have interview data from approximately 50% of visitors as they entered the market which lets us identify them uniquely. For the purposes of this analysis, I'm assuming that the interviews randomly captured ~50% of visits; in reality the interviews were not random, but I want to start with a simpler problem first.</p>
<p>Within this dataset, I can identify how many people visited once, twice, thrice, etc. But, intuitively, I think the nature of sampling will lead me to underestimate the number of people who visited more than once. I've tested this intuition by randomly cutting the dataset in half a few times - starting from the full dataset (50% of visits), going down to 6.25%, and find that the larger datasets have more multiple-visit people in it (see below).</p>
<p>However, I am unsure what happens as I go from 50% of the visits to 100% of the visits. Can you help me come up with a statistical framework to do that projection?</p>
<p>PS - I feel that the Birthday Problem is informative here, but I can't think of how to apply it!</p>
<pre><code> % of visits from people who visited at most.
% of visits sampled once twice thrice
6.25% 83% 96% 99%
12.50% 82% 96% 99%
25% 77% 94% 98%
50% 67% 88% 95%
100% ? ? ?
</code></pre>
<p>I'm trying to think about how to apply the mark-recapture framework Gael mentioned. I agree that there is a similarity if I simplify it to the percentage of all visits from individuals who visited only once -- in essence, the population size in terms of this framework. What I'm struggling with is how to think about my sampling technique. I know how many total visits there (say, 25,000). I know that out of the 12,500 visits we "marked" in a continuous 50% sampling of visits, there were about 10,000 unique individuals, with 7,000 being marked once, and the remainder being "marked" 2, 3, or more times. I can't fit this into the simple two-stage mark-recapture model, and I'm thinking I need to do some kind of a Poisson regression (based on the wikipedia entry).</p>
| 48,928 |
<p>It is standard advice to set a random seed so that results can be reproduced. However, since the seed is advanced as pseudo-random numbers are drawn, the results could change if <em>any</em> piece of code draws an additional number.</p>
<p>At first glance, version control looks to be a solution to this, as it would at least allow you to go back and reproduce the version extant when you wrote down the results in your notes or paper. However, since it only takes one draw to mess things up, if you update R the results could change as well.</p>
<p>I realize that this is probably only problematic in rare cases, but I'm curious if there are any best practices here. This is something I've been struggling with in my own work.</p>
| 48,930 |
<p>Given an analysis of every pair of competitors in a race, how may I determine the probability of any given competitor winning the race?</p>
<p>For example, what is the probability of competitor 2 winning the following race? P(Cx Win) means the probability of Competitor x winning.</p>
<pre>
Cx Cy P(Cx Win) P(Cy Win)
----------------------------------
1 2 0.3 0.7
1 3 0.4 0.6
1 4 0.9 0.1
2 3 0.8 0.2
2 4 0.7 0.3
3 4 0.9 0.1
</pre>
<p>I have tried to calculate a 'rating' for each competitor by adding their individual win probabilities. For example, the rating for C1 would be 1.6, the rating for C2 would be 2.2, etc. I've tried different ways to use this rating to find the probability of the competitor winning, however my gut feeling tells me something is wrong.</p>
<p>Is there a mathematical solution to this problem?</p>
| 5,024 |
<p>Diallel analysis using the Griffing and Hayman approaches is very common in plant breeding and genetics. I'm wondering if someone can share a worked R example of diallel analysis. Is there any good reference book that covers worked examples? Thanks</p>
<p>References:</p>
<p>Griffing B (1956) Concept of general and specific combining ability in relation to diallel crossing systems. Aust J Biol Sci 9:463-493 [<a href="http://www.publish.csiro.au/?act=view_file&file_id=BI9560463.pdf" rel="nofollow">pdf</a>]</p>
<p>Hayman BI (1954) The analysis of variance of diallel tables. Biometrics 10:235-244 [<a href="http://www.jstor.org/stable/3001877" rel="nofollow">JSTOR</a>]</p>
<p>Hayman BI (1954) The theory and analysis of diallel crosses. Genetics 39:789-809 [<a href="http://www.genetics.org/content/39/6/789.full.pdf" rel="nofollow">pdf</a>]</p>
| 37,194 |
<p>I've been aching to get my feet wet with a machine learning project, and I've found one that should be relatively simple, and actually has non-negligible business value for my organization. The marketing guys have to remove bot activity from our tracking data by hand for their metrics. I wanted to pull some data from GA, and have them construct a data set (bot, not-a-bot). There are probably 5-10 (numerical) categories that we have to train the algorithm, and the data set can be made as big as the marketing guys have an appetite for.</p>
<p>I've done a bit of reading, and played with RapidMiner/Knime/Weka a bit. I plan to do everything in Python, with <code>scikits-learn</code>, possibly working in R where I have to. My questions:</p>
<ol>
<li>Is this a "not actually that easy at all" problem?</li>
<li>Given the number of categories, about how large should the training
set be? </li>
<li>Given the problem, what algorithms should I start with? </li>
<li>Has anyone else done any learning around bot detection? How did it work? Am I barking up the wrong tree?</li>
</ol>
<p>Thanks in advance community!</p>
| 74,344 |
<p>For given cost function $S(\beta) = (Y - X \beta)^T(Y - X \beta) + \lambda \beta^T \beta$, where $\lambda$ is regularization parameter, the $\beta$ that minimizes the given cost function is $\beta = [X^T X + \lambda]^{-1} X^T Y$.</p>
<p>is it right? </p>
| 48,943 |
<p>I'm looking for a dataset (preferably with a story, at any rate a real dataset)
where an SVM with a linear kernel performs well... in other words, I'm looking for
a dataset where the class boundary is likely to be linear. Ideally, there should
be between 10 and 20 continuous variables and not too many discrete ones....</p>
<p>It's not really important to me how many classes there are (as long as it's
a classification problem). Also, it can't be the iris dataset. </p>
<p>Any proposals?</p>
<p>P.S.: @modo: I don't think this is a question about obtaining a particular dataset:
it's more that I have not read SVM/machine learning papers since I was a master's student, so I'm not familiar with some common good examples... I'm just sure they exist.</p>
| 74,345 |
<p>I have multiple logistic regression models with all of the same IVs/controls and a variety of DVs (all health outcomes from the same sample). The primary IV is the sum of types of childhood abuse (emotional, physical or sexual). I made dummy variables that represent any one type of experience, any two types of experiences, or all three types of experiences (so each is mutually exclusive). This is the same type of model the CDC uses for their ACEs study which is where I borrowed the method from.</p>
<p>Question 1: Can I compare the one experience dummy to the two experience dummy within the same model? That is, talk about the odds ratios in comparison to one another without standardizing the coefficients? My sense is yes and I've seen it done all over the place but I recently was given a dissenting opinion saying that since I am only comparing each IV to the dummy referent of 0 experiences, I can't compare them to one another without standardizing first. </p>
<p>Question 2: What's the best method to make comparisons across models (with all the same IVs)? I'm testing the dummy IVs against a variety of physical and mental health outcomes and I'd like to compare the odds ratios for each DV based on any one type of experience, two experiences or three experiences. It would be nice to say, one experience increases the odds of this outcome by 3.2 times, this outcome by 2.1 times, etc. Therefore, I can say that one type of abuse increases the risk of depression more than anxiety disorder or two types of abuse increases the risk of PTSD over depression etc (assuming no overlap in confidence intervals). </p>
<p>I've read Menard's 2011 piece on standardized LR coefficients and that makes sense as to what mechanism to use within a single model (as I would apply in question 1 if necessary), but I can't tell if this can be applied across DV models if I'm using all the same IVs/controls from the same sample. If I standardize each IV coefficient, then are they comparable across models? It's a random sample and each model has the same number of valid cases (1073) with no missing data. </p>
| 48,944 |
<p>I'm trying to do a simple scatterplot and trend line in R, but it doesn't look right. Have I messed up something blatantly obvious? Any ideas as to why the line doesn't fit the data?</p>
<p><img src="http://i.stack.imgur.com/AISqZ.png" alt="the trend line does not fit the data"></p>
<p>Here is the code I used.</p>
<pre><code>> plot(x, y, pch=".")
> model <- lm(x ~ y)
> summary(model)
Call:
lm(formula = x ~ y)
Residuals:
Min 1Q Median 3Q Max
-0.23043 -0.04340 -0.00533 0.03761 0.47882
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.051154 0.001384 36.97 <2e-16 ***
y 0.462881 0.003739 123.80 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.06365 on 71514 degrees of freedom
Multiple R-squared: 0.1765, Adjusted R-squared: 0.1765
F-statistic: 1.533e+04 on 1 and 71514 DF, p-value: < 2.2e-16
> abline(model, col="red")
</code></pre>
| 74,346 |
<p>I am using two-sample paired permutation tests with MATLAB. Now I have three within-subject steps, forcing me to use a three-sample paired permutation test. Is there anything like that, or any permutation test in general, that I could use for this?</p>
| 74,347 |
<p>I'm looking to generate a set of 5 random variables and enforce a dependence structure between them and onto a dependent variable Y. I understand how to generate correlated random variables for multivariate normal, but not when mixing different types. Below is a little more than I need, but I'm hoping someone can give me a general way of solving this problem...</p>
<ul>
<li>X_1 and X_2 need to be highly correlated Bernoulli variables. </li>
<li>X3 needs to take one of 5 categorical values, call them "A"..."E". </li>
<li>X4 needs to be normal, and negatively correlated with X1, X2. </li>
<li>X5 needs to approximate test scores from 0 to 100 with a high skew, so gamma
probably. X5 needs to be positively correlated with X1, X2, X4.</li>
</ul>
<p>Each of these variables must impact a "success/occurrence" Bernoulli distributed variable Y.</p>
<p>How would I begin? I would like to enforce correlation both between the values of X, and also between each X and Y. (The categorical correlations seem particularly confusing to me.)</p>
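<p>As one possible starting point (only a sketch, with an illustrative rather than calibrated correlation matrix and arbitrary coefficients for Y), a Gaussian copula approach generates a correlated multivariate normal, maps it to uniforms, and then pushes each uniform through the desired marginal; note that the final Pearson correlations will differ somewhat from the normal-scale ones after these transforms.</p>
<pre><code>library(MASS)
set.seed(1)
n <- 1000
# normal-scale correlations: (X1, X2) high, X3 independent, X4 negative with X1/X2, X5 positive
Sigma <- matrix(c( 1.0, 0.8, 0.0, -0.4, 0.4,
                   0.8, 1.0, 0.0, -0.4, 0.4,
                   0.0, 0.0, 1.0,  0.0, 0.0,
                  -0.4,-0.4, 0.0,  1.0, 0.3,
                   0.4, 0.4, 0.0,  0.3, 1.0), 5, 5)
U <- pnorm(mvrnorm(n, mu = rep(0, 5), Sigma = Sigma))   # correlated uniforms
X1 <- qbinom(U[, 1], 1, 0.4)                            # Bernoulli
X2 <- qbinom(U[, 2], 1, 0.4)                            # Bernoulli, correlated with X1
X3 <- cut(U[, 3], breaks = 5, labels = LETTERS[1:5])    # 5 categories "A".."E"
X4 <- qnorm(U[, 4])                                     # normal, negatively correlated with X1, X2
X5 <- pmin(100, qgamma(U[, 5], shape = 2, rate = 0.1))  # skewed "scores", capped at 100
# dependent Bernoulli outcome via a logistic link (coefficients are arbitrary choices)
p <- plogis(-2 + X1 + X2 + 0.3 * (X3 %in% c("D", "E")) + 0.5 * X4 + 0.02 * X5)
Y <- rbinom(n, 1, p)
</code></pre>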
| 47,462 |
<p>Let's say I have a sample of "events" done by a certain number of subjects, and some (although not most) of these subjects have underwent more than one event. I'd like to fit a logit model to these data to find out which characteristics contribute to a subject, over the course of an arbitrary period of time, undergoing another event within an arbitrary interval after a preceding event -- let's say three days. </p>
<p>The way I see it, I can set my data up in one of two ways:</p>
<ul>
<li><p>take all of the events underwent by each subject, and define the dependent/indicator variable to be yes/no to "did this subject have another event within 3 days of a preceding event, over the course of the last year (for example)?" In this case, each row of the data for the model would correspond to every subject and be an aggregate of their event history.</p></li>
<li><p>partition each subject's event history into pairs and define the dependent variable to be yes/no to "did this subject have another event within 3 days of this particular event?" In this case, each row would correspond to every event in the data. However, there are three outcomes: 1) the subject has another event within 3 days; 2) subject has another event within greater than 3 days; and 3) subject doesn't have any further events. Could I collapse 2) and 3) into one outcome, or would I be better off considering using a multinomial logit model instead?</p></li>
</ul>
<p>I'm leaning towards going with the latter option, but I'm worried that those subjects who undergo relatively more events will be over-represented in the final model (how would I deal with that, if I should?). However, I do think I gain significantly more information that way. Anyway, with that said, I'd love to hear insights on the pros/cons of each approach to setting up these particular data for a logit model.</p>
| 74,348 |
<p>I am trying to use the <code>glmnet</code> MATLAB package to train my elastic net model on some huge data. My features are of size 13200, and I have around 6000 samples. I directly tried to use <code>lassoglm</code> in MATLAB with these features and the corresponding target, setting cross-validation to just 3 folds and alpha = 0.5. It's already been 6 hours and it hasn't finished. I have to do it for several other cases as well.</p>
<p>Any suggestions what I should do?</p>
| 37,201 |
<p>I am very new to Bayesian inference and can't figure out what may be an elementary problem. Also, please forgive me if I am screwing up the notation -- this is my first foray into Bayesian statistics.</p>
<p><strong>Set-up:</strong> At time $i=0$ I start with a random variable $X_0\sim \mathcal{N}(\mu_0,\sigma^2_0)$. Over time I observe the Brownian motion process such that for $j>i$, $X_j-X_i\sim \mathcal{N}(0,(j-i)D)$. Starting at time 0, I collect a sequence of $n$ observations $\{Y_i\}_{i=0}^{n-1}$ of $\{X_i\}_{i=0}^{n-1}$, which are subject to Gaussian noise such that $p(y_i|x_i)=\frac{1}{\sqrt{2\pi N_0}}e^{-\frac{(y_i-x_i)^2}{2N_0}}$. Noise is independent from observation to observation. I am interested in the mean and variance (of the estimator) of $X_{n-1}$ given $\{Y_i\}_{i=0}^{n-1}$ in terms of $n$, $\mu_0$, $\sigma^2_0$, $D$, and $N_0$.</p>
<p><strong>What I've done:</strong> Using Bayes rule and the properties of Gaussian distribution, I've found that for the first observation ($n=0$):</p>
<p>$$p(x_0|y_0)=\frac{1}{\sqrt{2\pi S}}e^{-\frac{(x_0-M)^2}{2S}}$$</p>
<p>where $M=\frac{y_0\sigma^2_0+\mu_0N_0}{\sigma^2_0+N_0}$ and $S=\frac{\sigma^2_0N_0}{\sigma^2_0+N_0}$. I am not completely sure if the above is correct, but I am pretty confident that it is, as it looks like the mean squared error of the estimate of $X_0$ has decreased with the observation. However, I am having trouble extending this to $n>0$ and would appreciate any guidance.</p>
<p><strong>EDIT:</strong> I've misspecified the Brownian motion process. I think it's correct now.</p>
| 48,959 |