question | group_id |
---|---|
<p>I am confused about the cross-validated deviance of a lasso fit; I am not sure what is actually being computed.</p>
<p>Let's say I run <code>lassoglm</code> in Matlab on my dataset of 1000 examples and 15 features, with, say, 100-fold cross-validation. What does it do?</p>
<p>And what does <code>lassoPlot</code> show when plotting the cross-validation results?</p>
<p>I am referring to <a href="http://www.mathworks.com/help/stats/lasso-regularization-of-generalized-linear-models.html#btcoa3h" rel="nofollow">this link</a> </p>
<p>I didn't understand what "deviance" means. Can anyone explain, please? </p>
| 74,237 |
<p>If I construct a 2-D matrix composed entirely of random data, I would expect the PCA and SVD components to essentially explain nothing.</p>
<p>Instead, the first SVD column appears to explain 75% of the data. How can this possibly be? What am I doing wrong?</p>
<p>Here is the plot:</p>
<p><img src="http://i.stack.imgur.com/QjoYc.png" alt="enter image description here"></p>
<p>Here is the R code:</p>
<pre><code>rm(list=ls())
set.seed(1)
# 100 x 100 matrix of uniform random numbers on [0, 25]
m <- matrix(runif(10000, min=0, max=25), nrow=100, ncol=100)
svd1 <- svd(m)  # the LINPACK argument is defunct in current R
par(mfrow=c(1,4))
image(t(m)[, nrow(m):1])
plot(svd1$d, cex.lab=2, xlab="SVD Column", ylab="Singular Value", pch=19)
percentVarianceExplained = svd1$d^2/sum(svd1$d^2) * 100
plot(percentVarianceExplained, ylim=c(0,100), cex.lab=2, xlab="SVD Column", ylab="Percent of variance explained", pch=19)
cumulativeVarianceExplained = cumsum(svd1$d^2/sum(svd1$d^2)) * 100
plot(cumulativeVarianceExplained, ylim=c(0,100), cex.lab=2, xlab="SVD column", ylab="Cumulative percent of variance explained", pch=19)
</code></pre>
<p><strong>Update</strong></p>
<p>Thank you @Aaron. The fix, as you noted, was to center the matrix so that the numbers have mean 0 (the <code>scale=FALSE</code> call below centers each column without rescaling).</p>
<pre><code>m <- scale(m, scale=FALSE)
</code></pre>
<p>Here is the corrected image, showing that for a matrix of random data the variance explained by the first SVD column is close to 0, as expected.</p>
<p><img src="http://i.stack.imgur.com/n5eBc.png" alt="Corrected image"></p>
| 74,238 |
<p>I am confused about "parametric" and "non-parametric":
Our topic is nonparametric estimators for the probability of default. So first of all, we consider generalized linear models; as examples we have probit and logit:</p>
<p>$\pi (x)=G(\beta_0 +\sum \beta_i x_i)$</p>
<p>These generalized linear models are parametric models, right? They are parametric since we have the parameters $\beta$, which have to be estimated.</p>
<p>Next we consider semiparametric credit scoring, the generalized partial linear model:</p>
<p>$E(Y|X,T)=G(\beta ' X + m(T))$</p>
<p>$m(\cdot)$ is a smooth function, estimated e.g. by a kernel smoother.</p>
<p>So the parametric term is again $\beta ' X$ and the non-parametric part is the kernel term $m(T)$, right?</p>
<p>Thanks a lot for your help</p>
| 74,239 |
<p>I want to understand how I can compute the eigenvectors and the eigenvalues of a matrix using dimensionality reduction. I have a matrix $M$ of dimensions $n \times d$. Using dimensionality reduction I can compute the eigenvectors and the eigenvalues of the covariance matrix $MM^t$. After computing these eigenvectors and eigenvalues, how can I compute the eigenvectors of the original matrix? </p>
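<p>To make the setup concrete, here is a minimal R sketch of the relationship I am relying on (the sizes $n=5$, $d=3$ are made up for illustration): the eigenvectors of $MM^t$ are the left singular vectors of $M$, and the squared singular values are the corresponding eigenvalues.</p>
<pre><code>set.seed(1)
M <- matrix(rnorm(15), nrow = 5, ncol = 3)   # illustrative n x d matrix

e <- eigen(M %*% t(M))   # eigen-decomposition of M M^t (n x n)
s <- svd(M)              # singular value decomposition of M

round(e$values, 8)       # the non-zero eigenvalues of M M^t ...
round(s$d^2, 8)          # ... equal the squared singular values of M

e$vectors[, 1] / s$u[, 1]  # first eigenvector matches the first left singular vector (up to sign)
</code></pre>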
| 46,056 |
<p>I have a preliminary study with very small sample size (n=26), and I want to test for differences between males and females and similar things, so I have to divide the sample and make comparisons of 13 vs 13 subjects.</p>
<p>Is there a test I can use to give an idea of what the differences might be? Is it possible to use the two-independent-sample t-test even if the sample is this small and the variable is not normally distributed? What can I do otherwise? </p>
| 74,240 |
<p>I'm trying to compute ANOVA effect sizes from papers that provide an F value without other information. If I understand correctly, the effect size for a single-factor ANOVA is
$$
\eta^2 = \frac{ss_{between}}{ss_{between} + ss_{error}}
$$</p>
<p>And the F value is:
$$
F = \frac{(N-k)ss_{between}}{(k-1)(ss_{between} + ss_{error})}
$$
<strong>UPDATE: Nope! The denominator is just [(k-1)*SSerror]. Thus, everything that follows is invalid. Back to first-year stats for me.</strong></p>
<p>Where N = number of observations and k = number of groups. </p>
<p><strong>Question 1:</strong> Does it follow that you can calculate eta squared as:
$$
\eta^2 = \frac{k-1}{N-k}F
$$</p>
<p><strong>Question 2:</strong> I tried checking this in some output from SPSS. Here's an example with k=4 and N=158:</p>
<p><img src="https://dl.dropbox.com/u/5473621/spss_etasq_output2.png" alt="SPSS output with relevant values described below"></p>
<p>I'm aware that SPSS gives partial eta squared, but for a single-factor ANOVA that should be the same as eta squared, right? And indeed, the ratio of the sums of squares is $\frac{342.872}{(342.872+6133.519)} = .05294$. But using F, we get $2.870*3/154 = .05591$, which is off by much more than rounding error. </p>
<p>Is SPSS subtly adjusting F somehow, or am I confused about how to calculate eta squared?</p>
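<p>For reference, writing out the algebra with the corrected denominator (assuming the usual one-way ANOVA identity $F = \frac{ss_{between}/(k-1)}{ss_{error}/(N-k)}$), the relation appears to be
$$
\eta^2 = \frac{ss_{between}}{ss_{between} + ss_{error}} = \frac{(k-1)F}{(k-1)F + (N-k)},
$$
which for $k=4$, $N=158$, $F=2.870$ gives $\frac{3 \times 2.870}{3 \times 2.870 + 154} \approx .0529$, matching the ratio of sums of squares above.</p>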
| 74,241 |
<p>I am classifying different texts and I am wondering about some features that are highly correlated. I have 49 features. Some features are absolute counters (integers) but most features are relative counters (floats between 0 and 1).
I am running an F-score ranking (univariate) and I get the following three features with the highest scores: 1) fourth root of the number of word forms, 2) number of word forms, and 3) number of sentences.
I am also running a feature ranking based on extremely randomized trees (scikit-learn ensemble forests) and I get exactly the same three features as the highest-ranking features. The ranking based on randomized trees uses bootstrapping and Gini importance.
In the F-score results I can understand that highly correlated features may all have the highest ranking, because the method is univariate (it measures only one feature at a time).
In the random-tree ranking I was expecting that only one of the features related to the "length of the text" would have a high rank, the others would rank lower, and the correlation problem would be solved. But the results do not match my expectations. I must be doing something wrong! Could it be related to the fact that all three features are integer counters (values in the 1000 range) while the other features are relative counters (0-1)? As I understand it, ranking based on random trees should be able to handle large discrepancies between the features.
My question is: how should I handle this issue? Should I discard some features? How can I find the best feature that characterizes the text length? Any help here is appreciated!</p>
| 36,997 |
<p>I have a data set of a bag of words. I randomly choose some points and use them for testing and the others are used for training.</p>
<ul>
<li>case (1) I just take each data-point from the test set and classify it as
having the same class label as its nearest point from the train set.</li>
<li>case (2) I do the classification using any known supervised classifier.</li>
</ul>
<p>I always get a better recognition rate in case (1). That is, not doing any learning at all is better than using any supervised learning, for this data set (and others)! Is that a frequent situation?</p>
| 36,998 |
<p>I have a continuous random variable $X$ (positive). I want to approximate its distribution with a discrete distribution and calculate $E[X]$ from that discrete distribution. So, the obvious approach is to divide the range of the random variable into steps of size $h$; let the CDF values at the points $0,h,2h,\ldots,Nh$ be $P_0,P_1,P_2,\ldots,P_N$.</p>
<p>Thus, $\text{Prob}(0 < X \leq h)=P_1-P_0$, $\text{Prob}(h < X \leq 2h)=P_2-P_1$, and so on.</p>
<p>Now these probability masses are associated with an interval. We need to find a representative point for each interval, and here lies my problem.<br>
For an interval $(a,b]$, which point should we take as the representative point? The leftmost point, the rightmost point, the midpoint?</p>
<p>Basically, given the relation $F'(t)=P(X\leq t)=1-(1-F(t))^{n}$, I need to find the expectation of $X$, i.e. $E[X]$, where $F(t)$ is the CDF of some other random variable $Y$. The expression for $F(t)$ is not known to me; I only have access to a black box that gives me the value of $F(t)$ as output when I give a value of $t$ as input. That's why the question of "approximating" the continuous distribution with a discrete distribution comes up.</p>
<p>Another question is how to choose an appropriate h value (step size) given an error bound "epsilon" on the expected value. Is there any standard method already?</p>
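<p>To make my setup concrete, here is a minimal R sketch of what I mean. The black-box CDF is a stand-in I made up (the minimum of $n=5$ Exp(1) variables, so the true $E[X]=0.2$); in my real problem I can only query it:</p>
<pre><code>n  <- 5
Fx <- function(t) 1 - (1 - pexp(t))^n   # stand-in for the black-box F'(t)

h     <- 0.01                           # step size
grid  <- seq(0, 50, by = h)             # truncate the positive range at some large value
pmass <- diff(Fx(grid))                 # P((i-1)h < X <= ih)

mids     <- grid[-length(grid)] + h/2   # midpoint of each interval as representative point
EX_left  <- sum(grid[-length(grid)] * pmass)  # leftmost representative point
EX_mid   <- sum(mids * pmass)                 # midpoint representative point
EX_right <- sum(grid[-1] * pmass)             # rightmost representative point
c(EX_left, EX_mid, EX_right)            # left and right versions bracket the (truncated) E[X]
</code></pre>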
| 40,272 |
<p>I got an example calculation for a multiplicative model, which is shown as follows:</p>
<pre><code>Quarter 1 2 3 4
Average 0.866 1.0005 1.403 0.660
--------------------------------------------------
Adjustment 0.0176 0.0176 0.0176 0.0176
Seasonal factor 0.884 1.018 1.421 0.678
</code></pre>
<p>Then there is a note below:</p>
<blockquote>
<p>Sum of averages = 3.9295. These should sum to 4; 4 - 3.9295 = 0.0705.
Add 0.0705/4 = 0.0176 to each average to obtain the seasonal factors.</p>
</blockquote>
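<p>Just to be explicit about the arithmetic in the note, this reproduces the numbers above in R:</p>
<pre><code>avg <- c(0.866, 1.0005, 1.403, 0.660)  # quarterly averages from the table
adjustment <- (4 - sum(avg)) / 4       # (4 - 3.9295) / 4 = 0.0176
seasonal_factor <- avg + adjustment
round(seasonal_factor, 3)              # 0.884 1.018 1.421 0.678
</code></pre>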
<p>I saw from other resources that they use "seasonal index" instead of "seasonal factor", normalizing the values. Besides that, they also mention X11, X12, ARIMA, and so on. What I would like to know is: based on the example above, what is this method called?</p>
| 74,242 |
<p>I have a zero-inflated response variable I am trying to predict. I am facing a few issues applying different regression models that should correct for this.</p>
<p>Here is a summary of my 10,000-observation data frame:</p>
<pre><code> e_weight left_size right_size time_diff
Min. :0.000 Min. : 1.000 Min. : 1.000 Min. : 737
1st Qu.:0.000 1st Qu.: 1.000 1st Qu.: 1.000 1st Qu.: 4669275
Median :0.000 Median : 3.000 Median : 3.000 Median : 12263474
Mean :0.022 Mean : 6.194 Mean : 5.469 Mean : 21000288
3rd Qu.:0.000 3rd Qu.: 5.000 3rd Qu.: 5.000 3rd Qu.: 25420278
Max. :3.000 Max. :792.000 Max. :792.000 Max. :155291532
</code></pre>
<p>Here are the frequency counts for my 3 variables:
<img src="http://i.stack.imgur.com/1yIvx.jpg" alt="enter image description here">
Indeed, I have a problem with zeros...</p>
<p>I tried, respectively, a zero-inflated Poisson regression (<code>zeroinfl</code>) and a negative binomial regression (<code>glm.nb</code>):</p>
<pre><code>library(pscl)
m1 <- zeroinfl(e_weight ~ left_size*right_size | time_diff, data = s)
summary(m1)
# Call:
# zeroinfl(formula = e_weight ~ left_size * right_size | time_diff, data = s)
#
# Pearson residuals:
# Min 1Q Median 3Q Max
# -1.4286 -0.1460 -0.1449 -0.1444 19.6054
#
# Count model coefficients (poisson with log link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -3.8826386 0.0696970 -55.707 < 2e-16 ***
# left_size 0.0022261 0.0006195 3.594 0.000326 ***
# right_size 0.0033622 NA NA NA
# left_size:right_size 0.0001715 NA NA NA
#
# Zero-inflation model coefficients (binomial with logit link):
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 1.753e+01 6.011e+00 2.916 0.00354 **
# time_diff -3.342e-04 1.059e-06 -315.773 < 2e-16 ***
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#
# Number of iterations in BFGS optimization: 28
# Log-likelihood: -1053 on 6 Df
# Warning message:
# In sqrt(diag(object$vcov)) : NaNs produced
</code></pre>
<p>and </p>
<pre><code>library(MASS)
m2 <- glm.nb(e_weight ~ left_size*right_size + time_diff, data = s)
</code></pre>
<p>which gives</p>
<pre><code>There were 22 warnings (use warnings() to see them)
warnings()
Warning messages:
1: glm.fit: algorithm did not converge
...
21: glm.fit: algorithm did not converge
22: In glm.nb(e_weight ~ left_size * right_size + time_diff, ... :
alternation limit reached
</code></pre>
<p>If I ask for a summary of the second model:</p>
<pre><code>summary(m2)
# Call:
# glm.nb(formula = e_weight ~ left_size * right_size + time_diff,
# data = s, init.theta = 0.1372733321, link = log)
#
# Deviance Residuals:
# Min 1Q Median 3Q Max
# -3.4645 -0.2331 -0.1885 -0.1266 2.7669
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -3.239e+00 1.090e-01 -29.699 < 2e-16 ***
# left_size -4.462e-03 1.835e-03 -2.431 0.015047 *
# right_size -7.144e-03 2.118e-03 -3.374 0.000742 ***
# time_diff -6.013e-08 8.584e-09 -7.005 2.48e-12 ***
# left_size:right_size 4.691e-03 2.749e-04 17.068 < 2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for Negative Binomial(0.1374) family taken to be 1)
#
# Null deviance: 1106.5 on 9999 degrees of freedom
# Residual deviance: 958.5 on 9995 degrees of freedom
# AIC: 1967.2
#
# Number of Fisher Scoring iterations: 12
#
#
# Theta: 0.1373
# Std. Err.: 0.0223
# Warning while fitting theta: alternation limit reached
#
#
# 2 x log-likelihood: -1955.2260
</code></pre>
<p>Both models also have very low p-values in the Breusch-Pagan test for heteroskedasticity:</p>
<pre><code>library(lmtest)  # for bptest()
bptest(m1)
#
# studentized Breusch-Pagan test
#
# data: m1
# BP = 244.832, df = 3, p-value < 2.2e-16
#
bptest(m2)
#
# studentized Breusch-Pagan test
#
# data: m2
# BP = 277.2589, df = 4, p-value < 2.2e-16
</code></pre>
<p>How should I approach this regression? Would it make sense to simply add 1 to all values in my data frame before running any regression?</p>
| 74,243 |
<p>Is there a way I can model a discrete-time random walk with random variance? The stochastic volatility models I have found all seem to be in continuous time and to assume the volatility is normal.</p>
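<p>For concreteness, something like the following discrete-time sketch is what I have in mind (the log-AR(1) volatility process and all parameter values are just placeholders I made up):</p>
<pre><code>set.seed(42)
n <- 1000
phi <- 0.95; sigma_eta <- 0.2            # placeholder volatility-persistence parameters

log_sigma <- numeric(n)
for (t in 2:n)                            # log-volatility follows an AR(1)
  log_sigma[t] <- phi * log_sigma[t - 1] + sigma_eta * rnorm(1)

x <- cumsum(exp(log_sigma) * rnorm(n))    # random walk whose step variance is itself random

plot(x, type = "l", xlab = "t", ylab = "x_t")
</code></pre>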
| 74,244 |
<p>I am looking at setting up some website load testing scripts and need some help in finding a formula to estimate how many concurrent users are browsing a website at peak times, based on common metrics such as visits, average page views per visit, and average visit duration.</p>
<p>For example:</p>
<p>Peak visitors per hour: 1,000</p>
<p>Average page views per visitor: 3</p>
<p>Average time per visit: 5 minutes</p>
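<p>My own rough back-of-the-envelope reading of these numbers (essentially Little's law, if I am applying it correctly) would be concurrent users $\approx$ arrival rate $\times$ average visit duration, i.e. $\frac{1000}{60\ \text{min}} \times 5\ \text{min} \approx 83$ concurrent visitors at peak, but I am not sure this is the right way to think about it.</p>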
<p>Should I be considering any other stats? Thanks in advance!</p>
| 37,001 |
<p>I'm using <code>heatmap.2</code> to cluster my data, using the centroid method for clustering and the maximum method for calculating the distance matrix:</p>
<pre><code>library("gplots")
library("RColorBrewer")
test <- matrix(c(0.96, 0.07, 0.97, 0.98,
0.50, 0.28, 0.29, 0.77,
0.08, 0.96, 0.51, 0.51,
0.14, 0.19, 0.41, 0.51), ncol=4, byrow=TRUE)
colnames(test) <- c("Exp1","Exp2","Exp3","Exp4")
rownames(test) <- c("Gene1","Gene2","Gene3", "Gene4")
test <- as.table(test)
mat <- data.matrix(test)
heatmap.2(mat, dendrogram="row", Rowv=TRUE, Colv=FALSE,
distfun=function(x) dist(x, method='maximum'),
hclustfun=function(x) hclust(x, method='centroid'),
xlab=NULL, ylab=NULL, key=TRUE, keysize=1, trace="none",
density.info=c("none"), margins=c(6, 12), col=bluered)
</code></pre>
<p>This gives a heatmap with inversions in the cluster tree, which is inherent to the centroid method. A solution to avoid inversions is to use the Euclidean or the city-block distance, and indeed if you change maximum to Euclidean in the above example the inversions are gone (for reference see chapter 4.1.1 in <a href="http://bonsai.hgc.jp/~mdehoon/software/cluster/manual/Hierarchical.html" rel="nofollow">this link</a>).</p>
<p>Now as for my problem: when I use my actual data instead of this example table, the inversions are still there when I change to Euclidean. The R code is exactly the same as in this example, only the data is different. When I use <code>cluster 3.0</code> and <code>java treeview</code> with the Euclidean and centroid methods there are no inversions in my data, as expected. So why does R give inversions? The theory and other software say it shouldn't.</p>
<p><em><strong>Update:</strong></em> This is an example where changing maximum to Euclidean does not fix the inversions (as opposed to the above example, where it did fix them):</p>
<pre><code>library("gplots")
library("RColorBrewer")
test <- matrix(c(0.96, 0.07, 0.97, 0.98, 0.99, 0.50,
0.28, 0.29, 0.77, 0.78, 0.08, 0.96,
0.51, 0.51, 0.55, 0.14, 0.19, 0.41,
                 0.51, 0.40, 0.97, 0.98, 0.99, 0.50), ncol=6, byrow=TRUE)
colnames(test) <- c("Exp1", "Exp2", "Exp3", "Exp4", "Exp5", "Exp6")
rownames(test) <- c("Gene1", "Gene2", "Gene3", "Gene4")
test <- as.table(test)
mat <- data.matrix(test)
heatmap.2(mat, dendrogram="row", Rowv=TRUE, Colv=FALSE,
distfun=function(x) dist(x, method='maximum'),
hclustfun=function(x) hclust(x, method='centroid'),
xlab=NULL, ylab=NULL, key=TRUE, keysize=1, trace="none",
density.info=c("none"), margins=c(6, 12), col=bluered)
</code></pre>
| 74,245 |
<p>When finding the maximum margin separator in the primal form we have the quadratic program</p>
<p>$$min\frac{1}{2}||\theta||^2$$
$$\text{ subject to: } y^{(t)}(\theta \cdot x^{(t)} + \theta_0) \geq 1, \ t=1,...,n,$$</p>
<p>saying basically to find the maximum margin separator. The margin size will be:</p>
<p>$$\frac{1}{||\theta||}.$$</p>
<p>Does the size of the margin change if we change the constants of the constraint?</p>
<p>That is, if we have</p>
<p>$$\text{ subject to: } y^{(t)}(\theta \cdot x^{(t)} + \theta_0) \geq k, \ t=1,...,n,$$</p>
<p>instead of 1?</p>
<p>If it does not matter, why doesn't it matter? How is it an equivalent formulation regardless of the exact constant in the constraint?</p>
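<p>My own (possibly flawed) attempt at the algebra, using nothing beyond the definitions above: writing $\theta = k\tilde{\theta}$ and $\theta_0 = k\tilde{\theta}_0$,</p>
<p>$$ y^{(t)}(\theta \cdot x^{(t)} + \theta_0) \geq k \iff y^{(t)}(\tilde{\theta} \cdot x^{(t)} + \tilde{\theta}_0) \geq 1, \qquad \tfrac{1}{2}\|\theta\|^2 = \tfrac{k^2}{2}\|\tilde{\theta}\|^2, $$</p>
<p>so minimizing over the $\geq k$ constraints gives the same $\tilde{\theta}$ as the original $\geq 1$ problem, the decision boundary $\theta \cdot x + \theta_0 = 0$ is unchanged, and the margin in the new formulation, $\frac{k}{\|\theta\|} = \frac{1}{\|\tilde{\theta}\|}$, equals the original margin. Is this the right way to see it?</p>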
| 74,246 |
<p>This is not a question about implementation:</p>
<p>I have a GLMER model with a significant three-way interaction. None of the two-way interactions or main effects are significant. I accept this model and leave all lower-order effects in the model, but want to plot this three-way interaction: x, y, and then z as colour in a ggplot. I predict responses of y based on data I generate for y and z using said model, but only using the significant fixed effects (the intercept and the three-way interaction), and plot x, y, z. Does this seem like an alright thing to do in order to communicate what the interaction means, assuming I keep the lower-order, non-significant effects in the model?</p>
<p>Thanks!</p>
| 74,247 |
<p>Is it possible to fit a data curve to another data curve?</p>
<p>Please see this plot</p>
<p><img src="http://i.stack.imgur.com/2iHKM.jpg" alt="enter image description here"></p>
<p>I do not have a model for the black data curve, nor do I want to model it. But I want to see how well the red data curve fits/matches the black one. Is this possible? I am looking for a non-parametric solution, or a solution that can be arrived at even without plotting the two curves.</p>
| 74,248 |
<p>The final theorem in Chapter 19 of Meyn and Tweedie's <a href="http://probability.ca/MT/" rel="nofollow">Markov Chains and Stochastic Stability</a> tells us that if the mean inter-arrival time $\lambda$ of a GI/G/1 queue is greater than its mean service time $\mu$, then the queue is positive Harris recurrent.</p>
<blockquote>
<b>Question</b>: What stability results are known for a continuous-time GI/G/1 queue for which $\lambda=\mu$, and in which the variances of the inter-arrival time and service time random variables are positive? Is it known that such a queue cannot be positive Harris recurrent? Regular?
</blockquote>
<p>The Lyapunov function used in Meyn-Tweedie, which is an expected hitting time, is not going to work for $\lambda=\mu$.</p>
<p>I searched in Morozov and Delgado's <a href="http://link.springer.com/article/10.1134%2FS0005117909120066#page-1" rel="nofollow">survey paper</a>, which claims that "stability conditions of the classical GI/G/m queue are well known", and in other surveys, but found no mention of the $\lambda=\mu$ case there. </p>
| 37,006 |
<p>I have done an AMOVA analysis on mtDNA sequences, partitioned into 14 populations (with a variable number of individuals in each population), and I put all the populations in the same group.</p>
<p>From the AMOVA analysis I obtained a significant value of Fst among populations, but when I try to compute pairwise Fst between the populations I obtain only 3 significant pairs and the others are not significant.</p>
<p>Is this possible?
Maybe the problem is that I have to redefine my populations?</p>
<p>Thank you</p>
| 34,791 |
<p>I have a situation where we are detecting anomalies based on data derived from the table data.</p>
<p>As an example, I have data on registered individuals spending time on the portal. Based on this, I have some logic driven by the interval between successive log-ins/activity on the portal. We have used an arbitrary duration as the interval and it seems to work for us. There are a bunch of other user demographics (age, location, interests, etc.) that have been ignored.</p>
<p>I tried various ML algorithms (NB, decision tree, etc.) but am getting error rates in excess of 40% in spite of large training data.</p>
<p>I wanted to check:
a) should I explicitly create variables like "interval between logins" and "number of activities per hour" so that they can be used by the algorithms?
b) should I create class variables, like ">2 hrs" / "1-2 hrs" for the login interval, and bins such as '<5', '5-15', etc. for the number of activities per login-hour?</p>
<p>Broadly, should I provide (a) and (b) explicitly... and likewise should I signal a few other variables that I believe should be used?</p>
| 74,249 |
<blockquote>
<p><img src="http://i.stack.imgur.com/BX2wm.jpg" alt="enter image description here"></p>
</blockquote>
<p><strong>My attempt at making sense of the problem:</strong><br>
The problem provides us with the sum of squares (SS; I believe that's the $18.1$). We can use the SS along with the number of samples $(21)$ to get the standard error. Then somehow use this information to find the probability of having a worse fit?<br>
After more research I believe the value given as $18.1$ is mean square deviation or sample variance and not the sum of squares. If this is true, then an <em>F</em>-test certainly makes sense for part <strong>b</strong>. </p>
<p>Could someone guide me in the right direction and show me how to attempt this question?<br>
Any help would be appreciated.</p>
| 74,250 |
<p>To check whether unilateral pairs (defined below*) are coordinated (i.e. move together) in a flock of N coordinated individuals, we generate a hypothetical (null) distribution of a certain focal parameter of our system (the frequency with which 2 individuals are nearest neighbors along a track). The null model is simulated by shuffling the empirical data in a way that should break down coordination among pairs (changing the initial conditions and the fluctuations from the mean path along the track), repeated 5000 times. We then calculate the very same parameter for all N(N-1) unilateral pairs in the empirical data. For each unilateral pair, the null distribution should indicate the probability of getting the empirical value (or higher) of the focal parameter if the flock moves without coordination among pairs. But since we check all N(N-1) possible unilateral pairs, we are making multiple comparisons. We would greatly appreciate suggestions on how to correct for multiple comparisons in this case.</p>
<ul>
<li>unilateral pairs - (A->B) or (B->A) but not necessarily A<->B (this is a bilateral pair)</li>
</ul>
<p>Best wishes,</p>
<p>Ran</p>
| 37,011 |
<p>I have been using JAGS but I am not quite sure how it actually simulates its values. I need to know, in a general sense, what's going on in the background. </p>
<p>Thanks for the help</p>
| 37,013 |
<p>This is my attempt to find the discrepancy in the matching process between the <code>R</code> package <code>"Matching"</code> and the user-written function <code>"psmatch2"</code> in <code>Stata</code><a href="http://stats.stackexchange.com/questions/52889/matching-results-completely-different-in-r-and-stata"> [Details]</a>. </p>
<p>I am trying to find out how the Mahalanobis distance is computed in <code>psmatch2</code> (a user-written function for <code>Stata</code>) and whether it is consistent with the Mahalanobis distance computed in <code>R</code> and that computed using matrices in <code>Stata</code> [Details for Stata are available <a href="http://www.stata.com/statalist/archive/2010-10/msg01294.html" rel="nofollow">here]</a>.</p>
<p>Following is the code for computing <a href="http://stat.ethz.ch/R-manual/R-patched/library/stats/html/mahalanobis.html" rel="nofollow">Mahalanobis</a> distance in R. The data I used is nuclearplants data from <code>optmatch (R package)</code>. </p>
<pre><code> library (optmatch)# for the dataset nuclearplants
data(nuclearplants) # pr is a treatment variable
head(nuclearplants)
cost date t1 t2 cap pr ne ct bw cum.n pt
H 460.05 68.58 14 46 687 0 1 0 0 14 0
I 452.99 67.33 10 73 1065 0 0 1 0 1 0
A 443.22 67.33 10 85 1065 1 0 1 0 1 0
J 652.32 68.00 11 67 1065 0 1 1 0 12 0
B 642.23 68.00 11 78 1065 1 1 1 0 12 0
K 345.39 67.92 13 51 514 0 1 1 0 3 0
library(foreign)
write.dta(nuclearplants,"datanp.dta")
covar<-cbind(nuclearplants$t1,nuclearplants$t2) #co-variates t1 and t2
X<-covar
#compute Mahalanobis distance for the treatment (pr=1) in row 7 and control (pr=0) in row 3
kk<-solve(cov(X))
kk
[,1] [,2]
[1,] 0.11362947 0.01747102
[2,] 0.01747102 0.01194136
mahalanobis(X[7,], center=X[3,], cov=kk, inverted=TRUE)
[1] 12.63674
#compute Mahalanobis distance for the treatment (pr=1) in row 2 and control (pr=0) in row 3
mahalanobis(X[2,], center=X[3,], cov=cov(X))
[1] 1.719555
</code></pre>
<p>###########<code>Perform above steps in Stata using matrix</code>############</p>
<pre><code>I computed the covariance matrix in Stata as follows:
mat accum cov = t1 t2, dev noc
mat covinv=inv(cov/(r(N)-2)) # psmatch2 function divides by N-2
mat list covinv
symmetric covinv[2,2]
t1 t2
t1 .10996401
t2 .01690744 .01155615
#This is different from the inverse of the covariance matrix computed above in R. However, dividing by N-1 gives the same answer (I am not sure why we have to divide by N-2 in psmatch2)
mat covinv=inv(cov/(r(N)-1))
mat list covinv
symmetric covinv[2,2]
t1 t2
t1 .11362947
t2 .01747102 .01194136
# For now I stick to one with N-1
mean(t1)
mean(t2)
Next, for the treatment 3 we have X[3,]: -3.75 22.625 (this is obtained as 10 - mean(t1) and 85 - mean(t2))
and for the control, X[7,]: -1.75 -12.375 (this is obtained as 12 - mean(t1) and 50 - mean(t2))
Now the Mahalanobis distance: (X[7,]-X[3,])*covinv*(X[7,]-X[3,]) = 12.2291 (This will be the same as in R if we use N-1 in the variance computation in psmatch2)
Similarly, for treatment 3 and control 2 this is
(X2[2,]-X2[3,])*covinv*(X2[2,]-X2[3,]) = 1.664086 (This will be the same as in R if we use N-1 in the variance computation in psmatch2.)
</code></pre>
<p>So, up to now, I showed that results from <code>R</code> and <code>Stata</code> matches if we stick to computing using variance with N-1 instead of N-2 (as in psmatch2). However, when I run the <code>psmatch2</code> </p>
<pre><code>psmatch2 pr, mahalanobis(t1 t2)
</code></pre>
<p>and obtain the Mahalanobis distance. (This is given as <code>_mdif</code> in <code>psmatch2</code>: in the case of one-to-one Mahalanobis matching, for every treatment observation it stores the absolute distance to its matched control in terms of the Mahalanobis metric.) </p>
<pre><code>Mahalanobis distance for the treatment (pr=1) in row 7 and control (pr=0) in row 3
2.26469
</code></pre>
<p>Note: It doesn't provide the Mahalanobis distance for the treatment (pr=1) in row 2 and the control (pr=0) in row 3, because it provides the distance only for the matched observations.</p>
<p>My question is: why is the Mahalanobis distance computed using <code>psmatch2</code> (2.26469) different from that computed using <code>R</code> (12.63674) and using matrices in <code>Stata</code> (12.63674 when the variance is computed using N-1)?</p>
| 74,251 |
<p>A Student's t-distributed rv $X$ has a characteristic function but no moment generating function. I wonder: if $\mathrm{cf}(X)=E[e^{itX}]$, why can't we take $t=-iu$ to get the mgf $E[e^{uX}]$? (This question may be very silly...)</p>
<p>If we cannot know the mgf of $X$, is there some accurate numerical way to evaluate $E[e^{X}]$, i.e., the value of the mgf at $u=1$?</p>
| 37,015 |
<p>From <a href="http://en.wikipedia.org/wiki/Completeness_%28statistics%29" rel="nofollow">Wikipedia</a>:</p>
<blockquote>
<p>The statistic $s$ is said to be complete for the distribution of $X$ if for every measurable function $g$ (which must be independent of the parameter $\theta$) the following implication holds:
$$
E(g(s(X))) = 0 \;\forall\, \theta \text{ implies that } P_\theta(g(s(X)) = 0) = 1 \;\forall\, \theta.
$$
The statistic $s$ is said to be boundedly complete if the implication holds for all bounded functions $g$.</p>
</blockquote>
<p>I read and agree with <a href="http://stats.stackexchange.com/a/44135/1005">xi'an and phaneron</a> that a complete statistic means that "there can only be one unbiased estimator based on it".</p>
<ol>
<li><p>But I don't understand what Wikipedia says at the beginning of the same article:</p>
<blockquote>
<p>In essence, it (completeness is a property of a statistic) is a condition which ensures that the parameters of the probability
distribution representing the model can all be estimated on the
basis of the statistic: it ensures that <strong>the distributions</strong>
corresponding to different values of the parameters are distinct.</p>
</blockquote>
<ul>
<li><p>in what sense (and why) does completeness "ensures that the distributions corresponding to
different values of the parameters are distinct"? is "the distributions" the distributions of a complete statistic?</p></li>
<li><p>in what sense (and why) does completeness "ensures that
the parameters of the probability distribution representing the
model can all be estimated on the basis of the statistic"?</p></li>
</ul></li>
<li><p>[optional: What does "bounded completeness" mean, compared to completeness?]</p></li>
</ol>
<p>Thanks and regards!</p>
| 74,252 |
<p>I have a R data frame like this:</p>
<pre><code>structure(list(Mash_pear = c(0.328239947270445, 0.752207607551684,
0.812118104861163, 0.640824971449627, 0.615568052052443, 0.546635339103089,
0.557460706464288, 0.650480192893698, 0.418044504894929, 0.52962586938499
), tRap_pear = c(0.0350096175177328, 0.234255507711743, 0.23714999195134,
0.185536020521134, 0.191585098617356, 0.201402054387186, 0.220911538536031,
0.216072802572045, 0.132247101763063, 0.172753098431029), Beeml_pear = c(0.179209909971615,
0.79129167285928, 0.856908302056589, 0.729078080521886, 0.709346164378725,
0.669599784720647, 0.585348196746785, 0.639355942917055, 0.544909349368496,
0.794652394149651), Mash_pear50 = c(0.192474082559755, 0.679726904159742,
0.778564545349054, 0.573745352397321, 0.56633658385284, 0.472559997318901,
0.462635414367878, 0.562128414492567, 0.354624921832056, 0.64532681437697
), labels = c("Aft1", "Alx3", "Alx4", "Arid3a", "Arid3a", "Arid3a",
"Arid3a", "Arid5a", "Arid5a", "Aro80"), fam = c("AFT", "Homeo",
"Homeo", "BRIGHT", "BRIGHT", "BRIGHT", "BRIGHT", "BRIGHT", "BRIGHT",
"Zn2Cys6"), pwmlength = c("21", "17", "17", "17", "17", "17",
"17", "14", "14", "21")), .Names = c("Mash_pear", "tRap_pear",
"Beeml_pear", "Mash_pear50", "labels", "fam", "pwmlength"), row.names = c("Aft1",
"Alx3_3418.2", "Alx4_1744.1", "Arid3a_3875.1_v1_primary", "Arid3a_3875.1_v2_primary",
"Arid3a_3875.2_v1_primary", "Arid3a_3875.2_v2_primary", "Arid5a_3770.2_v1_primary",
"Arid5a_3770.2_v2_primary", "Aro80"), class = "data.frame")
</code></pre>
<p>The first 4 columns are my correlations, which I want to test for significant differences. These 4 columns correspond to methods to estimate transcription factor binding to the DNA. Now I want to know: which method performs best? I tried a paired t-test and an unpaired t-test, which seem the most suitable to me. Now I am wondering how to interpret the test, and whether there are other ways to test which method is better. </p>
<p>Data.frame for readability:</p>
<pre><code> Mash_pear tRap_pear Beeml_pear Mash_pear50 labels fam pwmlength
Aft1 0.3282399 0.03500962 0.1792099 0.1924741 Aft1 AFT 21
Alx3_3418.2 0.7522076 0.23425551 0.7912917 0.6797269 Alx3 Homeo 17
Alx4_1744.1 0.8121181 0.23714999 0.8569083 0.7785645 Alx4 Homeo 17
Arid3a_3875.1_v1_primary 0.6408250 0.18553602 0.7290781 0.5737454 Arid3a BRIGHT 17
Arid3a_3875.1_v2_primary 0.6155681 0.19158510 0.7093462 0.5663366 Arid3a BRIGHT 17
Arid3a_3875.2_v1_primary 0.5466353 0.20140205 0.6695998 0.4725600 Arid3a BRIGHT 17
Arid3a_3875.2_v2_primary 0.5574607 0.22091154 0.5853482 0.4626354 Arid3a BRIGHT 17
Arid5a_3770.2_v1_primary 0.6504802 0.21607280 0.6393559 0.5621284 Arid5a BRIGHT 14
Arid5a_3770.2_v2_primary 0.4180445 0.13224710 0.5449093 0.3546249 Arid5a BRIGHT 14
Aro80 0.5296259 0.17275310 0.7946524 0.6453268 Aro80 Zn2Cys6 21
</code></pre>
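<p>What I have tried so far looks roughly like this (calling the data frame above <code>df</code>; the particular pair of columns is just an example):</p>
<pre><code># paired: each row is the same transcription factor measured by both methods
t.test(df$Mash_pear, df$tRap_pear, paired = TRUE)
# unpaired version, for comparison
t.test(df$Mash_pear, df$tRap_pear, paired = FALSE)
</code></pre>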
| 74,253 |
<p>My project is about purchasing power parity (PPP). I am checking whether the real exchange rate of Canadian dollar(CAD)/US dollar(USD), Japanese Yen(JPY)/USD and Great Britain Pound(GBP)/USD has a unit root with Augmented Dickey Fuller (ADF). </p>
<ul>
<li>H0 is - there is a unit root for the series and real ex. rate will follow the random walk and is non-stationary </li>
<li>H1 is - there is no unit root and the real exchange rate is stationary and PPP holds. </li>
</ul>
<p>I have created a diagram of the real exchange rates over the sample period; unfortunately I cannot attach photos, due to the site regulations (I don't have 10 reputation yet). Anyway, my JPY exchange rate is highly volatile, fluctuating between 1.7 and 2.3 over the period; GBP gives a similar line, only it fluctuates between -0.3 and 0.1; CAD is the one that seems stationary: it is relatively flat, moving between -0.1 and 0.3. </p>
<p>All the critical values are the same for all three results, and only t-statistic and the p-values are different. 1% -3.442; 5% -2.871; 10% -2.570</p>
<p>In Cad/USD t-stat is -1.568 and p-value 0.4997;
in JPY/USD t-stat -2.551 and p-value 0.1036;
in GBP/USD t-stat is -3.410 and p-value 0.0106</p>
<p>It seems that as the t-statistic becomes larger in magnitude (more negative), the p-value decreases; moreover, my H0 is rejected at the 5% and 10% levels for GBP but is not rejected at 1%. I am really puzzled about how to interpret that. (Maybe it's because it fluctuates below zero?)</p>
<p>My supervisor wants me to explain the connection between the t-statistic and the p-value. If possible, could you also explain in simple English what a unit root is? Mathematical explanations were of no help. I have also read the article <a href="http://stats.stackexchange.com/questions/29121/intuitive-explanation-of-unit-root">Intuitive explanation of unit root</a> already; unfortunately, I still could not get the main idea of the unit root.</p>
| 37,018 |
<p>Statistical semi-novice question:</p>
<p>Just following up on <a href="http://stats.stackexchange.com/questions/53432/can-ancova-disagree-with-multiple-regression">this question</a>: given that the three types of ANOVA exist for an unbalanced design, and that (presumably) you don't necessarily know a priori which one is suitable for your data (because, say, you do not know whether an interaction is present), my understanding is that a comparison between the types may be necessary.</p>
<p>Isn't this equivalent to stepwise regression - and therefore problematic?</p>
<p>I'm wondering whether a better approach for an unbalanced design would be to use a mixed model?</p>
| 37,019 |
<p>I devised a distance function similar to this form</p>
<p>$d(x,y) = \sum_{i = 1}^{n-1} b(x_i, y_i,x_{i+1}, y_{i+1}) $ </p>
<p>with</p>
<p>$b(x_i, y_i, x_{i+1}, y_{i+1}) = \begin{cases} 0 & \mbox{if } x_i \leq 0 \vee y_i \leq 0 \vee x_{i+1} \leq 0 \vee y_{i+1} \leq 0 \\ \alpha \cdot \frac{x_{i+1}}{y_{i+1}} + (x_i - y_i)^2 & \mbox{otherwise,} \end{cases}$<br>
where $\alpha$ is a real number > 0.</p>
<p>And now I want to prove (or disprove) that $e^{-d(x,y)}$ is a kernel which I for example could use in SVMs. </p>
<p>I have read about Mercer's condition, positive semi definiteness and about constructing kernels from existing kernels, but I can't transfer it to this kind of function. Especially, how do I deal with the if-cases in my definition of b?</p>
<p>Any help would be greatly appreciated :)</p>
| 74,254 |
<p>I hope that this question does not get marked "as too general" and hope a discussion gets started that benefits all.</p>
<p>In statistics, we spend a lot of time learning large sample theories. We are deeply interested in assessing asymptotic properties of our estimators including whether they are asymptotically unbiased, asymptotically efficient, their asymptotic distribution and so on. The word asymptotic is strongly tied with the assumption that $n \rightarrow \infty$. </p>
<p>In reality, however, we always deal with finite $n$. My questions are:</p>
<p>1) What do we mean by a large sample? How can we distinguish between small and large samples?</p>
<p>2) When we say $n \rightarrow \infty$, do we literally mean that $n$ should go to $\infty$?</p>
<p>E.g., for the binomial distribution, $\bar{X}$ needs about $n = 30$ to converge to the normal distribution under the CLT. Should we have $n \rightarrow \infty$, or in this case by $\infty$ do we mean 30 or more?!</p>
<p>3) Suppose we have a finite sample, and suppose that we know everything about the asymptotic behavior of our estimators. So what? Suppose that our estimators are asymptotically unbiased; do we then have an unbiased estimate for our parameter of interest in our finite sample, or does it mean that if we had $n \rightarrow \infty$, then we would have an unbiased one?</p>
<p>As you can see from the questions above, I'm trying to understand the philosophy behind "Large Sample Asymptotics" and to learn why we care? I need to get some intuitions for the theorems I'm learning.</p>
<p>Your help is greatly appreciated.
Thanks.</p>
| 37,021 |
<p>Does anyone know of a good compendium or catalog of compound distributions, or finite mixture representations of those distributions? </p>
<p>I am trying to find out to what extent the common multi-parameter distributions and their generalized forms can be built up from single-parameter distributions through compounding. The ones I have found so far cover only 10 or 15 distributions, and I still cannot tell if it is possible to build up e.g. most four-parameter distributions from one-parameter distributions by successive steps of compounding.</p>
| 38,773 |
<p>I came across a study in which patients, who were all over 50, were pseudo-randomized by birth year. If the birth year was an even number, usual care; if an odd number, intervention.</p>
<p>It's easier to implement, it's harder to subvert (it's easy to check what treatment a patient should have received), it's easy to remember (the assignment went on for several years). But still, I don't like it, I feel like proper randomization would have been better. But I can't explain why. </p>
<p>Am I wrong for feeling that, or is there a good reason to prefer 'real' randomization?</p>
| 74,255 |
<p>The question states: </p>
<blockquote>
<p>Consider a set of random variables $X_i$, where $i=1,...n$. Each $X_i$ is
normally distributed with mean $0$ and variance $1$, i.e. $X_i$ are $\mathcal N(0,1)$.
What is the mean and the variance of the random variable $Y$, where
$Y=X_1+...+X_n$.</p>
</blockquote>
<p>How do I do this?</p>
| 37,023 |
<p>This is a pretty basic question, but I can't find an answer by searching for different statements of the same problem.</p>
<p>Is there a straightforward way to test if a regression parameter is different from a non-zero value in (non-) linear regression? I can only think of one way, but it seems too roundabout. Here's an example:</p>
<pre><code>INPUT PROGRAM.
LOOP #I = 1 TO 10.
COMPUTE X = #I.
COMPUTE Y = RV.NORMAL(5,1)+x*RV.NORMAL(4,1.5).
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
dataset name exampleData WINDOW=front.
EXECUTE.
</code></pre>
<p>Say I want to test if the slope from linear regression deviates from 3. The only way I know how in SPSS is NLR:</p>
<pre><code>MODEL PROGRAM b0=5 b1=0.
COMPUTE PRED_=b0 + (3+b1)*x.
NLR Y
/OUTFILE='C:\temp\SPSSFNLR.TMP'
/PRED PRED_
/CRITERIA SSCONVERGENCE 1E-8 PCON 1E-8.
</code></pre>
<p>This will give you an estimate, std. error, and 95% CI for b1, but <em>no p value</em>. I also get a weird feeling that this method isn't statistically "correct". Does SPSS have a built-in test for this without resorting to NLR? Is there one that gives a p value?</p>
<p>What if my regression model is non-linear? I suppose I can use the same approach as above, but how can I get a p value?</p>
<p>EDIT: I would like to clarify that I am asking about more than just SPSS methodology. Although I'd love to get an SPSS-specific answer, I am also looking for a more general statistical method: a) how to get a p value for a regression parameter from the estimate, std. error, and degrees of freedom, and b) whether there's a statistical test designed to test deviation of a regression parameter from a non-zero constant.</p>
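<p>To be concrete about part (a), the kind of calculation I have in mind is the usual Wald-type test against the constant (sketched in R only because it is compact; the numbers are made up):</p>
<pre><code># hypothetical estimate, its std. error, test value 3, and residual degrees of freedom
b <- 4.2; se <- 0.5; df <- 8
t_stat  <- (b - 3) / se
p_value <- 2 * pt(abs(t_stat), df, lower.tail = FALSE)  # two-sided p value
</code></pre>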
| 44,244 |
<p>I've done an experiment where I recorded a positive or negative result from a cell.
My data are set out in tables of counts of events and trials, but I don't know how to do statistical analysis on binary data; I need to test for significance between the groups I tested.
Can I convert to percentages and do a stats test on those, treating them as continuous data?</p>
| 37,024 |
<p>So the question states that I'm trying something with an 85% chance of success; if I don't succeed, I try again until I do. What are the mean and variance of the number of tries necessary until I succeed?</p>
<p>Thanks!</p>
| 46,169 |
<p>As a programmer, I am used to a vibrant documentation system with exhaustive references and tutorials online. Does such a system exist for Stata? Where do people go for quick Stata questions (besides Cross Validated)? How did people here learn Stata?</p>
| 48,738 |
<p>This may be a very simple question, but I am not sure about my logic.</p>
<p>I have a standard point hypothesis testing scenario, where I collect a sequence of $n$ independent observations $\{x_1,\ldots,x_n\}$ and attempt to classify them as either generated from the distribution $P(x_1,\ldots,x_n;\theta=\theta_0)$ or $P(x_1,\ldots,x_n;\theta=\theta_1)$, where $\theta_0$ and $\theta_1$ are known. Using the knowledge of the joint probability density function $f(x_1,\ldots,x_n;\theta)$, I can construct a <a href="http://en.wikipedia.org/wiki/Likelihood-ratio_test" rel="nofollow">likelihood ratio test</a>, which is optimal in the <a href="http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma" rel="nofollow">Neyman-Pearson</a> sense. Let's denote the likelihood ratio test statistic by $\Lambda(x_1,\ldots,x_n)$.</p>
<p>However, when either hypothesis $H_0$ or $H_1$ is true, the test statistic converges in probability to the same constant, i.e. $\Lambda(x_1,\ldots,x_n)\xrightarrow{\mathcal{P}}\lambda$. Formally, for any $\epsilon>0$, $\delta>0$, there exists $n_0$ such that for all $n\geq n_0$, the following equations hold: </p>
<p>$$\tag{1}P(|\Lambda(x_1,\ldots,x_n)-\lambda|>\epsilon|\theta=\theta_0)<\delta$$
$$\tag{2}P(|\Lambda(x_1,\ldots,x_n)-\lambda|>\epsilon|\theta=\theta_1)<\delta$$</p>
<p>with $\lambda$ a known constant.</p>
<p>Does this mean that it's impossible to classify between $H_0$ and $H_1$, even as one obtains an increasing number of observations? I think it does, and here is my logic. Fix $\epsilon$ and $\delta$. Suppose our $n\geq n_0$ and suppose we would like to upper bound the probability of Type I error by $\delta$. We thus select a threshold $t$ for the test such that</p>
<p>$$P(\Lambda(x_1,\ldots,x_n)>t|\theta=\theta_0)<\delta$$</p>
<p>By $(1)$ this implies that $t\geq\lambda+\epsilon$. However, by $(2)$, this also implies that</p>
<p>$$P(\Lambda(x_1,\ldots,x_n)\leq t|\theta=\theta_1)\geq 1-\delta$$</p>
<p>resulting in the probability of Type II error being lower-bounded close to one. I think this means that no matter how you set your threshold, the best test is only as good as deciding between the two hypotheses by flipping a coin (a biased coin if there are unequal priors).</p>
<p>Is that correct?</p>
| 74,256 |
<p>I have the survival dataset of a population with a special disease. I'd like to compare this population with the general population to see whether this population has a decreased life expectancy overall. What I had in mind was to create a control for each patient in the dataset, enter the age-, sex- and cohort-specific life expectancy from the national statistics databank, and just run a Kaplan-Meier analysis. </p>
<p>However, I'm unsure as to how I should deal with the censoring issue. Should I just censor the control if the life expectancy for the x-aged, y-sexed, z-cohort subject exceeds today's date? I.e., a 50-year-old male in 2000 was expected to live 28 years in the general population; my take is that he should enter with 11 years and a censored status.</p>
<p>Or is there some other, more mathematically savvy way of doing this that takes into account the uncertainty in the projected life expectancy for the population?</p>
| 74,257 |
<p>I have a set of 40 subjects that performed a task while the variables of interest were being recorded at regular time intervals (0, 1, 2,.... 10 seconds). The task was performed twice, once with and once without the aid of computer feedback. </p>
<p>For most of the variables, the distribution across subjects at each time point is a normal distribution. But for a few, they are heavily skewed. </p>
<p>I'd like to know if the variance of the collected variables between the subjects was significantly different during the feedback-assisted task. </p>
<p>What is the best way to do this?</p>
| 74,258 |
<p>Using the Stata <code>graph twoway</code> command, I have created a scatterplot with a quadratic best fit line, using the <code>qfit</code> command. How can I get the equation of the best fit line? </p>
<p>Example: </p>
<pre><code>graph tw (scatter y x) (qfit y x)
</code></pre>
| 992 |
<p>Proportion, ratio, and percentage data is very common in ecology (eg, % of flowers pollinated, male:female sex ratio, % mortality in response to a treatment, % of leaf eaten by an herbivore). An article was recently published by some applied statisticians in the journal <em>Ecology</em> titled "<a href="http://www.esajournals.org/doi/abs/10.1890/10-0340.1" rel="nofollow">The arcsine is asinine: the analysis of proportions in ecology</a>." They noted that the arcsine transformation has been promoted by long-running texts like Zar's "Biostatistical Analysis" and Sokal and Rohlf's "Biometry" (both in their 3rd or 4th eds.) but this technique has been outmoded by generalized linear models and better computing.</p>
<p>I was wondering how common proportion data is in other fields (psych? medicine?), whether the arcsine is still commonly used in those fields, or whether ecologists are exceptional in their use of this (or other) outmoded or less-than-optimal techniques. Have there been papers in other fields that highlight the need to use more advanced techniques?</p>
| 74,259 |
<p>I am trying to use a function from the <a href="http://cran.r-project.org/web/packages/forecast/index.html" rel="nofollow">forecast</a> package, <code>seasonaldummy()</code>, which requires a time series object as input, but I have an <a href="http://cran.r-project.org/web/packages/xts/index.html" rel="nofollow">xts</a> object, call it <code>x</code>. So first, is it correct that <a href="http://cran.r-project.org/web/packages/xts/index.html" rel="nofollow">xts</a> objects cannot be directly passed to functions that require a <code>ts</code> object?</p>
<p>I then tried to coerce it to a <code>ts</code> object. I tried this by <code>as.ts(x)</code>. The function still produces an error.</p>
<p>I looked at the <a href="http://cran.r-project.org/web/packages/xts/index.html" rel="nofollow">xts</a> vignette, and in particular the <code>reclass</code> function, but all I could understand from that was how to coerce non xts objects to xts. It appears I have to do the opposite, and that <code>as.ts(x)</code> does not work.</p>
<p>I have put a MWE below, where the last two lines generate an error.</p>
<pre><code>> a = rnorm(20)
> dt = seq(as.POSIXct("2010-03-24"),by="days",len=20)
> library(xts)
> a = xts(a,dt)
> library(forecast)
> seasonaldummy(a)
Error in seasonaldummy(a) : Not a time series
> seasonaldummy(as.ts(a))
Error in seasonaldummy(as.ts(a)) : subscript out of bounds
</code></pre>
| 74,260 |
<p>I have several data sets of frequency values (See Fig. 1 for an example).</p>
<p>I'm interested in those tighter clusters (marked by green rectangles) and am using hierarchical clustering in MATLAB (with unweighted average distance method) to separate them. (*)</p>
<p>The spread of these clusters increases with frequency (the standard deviation is positively correlated with the frequency, while the coefficient of variation is not).</p>
<p><strong>So here's my question</strong>:
Is there a way for the clustering to factor in this relationship, so that the average distance a point has to have to the points of the nearest cluster to be included in that cluster is also dependent on - for example - the mean of that cluster?</p>
<p>I'm thinking that the cutoff would have to be different for each point, but I don't think this is possible in this method. I would also be open to alternative clustering methods, but not k-means, because I don't want to specify the number of clusters in advance.</p>
<p>Also, if you have suggestions on rephrasing my question so that it may be more useful to others I would be grateful.</p>
<p>Thank you for your time!</p>
<p><img src="http://i.stack.imgur.com/FgARe.png" alt="Frequency cluster example"></p>
<p>(*) I'm following this example for the clustering procedure:
<a href="http://www.mathworks.de/de/help/stats/examples/cluster-analysis.html" rel="nofollow">http://www.mathworks.de/de/help/stats/examples/cluster-analysis.html</a></p>
| 37,040 |
<p>I'm having trouble showing that the 2nd central moment is finite. I have $X_1,\ldots,X_n \overset{iid}{\sim} f(x)$ with $E[X_1]=\mu$, and $E[X_1^k]$ exists and is finite for every integer $k \geq 1$. </p>
<p>I would like to use the Law of Large Numbers, so I need to show that either $E[|X_1|]$ is finite or that $E[(X_1-\mu)^2]$ is finite. I tried proving the first one with Jensen's inequality but got stuck, since the absolute value is convex, not concave. </p>
<p>So now I'm stuck trying to show that the second central moment is finite. </p>
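<p>Writing out the step I keep circling around, in case my reasoning is off: since $x \mapsto x^2$ is convex, Jensen's inequality applied to $|X_1|$ gives
$$\left(E[|X_1|]\right)^2 \leq E[|X_1|^2] = E[X_1^2] < \infty,$$
so $E[|X_1|]$ would be finite; and expanding the square,
$$E[(X_1-\mu)^2] = E[X_1^2] - 2\mu E[X_1] + \mu^2 < \infty$$
whenever the first two moments are finite. Is that the intended argument, or am I missing something?</p>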
| 40,091 |
<p>I have a regression in which I try to understand how much variance of the metric dependent variable each of the regressors explains. I use the package R <a href="http://cran.r-project.org/web/packages/relaimpo/index.html" rel="nofollow">relaimpo</a> (Grömping, 2006) for that purpose, that allocates $R^2$ shares to each regressor using the LMG metric. The hypothesis is that the regressors differ drastically in their $R^2$ contribution. </p>
<p>If I had a cross-sectional sample of $N=6000$, things would be dandy. Unfortunately, I only have $N=2000$, but 3 measurement points (weeks 0, 6, 12). Both the dependent variable and regressors are time-varying, that is, they are assessed at each measurement point. </p>
<p>Currently I run the analyses separately for each measurement point and find large differences between relative importance estimates between regressors (0% to 25%), and I find that the ranking of regressors and the explained variance is surprisingly stable over time (at least from a qualitative perspective, plotting the explained variance of each of the 14 regressors for each measurement point). </p>
<p>For a scientific paper, there are two reasons why I want to do this in one regression instead of 3. (1) 3 different analyses take up a lot of space in the paper and obfuscate the main message a bit (regressors differ drastically in their relative importance). (2) $N=2000$ isn't all that much when disentangling the relative contributions of 14 correlated regressors (the CIs are rather large). </p>
<p>Therefore, I wondered whether there are ways to "pool" all subjects into one regression that would not lead to an outcry of anybody with some statistical background ("you severely violated the assumption of statistical independence!!"). In the best case I would simply have one regression that covers all 3 time points.</p>
<p>The method has to be a regression (ie., not using NLE or LME packages), because that is what the <code>relaimpo</code> package uses as baseline model to then calculate unique $R^2$ contributions of regressors. </p>
<p>What are my options?</p>
| 37,046 |
<p>It is known that odds ratios enjoy a certain symmetry. For example, the odds ratio of outcome $Y$ is the inverse of the odds ratio of outcome $\neg Y$. Risk ratios, on the other hand, do not enjoy this symmetry. However, risk ratios have the property of collapsibility. So adjusting for a covariate that is not a confounder does not change the magnitude of the risk ratio. Consider the more formal definition of collapsibility:</p>
<blockquote>
<p><em><strong>Definition</strong></em>. Let $g[P(x,y)]$ be any functional that measures the association between $Y$ and $X$ in the joint distribution $P(x,y)$. Then $g$ is collapsible on a variable $Z$ if $$E_{z}g[P(x,y|z)] = g[P(x,y)]$$</p>
</blockquote>
<p>So in the case of the risk ratio, $g[P(x,y)]$ would be the risk ratio? What if we don't know $P(x,y)$? A risk ratio is the ratio of two incidence densities which doesn't seem to depend on any probability distribution.</p>
| 74,261 |
<p>Having recently run an experiment, I have been left with a dataset that I don't quite know how best to handle, I think simply due to the number of independent variables to consider.</p>
<h2>Setup</h2>
<p>I have implemented 4 new approaches to solve a problem, which I wish to compare to an existing approach and to each other based on their execution time, which prior to the experiment I expect to be an improvement.</p>
<p>To compare these approaches, each is tested on a set of 6 case studies. Each combination of approach and case study is repeated 30 times, to ensure the results are representative. </p>
<p>All of the above is repeated for two different libraries which are used as part of the approach.</p>
<h2>Results</h2>
<p>In total, I am left with 1800 rows (calculated as...</p>
<pre><code>= (Approaches * Case Studies * Trials * Libraries)
= 5 * 6 * 30 * 2
= 1800
</code></pre>
<p>I believe I could validly compare the results of two approaches for a specific library and case study using the non-parametric Wilcoxon rank-sum test. However, I don't know of a testing approach I could use to determine, overall, whether one approach is better than another, or which (for the given case studies and libraries) is 'best'.</p>
<p>Is there some approach I could use to validly summarise the results, and therefore conclude to some level of significance which is 'best'?</p>
<p>Thanks, and apologies for any missing important details- I will edit whenever necessary!</p>
| 37,049 |
<p>Suppose I have a python function using scipy that returns the expectation $E\left[ X \right]$ for some data assuming it is gamma distributed:</p>
<pre><code>import scipy.stats

def expectation(data):
    # fit a gamma distribution; its mean is shape * scale under scipy's parameterization (ignoring loc)
    shape, loc, scale = scipy.stats.gamma.fit(data)
    expected_value = shape * scale
    return expected_value
</code></pre>
<p>(My understanding is that scipy's parameterization of the gamma leaves us with $E\left[ X \right] = shape \cdot scale$.) However, I would like to generalize my code so I can drop in different distributions in place of the gamma -- for example, the log-normal distribution. Is there a way to write that code in a general way? In other words, how do I finish this function:</p>
<pre><code>def expectation(data, dist=scipy.stats.gamma):
???
</code></pre>
<p>I see a few possible approaches:</p>
<ol>
<li><p>Use the <code>scipy.stats.*.expect</code> method. Thus far I haven't been able to figure out how to use it. How would I parameterize the method given the <code>shape,loc,scale</code> parameters above?</p></li>
<li><p>Use the <code>mean</code> method of a "frozen" random variable object. In scipy-speak, is "mean" equivalent to $E\left[ X \right]$?</p></li>
<li><p>Give up on writing general code and just compute $E\left[ X \right]$ directly for each distribution. I don't want to do this if I can avoid it.</p></li>
</ol>
<p>Additionally, please address whether under your suggested method I would pay any performance penalty, i.e. because it uses a numerical rather than analytical approach to the integral in evaluating the expectation.</p>
| 74,262 |
<p>I'd like to test the hypothesis that there is a monotonic relationship between two variables, without assuming a specific model. What is the most robust (i.e. lowest probability of type-II error) way to do this?</p>
<p>I can think of a few options:</p>
<ul>
<li><p>use a linear model of untransformed data. It'll be robust enough, even if I don't think the true relationship is linear.</p></li>
<li><p>look at rank-transformed data, e.g. with Spearman's rank correlation coefficient</p></li>
<li><p>use some kind of resampling approach, in which the order of the dependent variable is randomly shuffled. I'm not sure what statistic to compare in this approach.</p></li>
</ul>
<p>Is there a fairly standard approach to this problem?</p>
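<p>For reference, the rank-based option is a one-liner in R; a minimal sketch with simulated vectors <code>x</code> and <code>y</code>:</p>

<pre><code>set.seed(1)
x <- runif(50)
y <- x^3 + rnorm(50, sd = 0.1)   # monotonic but non-linear relationship

# Spearman's rank correlation with a test of H0: rho = 0
cor.test(x, y, method = "spearman")
</code></pre>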
| 48,761 |
<p>I'm trying to measure the impact of rainfall on the number of incoming calls at an insurance company. I have 4 years of daily data.</p>
<p>The plots below show the correlation plot for each year:
<img src="http://i.stack.imgur.com/KTdSX.png" alt="enter image description here"></p>
<p>The same plots as above, but now using the weekly mean of each variable:
<img src="http://i.stack.imgur.com/0iZnQ.png" alt="enter image description here"></p>
<p>The rainfall has a <strong>yearly</strong> seasonality and the call-center data has a <strong>weekly pattern</strong>. The idea is to come up with a <em>weekly-based model</em>, so that I can measure the impact that a <em>weekly mean rainfall forecast</em> would have.</p>
<p>Plotting the whole dataset as weekly means (image below):
<img src="http://i.stack.imgur.com/CrSb9.png" alt="enter image description here"></p>
<p>I'd like some suggestions on how to measure this 'impact'. I could try to split the rainfall data into 3 categories (low, normal, high) and then build some model, as sketched below.</p>
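<p>A minimal R sketch of that categorisation idea (all names and numbers here are hypothetical; <code>rain</code> and <code>calls</code> stand for the weekly means):</p>

<pre><code># hypothetical weekly means
set.seed(42)
rain  <- rgamma(200, shape = 2, scale = 10)       # weekly mean rainfall
calls <- 500 + 3 * rain + rnorm(200, sd = 30)     # weekly mean incoming calls

# split rainfall into low / normal / high at its terciles
rain_cat <- cut(rain,
                breaks = quantile(rain, probs = c(0, 1/3, 2/3, 1)),
                labels = c("low", "normal", "high"),
                include.lowest = TRUE)

# simple model of call volume on rainfall category
summary(lm(calls ~ rain_cat))
</code></pre>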
<p>Thanks for any help! (I'm using R for the analysis.)</p>
| 37,051 |
<p>I'm having difficulty understanding one or two aspects of the cluster package. I'm following the example from <a href="http://www.statmethods.net/advstats/cluster.html">Quick-R</a> closely, but don't understand one or two aspects of the analysis. I've included the code that I am using for this particular example.</p>
<pre><code>## Libraries
library(stats)     # kmeans()
library(cluster)   # clusplot()
library(fpc)       # plotcluster()
## Data
mydata = structure(list(a = c(461.4210925, 1549.524107, 936.42856, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 131.4349206, 0, 762.6110846,
3837.850406), b = c(19578.64174, 2233.308842, 4714.514274, 0,
2760.510002, 1225.392118, 3706.428246, 2693.353714, 2674.126613,
592.7384164, 1820.976961, 1318.654162, 1075.854792, 1211.248996,
1851.363623, 3245.540062, 1711.817955, 2127.285272, 2186.671242
), c = c(1101.899095, 3.166506463, 0, 0, 0, 1130.890295, 0, 654.5054857,
100.9491289, 0, 0, 0, 0, 0, 789.091922, 0, 0, 0, 0), d = c(33184.53871,
11777.47447, 15961.71874, 10951.32402, 12840.14983, 13305.26424,
12193.16597, 14873.26461, 11129.10269, 11642.93146, 9684.238583,
15946.48195, 11025.08607, 11686.32213, 10608.82649, 8635.844964,
10837.96219, 10772.53223, 14844.76478), e = c(13252.50358, 2509.5037,
1418.364947, 2217.952853, 166.92007, 3585.488983, 1776.410835,
3445.14319, 1675.722506, 1902.396338, 945.5376228, 1205.456943,
2048.880329, 2883.497101, 1253.020175, 1507.442736, 0, 1686.548559,
5662.704559), f = c(44.24828759, 0, 485.9617601, 372.108855,
0, 509.4916263, 0, 0, 0, 212.9541122, 80.62920455, 0, 0, 30.16525587,
135.0501384, 68.38023073, 0, 21.9317122, 65.09052886), g = c(415.8909649,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 637.2629479, 0, 0,
0), h = c(583.2213618, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0), i = c(68206.47387, 18072.97762, 23516.98828,
13541.38572, 15767.5799, 19756.52726, 17676.00505, 21666.267,
15579.90094, 14351.02033, 12531.38237, 18470.59306, 14149.82119,
15811.23348, 14637.35235, 13588.64291, 12549.78014, 15370.90886,
26597.08152)), .Names = c("a", "b", "c", "d", "e", "f", "g",
"h", "i"), row.names = c(NA, -19L), class = "data.frame")
</code></pre>
<p>Then I standardize the variables:</p>
<pre><code># standardize variables
mydata <- scale(mydata)
## K-means Clustering
# Determine number of clusters
wss <- (nrow(mydata)-1)*sum(apply(mydata,2,var))
for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers=i)$withinss)
# Q1
plot(1:15, wss, type="b", xlab="Number of Clusters", ylab="Within groups sum of squares")
# K-Means Cluster Analysis
fit <- kmeans(mydata, 3) # number of values in cluster solution
# get cluster means
aggregate(mydata,by=list(fit$cluster),FUN=mean)
# append cluster assignment
mydata <- data.frame(mydata, cluster = fit$cluster)
# Cluster Plot against 1st 2 principal components - vary parameters for most readable graph
clusplot(mydata, fit$cluster, color=TRUE, shade=TRUE, labels=0, lines=0) # Q2
# Centroid Plot against 1st 2 discriminant functions
plotcluster(mydata, fit$cluster)
</code></pre>
<p>My question is, how can the plot which shows the number of clusters (marked <code>Q1</code> in my code) be related to the actual values (cluster number and variable name)?</p>
<p>Update: I now understand that the <code>clusplot()</code> function is a bivariate plot, with PCA1 and PCA2. However, I don't understand the link between the PCA components and the cluster groups. What is the relationship between the PCA values and the clustering groups? I've read elsewhere about the link between kmeans and PCA, but I still don't understand how they can be displayed on the same bivariate graph. </p>
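<p>(For what it's worth, one way to see the same projection that <code>clusplot()</code> uses is to compute the principal components yourself and colour the points by cluster; a minimal sketch, assuming the <code>fit</code> and <code>mydata</code> objects from the code above:)</p>

<pre><code># principal components of the (already scaled) data, dropping the appended cluster column
pc <- prcomp(mydata[, names(mydata) != "cluster"])

# points in the space of the first two components, coloured by k-means cluster
plot(pc$x[, 1:2], col = fit$cluster, pch = 19, xlab = "PC1", ylab = "PC2")
</code></pre>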
| 37,052 |
<p><strong>Framework.</strong> Fix $\alpha\in ]0,1[$. Imagine you have $n$ $\alpha$-quantile forecast methodologies that give you, at time $t$, an estimate of the $\alpha$-quantile of wind power for look-ahead time $t+h$. Formally, for $i=1,\dots,n$, methodology $i$ produces at time $t$ an estimate $\hat{q}_{t+h|t}^{(i)}$ for look-ahead time $t+h$. Each methodology is based on a different modeling+estimation approach and its performance can depend, for example, on the weather situation. </p>
<p><strong>Question.</strong> How do you construct a weighting scheme to combine the quantile estimates (say with a linear combination) that can adapt over time $t$? Formally, how does one best construct weights $\lambda_1(t,h),\dots,\lambda_n(t,h)$ such that </p>
<p>$$\hat{q}_{t+h|t}=\sum_{i=1}^n \lambda_i(t,h) \hat{q}_{t+h|t}^{(i)}$$</p>
<p>is a very good quantile forecast. </p>
<p><strong>Side Note.</strong> For MSc students interested in proposing and elaborating their ideas with the real data, I propose an internship on that subject for summer 2011 (see <a href="http://www-cep.cma.fr/Public/recrutement/proposition_de_stage/stage_prevision_eol/" rel="nofollow">here</a>; it's in French, but I can translate for those interested). </p>
| 37,053 |
<p>I have an odd problem which can be phrased in a general way and a more specific way; I'm curious about the answers to both. Although, really, it's the $k=0$ case that I'm most interested in: deriving the answer in terms of some properties of the distribution of $m$.</p>
<p>These may be totally basic, so, apologies if they are.</p>
<p>1) I have several different <a href="http://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="nofollow">hypergeometric distributions</a>, H(k; N, m, n) where k is the number of 'success' draws, N is the population size, m is the number of possible success draws, and n is the total number of draws. Each distribution has a different value for m, but all else is the same. Is there an easy way to either sum them up or provide a more compact notation for them?</p>
<p>Or, to phrase it as a ball-and-urn question: I have many urns, each with $N$ balls. In each urn $i$, $m_i$ balls are white and the rest are black. If I take $n$ draws from each urn in turn, what is the average probability of drawing $k$ white balls per urn? (Indeed, is there a distribution for this? There must be.)</p>
<p>2) More specifically, I'm interested in the case where $k=0$. This actually reduces quite nicely to </p>
<p>$\binom{N-m}{n}/\binom{N}{n}$</p>
<p>But, again, I want to average over a lot of different values of $m$ from different members of a population. Is there a way to get at this with, say, an average value of $m$, or otherwise?</p>
<p>Again, in urn terms, this would be the average, across urns, of the probability of drawing NO white balls.</p>
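<p>(A quick numeric check in R, with made-up values of $N$, $n$ and $m_i$, of whether plugging in the average $m$ gives the same answer as averaging over the individual $m_i$:)</p>

<pre><code>N <- 100                 # balls per urn
n <- 10                  # draws per urn
m <- c(2, 5, 20, 40)     # white balls in each urn (made-up values)

# average over urns of P(no white balls drawn)
mean(choose(N - m, n) / choose(N, n))

# the same quantity computed from the average m -- generally not equal
choose(N - mean(m), n) / choose(N, n)
</code></pre>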
<p>p.s. I have actually always wondered a similar thing for the binomial distribution and suspect the answers may be related. Sure, B(a,p)+B(b,p) = B(a+b,p) where a and b are the number of trials and p is the probability of success. But what about B(a,p1)+B(a,p2)?</p>
| 74,263 |
<p>As I have written in my question "<a href="http://stats.stackexchange.com/questions/7209/how-much-undersampling-should-be-done">How much undersampling should be done?</a>", I want to predict defaults, where a default is per se really unlikely (average ~ 0.3 percent). My models are not affected by the unequal distribution: It's all about saving computing time.</p>
<p>Undersampling the majority class to a ratio [defaulting/non-defaulting examples] of 1:1 is the same as expressing the belief that examples from both classes are equally important in increasing the prediction quality.</p>
<p>Does anyone know a reason why/when equal importance could <strong>not</strong> be the case? Is there literature on this specific topic (I could not find sampling literature that is modeling/computation-oriented)?</p>
<p>Thanks a lot for your help!</p>
| 48,763 |
<p>My data consist of three groups (A, B, C, independent of each other), and each group consists of 1000 correlation coefficients (generated using a stochastic simulation with 1000 iterations, correlating X and Y of the respective group in each iteration). </p>
<p>Case 1: I would like to test, for each group, whether its mean differs significantly from a threshold value, say 0.5. </p>
<p>Case 2: I would also like to test whether the groups differ significantly from each other. </p>
<p>Does it make sense to use a one-sample hypothesis test of the mean (mean = 0.5) in the first case and an ANOVA in the second case?</p>
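<p>(If a test on the means is what you end up doing, the mechanics in R would be along these lines; <code>A</code>, <code>B</code>, <code>C</code> below are hypothetical stand-ins for the three vectors of 1000 simulated correlations:)</p>

<pre><code>set.seed(1)
A <- rnorm(1000, mean = 0.52, sd = 0.05)   # hypothetical stand-ins for the
B <- rnorm(1000, mean = 0.50, sd = 0.05)   # three vectors of simulated
C <- rnorm(1000, mean = 0.55, sd = 0.05)   # correlation coefficients

# Case 1: one-sample test of each group's mean against the threshold 0.5
t.test(A, mu = 0.5)

# Case 2: one-way ANOVA across the three groups
d <- data.frame(r = c(A, B, C),
                group = rep(c("A", "B", "C"), each = 1000))
summary(aov(r ~ group, data = d))
</code></pre>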
| 74,264 |
<p>First off, I'm not trying to crowd source a personal printing press (i.e., not doing this: "I'm using strategies $x$, $y$, $z$ in the stock market and..."). Instead, I'm looking for feedback on research design. </p>
<p>The situation: Many equity/currency/futures traders use technical analysis as part of their trading approach. Technical analysis uses past price patterns to make predictions about future prices. Technical indicators quantify that past movement. Lots of practitioners swear by the techniques. Many academics and others call <a href="http://rads.stackoverflow.com/amzn/click/0812975219" rel="nofollow">BS</a>. </p>
<p>So, I want to test whether various technical indicators have predictive value: Do asset prices move as these indicators predict? I'd like to be very focused and just use independent t-tests as follows: </p>
<ol>
<li>Select an indicator (e.g., <a href="http://en.wikipedia.org/wiki/MACD" rel="nofollow">MACD</a>—moving average convergence/divergence).</li>
<li>Review the claims about the indicator's predictive value (e.g., MACD bullish cross indicates imminent upward trend). </li>
<li>Collect the price movement of an asset (stock/future/currency/etc.) and subset that data based on the indicator's value prior to the price movement. (e.g., IV: price movement following a bullish cross vs. price movement at times not following a bullish cross). </li>
<li>If we have a significantly unequal number of observations for each group (this will almost always happen), randomly select data from the larger population so that our number of observations are equal. </li>
<li>Run an independent samples t-test to compare the means of each group. (I'll also Shapiro-Wilks and Levene test the data, and adjust as necessary.) We'll test the null that the sample means are not significantly different. </li>
<li>If we reject the null, look at the effect size, etc.</li>
<li>Repeat 1-6 for the most popular indicators (or until my wife asks me to come watch <em>Dancing with the Stars</em> with her (and I pretend it's a favor even though it's not that bad)). </li>
</ol>
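<p>A minimal R sketch of step 5 (the return vectors are made up; <code>leveneTest()</code> is from the <code>car</code> package):</p>

<pre><code>library(car)   # for leveneTest()

set.seed(7)
ret_signal   <- rnorm(200, mean = 0.002, sd = 0.02)  # returns after a bullish cross (made up)
ret_baseline <- rnorm(200, mean = 0.000, sd = 0.02)  # returns at other times (made up)

# normality and equal-variance checks
shapiro.test(ret_signal)
shapiro.test(ret_baseline)
leveneTest(c(ret_signal, ret_baseline),
           group = factor(rep(c("signal", "baseline"), each = 200)))

# two-sample t-test of equal mean returns
t.test(ret_signal, ret_baseline, var.equal = TRUE)
</code></pre>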
<p>Now, I know technical indicators are used in combinations, and some kind of moderating effect could be going on. But I wanted to start simple. Thoughts on how reliable this approach would be? Potential pitfalls?</p>
<p>Also, full disclosure on the axe I may be grinding: I'm leery of the very fat tails and their unmitigated positions for/against technical analysis. I think brokerages and hucksters use them as noisy, noisy bells and whistles as they compete for clients. For most clients, the indicators do nothing. On the other hand, I also think there's value, but what value technical indicators have is likely greatest on short time frames and perhaps better for currencies.</p>
| 48,773 |
<p>I've been looking at mixed effects modelling using the lme4 package in R. I'm primarily using the <code>lmer</code> command so I'll pose my question through code that uses that syntax. I suppose a general easy question might be, is it OK to compare any two models constructed in <code>lmer</code> using likelihood ratios based on identical datasets? I believe the answer to that must be, "no", but I could be incorrect. I've read conflicting information on whether the random effects have to be the same or not, and what component of the random effects is meant by that? So, I'll present a few examples. I'll take them from repeated measures data using word stimuli, perhaps something like <a href="http://www.ualberta.ca/~baayen/publications/baayenCUPstats.pdf">Baayen (2008)</a> would be useful in interpreting.</p>
<p>Let's say I have a model where there are two fixed effects predictors, we'll call them A, and B, and some random effects... words and subjects that perceived them. I might construct a model like the following. </p>
<pre><code>m <- lmer( y ~ A + B + (1|words) + (1|subjects) )
</code></pre>
<p>(note that I've intentionally left out <code>data =</code> and we'll assume I always mean <code>REML = FALSE</code> for clarity's sake)</p>
<p>Now, of the following models, which are OK to compare with a likelihood ratio test to the one above and which are not?</p>
<pre><code>m1 <- lmer( y ~ A + B + (A+B|words) + (1|subjects) )
m2 <- lmer( y ~ A + B + (1|subjects) )
m3 <- lmer( y ~ A + B + (C|words) + (A+B|subjects) )
m4 <- lmer( y ~ A + B + (1|words) )
m5 <- lmer( y ~ A * B + (1|subjects) )
</code></pre>
<p>I acknowledge that the interpretation of some of these differences may be difficult, or impossible. But let's put that aside for a second. I just want to know if there's something fundamental in the changes here that precludes the possibility of comparing. I also want to know whether, if LR tests are OK, AIC comparisons are as well.</p>
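<p>(For context, the mechanics of such a comparison would be something like the following, assuming the models above have actually been fit with <code>REML = FALSE</code> and a <code>data</code> argument; whether the comparisons are <em>valid</em> is exactly what I'm asking about:)</p>

<pre><code># likelihood-ratio test of two candidate fits
anova(m2, m)

# information-criterion comparison across a set of candidate models
AIC(m, m1, m2, m5)
</code></pre>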
| 74,265 |
<p>How would I interpret a transformed dependent variable (4th root) when some of its predictor variables are transformed as well? In our study, we transformed our dependent variable to its 4th root, $Y^{1/4}$, and one of our predictor variables to $\ln(X)$. How would I interpret that?</p>
| 49,681 |
<p>When calculating adjusted $R^2$ the formula is $1-(1-R^2)\frac{n-1}{n-k-1}$, with $k$ being the number of predictors you have. If I am using a model with a single variable, but that variable has been raised to the 4th, 3rd, and 2nd powers as in the following, </p>
<p>$\hat{Y}=-0.0162x^4+0.2239x^3-1.0941x^2+2.0972x-0.9513$</p>
<p>would I have a single predictor, or would I count each powered term as a predictor? If you could also give brief reasoning, that would help me grasp why or why not.</p>
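<p>(One way to see how software counts the terms is to fit the polynomial explicitly and inspect the adjusted $R^2$; a small sketch with simulated data:)</p>

<pre><code>set.seed(3)
x <- runif(100, 0, 10)
y <- -0.0162*x^4 + 0.2239*x^3 - 1.0941*x^2 + 2.0972*x - 0.9513 + rnorm(100)

# the polynomial enters as four regressor columns (x, x^2, x^3, x^4)
fit <- lm(y ~ poly(x, 4, raw = TRUE))
summary(fit)$adj.r.squared   # the software's adjustment uses k = 4 here
</code></pre>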
<p>Thanks in advance.</p>
| 37,064 |
<p>If s1 is the variance of a small sample and s2 is the variance of a larger sample from the same population, is s1 an unbiased estimator of s2?</p>
<p>I am thinking since a sample variance is an unbiased estimator of the population variance, then s1 and s2 are both unbiased estimators for sigma^2, so s1 and s2 should also be unbiased estimators for each other.</p>
<p>But all the observations in a sample are part of the population, yet the two samples don't necessarily share any observations. Does this make their variances biased estimators of each other?</p>
| 37,065 |
<p>I am experimenting with the <code>nnet</code> neural network package in R and I have some questions.</p>
<ol>
<li>The regulatory environment I am working in requires me to reproduce my results to show them to the auditors. How can I reproduce my model results after a few months/years? Can I use a seed value to control the model output? (A small sketch addressing this point follows the list.)</li>
<li>How can I validate a neural network model? Are there any goodness-of-fit tests?</li>
<li>How do I choose the number of hidden layers I need? I have 18500 observations in my training dataset and 8 variables. Does that help in identifying the hidden layers required in any way?</li>
<li>Many times the model stops after 100 iterations. I have used the <code>maxit</code> option, but sometimes the output says converged and sometimes it says stopped after 100 iterations. When it says stopped after 100 iterations, does that mean I have a bad model and it did not converge?</li>
</ol>
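<p>On point 1, a minimal sketch of the usual approach (fixing the RNG seed right before fitting, since <code>nnet</code> draws random starting weights; the data here are made up):</p>

<pre><code>library(nnet)

# made-up data
set.seed(123)
d <- data.frame(x1 = runif(200), x2 = runif(200))
d$y <- factor(d$x1 + d$x2 + rnorm(200, sd = 0.1) > 1)

# fix the RNG state immediately before each fit you want to reproduce
set.seed(123)
fit <- nnet(y ~ x1 + x2, data = d, size = 5, maxit = 500)
</code></pre>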
<p>Thank you</p>
| 37,067 |
<p>In "Elements of Bayesian Statistics" (1990), Florens, Mouchart and Rolin describe two basic forms of reduction of a Bayesian experiment: Marginalization and Conditioning (Ch. 1). I don't understand the conditioning reduction. More precisely, i struggle with the definition of a <em>regular</em> conditional experiment. I would appreciate an explanation, if possible in measure-theoretic terms. Thanks</p>
| 74,266 |
<p>I'm interested in testing a sequence of numbers for randomness. I've done some searching and come across <a href="http://www.stat.fsu.edu/pub/diehard/" rel="nofollow">die hard</a> and <a href="http://www.phy.duke.edu/~rgb/General/dieharder.php" rel="nofollow">die harder</a>. However, these tests seem to require some expertise to apply, or at least aren't as well documented or easy to use as R, say. Also, die harder requires a *nix platform.</p>
<p>I'm working on a Windows platform, and although it would be possible to generate a csv file at work and send it home, where I could process the file on Linux using die harder, that would be a pain and likely a lot of work. So I've looked at what's available in R and come up with the <a href="http://cran.r-project.org/web/packages/lawstat/index.html" rel="nofollow">lawstat package</a>, which implements <code>bartels.test</code> and <code>runs.test</code>. It was pretty straightforward to get those tests going on my data.</p>
<p>I'm wondering, are there other tests that I could reasonably be running that wouldn't require much time, or should I just leave it there? There's no reason to suspect that the numbers I'm seeing aren't reasonably random: the main focus of the testing I need to do is to look at whether several vectors of random numbers are correlated according to some parameters that have been set up. So I'm looking at measures of correlation between sequences, but I also want to test that each sequence is random, just to be sure that the process of introducing correlations hasn't introduced some problem in the marginal distributions.</p>
<p>What I suspect is that there might be a 90/10 rule here -- the final 10% of applying all the tests, in die hard, say, might be a disproportionate amount of effort, and if I can get 90% of the way there in R or similar, that would be good.</p>
| 74,267 |
<p>I am trying to generate confidence intervals for estimates of numbers of fish observed with a very small number of surveys. Most of the numbers of fish observed are very small as well.
The estimate is based upon the ratio of the number of adult fish and redds (fish nests) observed during the observation season.</p>
<p>example</p>
<p>estimate 12.5 fish</p>
<p><img src="http://i.stack.imgur.com/XxcC7.png" alt="enter image description here"></p>
<p>Thanks for the help.</p>
| 74,268 |
<p>Could you please shed some light on how to interpret linear regression results (2-stage vs. 1-stage)?</p>
<p>For example, I have the following:</p>
<pre><code>lmStage1 <- lm(y~x1)
lmStage2 <- lm(residuals(lmStage1)~x2)
summary(lmStage2)
</code></pre>
<p>vs.</p>
<pre><code>lmAll <- lm(y~x1+x2)
summary(lmAll)
</code></pre>
<hr>
<p>How do I interpret and compare the coefficients/t-stats, etc. of the above two models?</p>
<p>And how do I compare the two approaches? What observations/diagnostics/studies should I draw from the above two models?</p>
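<p>For reference, one relationship between the two setups is the Frisch-Waugh-Lovell result: the coefficient on <code>x2</code> from <code>lmAll</code> is reproduced by a two-stage approach only if <code>x2</code> is <em>also</em> residualised on <code>x1</code>, rather than used raw as in <code>lmStage2</code>. A sketch with simulated data:</p>

<pre><code>set.seed(10)
x1 <- rnorm(200)
x2 <- 0.5 * x1 + rnorm(200)            # correlated regressors (made up)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(200)

coef(lm(y ~ x1 + x2))["x2"]                                   # one-stage coefficient on x2
coef(lm(residuals(lm(y ~ x1)) ~ x2))[2]                       # the two-stage version above
coef(lm(residuals(lm(y ~ x1)) ~ residuals(lm(x2 ~ x1))))[2]   # matches the one-stage value
</code></pre>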
<p>In general, I feel that I am quite weak in drawing observations and obtaining intuitions from regression studies... are there books focusing on these interpretations and intuitions?</p>
<p>Thanks a lot!</p>
| 74,269 |
<p>I am trying to make predictions using a random forest model in R.</p>
<p>However, I get errors since some factors have different values in the test set than in the training set. For example, a factor <code>Cat_2</code> has values <code>34, 68, 76</code>, etc., in the test set that do not appear in the training set. Unfortunately, I do not have control over the test set... I must use it as-is.</p>
<p>My only workaround was to convert the problematic factors back to numerical values, using <code>as.numeric()</code>. It <em>works</em>, but I am not very satisfied, since these values are codes that have no numerical meaning...</p>
<p>Do you think there would be another solution, to drop the new values from the test set? But without removing all the other factor values (let's say values <code>1, 2, 14, 32</code>, etc.) which are in both training and test and contain information potentially useful for predictions.</p>
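<p>(One common workaround, sketched here with hypothetical factor vectors rather than a real dataset: re-declare the test factor using the training set's levels, which turns the unseen codes into <code>NA</code> that can then be handled explicitly, e.g. dropped or imputed, while the shared levels keep their information.)</p>

<pre><code># hypothetical training / test factors sharing some levels
train_cat <- factor(c("1", "2", "14", "32"))
test_cat  <- factor(c("2", "14", "34", "68", "76"))

# force the test factor onto the training levels; unseen codes become NA
test_cat <- factor(test_cat, levels = levels(train_cat))
table(test_cat, useNA = "ifany")
</code></pre>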
| 74,270 |
<p>Let's say there's one DV (<code>Y</code>) and three IVs (<code>X1</code>, <code>X2</code>, <code>X3</code>), and among IVs, <code>X1</code> is a dummy variable.
In the regression model without interaction terms, the results can be represented like this:</p>
<pre><code>Y ~ X1 + X2 + X3
X1 : non-significant
X2 : significant
X3 : significant
</code></pre>
<p>In this case, is it meaningful to check some interaction terms (e.g. <code>X1</code> $\cdot$ <code>X2</code> or <code>X1</code> $\cdot$ <code>X3</code>)? At first I thought I didn't have to, because the main effect of <code>X1</code> is non-significant. But I'm afraid I'm missing something important.</p>
| 49,515 |
<p>I'm very new to SAS, so please keep that in mind with any responses.</p>
<p>I've been running the following code in SAS:</p>
<pre><code>FILENAME fishfile URL
"http://www.amstat.org/publications/jse/datasets/fishcatch.dat";
PROC FORMAT;
VALUE sexfmt 0="female" 1="male";
VALUE speciesfmt 1="common bream" 2="whitefish" 3="roach"
4="silver bream" 5="smelt" 6="pike" 7="perch";
INVALUE misscode "NA"=. ;
RUN;
DATA fish;
INFILE fishfile;
INPUT obs species weight length1 length2
length3 hgtpct widpct sex;
INFORMAT weight sex misscode.;
LABEL length1="Nose to tail beginning length"
length2="Nose to tail notch length"
length3="Nose to tail end length";
FORMAT species speciesfmt. sex sexfmt.;
RUN;
</code></pre>
<p>All of the above code runs without any errors. The following code gives me errors:</p>
<pre><code>TITLE "Finnish Fish: Species distribution";
PROC SGPLOT DATA=fish;
VBAR species;
RUN;
TITLE "Finnish Fish: Weight in grams";
PROC SGPLOT DATA=fish;
HISTOGRAM weight;
RUN;
</code></pre>
<p>I can't access the data right now so I don't have the specific error, but it says something along the lines of "Insufficient authorization" when I try to view the resulting plots. I will post the actual error message when I can access the data again, but until then I'm hoping that someone has encountered this error and found a solution.</p>
| 37,079 |
<p>When I perform a linear regression in some software packages (for example Mathematica), I get p-values associated with the individual parameters in the model. For instance, the results of a linear regression that produces a result $ax+b$ will have a p-value associated with $a$ and one with $b$.</p>
<ol>
<li><p>What do these p-values mean individually about those parameters? </p></li>
<li><p>Is there a general way to compute p-values for the parameters of any regression model?</p></li>
<li><p>Can the p-value associated with each parameter be combined into a p-value for the whole model?</p></li>
</ol>
<p>To keep this question mathematical in nature, I am seeking only the interpretation of p-values in terms of probabilities. </p>
| 48,796 |
<p>I have a data set with multiple y values per x value. Using Excel's scatter plot and regression trend line tools, I wish to apply regression to the data to determine if there is any connection between the variables.</p>
<p>If I just pick the regression trend line (Layout 3) in Excel, it gives me a line with an R² value of 0.02, but if I take the average y-value for each x-value and fit the trend line to those, I get a trend line with an R² value of 0.963 which, pardon my phrasing, looks correct.</p>
<p>My question is; Is it safe to use a set of averaged y-values in order to limit it to a single y-value per x-value?</p>
| 74,271 |
<p>How can one estimate a probability density function from a set of feature vectors when their statistical distribution is unknown?</p>
| 74,272 |
<p>I have done many 1-sample T-tests before, but I can't figure out if I am able to use one in this situation. In our experiment, we took 12 individual insects and placed them in a chamber where they could choose to be on whatever side they pleased (One side had sugar in it and the other did not). We recorded the number of insects on each side <em>at 30 second intervals</em> for 20 minutes.</p>
<p>Is it possible to use a 1-sample t-test here? It seems the observations wouldn't be independent, violating one of the fundamental assumptions of the test. The number of insects on one side at any time strongly influences the number that will be there at the time of the next reading.</p>
<p>What exactly is data like this called? How can I analyze it and potentially reject the null hypothesis that the insects have no preference to the side with sugar vs. the side without?</p>
| 48,276 |
<p>This question has two parts, as I do not understand whether my problem is theoretical (identification of the parameters) or practical (insufficient R skills).</p>
<ul>
<li>Econometrics</li>
</ul>
<p>Most "probit" style models are identified through a normalization of the standard error to one. In my case, I would argue that this is not necessary as the order of magnitude is already set by fixing one coefficient to 1. More specifically, for each observation there is a dummy variable equal to one if the latent index is above a (observation-specific and observed) threshold :</p>
<p>$$d_i=1 \text{ iff } K_i > X_i \beta + \epsilon_i$$</p>
<p>The error term is assumed $\epsilon_i \thicksim \mathcal{N}(0, \sigma)$. In my naive understanding, the likelihood of this problem should somehow look like (where $\Phi$ is the standard normal cdf):</p>
<p>$$L= \prod_{i=1}^N \Phi \left(\frac{K_i - X_i \beta}{\sigma} \right)^{d_i} \Phi \left(\frac{X_i \beta -K_i}{\sigma} \right)^{1-d_i} $$</p>
<p>Is it possible to estimate $\sigma$ without further normalization? </p>
<ul>
<li>Estimation</li>
</ul>
<p>If the answer to the previous part is "yes" -- then why does my R implementation not work? </p>
<pre><code>### simulate data
set.seed(5849)
N <- 2000
b.cons <- 8
b.x <- 10
sig <- 2
x <- cbind(rep(1, N), runif(N)) #"observed variables"
e <- rnorm(N, sig) # "unobserved error" (note: this draws with mean = sig and sd = 1)
k <- runif(N)*10+8 # threshold: something random, but high enough to guarantee some variation in i
t <- x%*%c(b.cons, b.x)+e
i <- 1*(k>t) #participation dummy
### likelihood function
probit.sim <- function(params, I, K, X) {
params[1:2] -> b
params[3] -> s
z= (K-X%*%b)/s
pr.1 = pnorm(z)
pr.1[pr.1==0] <- 0.001 #seems somehow weird to me, but how is this problem usually treated??
pr.1[pr.1==1] <- 0.999
pr.0 = 1-pr.1
llik = t(I)%*%log(pr.1) + t(1-I)%*%log(pr.0)
-llik
}
### maximize likelihood
optim(c(1,1,1), probit.sim, I = i, K = k, X = x) #using a random starting vector
st <- coef(lm(k*(1-i) ~ x-1)) #searching for better standard values
optim(c(st, 1), probit.sim, I = i, K = k, X = x)
</code></pre>
<p>The estimated parameters are clearly not c(8, 10, 2) as they should be.</p>
<p>I already asked a related question on Stack Overflow, and the answer was "take better starting values", but this does not seem to do the trick here. Or maybe I don't know how to do it right.</p>
<p>Any ideas?</p>
<ul>
<li>Alternative approach</li>
</ul>
<p>My alternative was to use standard statistical software and estimate a probit (needs a bit of twisting but should be possible to make it equivalent). This estimates a coefficient for K, which should be equal to $-1/\sigma$; how about taking this $\sigma$ and computing the "non-normalized"/"true" values of the other coefficients?</p>
<p>Many thanks in advance for any suggestion on any of these 3 parts.</p>
| 37,083 |
<p>I have a dataset that comprises several instances for different patients, with multiple instances per patient. I need to perform some classification tasks and I was using cross-validation, but this way I may have instances from the same patient in both the training and the test folds. I would like to perform "leave one patient out" cross-validation, that is, ensure that the test fold always contains all the instances from a patient. This way I could better evaluate how the classifier would perform with data from new subjects. Is this possible in Weka? </p>
| 74,273 |
<p>This is a homework problem. I have figured out part (a) but I need help with part (b). I include part (a) for completion. </p>
<p>Suppose $X_1,\ldots,X_n$ are iid Poisson$(\theta)$ random variables. Furthermore, let $Z_n$ be the proportion of zeroes observed, i.e. $Z_n = n^{-1}\sum_{i=1}^n 1\{X_i=0\}$. </p>
<p>$(a)$ Find the joint asymptotic distribution of $\left(\bar{X}_n,Z_n\right)$</p>
<p>Since $\text{E}[X_1]=\theta$ and $\text{Var}[X_1]=\theta$, by the central limit theorem we have $$\sqrt{n}(\bar{X}_n-\theta) \overset{D}{\longrightarrow} Z_1,\quad Z_1\sim N(0,\theta)$$ and since $\text{E}[1\{X_1=0\}] = P(X_1=0) = e^{-\theta}$ and $$\text{Var}[1\{X_1=0\}] = \text{E}[1\{X_1=0\}^2] - \text{E}[1\{X_1=0\}]^2=e^{-\theta}-e^{-2\theta}=e^{-\theta}(1-e^{-\theta})$$ by the central limit theorem we have $$\sqrt{n}(Z_n-e^{-\theta}) \overset{D}{\longrightarrow} Z_2,\quad Z_2\sim N(0,e^{-\theta}(1-e^{-\theta}))$$ Furthermore we have $$\text{Cov}[X_1,1\{X_1=0\}] = 0 - \theta e^{-\theta}$$ Therefore, by the multivariate central limit theorem $$\sqrt{n}\begin{pmatrix} \bar{X}_n-\theta \\ Z_n - e^{-\theta}\end{pmatrix} \overset{D}{\longrightarrow} \mathbf{Y}, \quad \mathbf{Y} \sim \text{MVN}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} \theta & -\theta e^{-\theta}\\-\theta e^{-\theta} & e^{-\theta}(1-e^{-\theta})\end{pmatrix}\right)$$</p>
<p>$(b)$ Based on your answer in (a), find the asymptotic distribution of $\sum_{i=1}^n X_i \big/ \sum_{i=1}^n 1\{X_i>0\}$. This is an estimate of the mean $\text{E}[X|X\geq 1]$ from a truncated Poisson.</p>
<p>We have $$\dfrac{\sum_{i=1}^n X_i}{\sum_{i=1}^n 1\{X_i>0\}}=\dfrac{n\bar{X}_n}{n-nZ_n} = \dfrac{\bar{X}_n}{1-Z_n}$$ I do not know how to proceed from here! I have a ratio of two asymptotically normal quantities (each marginally normal, and jointly normal). </p>
<p>$(c)$ Compute the exact mean and variance from a truncated Poisson$(\theta)$ with zero values truncated; i.e. $X\sim \text{Poisson}(\theta)$, compute $\text{E}[X|X\geq 1]$ and $\text{Var}[X|X\geq 1]$. Compare this to the asymptotic result in (b).</p>
| 74,274 |
<p>An <strong>interim analysis</strong> is an analysis of the data at one or more time points prior to the official close of the study, with the intention of, e.g., possibly terminating the study early.</p>
<p>According to Piantadosi, S. (<a href="http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471727814.html">Clinical trials - a methodologic perspective</a>):
"<em>The estimate of a treatment effect will be biased when a trial is terminated at an early stage. The earlier the decision, the larger the bias.</em>"</p>
<p>Can you explain this claim to me? I can easily understand that the accuracy is going to be affected, but the claim about the bias is not obvious to me...</p>
| 38,207 |
<p>I have some data to analyze where $y$ depends on $x$; a linear regression was used.</p>
<p>It's a question from an exam, so I think it should be solvable. The regression was used to estimate the mean <em>miles per gallon</em> (response) from the <em>amount of miles driven</em> (predictor).</p>
<p>I have the following statistics available:</p>
<ul>
<li>Correlation coefficient (0.117)</li>
<li>Standard deviation (0.482)</li>
<li>Number of observations (101)</li>
</ul>
<p>An ANOVA of this regression yields (Regression and residuals, respectively):</p>
<ul>
<li>df: 1, 99</li>
<li>SS: 0.319, 22.96</li>
<li>MS: 0.319, 0.232</li>
<li>F-value: 1.374, critical F-value: 0.244</li>
</ul>
<p>The regression itself (Intercept and Slope, respectively):</p>
<ul>
<li>Coefficients: 6.51, -0.00024</li>
<li>Standard deviations: 0.186, 0.0002</li>
<li>t-Values: 34.90, -1.17</li>
<li>p-Values: 1.93E-57, 0.2439</li>
</ul>
<p>Also, the "upper and lower 95% and 99%" are given for the above regression (although I'm not sure what that means).</p>
<p>Now, I am asked to calculate the mean $y$ for several values $x$, that's relatively easy, I just use the coefficients. So for example, I can calculate the mean miles per gallon for 500 miles driven.</p>
<p>The part where I'm stuck: <strong>I need to calculate the 99% confidence interval for the mean of $y$.</strong> Obviously, this is what the example is all about - the introduction states that the mileage of a car should be estimated.</p>
<p>My question: How can I find out the mean of $y$ using the data provided above? (And, subsequently, the 99% confidence interval, although I seem to have the standard deviation, so that shouldn't be the problem)</p>
| 37,085 |
<p>If I have an arima object like <code>a</code>:</p>
<pre><code>set.seed(100)
x1 <- cumsum(runif(100))
x2 <- c(rnorm(25, 20), rep(0, 75))
x3 <- x1 + x2
dummy = c(rep(1, 25), rep(0, 75))
a <- arima(x3, order=c(0, 1, 0), xreg=dummy)
print(a)
</code></pre>
<p>.</p>
<pre><code>Series: x3
ARIMA(0,1,0)
Call: arima(x = x3, order = c(0, 1, 0), xreg = dummy)
Coefficients:
dummy
17.7665
s.e. 1.1434
sigma^2 estimated as 1.307: log likelihood = -153.74
AIC = 311.48 AICc = 311.6 BIC = 316.67
</code></pre>
<p>How do I calculate the R squared of this regression?</p>
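<p>(For reference, one pragmatic way to get an $R^2$-style number out of an <code>arima</code> fit is to compare the residual sum of squares with the total variation of the series; a sketch using the <code>a</code> object above:)</p>

<pre><code>res <- residuals(a)

# share of the series' variation not left in the residuals (an R^2-style summary)
1 - sum(res^2) / sum((x3 - mean(x3))^2)
</code></pre>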
| 74,275 |
<p>I want to perform quadrat count analysis on several point processes (or one marked point process), to then apply some dimensionality reduction techniques.</p>
<p>The marks are not identically distributed, i.e. some marks appear quite often and some are pretty rare. Thus, I cannot simply divide my 2D space into a regular grid, because the more frequent marks will "overwhelm" the less frequent ones, masking their appearance.</p>
<p>Thus, I tried to build my grid such that each cell has at most N points in it (to do so, I simply divide each cell in four smaller (and equally sized) cells, recursively, until no cell has more than N points in it). </p>
<p>What do you think of this "normalization" technique ? Is there a standard way to do such things ?</p>
| 37,087 |
<p>I have a small question about a small research project that I wanted to do regarding stock symbol similarities and their abnormal returns. I have a sample of about 10,000 stocks and I calculate how much their symbols differ from each other (which gives a 10K x 10K matrix of their "difference measure").</p>
<p>Now I was planning to test this by first grouping the stocks with a distance less than a certain threshold, dropping those with a larger distance, and then checking whether these stocks have similar returns even though they are not in the same industry.</p>
<p>I have a few questions: is this a good method to test for this? Or should I use the full 10K x 10K matrix? Because I need to check for totally dissimilar stocks too, right?</p>
<p>Furthermore am I forgetting something? I am trying to replicate the study done by Rashes (2001) on a larger scale. </p>
| 37,088 |
<p>According to Wikipedia:</p>
<blockquote>
<p>Although in practice [Fisher's exact test] is employed when sample sizes are small, it is valid for all sample sizes.</p>
</blockquote>
<p>If Fisher's exact test can provide the exact p-values in lieu of the approximation that is given by the chi squared, what is the reason to ever use the $\chi^2$ test?</p>
| 49,928 |
<p>I have a set of density plots that contain the distribution of stock prices. Each graph has 5 density plots, as shown below, giving the distribution of monthly returns based on the stocks' ratings - a, b, c, d, e. A is the lowest rank while E is the highest rank. I need to come up with a score that tells me how different/far away the 'A' plot is from the 'E' plot, i.e. </p>
<p><img src="http://i.stack.imgur.com/IuwrX.png" alt="Stock price distribution across different ranks"></p>
<p>Though the plots don't look very different, they are slightly different in the magnified version. I need to come up with a measure that can compare A's plot with E's plot. Although mean, median, mode, skewness and kurtosis each capture different aspects of a distribution, I need one value that summarizes their difference. Any idea on what can be used?</p>
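<p>(One candidate single-number summary is the Kolmogorov-Smirnov distance between the two return samples; a sketch with made-up vectors <code>returns_A</code> and <code>returns_E</code> standing in for the raw monthly returns behind the A and E curves:)</p>

<pre><code>set.seed(2)
returns_A <- rnorm(500, mean = 0.01, sd = 0.05)   # hypothetical monthly returns, rank A
returns_E <- rnorm(500, mean = 0.02, sd = 0.06)   # hypothetical monthly returns, rank E

# KS statistic: the maximum distance between the two empirical CDFs
ks.test(returns_A, returns_E)$statistic
</code></pre>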
<p>Thanks in advance!</p>
| 74,276 |
<p>This is a data cleaning and preparation stage question for me. I apologize if the question is basic, but I am a beginner. I have a dataset of a bit less than 4500 records. This is a survey, and <code>year of birth</code> is an important field. Now 670 records do not have this information. I am inclined to think that I should treat this field as 'unknown', but I wanted to ask: does it ever make sense to impute year of birth? </p>
<p>Perhaps you could also point me to any readings about whether demographic data can or should be imputed? Many thanks for your thoughts.</p>
| 37,090 |
<p>Take this example:</p>
<pre><code>data <-matrix(c(227,751,193,541), ncol=2)
column1 <- c(227, 751)
probabilities <- c( 193/(193+541), 541/(193+541) )
chisq.test(data)
chisq.test(column1, p= probabilities)
</code></pre>
<p>When I apply the chi-squared test to the matrix, the result says that this is a</p>
<blockquote>
<p>Pearson's Chi-squared test with Yates' continuity correction</p>
</blockquote>
<p>and reports a p-value of 0.158.</p>
<p>When I perform the second chi-squared test, providing the first column of the matrix and the probabilities calculated from the second column, both the result and the name of the test change dramatically:</p>
<blockquote>
<p>Chi-squared test for given probabilities</p>
</blockquote>
<p>The reported p-value is 0.028.</p>
<p>Considering that I am trying to determine whether the two datasets I have (the columns of the matrix) are <strong>NOT</strong> different from each other:</p>
<p><strong>What is the difference between these two tests? Which one should I use?</strong></p>
| 74,277 |
<p>In my project, one of my objectives is to find outliers in aeronautical engine data. I chose to use the Replicator Neural Network to do so and read the following report on it (<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.3366&rep=rep1&type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.3366&rep=rep1&type=pdf</a>), but I am having a slight understanding issue with the step-wise function (page 4, figure 3) and the prediction values it produces.</p>
<p>A replicator neural network is best described in the above report, but as background: the one I have built has the same number of outputs as inputs and 3 hidden layers with the following activation functions:</p>
<ul>
<li>Hidden layer 1 = tanh sigmoid S1(θ) = tanh, </li>
<li>Hidden layer 2 = step-wise (staircase), S2(θ) = 1/2 + 1/(2(k − 1)) Σ_j tanh[a3(θ − j/N)] </li>
<li>Hidden Layer 3 = tanh sigmoid S1(θ) = tanh, </li>
<li>Output Layer 4 = normal sigmoid S3(θ) = 1/1+e^-θ</li>
</ul>
<p>I have implemented the algorithm and it seems to be training (since the mean squared error decreases steadily during training). The only thing I don't understand is how the predictions are made when the middle layer with the step-wise activation function is applied, since it causes the 3 middle nodes' activations to become specific discrete values (e.g. my last activations on the 3 middle nodes were 1.0, -1.0, 2.0); these values are then forward-propagated, and I get very similar or exactly the same predictions every time.</p>
<p>The section in the report on pages 3-4 best describes the algorithm, but I have no idea what I have to do to fix this, and I don't have much time either :(</p>
<p>Any help would be greatly appreciated. </p>
<p>Thank you</p>
| 74,278 |
<p>I have performed an experiment on plants and have some trouble analysing the data. For the experiment, we have three replicates (in time), two species and three treatments. In each experiment, we recorded the height of the plants. The dataset looks like this:</p>
<pre><code> replicate | treatment | species | height
</code></pre>
<p>I assume <code>treatment</code> and <code>species</code> are fixed factors while <code>replicate</code> is a random one. </p>
<p>For the analysis, I have used the procedure <a href="http://www.r-bloggers.com/linear-mixed-models-in-r/" rel="nofollow">described here</a>, so the code looks like this:</p>
<pre><code>library(nlme)
rs <- read.table('mydata.txt', header=T)
rs$plot <- rs$species
m1.nlme = lme(height ~ species*traitement,
random = ~ 1|replicate,
data = rs)
summary(m1.nlme)
</code></pre>
<p>I got these results:</p>
<pre><code>Linear mixed-effects model fit by REML
Data: rs
AIC BIC logLik
-326.2813 -310.415 169.1407
Random effects:
Formula: ~1 | replicate
(Intercept) Residual
StdDev: 0.03603131 0.04329552
Fixed effects: height ~ species * traitement
Value Std.Error DF t-value p-value
(Intercept) 0.31546667 0.02225388 102 14.175806 0.0000
speciesmaize 0.01645839 0.01127669 102 1.459506 0.1475
traitementtemoin 0.21213868 0.01178502 102 18.000707 0.0000
speciesmaize:traitementtemoin -0.04794414 0.01686908 102 -2.842131 0.0054
Correlation:
(Intr) gnF2F353 trtmnt
speciesmaize -0.249
traitementtemoin -0.238 0.472
speciesmaize:traitementtemoin 0.166 -0.670 -0.702
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-4.7398461 -0.3546214 0.1025950 0.6297446 1.6118499
Number of Observations: 108
Number of Groups: 3
</code></pre>
<p>The way I interpret this is that I have a <code>treatment</code> effect, as well as a <code>treatment*species</code> effect, but no effect of the <code>species</code> in itself. </p>
<p>Is this correct?</p>
| 74,279 |
<p>Disclaimer: Statistics is not my strong suit, so if my question is nonsense I apologize. I'm a beginner, but I really want to understand this.</p>
<p>My question is: why do I get so widely different parameter estimates when using different transformations on my data in a non-linear regression ?</p>
<p>I'm trying to do a nonlinear regression and to estimate the uncertainty of the fit (confidence interval) using linear approximation. From my understanding, the more linear the shape of the nonlinear function, the more accurate the confidence interval calculated by linear approximation will be. I therefore want to transform the data to make it as linear as possible. The errors in $y$ can be assumed to be log-normal. My data is monotonic and assumed to follow a power function in most cases.</p>
<p>$$ y = a*(x-x_0)^b $$</p>
<p>where $y$ is river discharge, $x$ is an arbitrary water level in the river and $x_0$ is the water level where discharge $y$ is 0. This can be rewritten in log-transformed form, which is nice and linear:
$$ \log(y) = a + b \cdot \log(x-x_0), $$
where $a$ now plays the role of $\log(a)$ in the power form above.</p>
<p>I need to estimate the parameters $a$, $b$ and $x_0$, and to estimate them simultaneously I use nonlinear regression. I also have some data that follow quadratic functions, so I would like to set up (and understand) a general non-linear method.</p>
<p>I use R and <code>nlsLM()</code> from <code>minpack.lm</code> to carry out the non-linear regression.
Here is some example code:</p>
<pre><code>library(minpack.lm)
xdata <- c(19, 21, 24, 25, 29, 34, 35, 40, 40, 46, 48, 48, 52, 56, 57, 65, 65, 68)
ydata <- c(10, 11, 14, 20, 24, 50, 42, 96, 89, 134, 135, 161, 171, 218, 261, 371, 347, 393)
df<-data.frame(x=xdata, y=ydata)
#weights applied in the case of no transformation (relative error assumed to be the same for all y data)
W<-1/ydata
# NLS regression, no transformation (note: W is defined above but not passed via a weights argument here)
nlsmodel1<-nlsLM(y ~ a*(x-x0)^b,data=df,start=list(a=0.1, b=2.5,x0=0))
# log transformed
nlsmodel2<-nlsLM(log(y) ~ a+b*(log(x-x0)),data=df,start=list(a=0.1, b=2.5,x0=0))
> coef(nlsmodel1)
a b x0
0.005158377 2.719693093 4.896772931
> coef(nlsmodel2)
a b x0
-8.683758 3.445699 -4.139127
> exp(-8.683758)
[1] 0.0001693136
</code></pre>
<p>I understand that the weights are very important and can have a say in the differences here, but surely not by this much? My judgement of the two parameter sets is that <code>nlsmodel1</code> performs "better", and that the <code>b</code> coefficient is too high in the fit from <code>nlsmodel2</code>. <code>nlsmodel2</code> does a poor job in the upper end of the data, with large residuals there. But why are they so different? I feel like I'm doing something very silly here, and am unable to see the error. I have tried some other transformations, for example only transforming the LHS as <code>log(y)</code>, but the problem remains.</p>
<p>I appreciate any tips that can help me improve, and not least understand, the transformed fit.</p>
<p>Cheers</p>
<p>Related <a href="http://stats.stackexchange.com/questions/58928/nonlinear-regression-confidence-intervals-on-transformed-or-untransformed-param">post #1</a> and <a href="http://stats.stackexchange.com/questions/69524/on-nonlinear-regression-fits-and-transformations">post #2</a></p>
| 74,280 |
<p>I wanted to find out if there are any implications of using OLS when I have cardinal (count) data as dependent variables. My dependent variables are counts of a certain outcome, taking values in the natural numbers $0, 1, 2, 3, \ldots, N$.</p>
<p>My independent variable is a series of realized volatility.</p>
<p>Is there a better way such data could be modeled?</p>
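<p>(If an alternative to OLS is on the table, the usual starting point for count outcomes is a Poisson, or negative-binomial, GLM; a minimal sketch with hypothetical <code>counts</code> and <code>vol</code> vectors:)</p>

<pre><code>set.seed(4)
vol    <- runif(100, 0.1, 0.5)                    # realized volatility (made up)
counts <- rpois(100, lambda = exp(1 + 2 * vol))   # count outcome (made up)

summary(glm(counts ~ vol, family = poisson))
</code></pre>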
| 74,281 |
<p>I have a background in computer programming and elementary number theory, but no real statistics training, and have recently "discovered" that the amazing world of a whole range of techniques is actually a statistical world. It seems that matrix factorizations, matrix completion, high-dimensional tensors, embeddings, density estimation, Bayesian inference, Markov partitions, eigenvector computation and PageRank are all highly statistical techniques, and that the machine learning algorithms that use such things use a lot of statistics. </p>
<p>My goal is to be able to read papers that discuss such things, and implement or create the algorithms, while understanding the notation, "proofs" and statistical arguments used. I guess the hardest thing is to follow all the proofs that involve matrices. </p>
<p>What basic papers can get me started? Or a good textbook with exercises that are worth working through?</p>
<p>Specifically, some papers I would like to understand completely are:</p>
<ol>
<li><a href="http://www-stat.stanford.edu/~candes/papers/MatrixCompletion.pdf" rel="nofollow">Exact Matrix Completion via Convex Optimization, Candes, Recht, 2008</a></li>
<li><a href="http://arxiv.org/pdf/1207.4684.pdf" rel="nofollow">The Fast Cauchy Transform and Faster Robust Linear Regression, Clarkson et al, 2013</a></li>
<li><a href="http://arxiv.org/pdf/1211.6085.pdf" rel="nofollow">Random Projections for Support Vector Machines, Paul et al, 2013</a></li>
<li><a href="http://arxiv.org/pdf/1302.5125.pdf" rel="nofollow">High-Dimensional Probability Estimation with Deep Density Models, Rippel, Adams, 2013</a></li>
<li><a href="http://arxiv.org/pdf/1302.5337.pdf" rel="nofollow">Obtaining Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion, Király, Theran, 2013</a></li>
</ol>
| 45,474 |
<p>Given two multivariate Gaussians (say in 2D, with the mean $\mu$ a 2D point and the covariance matrix $\Sigma$ a $2 \times 2$ matrix), $N_1(\mu_1,\Sigma_1)$ and $N_2(\mu_2,\Sigma_2)$, I would like to derive the pdf of $N_1+N_2$. </p>
<p>Can anyone point me to a reference where I can find the derivation of the pdf of $N_1 + N_2$?</p>
<p>Thanks in advance</p>
| 74,282 |
<p>Good day all,</p>
<p>Suppose that I am conducting a questionnaire study that tries to measure subjects' level of awareness of a programming language and to relate that level of awareness to working conditions, methods, etc.</p>
<p>To improve my precision I decided to go with stratified sampling. If I have 1 criterion for stratification, such as geographic distribution (to make sure I don't over-represent subjects from areas that have fewer programmers), then I end up with 6 distinct strata (country provinces).</p>
<p>I know how to go about analysing these to find the margin of error, standard error, etc., but I realised that is not good enough and I need to introduce more criteria for stratification, such as level of education (so I don't over-represent a group that is not very common among programmers), level of seniority, etc.</p>
<p>I have the population proportions (%) for all these criteria, but I don't know how to go about sampling when I have more than one criterion.</p>
<p>Thanks for your help :) </p>
| 37,106 |
<p>I want to add additional features to my regression to make the model more flexible and lower its bias. After searching around the internet, it seems a good idea is to add squared/product features, which can help the regression capture more relationships through its coefficients.</p>
<p>I want to add products of pairs of features, $x \cdot y$, to my regression model; I think that can help the model learn relationships between feature variables that may be highly correlated.</p>
<p>The problem is that the interpretation of $x \cdot y$ seems incorrect for my setting: $high \cdot high = higher$, e.g. $5 \cdot 5 = 25$, is good, but $low \cdot low = higher$ seems strongly invalid, e.g. $-5 \cdot -5 = 25$, which should be $-25$ for my problem statement. </p>
<p>I have the same problem for $high \cdot low = low$, e.g. $5 \cdot -5 = -25$; I think it should be somewhere in $(-5, 5)$, for example $0$.</p>
<p>Does anyone have any thoughts on what formula I could use to get a correct representation?</p>
<p>I would also appreciate other ideas for new derived (calculated) features.</p>
| 74,283 |
<p>I have a set of variables for building credit scorecards with logistic regression. I need to bin some variables, e.g. years of credit history. What is the method to determine how many bins to use and what the interval for each bin should be? Thanks</p>
| 74,284 |
<p>So, when I did first year stats in undergrad, we did an experiment where we tampered with a bunch of coins, to see if it would cause a statistical difference in the results. This is a graph of the ratio of $heads:tosses$ for each series of flips:</p>
<p><img src="http://i.stack.imgur.com/0KGkp.png" alt="coin tosses"></p>
<p>We took the full data set in each case, and found that there were no significant differences from a null hypothesis of 50% heads.</p>
<p>Had we stopped at 100 or 150 flips, we would probably have concluded that the Cupped coin was significantly biased. Would this have been invalid, and why? In particular, does it mean anything that the ratio is outside the 95% confidence interval more than 5% of the time?</p>
| 74,285 |
<p>I have a "basic statistics" concept question. As a student I would like to know if I'm thinking about this totally wrong and why, if so: </p>
<p>Let's say I am hypothetically trying to look at the relationship between "anger management issues" and say divorce (yes/no) in a logistic regression and I have the option of using two different anger management scores -- both out of 100.<br>
Score 1 comes from questionnaire rating instrument 1; my other choice, score 2, comes from a different questionnaire. Hypothetically, we have reason to believe from previous work that anger management issues give rise to divorce.<br>
If, in my sample of 500 people, the variance of score 1 is much higher than that of score 2, is there any reason to believe that score 1 would be a better score to use as a predictor of divorce based on its variance? </p>
<p>To me, this instinctively seems right, but is it so?</p>
| 74,286 |
<p>I have time series data that represent dates/times of trades taken in a financial market. </p>
<p>I would like to assign a score to this data that represents whether the trades are <code>mostly clustered</code> around particular time values or if they are <code>mostly spread out</code> evenly. I am going to have about 1000+ results per dataset.</p>
<p><strong>Example situation one (High degree of "clustering" ):</strong> </p>
<pre><code>1. 01/01/01 : 13:00
2. 01/01/01 : 13:10
3. 01/01/01 : 13:15
4. 01/01/01 : 13:25
5. 03/05/01 : 17:20
6. 03/05/01 : 17:35
7. 03/05/01 : 17:40
8. 03/05/01 : 17:45
</code></pre>
<p><strong>Example situation two (Low degree of "clustering")</strong></p>
<pre><code>1. 01/01/01 : 13:00
2. 01/05/01 : 02:30
4. 02/12/01 : 06:40
5. 02/25/01 : 02:30
6. 03/30/01 : 21:10
7. 04/12/01 : 02:20
8. 05/02/01 : 03:25
</code></pre>
<p>I can of course convert all the timestamps to POSIX time or whatnot, so doing calculations with the time values won't be a problem. </p>
<p>I was thinking possibly standard error?</p>
<p>(For those who want more background info: I am using backtest results to modulate the size of my entry position in a complex manner. If the results contain trades that are clustered together, then they don't really count as 1 trade each (more like one big trade). This means that such results are untrustworthy and I should not act on them.)
Thanks!</p>
| 37,109 |
<p>I am going to be hosting a number (~10) of <a href="http://en.wikipedia.org/wiki/Potluck" rel="nofollow">potluck meals</a> over the course of the summer; my pool of people to invite is about 40, with about 10-15 coming to each meal. So I figure this would be a good opportunity to record data over time about the meals/people. The issue I am having is that I am not sure what information to keep track of and what format to record it in.</p>
<p>Here are some examples of trends I think would be interesting: </p>
<ol>
<li>How many meals I have invited people to</li>
<li>On average, which round of invites people got invited in (some people RSVP "no" at the beginning, and so there is another 'round' of invites)</li>
<li>How many meals people have attended</li>
<li>What items people have brought</li>
</ol>
<p>I have started a spreadsheet where each page is a meal, the first few columns of the page represent different rounds of invites, I input a persons name in the column that corresponds to the round of their invite. The last two columns are the ultimate rsvp from any round of invitation and the item brought if applicable.</p>
<p>To summarize I am looking for an efficient and concise way of recording the data associated with these meals for the trends mentioned. Additionally I am looking for other trends I can keep track of, I am doing a lot of this communication via email so timestamps would potentially be available for other interesting trends.</p>
<p>Help with good tags for this question would be appreciated.</p>
| 74,287 |
<p>In a plot of my time series it is clearly visible that there is a structural break, but I have to find the exact date. I want to test this with the Chow test. I understand how to perform this test if the date of the structural break is known, by simply using a linear regression with two dummies, one for the intercept and one for the slope,</p>
<p>$$R_t = \beta_0 + \beta_0^* D_i + \beta_1 R_{m,t} + \beta_1^* D_t R_{m,t} + \varepsilon_t,$$ </p>
<p>and then applying the Chow test. But if I do not know the exact date (in other words, I do not know when $D_i$ and $D_t$ should be 1), how can I find it?</p>
<p>Thank you very much for reply</p>
| 74,288 |