question | group_id
---|---|
<p>I want to cluster <em>Facebook</em> users based on the number of mutual friends: if two users have more mutual friends, they should be considered closer to each other. I am thinking about using the k-medoids clustering algorithm. In R I can use PAM for k-medoid clustering, but it needs the data as a distance matrix. How can I use the number of mutual friends as a similarity criterion, and how can I convert this similarity into a distance matrix? </p>
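<p>(To make the question concrete, here is roughly the kind of thing I have in mind, on a made-up mutual-friend matrix; the "max minus similarity" step is exactly the part I am unsure about:)</p>
<pre><code>library(cluster)  # for pam()

## toy stand-in: symmetric matrix of mutual-friend counts between 5 users
set.seed(1)
mutual <- matrix(sample(0:20, 25, replace = TRUE), 5, 5)
mutual <- (mutual + t(mutual)) / 2
diag(mutual) <- 0

## the step I am unsure about: turning similarity (more mutual friends = closer)
## into dissimilarity -- here simply max(similarity) minus similarity
d <- max(mutual) - mutual
diag(d) <- 0

fit <- pam(as.dist(d), k = 2)  # k-medoids on the distance matrix
fit$clustering
</code></pre>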
| 74,408 |
<p>I need to do a weighted multiple linear regression. If I want to weight certain observations differently, am I correct that I simply have to multiply the y(i) of that observation, and the corresponding row in X, by that weight?</p>
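<p>(To be explicit about what I mean, here is a small sketch on simulated data; the question is whether this scaling is the right operation:)</p>
<pre><code>set.seed(1)
n <- 50
X <- cbind(intercept = 1, x1 = rnorm(n), x2 = rnorm(n))  # design matrix
y <- as.vector(X %*% c(1, 2, -1) + rnorm(n))
w <- runif(n, 0.5, 2)                                     # observation weights

## the operation described above: scale y(i) and the i-th row of X by w(i),
## then run an ordinary (unweighted) least-squares fit on the scaled data
fit_scaled <- lm.fit(x = X * w, y = y * w)
fit_scaled$coefficients
</code></pre>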
| 74,409 |
<p>I have been running a linear regression where my dependent variable is a composite. By this I mean that it is built up of components that are added and multiplied together. Specifically, for the composite variable A:</p>
<pre><code>A = (B*C + D*E + F*G + H*I + J*K + L*M)*(1 - N)*(1 + O*P)
</code></pre>
<p>None of the component variables are used as independent variables (the only independent variables are dummy variables). The component variables are mostly (though not completely) independent of one another.</p>
<p>Currently I just run a regression with A as the DV, to estimate each dummy variable's effect on A. But I would also like to estimate each dummy variable's effect on the separate components of A (and in the future I hope to try applying separate priors for each component). To do this I have been running several separate regressions, each with a different one of the components as the DV (and using the same IVs for all the regressions). If I do this, should I expect that for a given dummy IV, I could recombine the coefficient estimates from all the separate regressions (using the formula listed above) and get the same value as I get for that IV when I run the composite A regression? Am I magnifying the coefficient standard errors by running all these separate regressions and then trying to recombine the values (there is a lot of multicollinearity in the dummy variables)? Is there some other structure than linear regression that would be better for a case like this?</p>
| 49,090 |
<p>I need some guidance on the appropriate level of pooling to use for difference-of-means tests on time series data. I am concerned about temporal and sacrificial pseudo-replication, which seem to be in tension in this application. This is in reference to a mensurative study rather than a manipulative experiment.</p>
<p><strong>Consider a monitoring exercise</strong>: A system of sensors measures dissolved oxygen (DO) content at many locations across the width and depth of a pond. Measurements for each sensor are recorded twice daily, as DO is known to vary diurnally. The two values are averaged to record a daily value. Once a week, the daily results are aggregated spatially to arrive at a single weekly DO concentration for the whole pond.</p>
<p>Those weekly results are reported periodically, and further aggregated – weekly results are averaged to give a monthly DO concentration for the pond. The monthly results are averaged to give an annual value. The annual averages are themselves averaged to report decadal DO concentrations for the pond.</p>
<p>The goal is to answer questions such as: Was the pond's DO concentration in year X higher, lower, or the same as the concentration in year Y? Is the average DO concentration of the last ten years different than that of the prior decade? The DO concentrations in a pond respond to many inputs of large magnitude, and thus vary considerably. A significance test is needed. The method is to use a T-test comparison of means. Given that the decadal values are the mean of the annual values, and the annual values are the mean of the monthly values, this seems appropriate.</p>
<p><strong>Here’s the question</strong> – you can calculate the decadal means and the T-values of those means from the monthly DO values, or from the annual DO values. The mean doesn’t change of course, but the width of the confidence interval and the T-value does. Due to the order of magnitude higher N attained by using monthly values, the CI often tightens up considerably if you go that route. This can give the opposite conclusion vs using the annual values with respect to the statistical significance of an observed difference in the means, using the same test on the same data. <strong>What is the proper interpretation of this discrepancy?</strong></p>
<p>If you use the monthly results to compute the test statistics for a difference in decadal means, are you running afoul of temporal pseudoreplication? If you use the annual results to calculate the decadal tests, are you sacrificing information and thus pseudoreplicating?</p>
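<p>(To make the comparison concrete, here is a stripped-down sketch of the two calculations being compared, on simulated stand-in data; the column names and numbers are made up:)</p>
<pre><code>set.seed(1)
## toy stand-in for the pond record: two decades, 10 years each, 12 monthly values per year
monthly <- expand.grid(month = 1:12, year = 1:10, decade = c("first", "second"))
monthly$do <- rnorm(nrow(monthly),
                    mean = ifelse(monthly$decade == "second", 8.2, 8.0), sd = 1)

## decadal comparison using the monthly values (n = 120 per decade)
t.test(do ~ decade, data = monthly)

## decadal comparison using annual means (n = 10 per decade)
annual <- aggregate(do ~ year + decade, data = monthly, FUN = mean)
t.test(do ~ decade, data = annual)
</code></pre>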
| 37,281 |
<p>I am fitting a few time series using <code>fitdistr</code> in R. To see how different distributions fit the data, I compare the log likelihood from the <code>fitdistr</code> function. Also, I am fitting both the original data and the standardized data (i.e., (x - mean)/sd).</p>
<p>What I am confused about is that, the original and standardized data generate log likelihood of different signs.</p>
<p>For example,</p>
<p>original:</p>
<pre><code> loglik m s df
t 1890.340 0.007371982 0.05494671 2.697321186
cauchy 1758.588 0.006721215 0.04089592 0.006721215
logistic 1787.952 0.007758433 0.04641496 0.007758433
</code></pre>
<p>standardized:</p>
<pre><code> loglik m s df
t -2108.163 -0.02705098 0.5469259 2.69758567
cauchy -2239.915 -0.03361670 0.4069660 -0.03361670
logistic -2210.552 -0.02328445 0.4619152 -0.02328445
</code></pre>
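<p>(For reference, the tables above come from calls of the following form; simulated data is used here in place of my series, but it reproduces the same sign flip:)</p>
<pre><code>library(MASS)

set.seed(1)
x  <- rlogis(1000, location = 0, scale = 0.05)  # stand-in for the original series
xs <- (x - mean(x)) / sd(x)                     # standardized version

fitdistr(x,  "logistic")$loglik   # positive, as in the first table
fitdistr(xs, "logistic")$loglik   # negative, as in the second table
</code></pre>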
<p>How can I interpret this? Is a larger loglik better, or a smaller one?</p>
<p>Thank you!</p>
| 5,353 |
<p>Let us take two formulations of the $\ell_{2}$ SVM optimization problem, one constrained: </p>
<p>$\min_{\alpha,b} ||w||_2^2 + C \sum_{i=1}^n {\xi_{i}^2}$ </p>
<p>s.t $ y_i(w^T x_i +b) \geq 1 - \xi_i$<br>
and $\xi_i \geq 0 \;\forall i$</p>
<p>and one unconstrained: </p>
<p>$\min_{\alpha,b} ||w||_2^2 + C \sum_{i=1}^n \max(0,1 - y_i (w^T x_i + b))^2$</p>
<p>What is the difference between those two formulations of the optimization problem? Is one better than the other?</p>
<p>Hope I didn't make any mistake in the equations. Thanks.</p>
<p>Update : I took the unconstrained formulation from <a href="http://www.kyb.mpg.de/publications/attachments/primal_%5b0%5d.pdf" rel="nofollow">Olivier Chapelle's work</a>. It seems that people use the unconstrained optimization problem when they want to work on the primal and the other way around when they want to work on the dual, I was wondering why?</p>
| 74,410 |
<p>(redirected here from mathoverflow.net)
Hello, </p>
<p>At work I was asked the probability of a user hitting an outage on the website. I have the following metrics. Total system downtime = 500,000 seconds a year. Total number of seconds in a year = 31,556,926. Thus, p of the system being down = 500,000/31,556,926 ≈ 0.0158, or about 1.58%.
We can also assume that downtime occurs evenly for a period of approximately 2 hours per week.</p>
<p>Now, here is the tricky part. We have a metric for the total number of users attempting to use the service = 16,000,000 during the same time-frame. However, these are subdivided by the total time spent using the service. So, let's say we have 7,000,000 users that spend between 0 and 30 seconds attempting to use the service. For these users, what is the probability of hitting the system when it is unavailable? (We can assume an average of 15 seconds spent in total if this simplifies things.)</p>
<p>I looked up odds ratios and risk factors, but I am not sure how to calculate the probability of the event occurring at all.</p>
<p>Thanks in advance!</p>
<p>P.S. I was given a possible answer, at <a href="http://mathoverflow.net/questions/52816/probability-calculation-system-uptime-likelihood-of-occurence" rel="nofollow">http://mathoverflow.net/questions/52816/probability-calculation-system-uptime-likelihood-of-occurence</a> and was following the advice on posting the question in the most appropriate forum.</p>
| 74,411 |
<p>I need some help here.</p>
<p>I have some data in which every entry can take one or more levels of a categorical variable. For example, I have a category with 3 levels:</p>
<pre><code>entry category
1 A, B, C
2 B
3 C, A, B
</code></pre>
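<p>(In R terms, this is how the raw data currently looks; the comma-separated strings are just a single character column:)</p>
<pre><code>df <- data.frame(entry    = 1:3,
                 category = c("A, B, C", "B", "C, A, B"),
                 stringsAsFactors = FALSE)
</code></pre>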
<p>How should I organize it into a file so that R can easily discriminate among the levels for my analysis? Something like this:</p>
<pre><code>Categories entries other_results
A 2 ~
B 3 ~
C 2 ~
</code></pre>
<p>I thought about using a comma-separated list, as shown in the first example. But then what should I do in R to transform those strings into categories?</p>
<p>CLARIFICATION:
I'd like to avoid creating a column for every level. This variable has many levels and many other variables are already present. This would make the file needlessly big and not easily readable by humans.</p>
<p>thanks!</p>
| 74,412 |
<p>I am trying to analyze a set of nonnegative continuous non-integer data (i.e. the data points are not counts) that are mostly between 0 and 3, and whose distribution is highly right-skewed even after log transformation. I am thinking that one possibility may be a hurdle model, modelling the zeros and the positive data points separately, but could anyone please suggest other possible choices?
Covariates include categorical and non-negative continuous variables.</p>
| 74,413 |
<p>I am working on a project which requires me to watch video of athletes and measure the straightness/smoothness/waviness, whatever term is acceptable, of their spine. Dividing the spine into segments may also be acceptable if it provides the best results.</p>
<p>Right now I can plot x and y points on still images, but I am not sure what my next step should be. Simple measures like curvature, or the radius of small fitted circles, don't work because the subject may be nearer to or farther from the camera in different images, which skews the results. I would like the output to be some sort of ratio.</p>
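<p>(One candidate ratio I have been toying with, though I am not at all sure it is the right measure, is the length along the plotted points divided by the straight-line distance between the endpoints; being a ratio of lengths, it should not depend on how near or far the subject is:)</p>
<pre><code>## made-up digitised spine points (pixels), top to bottom
x <- c(100, 102, 105, 103, 101, 100)
y <- c( 10,  60, 110, 160, 210, 260)

path  <- sum(sqrt(diff(x)^2 + diff(y)^2))                  # length along the points
chord <- sqrt((x[1] - x[length(x)])^2 + (y[1] - y[length(y)])^2)
path / chord                                               # >= 1; unit-free
</code></pre>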
| 37,283 |
<p>I have a general question regarding a varying intercept / varying slope model in jags/stan:</p>
<p>I have data from a psychophysics experiment, with one covariate, one within-subjects factor and several subjects:</p>
<p>The response variable y is binary, and I want to model the probability of giving a response as a function of the (centred) covariate x in all conditions. I think the lme4 model should be:</p>
<pre><code>glm_fit <- glmer(formula = y ~ x + (1 + x | cond/subject),
data = df,
family = binomial(probit))
</code></pre>
<p>where the slope and intercept vary among conditions and among subjects within conditions.</p>
<p>The probit link function could be replaced by a logit link.</p>
<p>My question is: how do I correctly model the correlations between subjects' intercepts and slopes in all conditions in jags or stan?</p>
<p>The data are in long format, and the jags model uses nested indexing for the condition factor.
The jags model is:</p>
<pre><code>model {
# likelihood
for (n in 1:N) { # loop over N observations
# varying intercept/slope for every condition*subject combination
probit(theta[n]) <- alpha[cond[n], subject[n]] + beta[cond[n], subject[n]] * x[n]
y[n] ~ dbern(theta[n])
}
# priors
for (j in 1:J) { # loop over J conditions
for (s in 1:S) { # loop over S subjects
      # each subject's intercept/slope in each condition comes from a
      # group-level prior; indexed [condition, subject] to match the likelihood
      alpha[j, s] ~ dnorm(mu_a[j], tau_a[j])
      beta[j, s] ~ dnorm(mu_b[j], tau_b[j])
}
# non-informative group level priors
mu_a[j] ~ dnorm (0, 1e-03)
mu_b[j] ~ dnorm (0, 1e-03)
tau_a[j] <- pow(sigma_a[j], -2)
tau_b[j] <- pow(sigma_b[j], -2)
sigma_a[j] ~ dunif (0, 100)
sigma_b[j] ~ dunif (0, 100)
}
}
</code></pre>
<p>I have left out all correlations between the parameters on purpose. The problem I'm having is that the intercepts and slope are correlated for each subject in each condition, and the intercept/slope pairs are correlated across conditions.</p>
<p>Does anyone have any ideas? What is the best way to implement this? </p>
| 74,414 |
<p>If polynomial regression models nonlinear relationships, how can it be considered a special case of multiple linear regression?</p>
<p>Wikipedia notes that "Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function $\mathbb{E}(y | x)$ is linear in the unknown parameters that are estimated from the data."</p>
<p>How is polynomial regression linear in the unknown parameters if the parameters are coefficients for terms with order $\ge$ 2?</p>
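<p>For concreteness, the kind of model I have in mind is the quadratic case, which is what I mean by "coefficients for terms with order $\ge$ 2":</p>
<p>$$\mathbb{E}(y \mid x) = \beta_0 + \beta_1 x + \beta_2 x^2.$$</p>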
| 74,415 |
<p>I have been studying Statistics recently, using a few introductory texts. </p>
<p>My issue is that these texts only seem to provide analysis methods suitable for linear relationships: Pearson r correlation coefficients, etc. Additionally, all the statistical methods presented seem to be based on the normal (Gaussian) distribution.</p>
<p>My question is what tools, methods and techniques are applicable to the analysis of non-linear, non-Gaussian statistics, with the related question of what resources (particularly books/textbooks) can I use to become acquainted with these procedures?</p>
| 37,284 |
<p>Given $T=G+A$, where $A$ and $G$ are independent random variables, I'd like to estimate the distribution of $G$ given empirical (measured) distributions of $T$ and $A$. Of note: all three random variables are guaranteed to be nonnegative, i.e. supported on $[0,\infty)$ (unlike the more studied case where $A$ would be a zero-mean Gaussian). </p>
<p>I understand this is a deconvolution, and I've tried two different deconvolutions with little success. Before deconvolving, most references recommend converting the empirical densities (in this case, $T$ and $A$) into smoother forms using kernel density estimation. I'll call these $P_T$ (black) and $P_A$ (red) respectively.</p>
<p>Then I've tried two deconvolution approaches. </p>
<p>(1) Computing $\mathcal{F}^{-1}(\mathcal{F}(P_T)/\mathcal{F}(P_A))$ using matlab's fft and ifft functions. (green)</p>
<p>(2) Computing the deconvolution matrix for $A$ and using a truncated SVD method to solve for $P_G$. (blue)</p>
<p><img src="http://i.stack.imgur.com/S0mLe.jpg" alt="Deconvolution"></p>
<p>Almost certainly the general shape of both solutions is correct, except that (1) both oscillate significantly, whereas the correct $P_G$ will not, and (2) both deconvolutions give me negative numbers. It seems like there should be a way to enforce that the estimated density $P_G$ is nonnegative, but I'm not sure how. Thoughts?</p>
| 74,416 |
<p>I have a sample $x=(4, 3, 1, 2, 2, 2, 2, 5, 7, 3, 1, 2, 3, 4, 3, 2, 3, 3, 3, 4)$ of size $n=20$ from a binomial distribution with 10 trials and probability of success $p$. I am asked to construct the asymptotic 95% confidence interval based on the likelihood. I think this should be the set
$$
C = \left\{ p \in (0,1) : \prod_{i=1}^{n} p^{x_i} (1-p)^{10-x_i}\geq \exp(-q/2) \prod_{i=1}^{n} (\overline{x}_n/10)^{x_i} (1-\overline{x}_n/10)^{10-x_i} \right\}
$$
where $q$ is the $0.95$ quantile of the $\chi^2_1$ distribution. If I take logs of both sides, the inequality becomes
\begin{align*}
&n\overline{x}_n\log p + (n10 - n\overline{x}_n)\log(1-p) \\
&\quad \quad \geq -q/2 + n\overline{x}_n\log(\overline{x}_n/10) + (n10 - n\overline{x}_n)\log(1-\overline{x}_n/10)
\end{align*}
If I plug in the data, it becomes
$$
59\log p + 141\log(1-p) \geq -123.2343
$$</p>
<p>The problem is that this inequality is not satisfied for any $p \in (0,1)$, implying that the confidence set $C$ is empty.</p>
<p>EDIT: The previous sentence is false. I was plotting the function on too large a scale to notice that it does indeed poke above $-123.2343$ briefly. I suppose the question in the title still stands.</p>
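<p>(For reference, this is how I re-checked it numerically, using the totals $n\overline{x}_n = 59$ and $10n - n\overline{x}_n = 141$ from above:)</p>
<pre><code>loglik <- function(p) 59 * log(p) + 141 * log(1 - p)  # kernel from the data above
cutoff <- -123.2343

## the two points where the log-likelihood kernel crosses the cutoff
lower <- uniroot(function(p) loglik(p) - cutoff, c(0.01, 59/200))$root
upper <- uniroot(function(p) loglik(p) - cutoff, c(59/200, 0.99))$root
c(lower, upper)
</code></pre>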
<p>Am I doing something wrong?</p>
<p>EDIT: Is something like this <a href="http://stats.stackexchange.com/a/41003/42850">http://stats.stackexchange.com/a/41003/42850</a> going on here?</p>
| 37,285 |
<p>I am trying to test whether there is a significant interaction between an ordinal (<code>A</code>) and categorical variable (<code>B</code>) in R using <code>glm</code>. When I create a model that only includes the interaction term <code>A:B</code>, the model runs fine and I get a reasonable estimate. When I run the "full" model <code>X ~ A+B+A*B</code>, I get an unreasonably high standard error. However, when I run each term on its own <code>X ~ A</code> or <code>X ~ B</code>, I also get reasonable estimates. I suspect it might have something to do with near-perfect fit for one combination of my ordinal and categorical variables but I'm not sure. Any ideas on what is going on? Is it bad form to just have a model with only an interaction term <code>A:B</code> and not the <code>A+B+A*B</code>?</p>
<pre><code>model1 <- glm(X~A:B, family=binomial(logit))
model2 <- glm(X~A, family=binomial(logit))
model3 <- glm(X~B, family=binomial(logit))
model4 <- glm(X~A+B+A*B, family=binomial(logit))
summary(model1)
Estimate Std. Error z value Pr(>|z|)
(Int) 3.4320 1.1497 2.985 0.00283 **
A:B [no] -1.3857 0.6813 -2.034 0.04195 *
A:B [yes] -2.2847 0.8017 -2.850 0.00437 **
summary(model2)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.9572 1.0792 2.740 0.00614 **
A -1.5221 0.6495 -2.343 0.01911 *
summary(model3)
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.2809 0.5055 2.534 0.0113 *
B[yes] -1.1268 0.6406 -1.759 0.0786 .
summary(model4)
Estimate Std. Error z value Pr(>|z|)
(Intercept) 36.66 4125.28 0.009 0.993
A -18.10 2062.64 -0.009 0.993
B[yes] -34.24 4125.28 -0.008 0.993
A:B[yes] 16.46 2062.64 0.008 0.994
> dput(my.data)
structure(list(X = structure(c(2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L,
1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L,
2L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L,
1L, 2L, 2L, 1L, 2L, 2L, 2L), .Label = c("0", "1"),
class = "factor"),
A = structure(c(1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 2L,
2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L,
1L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 1L, 1L,
1L, 1L, 1L, 1L, 2L, 1L, 1L), .Label = c("1", "2"),
class = "factor"),
B = structure(c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 1L, 1L, 2L), .Label = c("no","yes"),
class = "factor")),
.Names = c("X", "A", "B"), row.names = c(NA, -49L), class = "data.frame")
</code></pre>
| 74,417 |
<p>Following are the results from Fisher-type unit-root test for RDI (dependent variable). <strong>How do you interpret it?</strong> </p>
<pre><code>Based on Phillips-Perron tests:
Ho: All panels contain unit roots Number of panels = 100
Ha: At least one panel is stationary Avg. number of periods = 10.74
AR parameter: Panel-specific Asymptotics: T -> Infinity
Panel means: Included
Time trend: Included
Newey-West lags: 1 lag
Statistic p-value
Inverse chi-squared(196) P 207.1519 0.2788
Inverse normal Z 2.0005 0.9773
Inverse logit t(389) L* 0.5211 0.6987
Modified inv. chi-squared Pm 0.5633 0.2866
P statistic requires number of panels to be finite.
Other statistics are suitable for finite or infinite number of panels.
</code></pre>
| 49,932 |
<p>Following the question <a href="http://stats.stackexchange.com/questions/24516/interaction-of-categorical-and-continuous-variable-using-glm-in-spss">here</a>, how do I refer to GLM in my project? </p>
<p>Just a recap:</p>
<p>GLM is a general model that includes a number of regression techniques. I have used it to test the main effects of IVs (continuous and categorical) on the DV and also the IV-IV interactive effect on the DV.</p>
<p>I am confused about whether I should just say that I used GLM for data analysis, or is there a specific method to mention with GLM (e.g. I used SSI in GLM for data analysis). </p>
<p>This is necessary (in my mind) because names of other statistical tests are precise (e.g. ANOVA). I know GLM is general but there has to be some way to communicate precise information on this technique.</p>
<p>Many thanks for any pointers.</p>
| 74,418 |
<p>I'm fitting a multiple linear regression model between 4 categorical variables (with 4 levels each) and a numerical output. My dataset has 43 observations.</p>
<p>R gives me the following $p$-values from the $t$-test for every slope coefficient: $.15, .67, .27, .02$. Thus, the coefficient for the 4th predictor is significant at the $\alpha = .05$ significance level.</p>
<p>On the other hand, R gives me a $p$-value from an $F$-test of the null hypothesis that all my slope coefficients are equal to zero. For my dataset, this $p$-value is $.11$.</p>
<p>My question: how should I interpret these results? Which $p$-value should I use and why? Is the coefficient for the 4th variable significantly different from $0$ at the $\alpha = .05$ significance level?</p>
<p>I've seen a related question, <a href="http://stats.stackexchange.com/questions/3549/f-and-t-statistics-in-a-regression">$F$ and $t$ statistics in a regression</a>, but there was an opposite situation: high $t$-test $p$-values and low $F$-test $p$-value. Honestly, I don't quite understand why we would need an $F$-test in addition to a $t$-test to see if linear regression coefficients are significantly different from zero.</p>
| 49,368 |
<p>I'm working with a dataset consisting of degradation rates for proteins in an organism, a total of approx 3750 rates in total. Obtaining these rates is difficult, so the majority of rates are reported with only n=1, however a small subset have been reported with n=2, so the sample standard deviation for each protein can be (poorly) estimated from this.</p>
<p>The rates are sampled from a large population of cells so, it's reasonable to say that any variance is experimental rather than due to variance between cells. </p>
<p>My question is: if all the errors are due to experimental error, and thus probably drawn from the same distribution, is it reasonable to attempt to infer the error distribution and then extrapolate this to the proteins whose rate has only been sampled once?</p>
<p>Alternatively, am I missing a trick, and should really be doing something completely different! I don't have much formal statistical training, so am at a bit of a loss here.</p>
<p>Thanks in advance.</p>
| 74,419 |
<p>Normally when I do factor analysis, I have a whole bunch of variables that need to be reduced. But here I only have two binary variables (yes/no) that I need to reduce into one interval factor. Is Principal Components / Factor Analysis appropriate for this? When I do it, my extraction communalities are really high. I might need a reference to back this up with reviewers.</p>
| 49,110 |
<p>A set is said to be fully-symmetric if for every x in it, negating one of its components results in y such that y is in the set as well.</p>
<p>A set is said to be semi-symmetric if for every x in it, negating all of its components (at once) results in y such that y is in the set as well.</p>
<p>Now examine the optimal solution of the K-means objective with K = 2d+1 for d-dimensional unique observations that form a fully-symmetric set.</p>
<p>Suppose it is known that the optimal set of means w.r.t. the above setup is unique and contains the zero vector. Prove or give a counterexample to the following claim: the set of optimal means is semi-symmetric.</p>
| 49,111 |
<p>Suppose I have <em>repeated</em> time observations for 800 meter sprint for a group of athletes over the course of a season. <strong>I would like to test the hypothesis that all the times are drawn from the same unspecified distribution (i.e., that the distributions of each athlete's finishing times are all the same).</strong> The number of trials for each athlete may be somewhat different. I am willing to assume that athletes do not influence one another through some sort of competitive effect (Bob doesn't run faster because Bill is ahead), but that each athlete's performances are correlated over time.</p>
<p>Are there any simple hypothesis tests that can handle this? My first inclination was a k-sample Anderson Darling test or the Kruskal-Wallis equality-of-populations rank test, but I am not sure if these can be used on panel data since each athlete's performance over time is not independent. I have only seen these tests applied to groups within a cross-section, like testing for the equality of the median income distribution across all <a href="http://www.i-italy.org/files/55image/where%20we%20live/US%20Census%20Regions%20and%20Divisions.png" rel="nofollow">four census regions</a> simultaneously with US state-level data. Instead, could I use these to test whether the distribution for each state's median income over time looks like all the other states' distributions?</p>
| 74,420 |
<p><img src="http://i.stack.imgur.com/1vzQ1.jpg" alt="Table"></p>
<p>I am trying to come up with a method for deciding the winner from among eight student groups competing for a prize.</p>
<p>The raw data and corresponding percentages measure participation per group in a campus program. </p>
<p>The current rules say that the students who have the highest participation by percent of their population in the group win a prize.</p>
<p>However, I have received complaints that this scoring system unfairly benefits small groups because it is supposedly easier to coordinate smaller groups of people and get a larger percentage of successes.</p>
<p>I am not a mathematician, obviously, so I hope my description of the problem makes sense. </p>
<p>As you can see, one of the student groups has 341 students and another has only 11.</p>
<p>Any help you can give will be very much appreciated and may keep a riot from breaking out among the winners/losers.</p>
| 74,421 |
<p>Is there a way of investigating publications bias in a meta-analysis of single case studies? Usually one can assess publication bias using funnel plots or Egger's test to assess funnel plot asymmetry. However those methods require some estimate of the individual studies' variance. In case of single case studies such measure of variance is usually not available. So is there any way to assess publication bias in this case?</p>
<p>Here I provide a dummy example in R. In a meta-analysis of single case studies, all studies investigating a change in symptom score following a treatment have been assessed:</p>
<pre><code>change_score <- c(-3, -2, -1, -7, -8, -11, -1, -10, -4, -11)
</code></pre>
| 74,422 |
<p>I have started with a time series of 5000 random numbers drawn from a uniform distribution with mean 0 and variance 1.</p>
<p>I then construct a Variance-Covariance matrix and use this to induce correlation into the random series.</p>
<p>I want the correlated series to have an acf of the form exp(-mk), where m is, say 0.1.</p>
<p>My problem is this: I ought to be able to calculate the resulting variance of the correlated time series, as I know the variance of the random series, and I have the Variance-Covariance matrix. However, my answers are a factor of 2 larger than they should be.</p>
<p>For example: in the case described above - Uniform distribution, mean 0, variance 1, with autocorrelation of exp(-0.1*k), the variance of the correlated series should be 20, but it is only 10.....</p>
<p>I am sure I must be missing something really simple or doing something very silly, but I just can't see it. Help please! </p>
| 74,423 |
<p>Assume we have two variables A and B, and we are trying to find the Mutual Information between them. Can the mutual information enable us to find if there exists a positive or negative relationship between two variables?</p>
| 74,424 |
<p><strong>Background:</strong> I asked hundreds of participants in my survey how much they are interested in selected areas (by five point Likert scales with 1 indicating "not interested" and 5 indicating "interested").</p>
<p>Then I tried PCA. The picture below is a projection into first two principal components. Colors are used for genders and PCA arrows are original variables (i.e. interests).</p>
<p>I noticed that: </p>
<ul>
<li>Dots (respondents) are quite well separated by the second component.</li>
<li>No arrow points left.</li>
<li>Some arrows are much shorter than others.</li>
<li>Variables tend to make clusters, but not observations.</li>
<li>It seems that arrows pointing down (to males) are mainly males' interests and arrows pointing up are mainly females' interests.</li>
<li>Some arrows point neither down nor up.</li>
</ul>
<p><strong>Questions:</strong> How to correctly interpret relationships between dots (respondents), colors (genders) and arrows (variables)? What other conclusions about respondents and their interests can be mined from this plot? </p>
<p><img src="http://i.stack.imgur.com/FHYKO.jpg" alt="PCA analysis"></p>
| 74,425 |
<p>I am trying to perform a multiple regression in <code>R</code>. However, my dependent variable has the following plot:</p>
<p><img src="http://i.stack.imgur.com/AMXDm.jpg" alt="DV"></p>
<p>Here is a scatterplot matrix with all my variables (<code>WAR</code> is the dependent variable):</p>
<p><img src="http://i.stack.imgur.com/qKsGL.jpg" alt="SPLOM"></p>
<p>I know that I need to perform a transformation on this variable (and possibly the independent variables?) but I am not sure of the exact transformation required. Can someone point me in the right direction? I am happy to provide any additional information about the relationship between the independent and dependent variables.</p>
<p>The diagnostic graphics from my regression look as follows:</p>
<p><img src="http://i.stack.imgur.com/sduyK.jpg" alt="Diagnostic plots"></p>
<p><strong>EDIT</strong></p>
<p>After transforming the dependent and independent variables using Yeo-Johnson transformations, the diagnostic plots look like this:</p>
<p><img src="http://i.stack.imgur.com/6WZTC.jpg" alt="After transforming"></p>
<p>If I use a GLM with a log-link, the diagnostic graphics are:</p>
<p><img src="http://i.stack.imgur.com/SjfdK.jpg" alt="GLM with log-link"></p>
| 74,426 |
<p>I recently did some experimenting comparing some common method of internal validation. In my field, the use of a single 1:1 holdout validation is extremely common, even with very small datasets, and I wanted to show my colleagues that there might sometimes be alternatives.</p>
<p>I had a large dataset of approx 30,000 observations. I took 1,000 at random, fitted a model, and then performed the 4 methods of internal validation below to estimate the error. I compared this against the 'true' rate of error from the remaining 29,000 observations. I repeated this whole process 500 times, each time re-sampling 1,000 observations and re-fitting the model etc. The model was an OLS regression with 10 variables.</p>
<p>The results were largely as I had expected: the resubstitution error was optimistically biased, bias and variance in the bootstrap and cross-validation methods was low, and the variance of the single holdout validation method was very high; what I hadn't expected (and what I am at a loss to explain) is the bias I observe in the holdout method. I had assumed it would be high variance, but with low bias. Has anyone else seen this kind of behaviour? Or is it perhaps a consequence of my experimental design?</p>
<p><strong>Clockwise from top left: optimism-corrected bootstrap, resubstitution error, 10-fold CV, single 1:1 holdout</strong><img src="http://i.stack.imgur.com/PdVzd.png" alt="Results"></p>
<p>I should note that I'm not concerned about model selection here - the model had been previously published, and I'm just interested in how it performs when applied to my data.</p>
| 74,427 |
<p>I have a correlated multivariate Bernoulli random variable $\textbf{X} = (X_1, ..., X_N)$, where the $X_i$ are Bernoulli random variables with parameters $p_i$ and $N \times N$ covariance matrix $\textbf{C}$.</p>
<p>How do choices of $p_i$ constrain choices of $C$ and vice-versa?</p>
<p>In one extreme case, where the $X_i$ are all independent, all choices of $p_i$ are valid. In another extreme case, where the $X_i$ are all perfectly correlated, the $p_i$ must be identical. But I would like to better understand the intermediate cases; both intuitive and more rigorous answers would be much appreciated.</p>
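<p>To illustrate the kind of intermediate constraint I am after: if I have worked it out correctly, already for a single pair $(X_i, X_j)$ the Fréchet bounds on $P(X_i = 1, X_j = 1)$ restrict the admissible covariance $C_{ij}$ to</p>
<p>$$\max\{-p_i p_j,\; -(1-p_i)(1-p_j)\} \;\le\; C_{ij} \;\le\; \min\{p_i(1-p_j),\; p_j(1-p_i)\},$$</p>
<p>but I would like to understand how such constraints behave jointly across all pairs, and whether stronger joint restrictions exist.</p>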
| 74,428 |
<p>I have a longitudinal data set of individuals and some of them were subject to a treatment and others were not. All individuals are in the sample from birth until age 18 and the treatment happens at some age in between that range. The age of the treatment may differ across cases. Using propensity score matching I would like to match treated and control units in pairs with exact matching on the year of birth such that I can track each pair from their birthyear until age 18. All in all there are about 150 treated and 4000 untreated individuals. After the matching the idea is to use a difference-in-differences strategy to estimate the effect of the treatment.</p>
<p>The problem I face at the moment is to do the matching with panel data. I am using Stata's <code>psmatch2</code> command and I match on household and individual characteristics using propensity score matching. In general with panel data there will be different optimal matches at each age. As an example: if A is treated, B and C are controls, and all of them were born in 1980, then A and B may be matched in 1980 at age 0 whilst A and C are matched in 1981 at age 1 and so on. Also A may be matched with its own pre-treatment values from previous years.</p>
<p>To get around this issue, I took the average of all time-varying variables such that the matching can identify individuals who are on average the most similar over the duration of the sample and I do the matching separately for each age group 0 to 18. Unfortunately this still matches a different control unit to each treated unit per age group.</p>
<p>If someone could direct me towards a method to do pairwise matching with panel data in Stata this would be very much appreciated.</p>
| 74,429 |
<p>I have the following histogram of count data. And I would like to fit a discrete distribution to it. I am not sure how I should go about this. <img src="http://i.stack.imgur.com/C3up0.png" alt="enter image description here"></p>
<p>Should I first superimpose a discrete distribution, say Negative Binomial distribution, on the histogram so that I would obtain the parameters of the discrete distribution and then run a Kolmogorov–Smirnov test to check the p-values? </p>
<p>I am not sure if this method is correct or not. </p>
<p>Is there a general method to tackle a problem like this?</p>
<p>This is a frequency table of the count data. In my problem, I am only focusing on non-zero counts. </p>
<pre><code> Counts: 1 2 3 4 5 6 7 9 10
Frequency: 3875 2454 921 192 37 11 1 1 2
</code></pre>
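<p>(For what it's worth, this is the kind of fitting step I had in mind for the Negative Binomial idea, using the frequencies above; I am unsure whether this, followed by a goodness-of-fit test, is the right procedure, especially since these are the non-zero counts only:)</p>
<pre><code>library(MASS)

## non-zero counts reconstructed from the frequency table above
counts <- rep(c(1, 2, 3, 4, 5, 6, 7, 9, 10),
              times = c(3875, 2454, 921, 192, 37, 11, 1, 1, 2))

fit_nb <- fitdistr(counts, "negative binomial")  # estimates size and mu
fit_nb
</code></pre>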
<p><strong>UPDATE:</strong> I would like to ask: I used the fitdistr function in R to obtain the parameters for fitting the data.</p>
<pre><code>fitdistr(abc[abc != 0], "Poisson")
lambda
1.68147852
(0.01497921)
</code></pre>
<p>I then plot the probability mass function of Poisson distribution on top of the histogram. <img src="http://i.stack.imgur.com/djIBz.png" alt="enter image description here"></p>
<p>However, it seems like the Poisson distribution fails to model the count data. <strong>Is there anything I can do?</strong></p>
| 74,430 |
<p>Suppose I have a data set, and have trained up a regression model (happens to be a bayesian linear model, I'm just using the R package). The model outputs a wide range of values, greater than 0 and less than 0, although the actual output can only be greater than 0.</p>
<p>Is there an accepted way to apply bounds to the output of a model to "force" it into a possible value? Or is it perhaps an indicator that I'm doing something wrong or applying the wrong technique to my problem?</p>
| 37,295 |
<p><strong>I have logs from an autocomplete form, which I would like to leverage to increase the intelligence of the results it returns.</strong></p>
<p>I have a project that revolves around users selecting opera characters from a database of ~15,000 unique characters. My difficulty is that each character appears in the database as only one name but it may also be known to the public by any number of other colloquial names.</p>
<p>I have been lucky enough to receive a modest amount of traffic and currently have ~20,000 rows of logs of strings which my users have typed and the opera character they ended up selecting.</p>
<p>If a user doesn't find the character they are searching for with their first string, they will often try the character by another name. When they are successful, this data correlates the characters' colloquial names with the character itself. I am hoping to leverage this data to enable my autocomplete form to match against these colloquial names.</p>
<p>Unfortunately along with the useful correlations there are many (perhaps more) random correlations. Often when a user's attempt(s) do not return the result they are looking for, instead of trying the character by another name, they simply try (and locate) a completely different character.</p>
<p>I have read a number of scholarly papers on the subject of using search logs to improve natural language search queries, but none of the methods seem to have much application in this narrow case.</p>
<p>Are there known methods that would be useful for this application?</p>
<p>My project can be viewed at <a href="http://fachme.com" rel="nofollow">http://fachme.com</a></p>
| 37,298 |
<p>I am looking for a simple code example of how to run a Particle Filter in R. The pomp package appears to support the state space math bit, but the examples are a little tricky to follow programmatically for a simple OO developer such as myself, particularly how to load the observed data into a pomp object.</p>
<ul>
<li>Examples here: <a href="http://cran.r-project.org/web/packages/pomp/vignettes/intro_to_pomp.pdf" rel="nofollow">http://cran.r-project.org/web/packages/pomp/vignettes/intro_to_pomp.pdf</a></li>
</ul>
<p>Lets say I have a csv file with 1 column of noisy data as input, and I would like to run it through a Particle Filter in order to hopefully clean it up, with the output being the estimations, to another csv file.</p>
<pre><code> y <- read.csv("C:/Dev/VeryCleverStatArb/inputData.csv", header=FALSE)
#CSV to Pomp object ???
#Run Particle Filter
#Write estimates to csv.
</code></pre>
<p>The main difficulty with the examples is loading csv data into a pomp object. </p>
<p>A very simple state space model should be good enough for now.</p>
<p>Any ideas for the R-curious?</p>
| 49,125 |
<p>I would like to find a <em>hierarchical-clustering</em> method that assigns each individual in my dataset to one of <em>k</em> groups. I have considered several classic ordination methods (PCA, NMDS, "mclust", etc.), but three of my variables are categorical (<em>see data description below</em>). Further, I was wondering whether it is preferable to use a method that reports a posterior probability of group membership for each individual. I am using R.</p>
<p><strong>Data description</strong>: I have sampled almost 2000 individual birds (single species representing two subspecies or phenotypes) across Sweden. All individuals are adult males. Although this is one species, in middle of Sweden there is a (migratory) divide where the southern individuals presumably migrate to West Africa and north of the divide they presumably migrate to East Africa. There is a zone of overlap approximately 300 km wide at the migratory divide.</p>
<p><strong>Variables</strong>:</p>
<ul>
<li>Wing (mm) - continuous</li>
<li>Tail (mm) - continuous</li>
<li>Bill-head (mm) - continuous </li>
<li>Tarsus (mm) - continuous</li>
<li>Mass (g) - continuous</li>
<li>Colour (9 levels) - categorical</li>
<li>Stable carbon-isotopes (parts per mil) - continuous</li>
<li>Stable nitrogen-istopes (parts per mil) - continuous</li>
<li>SNP WW1 (0, 1, 2) - molecular marker, 0 and 2 are fixed and 1 is
heterozygote</li>
<li>SNP WW2 (0, 1, 2) - molecular marker, 0 and 2 are fixed and 1 is
heterozygote</li>
</ul>
<p>Description of the colour variable: (brightest yellow) S+, S, S-, M+, M (medium), M-, N+, N, N- (dullest yellow-grey)</p>
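<p>(To make the question concrete, this is the kind of workflow I have been considering, sketched on a toy version of my variables; my doubt is whether a generic mixed-type dissimilarity like this is appropriate here, and it gives no posterior probability of membership:)</p>
<pre><code>library(cluster)

## toy stand-in with the same mix of variable types as the data described above
set.seed(1)
n  <- 40
df <- data.frame(wing   = rnorm(n, 65, 2),
                 tail   = rnorm(n, 50, 2),
                 colour = factor(sample(c("S+", "M", "N-"), n, replace = TRUE)),
                 d13C   = rnorm(n, -24, 1),
                 SNP1   = factor(sample(0:2, n, replace = TRUE)))

d   <- daisy(df, metric = "gower")   # dissimilarity for mixed variable types
hc  <- hclust(d, method = "average") # hierarchical clustering
grp <- cutree(hc, k = 2)             # membership in k = 2 groups
table(grp)
</code></pre>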
| 49,126 |
<p>Suppose I care about mispredicting one class much more than another.
Is there a way I can communicate this information to the standard classification techniques?</p>
<p>The only way I can think of is adjusting the threshold, but I wonder if there is a better way.</p>
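<p>(For context, this is all I currently do: shift the cut-off on the predicted probabilities, here with logistic regression on simulated data as an example. The question is whether the unequal costs can be passed to the learner more directly.)</p>
<pre><code>set.seed(1)
n <- 200
x <- rnorm(n)
y <- rbinom(n, 1, plogis(x))          # class 1 is the class I care about most
fit <- glm(y ~ x, family = binomial)

p <- predict(fit, type = "response")
pred_default <- as.integer(p > 0.5)   # standard cut-off
pred_lowered <- as.integer(p > 0.2)   # lowered cut-off: fewer missed 1s, more false alarms
table(truth = y, pred = pred_lowered)
</code></pre>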
| 74,431 |
<p>I'm a newbie so I have only basic statistical skills.</p>
<p>I'm wondering if it is possible to use any stat test to analyze the previous matches between 2 tennis players to build a solid betting tip that proves to be profitable in the long run.</p>
<p>I don't expect an answer that makes me a billionaire (lol, 4 sure I'm not the first who thought this), but some input on how you would approach this task and whether it would be feasible.</p>
<p>I'm asking this question because recently I've seen a website that makes something similar analyzing the previous matches of tennis players and giving hints to bet.
[I don't know if I'm allowed to post links, if I'm not please remove this mods ;)
<a href="http://www.superduper-tennis.com/" rel="nofollow">http://www.superduper-tennis.com/</a>
]</p>
<p>And this seems to be profitable given his graphs:
<a href="http://www.tennisinsight.com/allTimeUserEarningsSummary.php?userID=28" rel="nofollow">http://www.tennisinsight.com/allTimeUserEarningsSummary.php?userID=28</a></p>
<p>Thanks in advance for any hint!</p>
| 37,299 |
<p>Does anyone know of recommendations/references for plotting <em>binary</em> time series data? Or categorical time series data? I'm looking at win/loss records, and it seems like there should be plots that exploit the binary nature beyond just a simple line plot.</p>
<p>Late edit: I'm familiar with Tufte's suggestions, especially those given in the sparklines chapter of <em>Beautiful Evidence</em> (<a href="http://stats.stackexchange.com/a/31926/7591">see whuber's answer below</a>). I'm interested in other references, particularly those that provide justification for their recommendations.</p>
<p>Second edit: To clarify some of the questions in the comment... the key issue for me is the binary nature of the series. I'd be interested in references to anything that discusses special issues that come up when plotting binary (or categorical or ordinal variables in general) time series instead of interval/quantitative variables. Highly technical papers are fine, as are nontechnical books aimed at a popular audience. It's really the binary vs. general distinction that I'm interested in, and I don't know of any references beyond those listed in the answers below. </p>
| 74,432 |
<p>I have a function of a dozen discrete variables, $$y = f(x_1, ..., x_n)$$ and $k$ samples of $y$ (a lot), $y$ being continuous.</p>
<p>I can use Matlab to analyse the data, with the statistics toolbox. The goal is to analyse the relationship between $y$ and my variables. For example, which variables $x_m$ best explain the variation in $y$? (I was thinking of using a Principal Components Analysis, but I can't think of a way to adapt the method to this.) To what extent do they have an influence on $y$? Can I find clusters?</p>
<p>My question is: do you have methods I should look into and hints about how to adapt them to my analysis?</p>
<p>Thank you</p>
| 37,301 |
<p>Is the validity coefficient the same thing as $\rho$, where $\rho = \frac{COV_{XY}}{SD_X SD_Y}$?</p>
| 37,303 |
<p>Let $X$ and $Y$ be two i.i.d.
chi-square distributed random variables with four degrees of freedom.
How can we get the joint probability distribution function of the random variables $U=(X-Y)/(X+Y)$, $V=X+Y$?</p>
| 74,433 |
<p>If machine learning is viewed as function approximation, what class of functions is modeled by a neural network?</p>
| 74,434 |
<p>I have data on about 20000 consumers who were exposed to some form of advertising. The data is in the following form.</p>
<pre><code>Cookie_Id Observation_Number Ad_Id Ad_Id_Lookup Placement_Id Placement_Category Placement_Cpi Cookie_Lookup
2 1 325 Standard 3722 News 20 0
3 1 325 Standard 3722 News 20 0
4 1 325 Standard 3719 Weather 8 2
4 2 325 Standard 3719 Weather 8 2
5 1 324 Standard 3718 Weather 8 0
5 2 324 Standard 3718 Weather 8 0
6 1 327 Rich-Media 3716 Travel 20 0
6 2 327 Rich-Media 3716 Travel 20 0
6 3 327 Rich-Media 3716 Travel 20 0
6 4 327 Rich-Media 3716 Travel 20 0
7 1 324 Standard 3718 Weather 8 1
7 2 324 Standard 3718 Weather 8 1
8 1 323 Standard 3717 Weather 8 0
8 2 323 Standard 3717 Weather 8 0
9 1 325 Standard 3719 Weather 8 0
9 2 325 Standard 3719 Weather 8 0
11 1 324 Standard 3713 Travel 12 0
11 2 324 Standard 3713 Travel 12 0
11 3 324 Standard 3713 Travel 12 0
11 4 324 Standard 3713 Travel 12 0
12 1 324 Standard 3713 Travel 12 0
12 2 324 Standard 3713 Travel 12 0
12 3 324 Standard 3713 Travel 12 0
12 4 324 Standard 3713 Travel 12 0
13 1 327 Rich-Media 3723 News 28 0
14 1 325 Standard 3722 News 20 0
15 1 325 Standard 3722 News 20 0
</code></pre>
<p>I'm looking to model the data using a linear mixed model, with Ad_Id_Lookup and Placement_Category as input variables and Cookie_Lookup as my output variable (the states 0,1 and 2 correspond to different outcomes, such as whether someone made a purchase).</p>
<p>The trouble is that many of the rows are identical to each other, other than the observation number (this ordering is a bit artificial because I don't have time stamps). I want to treat each ad exposure as a new treatment on each individual cookie ID.</p>
<p>Can I do this using the nlme package in R? If not, is there another package that can deal with data of this type?</p>
<p>Many thanks,</p>
| 74,435 |
<p>I have three ARMA(p,q) models for the variable X and using each model I produce forecasts for 12 months ahead. Note that in all three models the residual is normally distributed.</p>
<p>Now if I want to produce an <strong><em>average forecast</em></strong> and a <strong><em>standard deviation</em></strong> for the period $T+12$ using the forecasts from the three ARMA(p,q) models, should I be doing the following:</p>
<p>$$
\mu_{avg,T+12} = \frac{\mu_{model1, T+12}+ \mu_{model2, T+12}+\mu_{model3, T+12}}{3}
$$</p>
<p>$$
\sigma_{avg,T+12} = \sqrt{\sigma^{2}_{model1, T+12} +\sigma^{2}_{model2, T+12}+\sigma^{2}_{model3, T+12} }
$$</p>
<p>If not, how do I go about it?</p>
| 18,700 |
<p>I'm learning clustering analysis and one book I read says the clustering model should be applied to a disjoint data set to examine the consistency of the model. </p>
<p>I think in clustering analysis we don't need to split the data into train and test sets like in supervised learning since without labels there is nothing to "train". </p>
<p>So what is the possible meaning of this "consistency"? How is it evaluated? Is this disjoint data set really necessary?</p>
<p>Thank you!</p>
<p>Edit: There isn't really a broader context. The text talks about how to select optimal number of clusters and then mentions this. I don't think this consistency is about the number of clusters...</p>
| 166 |
<p>I believe one major advantage of Bayesian inference is the intuitiveness of interpretation. This is my primary interest. However, it's not completely clear to me when it's OK to make such an interpretation.</p>
<p>I make the potentially false assumption that fitting a probability model in the frequentist way is virtually the same as fitting the same model with a flat prior in a Bayesian way. Please nuance or correct that as interest number one (1).</p>
<p>And my main interest (2) is: if my assumption is true (and if a flat prior happens to be the best prior I could possibly come up with), does the posterior of the model fitted in a frequentist way, but sampled, say, with an MCMC sampler, allow me to make a Bayesian type of interpretation? E.g., the probability that an individual described by some particular configuration of $X$ will have an income ($Y$) greater than \$100K is 76%.</p>
<p>I've read that what makes an analysis Bayesian is that it involves prior information. Is that really the essence? I've also read that you can't make such interpretations as my example about income from frequentist results. Does sampling from the posterior with MCMC methods move me away from frequentist methods sufficiently to make a Bayesian interpretation?</p>
<p>I greatly appreciate your direction. Thank you.</p>
| 74,436 |
<p>I am doing a research on two groups, one experimental and one control, which are not randomly selected. In fact the two groups are <em>intact groups</em>. The purpose is to find the effect of the number of languages that the learners know on their academic achievement. The selected design is <a href="http://www.socialresearchmethods.net/kb/quasnegd.php" rel="nofollow">Pretest Posttest Nonequivalent Group</a>. </p>
<p>Which statistical method should I use to analyze the data?</p>
| 74,437 |
<p>I have been working with fuzzy logic (FL) for years and I know there are differences between FL and probability, especially concerning the way FL deals with uncertainty. However, I would like to ask what other differences exist between FL and probability.</p>
<p>In other words, if I deal with probabilities (fusing information, aggregating knowledge), can I do the same with FL? </p>
| 74,438 |
<p>I was thinking this may be similar to a Mark and recapture problem where there is a known upper bound, hence the title.</p>
<p>I am computing a ratio estimate, but sometimes the estimate is larger than the known upper bound. Is there a way to restrict the possible answers to lie between the two bounds?</p>
<p>Example: Assume out of a known population (U=10,000) a total of (X=8,505) people made (T=25,916) purchases. Of those, only (A=4,697) of them recorded anything with (R=8,632) recorded purchases. </p>
<p>In practice, the only unknown value is X, and we wish to estimate it (its true value is given above so that the estimate can be checked). </p>
<p>Most of the time the ratio estimate works well but usually over-estimates the true size - and in some cases it is greater than the known population. Here (X_est/T) = (A/R), which yields X_est = 14,102, larger than the known population.</p>
<p>Is there a way to incorporate the known population size to limit the estimate between 0 and U? Currently I am just censoring anything above U as U which seems ad hoc as the bounds should be 'built in' to the formula (hopefully).</p>
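<p>(For reference, the numbers above in R, including the censoring step I would like to avoid:)</p>
<pre><code>U   <- 10000    # known population size
Tot <- 25916    # T: total purchases
A   <- 4697     # people with recorded purchases
R   <- 8632     # recorded purchases

X_est  <- Tot * A / R     # ratio estimate, about 14102 -- larger than U
X_cens <- min(X_est, U)   # the ad hoc censoring currently applied
c(X_est, X_cens)
</code></pre>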
| 74,439 |
<p>I've been analysing some data using linear mixed effect modelling in R. I'm planning to make a poster with the results and I was just wondering if anyone experienced with mixed effect models could suggest which plots to use in illustrating the results of the model.
I was thinking about residual plots, plot of fitted values vs original values, etc. </p>
<p>I know this will very much depend on my data but I was just trying to get a feel for the best way to illustrate results of linear mixed effect models. I'm using the nlme package in R.</p>
<p>Thanks</p>
| 74,440 |
<p>I have 50 measurements of 10 descriptors and 1 binary output variable.</p>
<p>I want to use a classification procedure to be able to predict the output, so I split the data into a training and a test set and I can then generate my classifier (I am using a decision tree) and test it on the test set.</p>
<p>Now, obviously the choice of test set is absolutely arbitrary and, whatever the result of my procedure, I cannot be certain that the result I get for that specific test set is similar to what I will get from any other test set.</p>
<p>So, would there be a point in repeating my classification several times, each time with a randomly chosen test/training set, and then reporting the distribution of misclassification errors for my classifier?</p>
<p>I understand that this is a bit similar to what Random Forests are doing, but I am wondering if this procedure makes sense also when applied to other types of classifiers, not necessarily decision trees.</p>
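<p>(Concretely, the procedure I have in mind looks like this; a sketch on simulated data, with a classification tree standing in for the classifier:)</p>
<pre><code>library(rpart)

## toy stand-in: 50 observations, 10 descriptors, binary outcome
set.seed(1)
d <- data.frame(y = factor(rbinom(50, 1, 0.5)),
                matrix(rnorm(50 * 10), ncol = 10))

errs <- replicate(200, {
  test <- sample(nrow(d), 15)              # random test set
  fit  <- rpart(y ~ ., data = d[-test, ])  # tree fitted on the training part
  pred <- predict(fit, d[test, ], type = "class")
  mean(pred != d$y[test])                  # misclassification error on the test set
})
hist(errs)   # distribution of errors over random splits
</code></pre>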
| 37,315 |
<p>In King and Zheng's paper:
<a href="http://gking.harvard.edu/files/gking/files/0s.pdf" rel="nofollow">http://gking.harvard.edu/files/gking/files/0s.pdf</a></p>
<p>They mention $\tau$ and $\bar{y}$. I already have data with 90000 0's and 450 1's. I have already fitted a logistic regression to the whole data and want to make a prior correction to the intercept. </p>
<p>Or should I take about 3000 0's and 450 1's, run the logistic regression on that sample, and then apply the prior correction to the intercept? Would then $\tau$ = 450/90450 and $\bar{y}$ = 450/3450? </p>
<p><strong>Edit</strong> Based on answer from <code>Scortchi</code></p>
<p>I am trying to predict the probability of a match happening. A match might happen between a buyer and a seller, two individuals on a dating site, or a job seeker and a prospective employer. A 1 is recorded when a match happens, and a zero for all other pair-wise interactions that have been recorded. I have real-life data from one of these use cases. As said before, the rate of 1's in the data is very small (= 450/(450+90000)). I want to build a logistic regression model with the correction from King et al.</p>
<p>The data I have can be presumed to be all possible data i.e. it is the whole universe. I would presume the rate of 1's in the universe would be 450/(450 + 90000).</p>
<p>I want to sample all the 1's (450 of them) and a random 3000 0's from this data universe. This would be sampling based on 1's. Once the logistic regression is built on this, I want to make a bias correction.</p>
<p>Is it right to presume here that $\tau$ = 450/(450 + 90000) and $\bar{y}$ = 450/(450+3000)?</p>
<p>I am arguing that $\tau$ is indeed the universe estimates because for my use case I pretty much have all the target population data. My question is, with the current setup of the problem how would $\tau$ and $\bar{y}$ be defined? Running time is not the issue, but how to make the bias correction for a rare event is the issue.</p>
| 37,316 |
<p>I have two categorical variables and was looking into doing a chi-square test. I then noticed I had some low frequencies in my contingency table and thought Fisher's Exact Test may be useful. I've now come full circle after doing some reading and want to use Pearson's Chi Squared with n-1 correction. Is there a way in R to run chisq.test with the n-1 correction (discussed here: <a href="http://stats.stackexchange.com/a/14230/13526">Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test?</a>)?</p>
<p>If not, how would I apply the correction to the output of Pearson's chi-squared?</p>
<p>Presuming a sample size of 80:</p>
<p>(80-1)/80 = 0.9875</p>
<p>Do I simply multiply the Chi-Squared statistic by 0.9875 and then use this value to derive the p value?</p>
<p>2.9687 * 0.9875 = 2.931591</p>
<pre><code>1-pchisq(2.931591,4)
</code></pre>
<p>p = 0.569338</p>
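<p>(In other words, is the following the whole calculation? Here with a made-up 3x3 table of n = 80, which gives 4 degrees of freedom as in my example:)</p>
<pre><code>tab <- matrix(c(12, 8, 15, 10, 9, 11, 7, 4, 4), nrow = 3)  # made-up counts, sum = 80
n   <- sum(tab)

x2     <- chisq.test(tab, correct = FALSE)$statistic  # Pearson chi-squared
x2_adj <- x2 * (n - 1) / n                            # 'N-1' adjustment
pchisq(x2_adj, df = (nrow(tab) - 1) * (ncol(tab) - 1), lower.tail = FALSE)
</code></pre>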
| 74,441 |
<p>Suppose we are given two vectors <strong>u</strong>, <strong>v</strong> $\in \mathbb{R}^n$ and we want a function that returns $0$ if the ordering of the elements of both vectors is the same, or a positive number otherwise, where the larger the number of mismatches, the larger the positive value. We want to ensure that the highest values of the vector <strong>u</strong> are in the same positions as the highest values in vector <strong>v</strong>.</p>
<p>For example: let's say that <strong>u</strong> = {3.2, 1.5, -3, 0} and <strong>v</strong>={-2.1, 1, 1.1, -0.5}, so the ranks of these vectors are (descending order): $r_u$={1,2,4,3} and $r_v$={4,2,1,3}. So, in this case we have two matches {2,3} and two mismatches {1,4}. Another point is that the highest values of the vectors <strong>u</strong> and <strong>v</strong> are in the mismatch set; in such a case the penalization should be harsher than for a mismatch among lower values. </p>
<p>The simplest way to do that is by calculating the rank of each vector, $r_u$ and $r_v$, and then compute the $L_2$-norm, for example, of the difference between these ranks:</p>
<p>$f(r_u,r_v) = \|r_u-r_v\|^2_2$.</p>
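<p>(In R terms, this baseline on the example vectors above is just:)</p>
<pre><code>u <- c(3.2, 1.5, -3, 0)
v <- c(-2.1, 1, 1.1, -0.5)

r_u <- rank(-u)          # descending ranks: 1 2 4 3
r_v <- rank(-v)          # descending ranks: 4 2 1 3

sum((r_u - r_v)^2)       # the penalty f; 0 iff the orderings match exactly
</code></pre>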
<p>However, since this function is a term of my optimization problem, at each iteration of the optimization algorithm it will be necessary to recalculate the ranks of these vectors, once both vectors are parameters to be learned and they are going to change every iteration. So, this approach can be very expensive, computationally speaking.</p>
<p>Another option is to count the number of concordant pairs (<a href="http://en.wikipedia.org/wiki/Concordant_pair" rel="nofollow">http://en.wikipedia.org/wiki/Concordant_pair</a>) of the vectors; however, we would still need to recount the number of concordant (or discordant) pairs at each iteration.</p>
<p>Has anybody faced a similar problem?</p>
| 74,442 |
<p>I am evaluating the accuracy of GPS watches by taking many readings over a known distance. I've been calculating the standard deviation about the mean reading, but because I know what the reading should be, I could compute the spread about that known value instead of the mean.
Would this be a reasonable thing to do?</p>
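<p>A tiny R sketch of the two options, using made-up numbers for the readings and the known course length:</p>
<pre><code>readings  <- c(5.02, 4.97, 5.05, 5.01, 4.99)  # hypothetical recorded distances (km)
true_dist <- 5.00                             # known course length (km)

sd(readings)                          # spread about the sample mean
sqrt(mean((readings - true_dist)^2))  # root-mean-square error about the known value
</code></pre>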
| 74,443 |
<p>Usually the tests we use have power that increases as the sample size increases. But what if a test is not consistent?
Is it not worthwhile to develop such a test? Or can using an inconsistent test be justified under some circumstances, particularly since inconsistency is often due to the distributional model? An inconsistent test may still have nontrivial power; it is just that increasing the sample size does not pay off. Could it be reasonable to use an inconsistent test only for small sample sizes, and switch to a consistent test once its power beats that of the inconsistent one?</p>
<p>(I know that there are relevance tests. They are consistent as well, but they do not have the disadvantage of consistent point-hypothesis tests, which flag even practically irrelevant effects as significant.)</p>
<p>(Inconsistent tests do exist: e.g., for the location parameter of a shifted Cauchy distribution. Since the sample mean has the same distribution as a single observation, one can, for all sample sizes, choose the critical value from a quantile of the Cauchy distribution. But usually one would use a consistent nonparametric test.)</p>
| 74,444 |
<p>I'm building a logit model using R and I'm getting 88.9% accuracy (assessed using the ROC [in rattle's Evaluate tab] on a 30% holdout of my 34k-record dataset).</p>
<p>What kinds of checks would be worth doing to satisfy myself that it's a good model?</p>
| 74,445 |
<p>If the outcome of a market could be expressed as a probability it might be: </p>
<p>Outcome - Description - Probability as a %</p>
<ol>
<li>Up a lot - a move of, say, more than 10% - 20% </li>
<li>Down a lot - 20% </li>
<li>Up a bit - a move of between 0 and 10% - 20% </li>
<li>Down a bit - 20% </li>
<li>Sideways - 20% </li>
</ol>
<p>So the probability of any single outcome is 1/5 or 20%. </p>
<p>Could someone please educate me on the math of adding another market, and then subsequent markets?</p>
| 37,319 |
<p>I am familiar with meta-analysis and meta-regression techniques (using the R package <code>metafor</code> from Viechtbauer), but I recently stumbled on a problem I can't easily solve. Say we have a disease that can pass from mother to unborn child, and it has already been studied a number of times. Mother and child were tested for the virus right after birth. As an unborn child cannot possibly get the virus other than from the mother, one would expect crosstabulations like:</p>
<pre><code>           | neg kid | pos kid
mother neg |    A    |   C=0
-----------|---------|--------
mother pos |    B    |    D
</code></pre>
<p>Obviously using odds ratios (OR) gives errors as one would be dividing by 0. Same for relative risks :</p>
<p>$\frac{A/(A+B)}{0/(0+D)}$
</p>
<p>Now the researchers want to test the (seemingly pointless) hypothesis of whether infection of the child is related to infection of the mother (which seems very, very obvious). I'm trying to reformulate the hypothesis into something that makes sense, but I can't really come up with anything.</p>
<p>To complicate things, some kids with negative mothers actually are positive, probably due to infection in the first week. So C = 0 in only a number of the studies, not all of them.</p>
<p>Does anybody have an idea on how to statistically summarize the data of different studies following such a pattern? Links to scientific papers are also more than welcome.</p>
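<p>Purely as an illustration of the mechanics (not a claim about how the hypothesis should be framed): one common device for zero cells is a continuity correction, which, if I recall the <code>metafor</code> interface correctly, <code>escalc</code> exposes through its <code>add</code>/<code>to</code> arguments. The data frame <code>studies</code> and its column names below are hypothetical:</p>
<pre><code>library(metafor)
# A = mother neg / kid neg, C = mother neg / kid pos (the zero cells),
# B = mother pos / kid neg, D = mother pos / kid pos, one row per study
dat <- escalc(measure = "OR", ai = A, bi = C, ci = B, di = D,
              data = studies, add = 0.5, to = "only0")
res <- rma(yi, vi, data = dat)
</code></pre>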
| 74,446 |
<p>I originally asked this on a machine learning site, but one of the responses made me think that maybe this site is more suitable.</p>
<p>Suppose you have two weighted coins, and every day you flip each one a number of times and record the total number of heads. So on the tenth day you might have flipped coin A 106 times, coin B 381 times, and recorded 137 heads. Supposing your goal is to figure out the weights of each coin, is it reasonable to just regress the number of heads on the number of flips for each coin? E.g, something along the lines of:</p>
<p>num_heads ~ num_flips_A + num_flips_B + intercept</p>
<p>However, it doesn't seem to make sense to have an intercept term in this scenario (the estimate is negative for my data, which is confusing), so I tried adding -1 to the formula to drop the intercept, and it seemed to yield reasonable results. My first question is whether this approach is a good one.</p>
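<p>Here is a minimal simulation sketch of the no-intercept regression described above (the true weights 0.3 and 0.7 and the flip counts are made up):</p>
<pre><code>set.seed(1)
n_days  <- 200
flips_A <- rpois(n_days, 100)
flips_B <- rpois(n_days, 300)
heads   <- rbinom(n_days, flips_A, 0.3) + rbinom(n_days, flips_B, 0.7)

coef(lm(heads ~ flips_A + flips_B - 1))   # estimates land near 0.3 and 0.7
</code></pre>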
<p>Now, suppose that you suspect the existence of a third coin, C, that someone else is flipping unbeknownst to you, and the heads from that coin are getting mixed into your count. The number of flips for this coin is not recorded, but you do not particularly care about its weight - it's more of a confounding factor. Would it then be reasonable to fit a similar regression, but constrain the intercept to be positive?</p>
<p>Thanks for any help</p>
| 37,440 |
<p>In R, the <code>step</code> command is supposedly intended to help you select the input variables to your model, right? </p>
<p>The following comes from running <code>example(step) #-> swiss</code> followed by <code>step(lm1)</code>:</p>
<pre><code>> step(lm1)
Start:  AIC=190.69
Fertility ~ Agriculture + Examination + Education + Catholic +
    Infant.Mortality

                   Df Sum of Sq    RSS    AIC
- Examination       1     53.03 2158.1 189.86
<none>                          2105.0 190.69
- Agriculture       1    307.72 2412.8 195.10
- Infant.Mortality  1    408.75 2513.8 197.03
- Catholic          1    447.71 2552.8 197.75
- Education         1   1162.56 3267.6 209.36

Step:  AIC=189.86
Fertility ~ Agriculture + Education + Catholic + Infant.Mortality

                   Df Sum of Sq    RSS    AIC
<none>                          2158.1 189.86
- Agriculture       1    264.18 2422.2 193.29
- Infant.Mortality  1    409.81 2567.9 196.03
- Catholic          1    956.57 3114.6 205.10
- Education         1   2249.97 4408.0 221.43

Call:
lm(formula = Fertility ~ Agriculture + Education + Catholic + Infant.Mortality, data = swiss)

Coefficients:
     (Intercept)       Agriculture         Education
         62.1013           -0.1546           -0.9803
        Catholic  Infant.Mortality
          0.1247            1.0784
</code></pre>
<p>Now, when I look at this, I guess the last Step table shows the model we should use? The last few lines include the "Call", which describes the final model and which input variables it includes, and the "Coefficients" are the actual parameter estimates for those variables, right? So this is the model I want, right?
I'm trying to extrapolate this to my own project, where there are more variables.</p>
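<p>For reference, <code>step()</code> returns the final fitted model, so a minimal way to keep and inspect the selected model is:</p>
<pre><code>final <- step(lm1)   # keep the selected model
formula(final)       # the chosen set of input variables
coef(final)          # their coefficient estimates
summary(final)       # full summary of the selected fit
</code></pre>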
| 37,706 |
<p>I don't have much experience with panel data so I apologize in advance if this sounds ridiculous.</p>
<p>Let's say that I am trying to control for individual and temporal fixed effects when running a panel data regression and I have 998 individuals and 29 years of data. In Stata the way to deal with multi-variate fixed effects is to create dummy variables that uniquely identify each combination. In my case this would be (29×998) - 1 = 28,941 dummy variables.</p>
<p>Will this result in very high multicollinearity? What if I had ~74,000 individuals?</p>
<p>My gut tells me that this is ridiculous, but gut feel and statistics don't really go well together. </p>
| 74,447 |
<p>I start with a vector of random numbers sampled from a normal distribution:</p>
<pre><code>R<-rnorm(100, mean=0, sd=30)
</code></pre>
<p>I would now like to create 3 variables that are correlated with each other with a pre-specified correlation. In addition I would like to have these three variables correlated with <code>R</code> with a pre-specified correlation.</p>
<p>For example <code>A</code>, <code>B</code>, <code>C</code> would have correlation 0.7 with each other, and <code>A</code>, <code>B</code>, <code>C</code> would have correlation 0.6 with <code>R</code>.</p>
<p>That is, I am looking for the following correlation matrix:</p>
<pre><code> R A B C
R 1 .6 .6 .6
A .6 1 .7 .7
B .6 .7 1 .7
C .6 .7 .7 1
</code></pre>
<p>How can this be done in R?</p>
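<p>One possible sketch in R is to draw all four variables jointly from the target correlation matrix with <code>MASS::mvrnorm</code> (note that this generates <code>R</code> together with <code>A</code>, <code>B</code>, <code>C</code> rather than keeping a pre-existing <code>R</code> fixed):</p>
<pre><code>library(MASS)
Sigma <- matrix(c(1, .6, .6, .6,
                  .6, 1, .7, .7,
                  .6, .7, 1, .7,
                  .6, .7, .7, 1), nrow = 4, byrow = TRUE)

X <- mvrnorm(100, mu = rep(0, 4), Sigma = Sigma, empirical = TRUE)
colnames(X) <- c("R", "A", "B", "C")
round(cor(X), 2)            # reproduces the target correlations
X[, "R"] <- X[, "R"] * 30   # rescale R to sd = 30; correlations are unaffected
</code></pre>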
| 49,933 |
<p>Suppose you have a set of numbers $\{1,2,\ldots,m\}$ where $m \ge 5$. Now you randomly choose five of those elements with replacement, $a_1, \ldots, a_5$.</p>
<p>What is the distribution of $\max(a_1, a_2, a_3, a_4, a_5)$?</p>
| 37,325 |
<p>I have an independent sequence of random variables such that $P(X_n = \pm 1) = \frac{1-2^{-n}}{2}$<br>
and $P(X_n = 2^k) = 2^{-k}$ for $k = n+1, n+2, \ldots$<br>
Define a new sequence of random variables by $Y_n = X_n$ if $X_n = \pm 1$,<br>
$Y_n = 0$ otherwise.<br>
Find $P(Y_n = y)$.</p>
<p>How do I do this?
I have so far proven $X_n$ and $Y_n$ to be tail equivalent. I have to verify Lyapunov's condition for the sum of the $Y_n$, which I will be able to do if I can find $E(Y_n)$, but that is where I am stuck.</p>
| 74,448 |
<p>I am doing a project on a low-birth-weight cohort and a normal-birth-weight cohort tested at 3 different age groups on one cognitive test. Is this considered a 2x3 ANOVA if I want to know about birth-group differences? Do I need to do t-tests or a one-way ANOVA first, to test whether there is an actual birth-group difference with age collapsed, before I do the 2x3 ANOVA?</p>
| 74,449 |
<p>I understand that it is instead correct to cross-validate using new data. Why is that so? Is it just that a model will tend to fit the data set that was used to create it better than another randomly sampled set of data?</p>
<p>Could it ever be justified to use the same data for EFA and CFA?</p>
| 74,450 |
<p>I am an actuary working on a Bayesian loss reserve model using incremental average severity data. Exploratory analysis of the response seems to suggest a skew normal distribution of some sort would be appropriate, as there are some negative values in the left tail, and the log transformed positive values fit a normal distribution fairly well. I was inspired by <a href="http://www.casact.org/newsletter/index.cfm?fa=viewart&id=6544" rel="nofollow">this posting</a> by Glenn Meyers and feel like the latter parameterization should be easy to implement in JAGS. However, since I am not used to JAGS, I am struggling to set the model up. Here is a example of what I am trying to do.</p>
<pre><code># Likelihood (training set)
for (i in 1:n){
  y[i] ~ dnorm(z[i], tau)      # observed incremental severity, normal around latent z[i]
  z[i] ~ dlnorm(mu[i], tauln)  # latent positive component, log-normal
  mu[i] <- beta1*Dev[i] + ...  # log-scale linear predictor (remaining terms elided)
}
</code></pre>
<p>I am not sure how to handle the negative $\mu_i$'s in the absence of the control structures (<code>if</code>, <code>else</code>, etc.) that I am used to. I need to pass the zero-valued or negative $\mu_i$'s directly to the mean parameter for my response <em>y</em><sub>i</sub>.
Is there a clever way to do this with the step function? Should I consider specifying the normal-log-normal mixture another way?
FYI – I also posted this question <a href="https://sourceforge.net/p/mcmc-jags/discussion/610036/thread/9061a96e/" rel="nofollow">here</a>.</p>
| 74,451 |
<p>My question is: is it possible to pool observations when it is the same countries that are observed through the years?
I have observations on 37 countries in 2010, 47 countries in 2011 and 60 countries in 2012. However, it is the same countries that have been observed (though more countries are added over time).</p>
<p>What pitfalls might exist if I pool the countries and years, so that I get 144 observations, and use pooled OLS?
Should I use this command: reg y x (with time dummies), cluster(id)</p>
<p>Furthermore, would you say it is a balanced or an unbalanced dataset?
And it is panel data I have, right?</p>
| 74,452 |
<p>I have a data set of around 5000 features. For that data I first used a chi-square test for feature selection; after that, I got around 1500 variables which showed a significant relationship with the response variable.</p>
<p>Now I need to fit a logistic regression on those. I am using the glmulti package for R (the glmulti package provides efficient subset selection for vlm), but it can use only about 30 features at a time; otherwise its performance goes down, as the number of rows in my dataset is around 20000.</p>
<p>Is there any other approach or technique to solve the above problem? If I go by the above method it will take too much time to fit the model.</p>
| 74,453 |
<p>The height for 1000 students is approximately normal with a mean 174.5cm and a standard deviation of 6.9cm. If 200 random samples of size 25 are chosen from this population and the values of the mean are recorded to the nearest integer, determine the probability that the mean height for the students is more than 176cm.</p>
<p>Since the samples were rounded to the nearest integer, I should find $P(X>176.5)$ instead of $P(X>176)$. Is this how we account for the effect of rounding the observations?</p>
<p>EDIT: In light of whuber's answer:</p>
<p>The answer given by my module (no workings were provided):</p>
<p>$\hspace{1cm} n=25; Normal$
</p>
<p>$\hspace{1cm} \mu_{\overline{x}}=174.5cm$
</p>
<p>$\hspace{1cm} \sigma_{\overline{x}}=6.9/5=1.38$
</p>
<p>The answer is 0.1379, which I'm pretty sure was found using $1-\Phi\left(\dfrac{176-174.5}{1.38}\right)$.</p>
<p>So,</p>
<ol>
<li>Is this an acceptable answer?</li>
<li>Since $n$ was less than 30, would it be ok to find the probability using a t-distribution?</li>
</ol>
<p>Thank you.</p>
| 74,454 |
<p>I understand that when sampling from a finite population, if our sample size is more than 5% of the population, we need to apply a correction to the standard error of the sample mean using this formula:</p>
<p>$\hspace{10mm} FPC=\sqrt{\frac{N-n}{N-1}}$
</p>
<p>Where N is the population size and n is the sample size.</p>
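<p>A quick numeric look at the size of the correction (the population and sample sizes here are made up) may help frame the questions below:</p>
<pre><code>fpc <- function(N, n) sqrt((N - n) / (N - 1))
fpc(1000, 50)    # n/N =  5%  -> roughly 0.975
fpc(1000, 200)   # n/N = 20%  -> roughly 0.895
</code></pre>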
<p>I have 3 questions about this formula:</p>
<ol>
<li>Why is the threshold set at 5%?</li>
<li>How was the formula derived?</li>
<li>Are there other online resources that comprehensively explain this formula besides <a href="http://www.jstor.org/pss/2340569">this</a> paper?</li>
</ol>
<p>Thank you,</p>
| 37,333 |
<p>I've called my question "clustering" but I am not sure if that's the right term. Imagine my matrix looks like this:</p>
<pre><code>[ 0. , 0.92, 0. , 0.85, 0. ]
[ 0.92, 0. , 0. , 0.89, 0. ]
[ 0.85, 0. , 0. , 0.89, 0. ]
[ 0. , 0. , 0. , 0. , 0. ]
[ 0. , 0.89, 0. , 0.89, 0. ]
</code></pre>
<p>What I am after is not a single clustering/dendrogram of the indices; I am after extracting the largest possible rectangles from the matrix (by re-arranging the X/Y indices).</p>
<p>For example, from the above I am expecting to get:</p>
<pre><code>[ 0.92, 0.89, 0. , 0. , 0. ]
[ 0.85, 0.89, 0. , 0. , 0. ]
[ 0. , 0.85, 0.92, 0. , 0. ]
[ 0. , 0.89, 0.89, 0. , 0. ]
[ 0. , 0. , 0. , 0. , 0. ]
</code></pre>
<p>However, I expect each cluster to be identified, even if it is not possible to show them all in a single matrix.</p>
<p>What algorithm should I be looking for? I've tried various clustering algorithms, but they give me the best single clustering rather than focusing on the best individual rectangles (which are mutually incompatible).</p>
| 23,909 |
<p>I have a biometric authentication system that is using a person's gait to authenticate them. I extract features from gait, run it through a comparison versus a template and produce a similarity score (where if this similarity score is below a certain threshold, then the user is authenticated). So, I have 72 trials total (36 trials containing a positive case and 36 that contain a negative case). What I want to do is graph the ability of this system to authenticate people by illustrating it with a ROC graph. </p>
<p>Unfortunately, I don't quite understand how to choose a threshold. Is there some mathematical procedure for choosing a threshold for the similarity scores? Do I just choose a bunch of different thresholds and graph the corresponding ROC results for all these different threshold values? The resulting similarity scores range over [0.6, 1.2], and the positive cases tend to lie around 0.6. All my coding is being done in Matlab.</p>
| 51 |
<p>I am conducting a study on a cohort of people with a follow-up period of 7 years. I wish to use a Cox proportional hazards model to estimate the HR relating an exposure to the time to an event. One piece of missing information is the exact date of birth for all subjects; only month and year are available. This prevents the calculation of exact age at the time of the study.</p>
<p>Any suggestions would be much appreciated. Should any sensitivity analysis be conducted?</p>
<p>Thanks </p>
| 49,162 |
<p>What statistical research blogs would you recommend, and why?</p>
| 74,455 |
<p>I have been looking into theoretical frameworks for method selection (note: not model selection) and have found very little systematic, mathematically-motivated work. By 'method selection', I mean a framework for distinguishing the appropriate (or better, optimal) method with respect to a problem, or problem type.</p>
<p>What I have found is substantial, if piecemeal, work on particular methods and their tuning (i.e. prior selection in Bayesian methods), and method selection via bias selection (e.g. <a href="http://portal.acm.org/citation.cfm?id=218546">Inductive Policy: The Pragmatics of Bias Selection</a>). I may be unrealistic at this early stage of machine learning's development, but I was hoping to find something like what <a href="ftp://ftp.sas.com/pub/neural/measurement.html">measurement theory</a> does in prescribing admissible transformations and tests by scale type, only writ large in the arena of learning problems.</p>
<p>Any suggestions?</p>
| 49,167 |
<p>We want to compare two distributions of ages (birth years) of individuals. Given a set of individuals (<em>all</em>) and a subset of that set (<em>subset</em>), we want to find out:</p>
<ol>
<li>Is it valid to compare the age distribution of <em>all</em> with that of <em>subset</em> (when |<em>subset</em>| is much smaller than |<em>all</em>|) or can we only compare <em>all</em> minus <em>subset</em>? </li>
<li>We did a Shapiro-Wilk test on both distributions and got W = 0.7456 (p-value = 7.499e-08) for <em>all</em> and W = 0.7467 (p-value = 7.865e-08) for <em>subset</em>. This confirms that neither distribution is normal; they look more like exponential or power-law distributions. Therefore, we cannot use the Pearson correlation to compare them but should rather use Spearman?</li>
<li><p>Comparing both distributions with Spearman, R says</p>
<pre><code>t = 93.9151, df = 47, p-value < 2.2e-16
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.9952748 0.9985103
sample estimates:
cor
0.9973462
</code></pre>
<p>which shows a high correlation. How meaningful is this result, given the large number of young people?</p></li>
<li>Can we infer from the correlation test that both distributions are (very) similar?</li>
<li>If so, does this allow us to conclude that <em>subset</em> is not more biased towards younger people than <em>all</em> is?</li>
<li>Finally, if our assumption was that <em>subset</em> contains more younger people than one would expect, can we now infer that this assumption is wrong?</li>
</ol>
<p>Since I can not upload images, yet: <a href="https://plus.google.com/106609077152064677019/posts/RWvnXLMVosT" rel="nofollow">here is a plot of both distributions</a> and here the raw data:</p>
<pre><code># year age(all) age(subset)
1938 368 1
1939 360 1
1941 394 1
1942 809 3
1943 964 1
1944 686 1
1945 701 1
1946 1301 1
1947 1228 5
1948 1565 2
1949 2019 1
1950 2146 3
1951 2202 6
1952 2343 5
1953 2061 7
1954 2313 3
1955 2963 11
1956 3157 8
1957 3676 16
1958 4051 12
1959 5024 18
1960 5282 19
1961 6849 29
1962 7376 21
1963 8951 38
1964 10314 29
1965 13052 60
1966 13601 65
1967 14606 68
1968 18571 97
1969 19796 101
1970 21248 101
1971 20997 101
1972 24852 135
1973 26648 145
1974 30310 170
1975 34124 190
1976 39344 232
1977 45367 240
1978 54883 303
1979 63037 302
1980 73844 390
1981 77437 377
1982 83985 428
1983 90484 482
1984 100153 500
1985 100139 526
1986 106011 529
1987 101197 472
</code></pre>
| 74,456 |
<p>In the definition of standard deviation, why do we have to <strong>square</strong> the difference from the mean to get the expectation (E) and take the <strong>square root back</strong> at the end? Can't we just simply take <strong>the absolute value</strong> of the difference instead and take the expected value (mean) of those, and wouldn't that also show the variation of the data? The number is going to be different from the squared method (the absolute-value method will be smaller), but it should still show the spread of the data. Does anybody know why we take this squaring approach as the standard?</p>
<p>The definition of standard deviation:</p>
<p>$\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$
</p>
<p>Can't we just take the absolute value instead and still be a good measurement?</p>
<p>$\sigma = E\left[|X - \mu|\right]$
</p>
| 49,295 |
<p>In logistic regression, if we consider residuals, can they only take on the values $0$ or $1$? The data points themselves take on only $1$ or $0$, while the logistic curve can take on any value between $0$ and $1$. What would the distribution of the residuals look like?</p>
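<p>A small simulation sketch one could run to look at this empirically (the data below are simulated, not from any real study):</p>
<pre><code>set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(x))           # 0/1 outcomes
fit <- glm(y ~ x, family = binomial)
r <- residuals(fit, type = "response")   # raw residuals: y minus fitted probability
range(r)                                 # strictly between -1 and 1
hist(r)
</code></pre>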
| 74,457 |
<p>I hope that this is the right place and way to ask this question. I am trying to understand how to derive the probability density function of x(t) in an AR model of order K given the (t-k) past observations.<br>
I am primarily referring to this <a href="http://reference.kfupm.edu.sa/content/u/n/a_unifying_framework_for_detecting_outli_70701.pdf" rel="nofollow">paper</a> (Section 2.2), but it cites a Japanese book for this derivation. I could not find a similar derivation in other time series books (Shumway and Stoffer, Pourahmadi). Can someone please provide appropriate references or explain it?</p>
<p>Thanks for your time.
iinception</p>
| 74,458 |
<p>I have a data set with 24 predictor variables, all continuous, but with different scales and potential collinearity. I’m trying to decide whether to use <code>randomForest</code> or <code>cforest</code> in <a href="http://cran.r-project.org/web/packages/party/index.html" rel="nofollow">party</a> with conditional importance permutation. </p>
<p>I recognize that I should probably use <code>cforest</code> if I want to overcome variable selection bias, but I find the ability to get partial dependence plots and percent variance explained from the <code>randomForest</code> package to be quite appealing. </p>
<p>I was wondering if anyone knew if it were possible to get partial dependence plots and percent variance explained from <code>cforest</code>?</p>
<p>Also, it appears that <code>ctree</code> uses a significance test to select variables; is this the same for <code>cforest</code>? And how might I get these significance values for each variable in cforest?</p>
| 74,459 |
<p>I did a multinomial logistic regression using SPSS; the chi-square is .000, the df is 0, and the significance is reported as ".".</p>
<p>So what does significance = "." mean?</p>
| 37,344 |
<p>My book outlines a procedure but a preliminary part of it is unclear to me. </p>
<p>Let X be the number of occurrences of an event over a unit of time and assume that it has a Poisson distribution with mean $m=\lambda$. Let $T_1, T_2, T_3, \ldots$ be the interarrival times of the occurrences, which are iid with an exponential $\lambda$ distribution. Note that $X=k$ iff $$ \sum_{j=1}^k T_j \leq 1 \quad \text{and} \quad \sum_{j=1}^{k+1} T_j >1. $$</p>
<p>This is precisely what I do not understand. Why does the total waiting time until $k$ occurrences have to be less than or equal to 1? Any help is greatly appreciated. Thank you.</p>
| 74,460 |
<p>I have obtained optimally scaled variables from data of a highly mixed nature, containing binary, nominal, ordinal and scale-type variables. The optimal scaling was obtained in SPSS through a CATPCA procedure. Now I want to use these variables in a factor analysis and want to use a rotation to obtain meaningful loadings. I think I should avoid any normality assumption for these optimally scaled variables. So the maximum likelihood method of factor extraction is probably not applicable here (although I am not very experienced with FA, so I am not exactly sure). Should I use the principal components method of factor extraction instead?</p>
<p>What other methods could be useful that avoid distributional assumptions?</p>
<p>If I want to use the factor scores as IVs in a further regression, will it be a good idea to use an oblique rotation at all? I think I should try to keep the factors as uncorrelated as possible if I want them as IVs in a further regression. Is this reasoning right?</p>
<p>Thanks for any kind of suggestion. :)</p>
| 37,345 |
<p>I have a data set containing daily sensor measurements recorded from 20 participants for 60 days (baseline data).</p>
<p>I am trying to develop methods for predicting/estimating decline in long-term monitoring studies, i.e., can measurement of a parameter on a daily basis be used to detect/predict decline (i.e., significant change) through the identification of negative trends or aberrant measurements?
I hope to be able to generate statistical thresholds to allow identification of decline by examining the trends (using some combination of sensor-derived parameters and referencing these to baseline clinical data).</p>
<p>What is the most appropriate method to define a threshold for a significant change or decline for each participant and how do I best predict/detect negative trending or aberrant behaviour in unseen data? </p>
<p>I wondered if I could get some opinions on a good approach?
(note I have been using ICC, ANOVA as well as examining std. dev. of the baseline but have not found any of these approaches particularly useful)</p>
| 74,461 |
<p>I am working on a problem where we are interested in finding the MLE for a function of two parameters.</p>
<p>I am having problems with going about finding this. Intuitively, the idea makes sense. I am just wondering about the definition of the MLE of a function of two parameters (Google isn't turning up much). The question is as follows:</p>
<p><strong>Question:</strong> Suppose that $X_1,\ldots,\,X_n$ are iid $N(\mu, \sigma^2)$ with unknown $\mu,\sigma^2$. Find the MLE for $\frac{\mu}{\sigma}$. </p>
<p>Note that this is not just a homework problem, but part of a take home final. I really am not looking for much of an answer, but more or less the idea for such problems. </p>
<p><strong>Edit</strong></p>
<p>Apparently MLEs are invariant under functional transformations. TY</p>
| 27,621 |
<p>I am currently trying to better understand probabilistic skill ranking systems for games, but I find that I have trouble properly understanding the basic concept of how skill as a pairwise comparison can be generalized.</p>
<p>For instance, if all you know is that player C wins player B 80% of the time, while that same player B wins player A 80% of the time, would this be enough data to determine how often C would win against A? How would those calculations work?</p>
<p>Of course it might even be possible for a game to have different styles of play where A might win specifically against C, which would completely confuse the issue, but I am talking about general ranking systems such as <a href="http://en.wikipedia.org/wiki/Elo_rating_system" rel="nofollow">ELO</a> or <a href="http://research.microsoft.com/en-us/projects/trueskill/faq.aspx" rel="nofollow">Trueskill</a> that only take winning into account.</p>
| 74,462 |
<p>I have a question regarding how to evaluate agreement between an individual's rating, and that of a group of people (of which the individual was a part). The group score was achieved through consensus (i.e. they agreed on one score as a group). I was originally planning to use kappa to look at agreement between the two scores, but I am now questioning this approach. Any ideas on how I can evaluate either the difference or the agreement between the two scores? My main worry is independence.</p>
<p>The data looks something like this:</p>
<pre><code>ID  IndividualScore  GroupScore
 1                5           3
 2                4           2
 ...
</code></pre>
| 74,463 |
<p>I'm looking at the concentration of event occurrences in a given interval of time. For example, suppose that an event occurred 4 times in an interval of length 10. I can represent this as a string where <code>X</code> means <em>the event occurred</em> whereas <code>o</code> means <em>the event didn't occur</em>.</p>
<p>The distribution of events could be rather "uniform" in time :</p>
<pre><code>X o o X o o X o o X
</code></pre>
<p>Or it can be "concentrated" like this :</p>
<pre><code>X X X o o o o o o X
</code></pre>
<p>Or like this :</p>
<pre><code>X o o o X X o o o X
</code></pre>
<p>I'm looking for an index or a distance that would allow me to measure and compare the concentration of event occurrences in my interval. Ideally it would allow comparison of intervals of different lengths; for example, it would distinguish between:</p>
<pre><code>X o X o X o X o X o X
</code></pre>
<p>And :</p>
<pre><code>X o o o X X o X
</code></pre>
<p>Do you know of any index or measure that would allow such a thing?</p>
<p>Many thanks in advance.</p>
| 74,464 |
<p>I would be very grateful for some advice on how to model mixture distributions with R.</p>
<p>Given the problem of creating a ranking of graduate students by their yearly income after completing their education, what are some suitable models for this task?</p>
<p>Specifically, my data has a distribution with a point mass at 0 (the majority of graduates don't find, or start, a full-time job right away). The rest of the data is sort of nicely distributed. The data $x$ were transformed as $\log(x+1)$.
<img src="http://i.stack.imgur.com/4lUnO.png" alt="histogram"></p>
<ul>
<li>My first approach was a simple regression model </li>
<li>My second approach was two models (one to classify whether they get a job, which is very weak, and a second to predict the income). Simply chaining these two models works much worse than the simple model.</li>
</ul>
<p>My next step would be a Bayesian mixture model to predict income. I was thinking about fitting a mixture of 2 Gaussians, where I would fix the mean of one of them to be known and equal to 0. Would that make sense? Does anyone have good experience with a suitable package?</p>
<p>Another problem might be that I am always predicting the income using a regression and building a ranking from that, rather than running an ordinal regression. What is the best way to handle this situation, given that the target variable (income) that the ranking is based on is itself available in the training data?</p>
<p>Full disclosure: This is a fictitious scenario, as I cannot discuss the exact details of the real case.</p>
| 74,465 |
<p>Based on estimated classification accuracy, I want to test whether one classifier is statistically better on a base set than another classifier. For each classifier, I select a training and testing sample randomly from the base set, train the model, and test the model. I do this ten times for each classifier. I therefore have ten estimated classification accuracy measurements for each classifier. How do I statistically test whether classifier 1 is better than classifier 2 on the base dataset? What t-test is appropriate to use?</p>
| 37,346 |
<p>A random sample of <strong>388</strong> married couples found that <strong>292</strong> had two or more personality preferences in common. In another random sample of <strong>552</strong> married couples, it was found that only <strong>24</strong> had no preferences in common. Let p1 be the population proportion of all married couples who have two or more personality preferences in common. Let p2 be the population proportion of all married couples who have no personality preferences in common.</p>
<p>Find a 99% confidence interval for p1 – p2.
Lower Limit:
Upper Limit:</p>
| 74,466 |
<p>I am fairly new to data analysis and visualization, and I'm trying to figure out the best model to show some data regarding page load time (in seconds).</p>
<p>The current view is a line graph where the x-axis is the page, and the y-axis is the load time. This graph is visually misleading though, because in my mind, the psychological expectation of a line graph is to convey something over time. Because there are a couple of outliers in the data where the page load time is much longer than the average, someone looking at the graph might be turned off because their eyes jump to the outliers, so it looks like our pages take forever to load. In actuality, about 95% of the pages take fewer than two seconds to load, and 5% of the pages take longer.</p>
<p>So what's the best method to display this data? I want people to understand that on average, our page load time is <em>great</em>, and there are very few exceptions. I feel like depicting a bell curve might help to show that the majority of the pages fall within x-standard deviations of the mean, but A) I'm not sure how to do that in Excel, and B) I'm not sure that's the best option.</p>
| 37,349 |
<p>What is the pdf of the product of two independent random variables X and Y, where X is normally distributed and Y is chi-square distributed?</p>
<p>Z = XY</p>
<p>if $X$ has normal distribution $$X\sim N(\mu_x,\sigma_x^2)$$
$$f_X(x)={1\over\sigma_x\sqrt{2\pi}}e^{-{1\over2}({x-\mu_x\over\sigma_x})^2}$$
and $Y$ has a chi-square distribution with $k$ degrees of freedom
$$Y\sim \chi_k^2$$
$$f_Y(y)={y^{(k/2)-1}e^{-y/2}\over{2^{k/2}\Gamma({k\over2})}}u(y)$$
where $u(y)$ is the unit step function.</p>
<p>Now, what is the pdf of $Z$ if $X$ and $Y$ are independent?</p>
<p>One way to find the solution is to use Rohatgi's well-known result (1976, p. 141):
if $f_{XY}(x,y)$ is the joint pdf of the continuous RVs $X$ and $Y$, then the pdf of $Z$ is
$$f_Z(z) = \int_{-\infty}^{\infty}{{1\over|y|}f_{XY}({z\over y},y)dy} $$</p>
<p>Since $X$ and $Y$ are independent, $f_{XY}(x,y)=f_X(x)f_Y(y)$, so
$$f_Z(z) = \int_{-\infty}^{\infty}{{1\over|y|}f_{X}({z\over y})f_{Y}(y)dy} $$
$$f_Z(z) = {1\over\sigma_x\sqrt{2\pi}}{1\over{2^{k/2}\Gamma({k\over2})}}\int_{0}^{\infty}{{1\over|y|}e^{-{1\over2}({{z\over y}-\mu_x\over\sigma_x})^2} {y^{(k/2)-1}e^{-y/2}}dy} $$
Here we face the problem of solving the integral $\int_{0}^{\infty}{{1\over|y|}e^{-{1\over2}({{z\over y}-\mu_x\over\sigma_x})^2} {y^{(k/2)-1}e^{-y/2}}dy}$. Can anyone help me with this problem?</p>
<p>Is there any alternative way to solve this?</p>
| 47,393 |
<p>A probability distribution is a member of the two-parameter exponential family if the distribution can be expressed in the following form:</p>
<p>$$ h(\theta, \phi) \text{exp}\left[\sum t(x_{i})\psi(\theta, \phi) + \sum u(x_{i})\chi(\theta, \phi)\right] $$ </p>
<p>for parameters: $\theta$ and $\phi$, data: $x_{1}, x_{2}, \ldots, x_{n}$ and functions: $h$, $t$, $u$, $\psi$ and $\chi$.</p>
<p>Furthermore, the conjugate prior for the two-parameter exponential family takes the following form:</p>
<p>$$ p(\theta, \phi) \propto h(\theta, \phi)^{v}\text{exp}\left[\tau\psi(\theta, \phi) + \omega\chi(\theta, \phi)\right] $$</p>
<p>Whilst looking at some members of the two-parameter exponential family, I observed the following example:</p>
<p>The gamma distribution ($\text{Ga}(\alpha, \beta)$) can be expressed in the above form, where $h = \frac{\phi^{\theta}}{\Gamma(\theta)}$, $\sum t = \sum \text{log}x_{i}$, $\sum u = \sum x_{i}, \psi = \theta, \chi = -\phi$.</p>
<p>The example also states, following from the above, that the conjugate prior form for this gamma distribution is given by:</p>
<p>$$ \left[\frac{\phi^{\theta}}{\Gamma(\theta)}\right]^{v} \text{exp}[\tau\theta - \omega\phi] $$</p>
<p>Could somebody explain how this gamma conjugate prior can be expressed in this form?</p>
| 37,351 |
<p>I’m using Stata 12.0, and I’ve downloaded the <code>polychoricpca</code> command written by Stas Kolenikov, which I wanted to use with data that includes a mix of categorical and continuous variables. Given the number of variables (around 25), my hunch is that I will need to generate more than 3 components. Ultimately, I would like to generate a handful of meaningful components (rather than dozens of variables) and use the components as independent variables in logistic regression.</p>
<p>Using <code>polychoricpca</code>, I am able to generate a table showing the eigenvalues and the eigenvectors (loadings) for each variable for the first three (3) components only. <code>polychoricpca</code> appears to call these loadings “scoring coefficients” and produces these for every level of the variable, such that if a variable has three categories you’ll see three scoring coefficients (“loadings”) for that variable. Never having worked with polychoric PCA before, I’m used to only seeing one loading per variable/item. I want to examine these coefficients (“loadings”) to try to understand what the components are and how they might be labelled. </p>
<p>My questions:</p>
<p>(1) What if it looks as if I should generate 4 components? It seems as if I wouldn’t be able to examine and understand what that 4th component is because I can’t see how each of the items load on that 4th component, only the first 3. Is there a way to see how each item loads on more than the first three components?</p>
<p>(2) Can I simply use the polychoric correlation matrix combined with Stata’s <code>pcamat</code> command to examine how each item loads on each component (the eigenvector table). I thought this might be a way of being able to examine loadings if I have more than 3 components. The idea came from <a href="http://www.ats.ucla.edu/stat/stata/faq/efa_categorical.htm" rel="nofollow">this UCLA stats help post</a> on using <code>factormat</code> with a polychoric correlation matrix. <code>pcamat</code> in Stata, however, produces only 1 loading (coefficient) per variable, not 1 loading for every level of the variable. Any thoughts on whether it would be appropriate just to report the single loading from <code>pcamat</code>? </p>
| 37,352 |
<p>I am building a Cox survival analysis model with a retailer's transaction data. Almost all of the variables have failed the proportional hazards test. Can I continue with the Cox model? Should I build a LIFEREG model instead of the Cox model?</p>
| 74,467 |