question | group_id
---|---|
<p>While analysing the effect of environmental data on the activity of an animal species (the latter given as count data) I am fitting negative binomial GLMs with one predictor using the MASS library in R. Unfortunately, the data set is very small (n=7 to 9).</p>
<p>In some cases, the theta value in glm.nb gets very large (accompanied with the warning "iteration limit reached"), possibly indicating that there's no overdispersion and a Poisson GLM might be a better choice. Using a Poisson GLM, however, a residual deviance of e.g. 150 on 7 degrees of freedom indicates that there actually is overdispersion - or did I miss something? </p>
<p>Using a quasi-Poisson GLM works, but I would like to retain ML-based measures such as AIC and the Vuong test for model comparison. Any suggestions for alternative approaches are greatly appreciated! </p>
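<p>For concreteness, this is roughly what I am doing (a minimal sketch; <code>dat</code>, <code>count</code> and <code>predictor</code> are placeholder names, not my real data):</p>
<pre><code>library(MASS)
fit.nb   <- glm.nb(count ~ predictor, data = dat)   # may warn "iteration limit reached"
fit.nb$theta                                        # very large theta -> little extra-Poisson variation?
fit.pois <- glm(count ~ predictor, data = dat, family = poisson)
fit.pois$deviance / fit.pois$df.residual            # >> 1 suggests overdispersion
fit.qp   <- glm(count ~ predictor, data = dat, family = quasipoisson)
summary(fit.qp)$dispersion                          # estimated dispersion parameter
</code></pre>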
| 36,713 |
<p>I have just started using the autoencoder package in R. <a href="http://cran.r-project.org/web/packages/autoencoder/index.html" rel="nofollow">http://cran.r-project.org/web/packages/autoencoder/index.html</a></p>
<p>Inputs to the autoencode() function include lambda, beta, rho and epsilon.</p>
<p>What are the bounds for these values? Do they vary for each activation function?
Are these parameters called "hyperparameters"?</p>
<p>Assuming a sparse autoencoder, is rho=.01 good for the logistic activation function and rho=-.9 good for the hyperbolic tangent activation function?</p>
<p>Why does the manual set epsilon to .001? If I remember correctly, "Efficient Backpropagation" by LeCun recommends starting values which are not so close to zero.</p>
<p>How much does a "good" value for beta matter?</p>
<p>Is there a "rule of thumb" for choosing the number of nuerons in the hidden layer? For example, if the input layers has N nodes, is it reasonable to have 2N nuerons in the in the hidden layer?</p>
<p>Can you recommend some literature on the practical use of autoencoders?</p>
| 74,067 |
<p>The idea of degrees of freedom is pretty well sunk into my head, but I was wondering whether someone could perhaps give me a few easy examples of how one would determine the number of degrees of freedom? </p>
<p>For example: Let's say that we have a sample of $n$ observations $x_1, x_2, ..., x_n$ following some distribution.</p>
<p>Could someone come up with artificial examples of different numbers of degrees of freedom with this sample, say, examples where the degrees of freedom are:</p>
<p>$$\text{degrees of freedom} = n$$
$$\text{degrees of freedom} = n-1$$
$$\text{degrees of freedom} = n-2$$
$$\text{degrees of freedom} = n-3$$</p>
<p>This would help me get a grasp on how one would determine the actual number of degrees of freedom. Thank you! :) </p>
| 49,341 |
<p>I apologise for the mix-up. I am trying to do a diallel analysis involving three parents in poultry. I tried using the SAS05 program but it didn't work; I guess it was designed for diallel analysis involving 4-12 parents. I would appreciate any advice or direction on how to perform a diallel analysis with 3 parents using either R or SAS. Thank you.</p>
| 74,068 |
<p>My understanding of information entropy is that it requires the input probabilities to sum to 1.</p>
<p>So, for a sequence a,a,b,b you then have -([1/2 log2 1/2] + [1/2 log2 1/2]) = 1</p>
<p>Are there versions of information entropy that don't require probabilities to sum to 1? Or, is there a way to measure entropy that is also sensitive to the quantity of items, not only their probability? Or, is there an accepted way to derive a form of 'non-normalised' information entropy that somehow takes into account that the longer the information stream is, the more likely you'll come across various arrangements of information?</p>
<p>E.g. (not that this is accurate, but to convey the question): let's say you can compute a non-normalised entropy for the same sequence a,a,b,b as follows:</p>
<p>-([1/2 log2 1/2] + [1/2 log2 1/2] + [1/2 log2 1/2] + [1/2 log2 1/2]) = 2</p>
<p>Alternately, can you sum the information content over a string of information?</p>
<ul>
<li>For a,a,b,b you have four items at 1 bit of surprise each, therefore 4 total bits.</li>
<li>For a,a,a,a,a,a,a,a,a,b you have 10 items at 0.469 bits average surprise, therefore 4.69 total bits?</li>
</ul>
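<p>Here is a quick sketch of the "total surprisal" calculation in the bullets above (my own made-up helper, base-2 logs, empirical probabilities):</p>
<pre><code>total_surprisal <- function(x) {
  p <- table(x) / length(x)          # empirical symbol probabilities
  sum(-log2(p[as.character(x)]))     # sum (not average) of per-item surprisal
}
total_surprisal(c("a", "a", "b", "b"))   # 4 bits
total_surprisal(c(rep("a", 9), "b"))     # about 4.69 bits
</code></pre>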
| 74,069 |
<p>Section 6.2, in the second paragraph of p. 335 (image below) of <a href="http://books.google.com/books?id=DG-6QgAACAAJ&dq=9780131464131&hl=en&sa=X&ei=bBfiT8WtJqbs2QXXv527Cw&ved=0CEIQ6AEwAA" rel="nofollow">"Probability and Statistical Inference, 7e" by Hogg and Tanis</a>, states: </p>
<blockquote>
<p>perhaps it is known that $f(x;\Theta)=(1/\Theta)e^{-x/\Theta}$</p>
</blockquote>
<p>where $x$ is data and $\Theta$ is a parameter.</p>
<p>What does "$;$" mean in this context, as opposed to "$,$" or "$|$", all three are used in different ways in the same textbook ("," and ";" are used on the same page, $|$ in the standard statement of Bayes' Theorem)?</p>
<p>I think I understand "$,$" and "$|$" and I read $f(x,\Theta)$ as "function of data and parameters" and $f(x|\Theta)$ as "function of the data given parameters". </p>
<p>Here is a scan of the page for more context:</p>
<p><img src="http://i.stack.imgur.com/poU4m.jpg" alt="enter image description here"></p>
| 49,858 |
<p>I have been reading a lot about Dynamic Time Warping (DTW) lately. I am very surprised that there is no literature at all on the application of DTW to irregular time series, or at least I could not find it.</p>
<p>Could anybody give me a reference to something related to that issue, or maybe even an implementation of it?</p>
| 36,721 |
<p>Several texts (both online and published books) have been reviewed prior to asking this.</p>
<p>What diagnostics are accepted as best practice for a generalised linear mixed-effects model fitted in R using glmmPQL? I am modelling mortality incidence (count data) using repeated measures, longitudinal survival data. The response is either 0 (alive) or 1 (death). Therefore when plotting residuals, a random scatter is difficult to observe as the residuals cluster around the observed values of 0 or 1. How can I plot residuals that are easy to interpret when fitting a glmmPQL model? Is there a way to get one residual per unique subject as opposed to 1 residual per observation of a given subject?</p>
<p>What are the other goodness-of-fit measures/tests to apply to a glmmPQL model? E.g. it appears that AIC and BIC cannot be obtained from a glmmPQL fit.</p>
<p>I would greatly appreciate any help, even if direction to certain texts.</p>
<p>Thanks</p>
| 36,722 |
<p>Hello: I am a computer science student working as a research assistant in an undergrad IR lab, feeling spectacularly out of my element.</p>
<p>Given an input of a single continuous value and a vector of several dozen boolean values, I must estimate the expected value of a single continuous value and output it. I have several thousand training examples where I have <em>both</em> the full input and the actual output. I suspect there is some sort of machine learning algorithm that will produce a function mapping the input to the output, but I'm unsure enough of the terminology involved that I'm not sure how best to seek these resources.</p>
<p>What I am looking for reminds me of classic IR classifiers such as the naive Bayesian classifier, but classifiers give you the probability of a sample belonging to a discrete class, not an expected continuous variable. Is this a form of regression? If so, what type?</p>
<p>Does anybody have any insight? Any webpages to help fill in the gaps in my knowledge? Or even any helpful search terms? Also, if this is an inappropriate question for stats.stackexchange.com, is there a more appropriate community based QA site you could direct me to?</p>
<p>Thanks.</p>
| 38,179 |
<p>I am doing a permutation test to see whether two datasets are significantly different. Both datasets have a median of 1 (in fact, 60% of observations in one dataset and 80% of the other dataset are the same value).</p>
<p>That said, my permutation test gives me a p-value of 1 (they are the same distribution). I use the difference between medians because my data are extremely skewed. Nevertheless, I've done a Fisher test and it says they are significantly different.</p>
<p>Do permutation tests inflate type-II error in cases like this, where the vast majority of observations are the same, when using the median?</p>
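<p>For reference, this is roughly how I am doing the permutation test (a sketch; <code>x</code> and <code>y</code> stand for the two datasets):</p>
<pre><code>perm_median_test <- function(x, y, B = 9999) {
  obs  <- median(x) - median(y)                 # observed difference in medians
  z    <- c(x, y)
  stat <- replicate(B, {
    idx <- sample(length(z), length(x))         # random relabelling of the pooled data
    median(z[idx]) - median(z[-idx])
  })
  (1 + sum(abs(stat) >= abs(obs))) / (B + 1)    # two-sided permutation p-value
}
</code></pre>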
| 29,195 |
<p>I am familiar with the debate surrounding Bates's decision to exclude p-values for mixed effects regression coefficients in lmer. However, I operate in a very p-value-focused discipline and am trying to clear something up. </p>
<p>What are the degrees of freedom for group-level predictors in a two-level model? I can use pvals.fnc() to obtain p-values for these coefficients but something tells me it is way overestimating the degrees of freedom for the tests, using the individual-level sample size and not the group-level sample size. Any help would be greatly appreciated. </p>
| 44,232 |
<h3>Context</h3>
<p>I have two processes that each emit an event at various times:</p>
<pre><code>Times that event A occurs: 1, 15, 47, 73, 108
Times that event B occurs: 2, 18, 75, 90, 112, 140
</code></pre>
<p>I suspect that there is a weak causal relationship from A to B</p>
<ul>
<li>Process B is sometimes triggered by process A (note B-2 after A-1, B-18 after A-15, B-75 after A-73 ...). </li>
<li>But not always (note that A-47 is not followed by an event in B)</li>
<li>And B also has its own independent events (like B-90 and B-140)</li>
</ul>
<h3>Question</h3>
<p>What are some principled ways to model this relationship? Note that I want to capture both the delay (about 1-3 time units in this case) and a strength of connection (B usually follows A somewhat strongly here). Actual event streams may be far less strongly connected. </p>
| 74,070 |
<p>The standard error of an estimator is defined as the square root of the estimator variance (or of the mean squared error, MSE, for unbiased estimators). More specifically, if we wanted to get the standard error of the sample mean $\bar{X}$, we would divide the sample variance by $n$ and take the square root:<br>
$$
{\rm MSE}(\bar{X}) = \frac{s^2}{n}, \\
{\rm and,} \\
~ \\
{\rm S.E.} (\bar{X}) = \sqrt\frac{s^2}{n} = \frac{s}{\sqrt{n}}
$$
where $s^2$ is the sample variance for the sample of $X$s.</p>
<p>However, in OLS regressions, the standard error of the regression is defined as:
$$
\sqrt\frac{SSR}{n - K}
$$
where SSR is the sum of squared residuals.</p>
<p><strong>My question is: does this not need to be corrected again by dividing by the sample size $n$?</strong></p>
<p>Surely $\sqrt{SSR/(n - K)}$ is just the sample estimate of the population variance and, according to the formula above, it needs to be corrected. I understand that in both cases the standard error is the square root of the MSE but in the first case the MSE is the sample variance that the estimator came from divided by $n$ and in the second example the MSE is just the sample variance. Does anyone have an explanation?</p>
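<p>To make sure I am reading the formulas correctly, here is a quick numerical check in R (toy data; all names are my own):</p>
<pre><code>set.seed(1)
n <- 50; K <- 2                     # K = number of estimated coefficients (intercept + slope)
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)
SSR <- sum(resid(fit)^2)
sqrt(SSR / (n - K))                 # "standard error of the regression"
summary(fit)$sigma                  # identical: R's residual standard error
sd(y) / sqrt(n)                     # by contrast, the standard error of the sample mean of y
</code></pre>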
| 74,071 |
<p>On page 10 of <a href="http://www.chicagobooth.edu/jar/conference/docs/DoMarketsUnder-Kumar.pdf" rel="nofollow">this paper</a> the author states that </p>
<blockquote>
<p>In this section, I use monthly Fama and MacBeth (1973) logit
regression [...]</p>
</blockquote>
<p>to examine the type of forecast. My question(s) are: </p>
<ol>
<li>what does this mean,</li>
<li>is it even remotely justified by the typical conditions making
normal <a href="http://en.wikipedia.org/wiki/Fama%E2%80%93MacBeth_regression" rel="nofollow">Fama-MacBeth
procedure</a>
(namely, that asset returns are approximately serially
uncorrelated), and</li>
<li>how does the nonlinearity of error in the coefficient estimate
affect the aggregation of the cross-sectional estimates?</li>
</ol>
<p>My take is that </p>
<ol>
<li>it means that he runs a single cross-sectional regression each month
and forms the point estimates and standard errors from the time
series of these estimates,</li>
<li>probably not exactly, but this is not so important (people use
Fama-MacBeth in many contexts where the individual estimates are not
independent), and</li>
<li>I have no idea.</li>
</ol>
<p>Any insight or other interpretations of this would be much appreciated. I would also welcome references to other papers applying a similar methodology (<a href="http://www.bauer.uh.edu/departments/finance/documents/Religion.pdf" rel="nofollow">here is one</a>). A fully rigorous argument (or pointer towards one) justifying (or showing it to be incorrect) such a procedure would be optimal. </p>
| 74,072 |
<p>I just went through the following work <a href="http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1199&context=ijbf" rel="nofollow">http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1199&context=ijbf</a> and was wondering what the difference is between model (9) on page 12, which models volatility as a function of past volatility and past error, and model (10), which is the GARCH(1,1) model. Both seem to be of the same nature to me; please correct me if I'm wrong.</p>
| 74,073 |
<p>In an experiment several measurements are taken using similar but different measuring instruments. (The number of measurement tools used in a single experiment could range from 2 to 500 instruments, but most have a low number (~2 - 3) of instruments used.) Since all the instruments are measuring the same effect, it is expected that they produce similar measurements, but possibly with different sources and levels of noise. Some of the measuring tools may unknowingly be malfunctioning and produce erroneous data altogether. This means most of the measurements are somewhat correlated (> 0.8), but some could be uncorrelated or even inversely correlated. How can one summarize the measurements of the instruments in such a way as to best represent the real value of the quantity being measured?</p>
<p>Possible approaches to this problem might include using:</p>
<p>(1) a regression model to fit the measurements and then interpolate the measurement's summarized value,
(2) the first component of a principal component analysis,
(3) or the scores from a factor analysis.</p>
<p>Which method is most appropriate for dealing with the task or is another approach better for doing this summarization?</p>
| 74,074 |
<p>What are the main ideas, that is, concepts related to <a href="http://en.wikipedia.org/wiki/Bayes%27_theorem">Bayes' theorem</a>?
I am not asking for any derivations of complex mathematical notation.</p>
| 36,728 |
<p>I am looking for a list of modeling algorithms (as a package in <code>R</code>) that can accept: </p>
<ol>
<li>Continuous or categorical predictors </li>
<li>Continuous response </li>
<li>Can effectively treat missing values. For example, <code>glm</code> and <code>randomForest</code> discard records with missing values and thus are not in my final list. I do not want to perform imputation either (a small sketch of what I mean follows the list of packages below). </li>
</ol>
<p>So far, I have the following list: </p>
<ol>
<li>GBM (<code>gbm</code> package in R)</li>
<li>RPART (<code>rpart</code> package)</li>
<li>Bagging with rpart (<code>ipred</code>)</li>
<li>Tree (<code>tree</code>)</li>
</ol>
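<p>To make criterion 3 concrete, here is a minimal sketch (toy data; everything else assumed) of the kind of NA handling I mean, using <code>rpart</code>, which keeps rows with missing predictor values and uses surrogate splits at prediction time:</p>
<pre><code>library(rpart)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = runif(200))
d$y <- 2 * d$x1 + rnorm(200)
d$x1[sample(200, 40)] <- NA          # 20% missing values in a predictor
fit <- rpart(y ~ x1 + x2, data = d)  # rows with NAs are kept (default na.action = na.rpart)
head(predict(fit, newdata = d))      # predictions also work where x1 is NA
</code></pre>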
| 74,075 |
<p>I have a set of samples for which I know the "true groups". For these samples I have about 200 binary variables, and I would like to know a method to select the subset of variables that gives me a clustering as close as possible to my known groups.</p>
<pre><code># sample labels
labelColors2 <-c("black", "black","black","black","black","black", "blue","blue","blue","blue","green", "green",
"red","red","red","red","red","red","red","red","red","red","red","red")
# data matrix
library(RCurl)
x <- getURL("https://dl.dropboxusercontent.com/u/10712588/binMatrix")
tab3 <- read.table(text = x)
colLab <- function(n) {
if(is.leaf(n)) {
a <- attributes(n)
#clusMember a vector designating leaf grouping
#labelColors <- colors # a vector of colors for the above grouping
labCol <- labelColors2[clusMember[which(names(clusMember) == a$label)]]
attr(n, "nodePar") <- c(a$nodePar, list(lab.col = labCol,lab.cex=0.8))
}
n
}
mclust <- hclust(dist(tab3, method ="binary"))
dhc <- as.dendrogram(mclust)
clusMember <- cutree(mclust, k=24)
clusDendro <- dendrapply(dhc, colLab)
plot(clusDendro)
</code></pre>
<p><img src="http://i.stack.imgur.com/CBNQq.png" alt="example of the true groups given by colours"> </p>
<p>The colors should be grouped; this is my current way to assess the goodness of the clustering, visually, but I would like to know a feature selection technique.</p>
<p>Thanks in advance... </p>
<p>Updating the question: I found the <strong>klaR::stepclass</strong> function, which should do what I want, or some similar implementation, but I have not found a workaround yet.</p>
<pre><code>fac <- as.factor(labelColors2)
mylda <- function(x, grouping) {
clust <- pam(dist(x, method="binary"), k=4,
cluster.only = TRUE)
posterior <- matrix(0, 24, 4)
colnames(posterior) <- c("black", "blue", "green", "red")
for(i in 1:nrow(posterior)) posterior[i, clust[i]] <- 1
l <- list(class=grouping, posterior=posterior)
class(l) <- "foo"
return(l)
}
</code></pre>
<p>With the function above I can reproduce an output of my classification, similar to what <strong>klaR::ucpm</strong> needs, but I can't manage to run the function</p>
<pre><code>sc_obj <- stepclass(x=tab3, grouping=fac, method="mylda", direction="forward")
Error in parse(text = x) : <text>:2:0: unexpected end of input
1: fac ~
^
</code></pre>
<p>Well, I think I have made some improvement: I established a "fitness function", and with a random search (it is still running) I have already found a better clustering.</p>
<pre><code>predict.foo <- function(x) x
for(i in 1:1000000) {
s <- sample(1:ncol(tab3),sample(68:200,1))
cr <- ucpm(predict(mylda(tab3[,s], fac))$posterior, fac)$CR
write.table(matrix(c(cr, s), nrow=1), "randonSearch.txt", append=TRUE, row.names=FALSE, col.names=FALSE)
}
</code></pre>
<p>With this I'm monitoring the <strong>randonSearch.txt</strong> file with:</p>
<pre><code>cut -d " " -f1 ../randonSearch.txt | grep 0.8
</code></pre>
<p>I already found a "Correctness Rate" of 0.833, check it out</p>
<p><img src="http://i.stack.imgur.com/a9kgV.png" alt="enter image description here"></p>
<p>I think there is still room for improvement; I'm thinking of a genetic algorithm... </p>
| 36,729 |
<p>The EM algorithm roughly has two steps.<br>
E-Step:<br>
Calculate the conditional expectation of the log-likelihood given the data $x_1, \ldots, x_n$ and the current estimates of the parameters $\Theta^{[k]}$. So the objective function would be
$Q(\Theta, \Theta^{[k]})=E[\ln L(\Theta; x_1, \ldots, x_n)\mid x_1, \ldots, x_n,\Theta^{[k]}]$<br>
M-step:<br>
Maximize the objective function with respect to $\Theta $ to obtain the next set of estimates $\Theta^{[k+1]}$.</p>
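<p>For concreteness, here is a minimal sketch of these two steps for a two-component univariate Gaussian mixture, assuming i.i.d. data (toy data; all names made up):</p>
<pre><code>set.seed(42)
x <- c(rnorm(150, 0, 1), rnorm(100, 4, 1))     # toy i.i.d. data
pi1 <- 0.5; mu <- c(-1, 1); sigma <- c(1, 1)   # initial parameter guesses
for (k in 1:200) {
  # E-step: responsibilities = conditional expectation of the component labels
  d1 <- pi1       * dnorm(x, mu[1], sigma[1])
  d2 <- (1 - pi1) * dnorm(x, mu[2], sigma[2])
  r1 <- d1 / (d1 + d2)
  # M-step: maximize the expected complete-data log-likelihood
  pi1      <- mean(r1)
  mu[1]    <- sum(r1 * x) / sum(r1)
  mu[2]    <- sum((1 - r1) * x) / sum(1 - r1)
  sigma[1] <- sqrt(sum(r1 * (x - mu[1])^2) / sum(r1))
  sigma[2] <- sqrt(sum((1 - r1) * (x - mu[2])^2) / sum(1 - r1))
}
round(c(pi1 = pi1, mu1 = mu[1], mu2 = mu[2], s1 = sigma[1], s2 = sigma[2]), 2)
</code></pre>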
<p>Now, does the EM algorithm require i.i.d. data to estimate the parameters? Is it possible to use the EM algorithm even in the case of non-i.i.d. data?</p>
| 19,095 |
<p>Let $D$ be a data set containing $n$ instances of two variables $x_1$ and $x_2$. Further let $x_1$, $x_2$ be independent according to some measure like linear correlation or mutual information etc.</p>
<p>How do we find a reordering of the values of $x_2$ that results in some predetermined (achievable) correlation coefficient against $x_1$? </p>
<p>I guess that a simple way to achieve (close to?) maximum correlation would be to order both in increasing order (independently) and then reorder row-wise into $x_1$'s original positions.</p>
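<p>A small sketch of that sort-and-reorder idea (toy data):</p>
<pre><code>set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)                                 # independent of x1
cor(x1, x2)                                    # near 0
x2_max <- sort(x2)[rank(x1, ties.method = "first")]  # pair k-th smallest x2 with k-th smallest x1
cor(x1, x2_max)                                # close to the maximum achievable correlation
</code></pre>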
<p>Thanks</p>
| 49,925 |
<p>I drew the following figure using the <code>DLNM</code> package in <strong>R</strong>.</p>
<p>Data sheet <code>a</code> has variables <code>y</code> and <code>x1</code>. I assume that the continuous dependent variable <code>y</code> has a Gaussian distribution and that the continuous independent variable <code>x1</code> has a relationship with <code>y</code> modelled with a natural spline with $df=3$ (two knots).</p>
<p>I used following <strong>R</strong> code</p>
<pre><code>x1.bs <- crossbasis(a$x1, lag=c(0,16),
argvar=list(fun="ns", knots=c(55,75), int=FALSE, cen=48),
arglag=list(fun="ns", df=3))
model <- glm(y~x1.bs, data=a, family=gaussian())
pred.bs <- crosspred(x1.bs, model, cumul=TRUE)
plot(pred.bs, "contour", xlim=c(40,80),
xlab="independent x", ylab="lag (weeks)", main="Contour graph")
plot(pred.bs, "overall", ylab="Unit", lwd = 1.5,
main = "Overall effect", ci.level = 0.95,
ci.arg = list(density = 20, col = grey(0.7)))
</code></pre>
<p><strong>Question 1.</strong>
How can I interpret the following contour figure?
<img src="http://i.stack.imgur.com/TD1ct.png" alt="Countour"></p>
<p>For example, <code>x1</code>=75 and <code>lag</code>=6 gives <code>0.04</code>. What does 0.04 mean?</p>
<p><strong>Question 2.</strong>
How can I interpret the following overall figure?
<img src="http://i.stack.imgur.com/dqEwK.png" alt="Overall"></p>
<p>For example, <code>x1</code>=70 and <code>lag</code>=6 gives overall effect=0.4. What does an overall effect of 0.4 mean?</p>
| 74,076 |
<p>I have a sample of about 1000 values. These data are obtained from the product of two independent random variables $\xi \ast \psi $. The first random variable has a uniform distribution $\xi \sim U(0,1)$. The distribution of the second random variable is not known. How can I estimate the distribution of the second ($ \psi $) random variable?</p>
| 74,077 |
<p>I am trying to learn the difference between the three approaches and their applications.</p>
<p>a) As I understand,</p>
<pre><code>AIC = -LL+K
BIC = -LL+(K*logN)/2
</code></pre>
<p>Unless I am missing something, shouldn't the K that minimizes the AIC minimize the BIC as well, since N is constant? </p>
<p>I looked at this <a href="http://stats.stackexchange.com/questions/577/is-there-any-reason-to-prefer-the-aic-or-bic-over-the-other/767#767">thread</a> but couldn't find a satisfactory answer. </p>
<p>b) According to Witten's book on Data Mining (pg 267) the definition of MDL for evaluating the quality of network is the same as BIC. Is there a difference between BIC and MDL?</p>
<p>c) What are the different approaches to compute MDL? I am looking for its application in Clustering, Time Series Analysis (ARIMA and Regime Switching) and Attribute Selection. While almost all commonly used packages in R report AIC and BIC, I couldn't find any that implements MDL and I wanted to see if I can write it myself.</p>
<p>Thank you.</p>
| 74,078 |
<p>I have a dataset:</p>
<pre><code>y = array([ 4460000., 2100000., 938000., 500000., 204000., 130000.,
124000., 118000., 106000., 100000., 98000., 98000.,
97000., 99000.])
x = array([ 470., 955., 1700., 2900., 5520., 7000., 8500.,
9700., 14600., 22000., 27000., 33000., 37500., 42200.])
</code></pre>
<p><img src="http://i.stack.imgur.com/G5bIG.png" alt="loglogplot"></p>
<p>which I need to fit using an arbitrary function. Unfortunately I have no further (useful) information for that data. </p>
<p>Is there some kind of function that can fit this data?</p>
| 74,079 |
<p>If one-third of the persons donating blood at a clinic have O+ blood, find the probability that the 5th O+ donor is the fourth donor of the day.</p>
| 74,080 |
<p>I am trying to fit the different variable types and measurement levels into some logic, like a tree diagram, and I am not sure about some things. On one hand, you have <strong>quantitative</strong> variables vs. <strong>qualitative</strong>. For a quantitative variable, you have <strong>continuous</strong> or <strong>discrete</strong>. On the other hand, you have <strong>nominal, ordinal, interval and ratio</strong>. It's easy to say that qualitative can be either nominal or ordinal. But what about quantitative? Do interval and ratio have to be continuous? How do I classify? Can I say, for example, that quantitative must be discrete or continuous and, if continuous, either interval or ratio? What is the correct way to build this diagram?</p>
| 36,732 |
<p>I am trying to make two plots next to one another using the R cookbook <code>multiplot</code> function, although I am getting the following error:</p>
<blockquote>
<p><code>Aesthetics must either be length one, or the same length as the dataProblems:v.x</code></p>
</blockquote>
<p>Can someone help? My code is below:</p>
<pre><code>rm(list=ls())
library(ggplot2)
library(grid)
# Multiple plot function
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
require(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
# Define beta binomial function
dbb <- function(x, N, u, v) {
beta(x+u, N-x+v)/beta(u,v)*(gamma(N+1)/(gamma(x+1)*gamma(N-x+1)))
}
#Prior parameters
a<-1
b<-1
#Data
N1<-100
X<-10
#New sample
N2 <- 100
# Generate prior, likelihood, posterior and posterior predictive distribution
v.theta <- seq(0,1,length=1000)
v.x <-seq(0,100,length=100)
v.prior <- dbeta(v.theta,a,b)
v.likelihood <- choose(N1,X)*(v.theta^X)*(1-v.theta)^(N1-X);
v.posterior <- dbeta(v.theta,a+X,b+N1-X)
v.posteriorPred <- dbb(v.x,N2,a+X,b+N1-X)
# Plot as a subplot
p1<-qplot(v.x,v.prior,geom = c("point", "line"))
p2<-qplot(v.x,v.likelihood,geom = c("point", "line"))
multiplot(p1, p2)
</code></pre>
| 74,081 |
<p><strong>The scenario:</strong></p>
<p>Consider that I have some 100 GB of raw data. When I extract features from it using the traditional, well-known approaches in my area, I get (say) <strong>10,000,000 instances</strong>. When I build supervised machine-learning models (using decision trees and Bayesian networks in particular) with this data, I get some <em>x</em> % classification accuracy.</p>
<p>As a part of my research, I am working on an approach which is expected to give better accuracy. With my approach, parsing that same 100 GB of data results in fewer instances - let's say 10 times fewer - which means I now have <strong>1,000,000</strong> instances. </p>
<p><strong>The problem:</strong></p>
<p>In order to show that my approach works better, I need to do a proper comparison with the traditional approach. </p>
<p>I am not very clear on how I should evaluate my approach against the traditional approach. Given the same amount of raw data, the number of instances I extract is 10 times smaller. In order to compare the accuracy obtained by the two approaches, which is better? -</p>
<ul>
<li><strong>Keep the number of instances the same.</strong> Take 1,000,000 instances of the data obtained from my algorithm, and <em>sample out</em> the same number of instances obtained from the traditional approach. Do a 70-30 split for training-testing data on each of them, and build my models. </li>
<li><strong>Keep the raw input data same.</strong> Take 100 GB of raw data and extract features with both approaches. The size of training (and testing) datasets in both the cases will have a <em>huge</em> difference. Then, I can do a 70-30 split for training-testing, and build my models. </li>
<li>Something else ??</li>
</ul>
<p><strong>updated to address comment by @ffriend</strong>
In case it matters, I am dealing with network traces (pcap files). For traditional approaches I refer the reader to <a href="http://escholarship.org/uc/item/1wn9n8kt.pdf" rel="nofollow">this</a> paper which is classic work employing that technique. For my approach, I can only link to <a href="http://dl.acm.org/citation.cfm?id=2611318" rel="nofollow">my preliminary work</a></p>
<p>p.s.: I don't feel the title of my question is very good. If someone can think of a better title, you are welcome to edit it.</p>
| 74,082 |
<p>How can I predict the dependent variable in lmer? Is there any package or function already built into R for this?</p>
| 74,083 |
<p>The likelihood could be defined in several ways, for instance:</p>
<ul>
<li><p>the function $L$ from $\Theta\times{\cal X}$ which maps $(\theta,x)$ to $L(\theta \mid x)$</p></li>
<li><p>the random function $L(\cdot \mid X)$</p></li>
<li><p>we could also consider that the likelihood is only the "observed" likelihood $L(\cdot \mid x^{\text{obs}})$</p></li>
<li><p>in practice the likelihood brings information on $\theta$ only up to a multiplicative constant, hence we could consider the likelihood as an equivalence class of functions rather than a function</p></li>
</ul>
<p>Another question occurs when considering change of parametrization: if $\phi=\theta^2$ is the new parameterization we commonly denote by $L(\phi \mid x)$ the likelihood on $\phi$ and this is not the evaluation of the previous function $L(\cdot \mid x)$ at $\theta^2$ but at $\sqrt{\phi}$. This is an abuse of notation, but a useful one, which could cause difficulties for beginners if it is not emphasized.</p>
<p>What is your favorite rigorous definition of the likelihood ? </p>
<p>In addition how do you call $L(\theta \mid x)$ ? I usually say something like "the likelihood on $\theta$ when $x$ is observed".</p>
<p>EDIT: In view of some comments below, I realize I should have specified the context. I consider a statistical model given by a parametric family $\{f(\cdot \mid \theta), \theta \in \Theta\}$ of densities with respect to some dominating measure, with each $f(\cdot \mid \theta)$ defined on the observations space ${\cal X}$. Hence we define $L(\theta \mid x)=f(x \mid \theta)$ and the question is "what is $L$ ?" (the question is not about a general definition of the likelihood)</p>
| 36,735 |
<p>Does the pdf of an mvn variable even exist when there is high correlation?</p>
<p>I want to use an algorithm (actually it is the cross-entropy method for estimating a rare-event probability) that needs the pdf value of the mvn distribution, that is mvnpdf(x,mu,Sigma). </p>
<p>However, my Sigma is close to singular, meaning there is a high correlation between the variables in the vector, so it is of course difficult/impossible to find the inverse of Sigma. Is there any way to overcome this problem?</p>
<p>Is it not true that for instance a vector [a,b,c,d] with covariance matrix [1,1,0,0;1,1,0,0;0,0,1,1;0,0,1,1] (singular!) will behave in the exact same way as the vector [a,c] with covariance matrix [1,0;0,1] (now non-singular!), while ignoring b and d? Does this mean that I can approximate the pdf value of [a,b,c,d] by the pdf of [a,c]?</p>
<p>Sorry if this is answered before, I've really tried to search for it.</p>
<p>Thank you.</p>
<p>EDIT: The thing is that I want to simulate samples from a multivariate normal distribution, $x \sim N(\mu,\Sigma)$, in order to find the probability that ${r(x) < 1}$, where $r(x)$ returns a positive real number based on the vector $x$. So I simulate $x_i \sim N(\mu,\Sigma), i = 1 ,..., M$, and I estimate the probability by the Monte Carlo estimate $p = (1/M)\sum_i I(r(x_i)<1)$. No problems so far. However, since ${r(x_i)<1}$ is a very rare event, I want to try importance sampling instead, simulating from the distribution $N(\mu_2,\Sigma_2)$, so I need the pdf value to estimate $p = (1/M)\sum_i I(r(x_i)<1)\,\mathrm{pdf}_1(x_i)/\mathrm{pdf}_2(x_i)$.</p>
| 74,084 |
<p>I get from a specific experiment 3 outputs. Each is the information of some physical quantity in the direction x,y or z. The way we extract the information from those signals is by fitting.</p>
<p>In the data extracted from fitting, some parameters depend only on the direction of the physical quantity we want to measure, and some others depend on the whole system or on the scalar value of the physical quantity (common attributes). Statistically, it's preferred to do a simultaneous fit for all the directions. This is what my boss told me.</p>
<p>What he said sounds right, but technically, it's pretty hard to fit the 3 data sets together, because the function is already pretty complicated with 7 parameters in each direction, and combining the 3 together will just complicate stuff, and my fit doesn't converge when doing them simultaneously. I'm using the Levenberg–Marquardt algorithm.</p>
<p>The question is, is there anything I could do in the individual results of the fit, to obtain the result of a simultaneous fit? In other words, how can I avoid simultaneous fitting and obtain its statistical advantage?</p>
<p>And if simultaneous fitting is the only solution I have, what does taking 3 data sets to a simultaneous fit entail? how will that scale the fitting parameters (Chi^2, weighting, tolerance ...etc)?</p>
<p>I'm fitting using a program I wrote myself with C++.</p>
<p>Thank you for any efforts.</p>
| 36,736 |
<p>Hello and thanks for looking at my questions.</p>
<p>I'm sure this is a very simple question but I just cant seem to understand it</p>
<p>A committee of three is selected from a pool of five individuals : two females (A and B) and three males (C,D,E)</p>
<p>If the committee is selected at random, what is the probability that it contains both females?</p>
<p>(I figured for this one it would be P=(2/5)*(1/4) because the first one selected has a 2/5 chance of being a woman, and the second one has a 1/4 chance, but I was told that was not correct)</p>
<p>and</p>
<p>If instead, the committee is selected by randomly selecting one of the two females, one of the three males, and then one of the remaining three individuals, what is the probability it contains both females?</p>
| 74,085 |
<p>I would like to use R to solve a problem I have. I don't even know what to call a problem of this kind and I'm finding Googling difficult. My guess is that this kind of problem already has R packages to help solve it, but I'd very much appreciate help defining the problem.</p>
<p>I want to select the optimum combination of agents, given some constraints.</p>
<p>I have a large set of agents, each of which has four 'measures':</p>
<ol>
<li><strong>Category 1</strong>: can take 1 of 4 possible values</li>
<li><strong>Category 2</strong>: can take 1 of 20 possible values</li>
<li><strong>Cost</strong>: a continuous variable</li>
<li><strong>Expected Return</strong>: a continuous variable</li>
</ol>
<p>I wish to select a set of exactly 15 agents, to maximise <strong>Expected Return</strong>, with the following constraints:</p>
<ol>
<li>There must be a certain number of agents from each value of <strong>Category 1</strong></li>
<li>There is a maximum number of agents from each value of <strong>Category 2</strong></li>
<li>There is a maximum <strong>Cost</strong></li>
</ol>
<p>What kind of problem is this, specifically? Are you aware of any R packages that I could use / adapt to help me? I specify R only as I am very comfortable with the language.</p>
<p>Many thanks for your time.</p>
| 33,408 |
<p>This time, I have a more theoretical than computational predicament. I have a path model that I am interested in testing on a data set with two groups. It is a very simple two predictor model outlined below, and I am using the ML-SEM function on Mplus v.7 to analyze the data.</p>
<pre><code> y = x1 + x2
</code></pre>
<p>My main research question is (1.) whether my model works better between groups or within groups. I am also interested in (2.) the proportion of the variance accounted for between group as compared to within groups. Finally, I would like to know (3.) if x1 is a better between-group predictor and x2 is a better within-group predictor.</p>
<p>The model converges just fine and fits the data very well, but I do not know what type of information in the output I would use to answer these three research questions. What data do I use to derive the amount of variance accounted for by the model on each level? How can I compare the predictors functioning within and between groups?</p>
<p>Any pointers and suggestions would be very much appreciated.</p>
<p>Thanks!</p>
| 74,086 |
<p>I have a set of time series for various days (same frequency and source). I need to choose a subset of them, which together will forecast future values.</p>
<p>Currently I am simply using X out of Y time series with the lowest AICc values when fitted with an ARIMA model.</p>
<p>I want to know which tests I could use to choose the subset. The subset used for forecasting could include time series that are most correlated or have similar trends, etc. An explanation using R code would be helpful.</p>
| 19,099 |
<p>I understand that at least one system for automatically modeling 3D objects from image data exists. Autodesk appears to have developed a good method. Does anyone know the basic structure and algorithms commonly used in such a system? </p>
<p>My technique would include the use of a multilevel neural network (NN). The inputs would be the pixel colors from multiple images of the object at hand. The output would be 3D computer model data. Sets of images paired with the desired models would be the training data for a genetic algorithm (GA). The GA would wire the NN so that it maps relations between the training data images and the desired models as best as possible. </p>
<p>Has this technique been implemented successfully in the past? </p>
<p>Thank you. </p>
| 74,087 |
<p>I'm looking for an R package (or a combination of packages) that would allow me to perform MCMC estimation of a GMM model, with a user-specified moments function.</p>
<p>I've looked at the CRAN Bayesian task-view, but I can't seem to find what I'm looking for, packages being either too general (eg. mcmc) or too specific (MCMCpack)... Ideally I would like to be able to implement <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3286612/" rel="nofollow">Stochastic GMM</a> or to use package <a href="http://www.r-inla.org/home" rel="nofollow">R-INLA</a> to do iterative nested Laplace approximation, again with a user-specified moments function.</p>
<p>Any help greatly appreciated!</p>
| 74,088 |
<p>Is the probability calculated by a logistic regression model (the one that is logit transformed) the fit of cumulative distribution function of successes of original data (ordered by the X variable)?</p>
<p><strong>EDIT:</strong> In other words - how to plot the probability distribution of the original data that you get when you fit a logistic regression model?</p>
<p>The motivation for the question was Jeff Leek's example of regression on the Ravens' score in a game and whether they won or not (from Coursera's Data Analysis course). Admittedly, the problem is artificial (see @FrankHarrell's comment below). Here is his data with a mix of his and my code:</p>
<pre><code>download.file("http://dl.dropbox.com/u/7710864/data/ravensData.rda",
destfile="ravensData.rda", method="internal")
load("ravensData.rda")
plot(ravenWinNum~ravenScore, data=ravensData)
</code></pre>
<p><img src="http://i.stack.imgur.com/Cr5ka.png" alt="enter image description here"> </p>
<p>It doesn't seem like good material for logistic regression, but let's try anyway:</p>
<pre><code>logRegRavens <- glm(ravenWinNum ~ ravenScore, data=ravensData, family=binomial)
summary(logRegRavens)
# the beta is not significant
# sort table by ravenScore (X)
rav2 = ravensData[order(ravensData$ravenScore), ]
# plot CDF
plot(sort(ravensData$ravenScore), cumsum(rav2$ravenWinNum)/sum(rav2$ravenWinNum),
pch=19, col="blue", xlab="Score", ylab="Prob Ravens Win", ylim=c(0,1),
xlim=c(-10,50))
# overplot fitted values (Jeff's)
points(ravensData$ravenScore, logRegRavens$fitted, pch=19, col="red")
# overplot regression curve
curve(1/(1+exp(-(logRegRavens$coef[1]+logRegRavens$coef[2]*x))), -10, 50, add=T)
</code></pre>
<p>If I understand logistic regression correctly, R does a pretty bad job at finding the right coefficients in this case. </p>
<p><img src="http://i.stack.imgur.com/Cb6o8.png" alt="enter image description here"></p>
<ul>
<li>blue = original data to be fitted, I believe (CDF) </li>
<li>red = prediction from the model (fitted data = projection of original data onto regression curve)</li>
</ul>
<p><strong>SOLVED</strong><br>
- lowess seems to be a good non-parametric estimator of the original data = what is being fitted (thanks @gung). Seeing it allows us to choose the right model, which in this case would be adding a squared term to the previous model (@gung)<br>
- Of course, the problem is pretty artificial and modelling it rather pointless in general (@FrankHarrell)<br>
- in regular logistic regression it's not CDF, but point probabilities - first pointed out by @FrankHarrell; also my embarrassing inability to calculate CDF pointed out by @gung.</p>
| 36,745 |
<p>I am working on my thesis. My main regression model is the following:</p>
<ol>
<li><p>$Y=x_1*{\rm Payment}+x_2*{\rm Country}+x_3*{\rm Industry}...$</p>
<p>All independent variables are dummy / binary variables. In a next step I divide my sample and construct the following two regression:</p></li>
<li><p>$Y=x_2*{\rm Country}+x_3*{\rm Industry}...$ => Here I only consider observations which had the value "1" regarding the "Payment"-independent variable</p></li>
<li><p>$Y=x_2*{\rm Country}+x_3*{\rm Industry}...$ => Here I only consider observations which had the value "0" regarding the "Payment"-independent variable.</p></li>
</ol>
<p>The independent variable <code>Payment</code> is quite important in my thesis and all remaining variables can be rather considered as control variables. <code>Payment</code> takes the value 1 if the M&A deal is paid in cash and 0 if it is paid in shares.</p>
<p>I received the following feedback from my supervisor and I am not quite sure how to incorporate it: </p>
<blockquote>
<p>...Can you really split the sample into cash and stock M&A?... If you are indeed correct with your hypothesis then different companies will go for cash than for stock as a payment method...Hence your two populations are NOT the same. In other words: There is a endogeneity problem. To solve it use the 2-stage Heckman instrumental variable estimation....Pay attention to the economic justification of your IVs and to their econometric testing (valid instrument)...</p>
</blockquote>
<p>When I look up the Heckman procedure (especially the female wage example) it seems that it is analysing Y. However I understand that I need to do this with my independent variable ("Payment").</p>
<p>Hence I am confused as I don't see the link between the female wage example and my case.</p>
| 74,089 |
<p>I want to simulate or calculate probabilities of combinations of group membership for different sample sizes (e.g., n= 3, 4, 5, 10, or 100) for two groups (of the same sample size). Each outcome could be male/female and young/old. The population is 50% male and 50% young.</p>
<p>What is the probability of getting: </p>
<p>all male in Group 1 AND all female in Group 2 </p>
<p>OR </p>
<p>all male in Group 2 AND all female in Group 1 </p>
<p>OR </p>
<p>all young in Group 1 AND all old in Group 2</p>
<p>OR </p>
<p>all young in Group 2 AND all old in Group 1</p>
<p>I would also like to be able to include additional categories (e.g., eats their vegetables, doesn't eat their vegetables) and be able to choose the percent of the population that falls into one category or the other. The categories can be independent of each other, but it would be better if there was the ability to make membership dependent (e.g., females eat their vegetables 70% of the time and males 50% of the time). It would also be better if this was not limited to categories with 2 types (e.g., be able to do 4th, 5th, or 6th grader).</p>
<p>My goal is to calculate the probability of getting completely unbalanced groups for different combinations of confounds and I can't figure out how to do this with R. This is an expansion on <a href="http://stats.stackexchange.com/questions/74350/is-randomization-reliable-with-small-samples">this question</a>, but the approach used there is slow for large sample sizes and sort of convoluted.</p>
<p><strong>Edit:</strong></p>
<p>Reworded question:
There are two groups that are independent samples from the same population of gradeschoolers, call them treatment and control. The population consists of 50% males, 50% females, 25% each in 3rd-6th grade. There are equal number of males and females in each grade. Also 75% of females eat vegetables, while only 50% of males eat vegetables regardless of schoolgrade. </p>
<p>For different sample sizes I want to calculate the chance of getting</p>
<p>All males in one group (treatment/control) while there are all females in the other</p>
<p>OR</p>
<p>All students of the same grade in one group while the second group consists of all students of the same grade but different than the first. For example all 3rd graders in the treatment group and all 6th graders in the control group.</p>
<p>OR</p>
<p>All vegetable eaters in one group and all non-vegetable eaters in the second group.</p>
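<p>To show what I have in mind, here is a rough sketch of how I imagine simulating the two groups and the sex and vegetable splits (all names are my own and the proportions are as described above; I am not sure this is the right or fastest way, and the grade condition is omitted for brevity):</p>
<pre><code>set.seed(1)
draw_group <- function(n) {
  sex   <- sample(c("M", "F"), n, replace = TRUE)                  # 50% male
  grade <- sample(3:6, n, replace = TRUE)                          # 25% per grade
  veg   <- ifelse(sex == "F", rbinom(n, 1, 0.75), rbinom(n, 1, 0.5))
  data.frame(sex, grade, veg)
}
sim_once <- function(n) {
  g1 <- draw_group(n); g2 <- draw_group(n)
  sex_split <- (all(g1$sex == "M") & all(g2$sex == "F")) |
               (all(g1$sex == "F") & all(g2$sex == "M"))
  veg_split <- (all(g1$veg == 1) & all(g2$veg == 0)) |
               (all(g1$veg == 0) & all(g2$veg == 1))
  c(sex_split = sex_split, veg_split = veg_split)
}
colMeans(t(replicate(10000, sim_once(3))))   # rough probabilities for n = 3 per group
</code></pre>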
| 74,090 |
<p>I'm trying to add a softmax layer to a neural network trained with backpropagation, so I'm trying to compute its gradient.</p>
<p>The softmax output is $h_j = \frac{e^{z_j}}{\sum{e^{z_i}}}$ where $j$ is the output neuron number.</p>
<p>If I derive it then I get</p>
<p>$\frac{\partial{h_j}}{\partial{z_j}}=h_j(1-h_j)$</p>
<p>Similar to logistic regression.
However this is wrong since my numerical gradient check fails.</p>
<p>What am I doing wrong? I had a thought that I need to compute the cross derivatives as well (i.e. $\frac{\partial{h_j}}{\partial{z_k}}$) but I'm not sure how to do this and keep the dimension of the gradient the same so it will fit for the back propagation process.</p>
<p>Thanks.</p>
| 74,091 |
<p>I have 8, 1-minute audio excerpts, all that feature the same music. Four of them were recorded by a middle school music ensemble (2 expressive, 2 unexpressive) and four by a high school music ensemble (2 expressive, 2 unexpressive). </p>
<p>I am getting participants in the MS ensemble, the HS ensemble, and a set of expert evaluators to listen to all 8 excerpts and assign a single rating.</p>
<p>Because all of these excerpts feature the same 1-minute piece of music (although by two different groups and under two different conditions - expressive and unexpressive), do I need to have 3 different audio presentation orders to help control for order effects?</p>
<p>I am going to average the 2 expressive and 2 unexpressive audio excerpt ratings for each group (MS, HS, Experts). My thought was that by averaging the ratings (to get scores for each group - MS Expressive, MS Unexpressive, HS Expressive, HS Unexpressive) I wouldn't really need to have separate orders.</p>
<p>Any help about counterbalancing and/or ways to avoid fatigue effects (since it is 8, 1-minute excerpts of the same piece of music, although recorded by two different music groups under two different conditions) would be most helpful. </p>
<p>Thanks for your help!</p>
| 74,092 |
<p>We have a daily report that takes the day's p90 website latency data, compares it with the same day over the past four weeks, and tells whether the p90 latency has gone up or down for a particular day. </p>
<p>I was also planning to build a weekly report that takes the average p90 latency delta for current and previous weeks and shows the delta.</p>
<p>I was wondering what the advantages of doing that would be, and what kind of insights the weekly report would show that would not be present in a daily one.</p>
| 74,093 |
<p>Calculating recall/precision from k-fold cross-validation (or leave-one-out) can be performed either by averaging the recall/precision values obtained from the different k folds or by combining the predictions and then calculating one value for each of recall and precision. </p>
<p>My data are composed of 1000 samples belonging to two classes: only 50 positives and 950 negatives. If we apply leave-one-out using the averaged k-fold cross-validation approach, then we will notice that the precision and recall in 950 folds are not defined (NaN), because TP, FP and FN are all zero, so TP/(TP+FP) and TP/(TP+FN) are 0/0 (as only one negative sample is examined in each such fold). I think such NaNs might be replaced by 1, because it makes no sense to set the value to 0 (the current sample is negative and it was correctly predicted to be negative). Then, we average the recall/precision from the 1000 folds. The values will be different from those obtained by combining all the predictions and then calculating one precision/recall. </p>
<p>The same difference will happen when we apply k-fold cross-validation on these data instead of leave-one-out (even if we produce a balanced ratio of positives in each fold).</p>
<p>So, my question is: which approach is more accurate to calculate the precision/recall from k-fold cross-validation (combined or averaged)? Is the situation the same for leave-one-out?</p>
<p>There are two relevant discussions :
<a href="http://stats.stackexchange.com/questions/34611/average-vs-combined-k-fold-cross-validation">Average vs. combined k-fold cross validation</a>
and
<a href="http://stats.stackexchange.com/questions/66864/averaging-precision-and-recall-when-using-cross-validation">Averaging precision and recall when using cross validation</a></p>
<p>However, the first one does not have a clear answer on this point, and the second discusses the averaging effect on the F-measure (not on precision/recall). Neither discussion addresses whether leave-one-out warrants a different preference.</p>
| 74,094 |
<p><em><strong>Can anyone provide a brief overview of the most popular training algorithms for perceptrons?</em></strong></p>
<p>I am currently training my perceptron using standard stochastic gradient descent (online gradient descent) with a fixed learning rate.</p>
<p>There seem to be hundreds of different algorithms that claim to perform better, but I have found it difficult to work out which one would be most promising to implement (with complexity and time to code up/understand being the costs).</p>
<p>Also, I am using theano on python.</p>
| 74,095 |
<p>After reviewing related questions on Cross Validated and countless articles and discussions regarding the inappropriate use of <a href="http://en.wikipedia.org/wiki/Stepwise_regression" rel="nofollow">stepwise regression</a> for variable selection, I am still unable to find the answers that I am looking for in regards to building parsimonious, binary logistic regression models from datasets with 1000 (or more) potential predictor variables.</p>
<p>For some background information, I typically work with large datasets, 500k or more rows, and my interest is in building binary logistic regression models to predict whether an individual will pay (1) or not pay (0) their bill on a particular account without using stepwise logistic regression. Currently, stepwise logistic regression is hailed as the “perfect method” among other statisticians that I have worked with, and I would like to change that as I have witnessed many of its pitfalls firsthand. </p>
<p>I have recently dabbled in PCA (<code>proc varclus</code>) and random forest analyses (<code>randomForest</code>) with the latter being especially helpful; however, I am still seeking further direction on how to reduce the number of variables in my binary logistic models without using stepwise logistic regression. With that being said, any help (suggested articles or thoughts) is greatly appreciated. Thanks!</p>
| 74,096 |
<p>Apologies for the uninformative title. This is actually a fairly simple problem which I could easily do myself in matlab or perhaps stata, but the professor demands it be in SAS, which seems to have been designed for robots not humans.</p>
<p>Say I have a matrix of farm output data with rows of crops at the farm and field level. For example, farm 1 has 3 fields so there are three rows for farm 1: $F_{11}, F_{12}, F_{13}$</p>
<p>A crop column indicates the crop grown on each field of each farm.</p>
<p>$F_{11}=corn$</p>
<p>$F_{12}=beans$</p>
<p>$F_{13}=squash$</p>
<p>(The crops would actually be represented by a numerical value.)</p>
<p>I need to come up with a SAS script that separates corn farms from non corn farms. I.e., I need to create a new binary <code>cornfarm=0 or 1</code> variable. Corn farms are defined as any farm growing corn on any of its fields.</p>
<p>Corresponding pseudo-code would be something like this:</p>
<pre><code>for i=1 to farmtotal
for j=1 to farmfieldtotal
if F(i,j)=corn then cornfield(i,j)=1
next j
if sum(cornfield(i))>0 then cornfarm(i)=1
next i
</code></pre>
| 74,097 |
<p>My chi-square test for homogeneity has a 3x5 table: 3 samples by a 5-point Likert-scale item. Most cells in the table have a frequency count of less than two, so it is recommended to discard any column that contains a cell with a frequency of less than two. How will it affect my hypothesis if I discard an entire column just because one cell has a value of less than 2, or of 0?</p>
| 31,251 |
<p>Is the likelihood ratio test ($-2 \log L$) basically like the partial F-test in that you are using it for logistic regression instead of linear regression?</p>
| 703 |
<p>This question is an extended version of <a href="http://stats.stackexchange.com/questions/18294/what-does-equal-a-priori-class-probabilities-mean">this one</a>.</p>
<p>As you can see below, the two distributions are equal, and I need to compute the parameters $a, b, c, d$ and $e$. Could you show me a way to do that?</p>
<hr>
<p>Assume a two-class problem with equal a priori class probabilities and Gaussian class-conditional densities as follows:</p>
<p>$$p(x\mid w_1) = {\cal N}\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},\begin{bmatrix} a & c \\ c & b \end{bmatrix}\right)\quad\text{and}\quad p(x\mid w_2) =
{\cal N}\left(\begin{bmatrix} d \\ e \end{bmatrix},\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)$$</p>
<p>where $ab-c^2=1$.</p>
| 74,098 |
<p>There are two main probabilistic approaches to novelty detection: parametric and non-parametric. The non-parametric approach assumes that the distribution or density function is derived from the training data, like kernel density estimation (e.g., Parzen window), while parametric approach assumes that the data comes from a known distribution.</p>
<p>I am not familiar with the parametric approach. Could anyone show me some well known algorithms? By the way, can MLE be considered as a kind of parametric approach (the density curve is known, and then we seek to find the parameter corresponding to the maximum value)?</p>
| 74,099 |
<p>Here is the head of my data set (<code>tjornres</code>): </p>
<pre><code> Fish.1 Fish.2 MORPHO DIET
1 1 2 0.03768 0.1559250
2 1 3 0.05609 0.7897060
3 1 4 0.03934 0.4638010
4 1 5 0.03363 0.1200480
5 1 6 0.05629 0.4390760
6 1 8 0.08366 0.1866750
7 1 9 0.04892 0.0988235
8 1 10 0.04427 0.2637140
</code></pre>
<p><code>MORPHO</code> and <code>DIET</code> refer to the morphological and diet distances between fish 1 and fish 2. My original data set has over 2400 pairs of fish. My goal is to resample this dataset by selecting only 435 pairs.
I would like to do this 999 times and get a distribution of the correlation coefficients <code>MORPHO~DIET</code>. </p>
<p>I went on and wrote this code: </p>
<pre><code>head(tjornres)
essayres = tjornres # copy of the data
R = 999 # the number of replicates
cor.values = numeric(R) # store the data
for (i in 1:R) { # loop
+ group1 = sample(essayres, size=435, replace=F)
+ group2 = sample(essayres, size=435, replace=F)
+ cor.values[i] = cor.test(group1,group2)$cor
+ }
</code></pre>
<p>I have a syntax error in this code. </p>
<p>Also if I run one resampling, <code>sample(essayres, size=435, replace=F)</code>, I get this error </p>
<pre><code>message: Error in `[.data.frame`(x, .Internal(sample(length(x), size, replace,
:cannot take a sample larger than the population when 'replace = FALSE'.
</code></pre>
<p>Does anyone know why this code is not working? Are there any other ways to resample (without replacement) ?
Thank you for your help, </p>
| 36,758 |
<p>It seems the Bonferroni method (dividing experimentwise alpha by # of comparisons) for choosing the p level to fix the experimentwise alpha (when doing many pairwise comparisons) is more conservative than just solving $1 - (1 - p)^k = .05$ to get the alpha to use for each of the $k$ pairwise comparisons. Why not just solve the equation?</p>
| 44,480 |
<p>I came across an interesting problem today. You are given a coin and $x$ money; on each toss you double your money if you get heads and lose half of it if you get tails.</p>
<ol>
<li>What is the expected value of your money in n tries</li>
<li>What is the probability of getting more than expected value in (1)</li>
</ol>
<p>This is how I approached it. The probability of heads and tails is the same (1/2). The expected value after the first toss is $1/2(2*x) + 1/2(1/2*x) = 5x/4$, so the expected value is $5x/4$ after the first toss. Applying the same step to $5x/4$, the expected value after the second toss is $1/2(2*5x/4) + 1/2(1/2*5x/4) = 25x/16$.</p>
<p>So you get a sequence of expected values: $5x/4$, $25x/16$, $125x/64$, ...</p>
<p>After $n$ tries, your expected value should be $(5^n/4^n) * x$.</p>
<p>If $n$ is large enough, your expected value should approach the mean of the distribution. So the probability that the value is greater than the expected value should be $0.5$. I am not sure about this one.</p>
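<p>Here is a quick simulation sketch I used to check my reasoning (assuming a fair coin and starting money $x = 1$; the number of tosses and simulations are arbitrary):</p>
<pre><code>set.seed(1)
n <- 10; x <- 1; nsim <- 1e5
# each toss multiplies the money by 2 (heads) or 1/2 (tails)
final <- replicate(nsim, x * prod(sample(c(2, 0.5), n, replace = TRUE)))
mean(final)                  # should be close to (5/4)^n * x
(5/4)^n * x
mean(final > (5/4)^n * x)    # estimated probability of ending above the expected value
</code></pre>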
| 74,100 |
<p>I've performed a three-way repeated measures ANOVA; what post-hoc analyses are valid? </p>
<p>This is a fully balanced design (2x2x2) with one of the factors having a within-subjects repeated measure. I'm aware of multivariate approaches to repeated measures ANOVA in R, but my first instinct is to proceed with a simple aov() style of ANOVA:</p>
<pre><code>aov.repeated <- aov(DV ~ IV1 * IV2 * Time + Error(Subject/Time), data=data)
</code></pre>
<p>DV = response variable</p>
<p>IV1 = independent variable 1 (2 levels, A or B)</p>
<p>IV2 = independent variable 2 (2 levels, Yes or No)</p>
<p>IV3 = Time (2 levels, Before or After)</p>
<p>Subject = Subject ID (40 total subjects, 20 for each level of IV1: nA = 20, nB = 20)</p>
<pre><code>summary(aov.repeated)
Error: Subject
Df Sum Sq Mean Sq F value Pr(>F)
IV1 1 5969 5968.5 4.1302 0.049553 *
IV2 1 3445 3445.3 2.3842 0.131318
IV1:IV2 1 11400 11400.3 7.8890 0.007987 **
Residuals 36 52023 1445.1
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Error: Subject:Time
Df Sum Sq Mean Sq F value Pr(>F)
Time 1 149 148.5 0.1489 0.701906
IV1:Time 1 865 864.6 0.8666 0.358103
IV2:Time 1 10013 10012.8 10.0357 0.003125 **
IV1:IV2:Time 1 852 851.5 0.8535 0.361728
Residuals 36 35918 997.7
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code></pre>
<p>Alternatively, I was thinking about using the nlme package for a lme style ANOVA:</p>
<pre><code>aov.repeated2 <- lme(DV ~ IV1 * IV2 * Time, random = ~1|Subject/Time, data=data)
summary(aov.repeated2)
Fixed effects: DV ~ IV1 * IV2 * Time
Value Std.Error DF t-value p-value
(Intercept) 99.2 11.05173 36 8.975972 0.0000
IV1 19.7 15.62950 36 1.260437 0.2156
IV2 65.9 15.62950 36 4.216385 0.0002 ***
Time 38.2 14.12603 36 2.704228 0.0104 *
IV1:IV2 -60.8 22.10346 36 -2.750701 0.0092 **
IV1:Time -26.2 19.97722 36 -1.311494 0.1980
IV2:Time -57.8 19.97722 36 -2.893295 0.0064 **
IV1:IV2:Time 26.1 28.25206 36 0.923826 0.3617
</code></pre>
<p>My first instinct is to follow up the significant 2-way interactions post hoc with Tukey contrasts using glht() from the multcomp package:</p>
<pre><code>data$IV1IV2int <- interaction(data$IV1, data$IV2)
data$IV2Timeint <- interaction(data$IV2, data$Time)
aov.IV1IV2int <- lme(DV ~ IV1IV2int, random = ~1|Subject/Time, data=data)
aov.IV2Timeint <- lme(DV ~ IV2Timeint, random = ~1|Subject/Time, data=data)
IV1IV2int.posthoc <- summary(glht(aov.IV1IV2int, linfct = mcp(IV1IV2int = "Tukey")))
IV2Timeint.posthoc <- summary(glht(aov.IV2Timeint, linfct = mcp(IV2Timeint = "Tukey")))
IV1IV2int.posthoc
#A.Yes - B.Yes == 0 0.94684
#B.No - B.Yes == 0 0.01095 *
#A.No - B.Yes == 0 0.98587 I don't care about this
#B.No - A.Yes == 0 0.05574 . I don't care about this
#A.No - A.Yes == 0 0.80785
#A.No - B.No == 0 0.00346 **
IV2Timeint.posthoc
#No.After - Yes.After == 0 0.0142 *
#Yes.Before - Yes.After == 0 0.0558 .
#No.Before - Yes.After == 0 0.5358 I don't care about this
#Yes.Before - No.After == 0 0.8144 I don't care about this
#No.Before - No.After == 0 0.1941
#No.Before - Yes.Before == 0 0.8616
</code></pre>
<p>The main problem I see with these post-hoc analyses is that they include some comparisons that aren't useful for my hypotheses.</p>
<p>Any suggestions for an appropriate post-hoc analysis are greatly appreciated, thanks.</p>
<p><strong>Edit:</strong> <a href="http://stats.stackexchange.com/questions/5250/multiple-comparisons-on-a-mixed-effects-model">Relevant question and answer that points toward testing manual contrast matrices</a></p>
| 36,759 |
<p>I am trying to use the dirmult package in R to find the parameters of the Dirichlet-multinomial distribution. However, when I run the code, it appears to be minimizing the likelihood rather than maximizing it. Here is a sample trace:</p>
<pre><code>> foo <- dirmult(data)
Iteration 1: Log-likelihood value: -108256.762432151
Iteration 2: Log-likelihood value: -112024.171181739
Iteration 3: Log-likelihood value: -114816.893733128
Iteration 4: Log-likelihood value: -117178.454519049
Iteration 5: Log-likelihood value: -119081.542746927
Iteration 6: Log-likelihood value: -120547.115475672
Iteration 7: Log-likelihood value: -121631.658731926
Iteration 8: Log-likelihood value: -122408.210532819
Iteration 9: Log-likelihood value: -122950.170423675
Iteration 10: Log-likelihood value: -123321.325905601
Iteration 11: Log-likelihood value: -123572.241418431
Iteration 12: Log-likelihood value: -123740.558811607
Iteration 13: Log-likelihood value: -123853.029548595
Iteration 14: Log-likelihood value: -123928.062303726
Iteration 15: Log-likelihood value: -123978.091582544
Iteration 16: Log-likelihood value: -124011.444469511
Iteration 17: Log-likelihood value: -124033.679213468
Iteration 18: Log-likelihood value: -124048.502105592
Iteration 19: Log-likelihood value: -124058.383928014
Iteration 20: Log-likelihood value: -124064.971772997
</code></pre>
<p>Typically, this does not happen. On the dirmult example data set:</p>
<pre><code>> fit <- dirmult(us[[1]],epsilon=10^(-12))
Iteration 1: Log-likelihood value: -3291.68283455695
Iteration 2: Log-likelihood value: -3282.90699227135
Iteration 3: Log-likelihood value: -3277.28960275919
Iteration 4: Log-likelihood value: -3274.32118891593
Iteration 5: Log-likelihood value: -3273.14865155825
Iteration 6: Log-likelihood value: -3272.87180418868
Iteration 7: Log-likelihood value: -3272.84788154144
Iteration 8: Log-likelihood value: -3272.84761179787
Iteration 9: Log-likelihood value: -3272.84761175372
Iteration 10: Log-likelihood value: -3272.84761175372
Iteration 11: Log-likelihood value: -3272.84761175372
</code></pre>
<p>Why is this happening? Is there a way to fix it?</p>
| 74,101 |
<p>From Casella and Berger's Statistical Inference,</p>
<blockquote>
<p>Theorem 7.5.1 (Lehmann–Scheffé) Unbiased estimators based on complete
sufficient statistics are unique.</p>
</blockquote>
<p>I wonder whether the condition is unnecessarily strong, i.e. is it true that </p>
<blockquote>
<p>unbiased estimators based on complete statistics are unique.</p>
</blockquote>
<p>simply by the definition of complete statistics? (note: unique in the sense of a.s.)</p>
<p>Thanks.</p>
| 74,102 |
<p>When carrying out OLS multiple linear regression, rather than plot the residuals against fitted values, I plot the (internal) Studentized residuals against fitted values (ditto for covariates). These residuals are defined as:</p>
<p>\begin{equation}
e^*_i = \frac{e_i}{\sqrt{s^2 (1-h_{ii})}}
\end{equation}</p>
<p>where $e_i$ is the residual and $h_{ii}$ are the diagonal elements of the hat matrix. To get these studentized residuals in R, you can use the <code>rstandard</code> command. </p>
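<p>For concreteness, the common variants are all one-liners on a fitted <code>lm</code> object (a small illustrative sketch using a built-in data set):</p>
<pre><code>fit <- lm(dist ~ speed, data = cars)
raw_res  <- residuals(fit)   # ordinary residuals e_i
int_stud <- rstandard(fit)   # internally studentized (standardized) residuals
ext_stud <- rstudent(fit)    # externally studentized (jackknife) residuals
plot(fitted(fit), int_stud)
</code></pre>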
<p>What type of residuals do people routinely use in this context? For example, do you just stick with $e_i$, or do you use jackknife residuals, or something else entirely?</p>
<p>Note: I'm not that interested in papers that define a new type of residual that no-one ever uses.</p>
| 48,444 |
<p>I have a small dataset of count data that has a number of zero measurements. My overdispersion is fairly large. Due to this, I was drawn to using the glmmadmb package.</p>
<p>I understand how to compare the fit of the distributions (poisson, nbinom etc) using AICtab.</p>
<p>What I am struggling with is how to assess whether the model with the 'best' fit of distribution actually fits my data. On 'normal' glm's, this is where you would use</p>
<pre><code>qqnorm(resid(model))
qqline(resid(model))
</code></pre>
<p>I am aware, however, that the assumptions required to use this are not met for negative binomial models and the like.</p>
<p>Therefore, I am wondering if someone would point me in the right direction of finding an answer.... how would you validate your model choice in GLMMADMB?</p>
<p>Many thanks in advance for your help, I really appreciate your time and efforts.</p>
<p><strong>EDIT</strong>: 8th May 2014</p>
<p>My data set is a very simple one. I am testing a behavioural response to an audio playback stimulus. I have 10 animals that are played a control playback, and the same 10 animals are played a treatment playback the following day (alternated to balance treatment order effects). Therefore, I have only one fixed effect (Audio Treatment) and one random effect (Individual Name). I feel that I am comfortable with <code>lmer</code> functions and have used <code>lme4</code> on several, much larger models, of both count and binomial data. I am a little unsure of my understanding of <code>glmmadmb</code> models. What I see as the issue with this small model is that 5/10 of the measured responses to the control playback are 0 (with the other 5/10 being small numbers, and 10/10 of the responses to the treatment playback being large numbers).</p>
<p>My dataset looks as follows:</p>
<pre><code>Individual Audio_Treatment Behaviour
A TREATMENT 60
B TREATMENT 29
C TREATMENT 32
D TREATMENT 40
E TREATMENT 3
F TREATMENT 25
G TREATMENT 21
H TREATMENT 50
I TREATMENT 11
J TREATMENT 43
A CONTROL 4
B CONTROL 0
C CONTROL 0
D CONTROL 0
E CONTROL 19
F CONTROL 10
G CONTROL 4
H CONTROL 0
I CONTROL 0
J CONTROL 3
</code></pre>
<p>And I have fitted the following model:</p>
<pre><code>modelNBIT<-glmmadmb(BEHAVIOUR~AUDIO_TREATMENT+(1|INDIVIDUAL),
family="nbinom1", zeroInflation = TRUE)
</code></pre>
<p>I had initially thought that truncated models (or hurdle models) might be more suited to this, however the truncated glmmadmb model returns the following error (perhaps due to overfitting):</p>
<pre><code>Parameters were estimated, but standard errors were not:
the most likely problem is that the curvature at MLE was zero or negative
</code></pre>
<p>Going back to my original question – I was wondering if someone would be able to suggest a way to validate the model (in this precise case, modelNBIT)?</p>
<p>Many thanks again for your time and efforts.</p>
| 74,103 |
<p>I have a set of <strong>Observation Symbol Sequences</strong> which I have to <strong>test</strong> against a set of Trained HMM classifiers. I seem to understand the <a href="http://en.wikipedia.org/wiki/Log_probability" rel="nofollow">advantages</a> of using Log Probability over regular probabilities.</p>
<p>In the <strong>testing phase</strong> of an HMM classifier, I don't seem to get the <strong>motivation behind multiplying probabilities or adding log probabilities</strong> in determining the class of the observed test sequence.</p>
<p>Why do we have to multiply or add probabilities? Can't we just determine the probability or log probability of a <strong>Single Observation Symbol Sequence</strong> using the Forward algorithm?</p>
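<p>One practical point I have picked up (shown in a small R sketch of my own, with made-up per-symbol probabilities) is that multiplying many small probabilities underflows in floating point, whereas summing their logs stays well behaved, but I would still like to understand the underlying motivation:</p>
<pre><code>set.seed(1)
p <- runif(1000, 1e-4, 1e-2)   # made-up per-symbol probabilities along a long sequence
prod(p)                        # underflows to 0 in double precision
sum(log(p))                    # the same information on the log scale, numerically stable
</code></pre>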
| 35,490 |
<p>Good morning</p>
<p>I searched regarding change in variance over time, but everything I saw was about relatively long time series. I have a series of 5 time points, equally spaced, but with different people missing at each time point. There are about 15 variables I am interested in, all medical things like levels of glucose and blood pressure and so on.</p>
<p>For modeling, I am using a multilevel model; but plotting the data shows that for some variables, the variance declines. Medically, this is a good thing. Several of the variables are bad if they are too low or too high. But how can I test this change in variance? And might this lead to a way of distinguishing the amount of change I am seeing from regression to the mean?</p>
| 74,104 |
<p>I have data for an application's installs per day and their ranking in the store for the following day. Each user/installation can be further broken down into two categories - those that installed it because of an ad, and those that installed it on their own (e.g., by word of mouth).</p>
<p>From this data, I want to know if increasing the number of ads would increase installations and thus ranking. I know rank is a terrible statistic to tie it to (since it's not an independent variable; e.g., it's tied to the performance of everyone else's applications in the store), but for the sake of comparison, I have already run a linear regression for the days where the number of ad-driven installs made up for <5% of all installs on a given day.</p>
<p>Given this, which statistical test should I run next to filter out the non-ad-driven installs' effect on the ranking? If I used the wrong metrics (e.g., I should be measuring change in rank against change in installs per day), please let me know.</p>
<p>(And before anyone asks, this is not homework. And on top of that, I know which tests to run after this step.)</p>
| 48,447 |
<p>I'm trying to use a multiple regression (in Excel) to determine the effectiveness of foreign aid on GDP growth. I've forgotten everything since college...</p>
<p>My plan is to run a regression using established determinants as a 'baseline'. Then run another adding in my independent variable of 'aid expenditures', looking at p-value and coefficient of 'aid expenditures'. Am I anywhere close?</p>
| 74,105 |
<p>I can understand what a handwritten digit is, but what is meant by "binary image"? Can someone explain it?</p>
<p>I am studying machine learning concepts. In doing so, I've come across many types of inputs. For example: binary, continuous, and discrete. The inputs are provided through MNIST handwritten image datasets. In that, I have read about datasets consisting of binary images.</p>
<p>I would also like to know about the difference between greyscale and binary images.</p>
| 29,168 |
<p>I'm trying to improve accuracy in a Naive Bayes classifier that uses a bunch of features. I have a hunch that removing some features may actually improve performance. My reasoning is that, for a particular feature, the estimated PDFs across the classes may differ slightly because of the limited amount of learning data, not because they really are different. For example, suppose a feature across 2 classes had the same histogram except for some noise:</p>
<pre><code>| |
||| |||
|||||||_ noise _|||||||| p(x1 | C1), or H1(x)
| |
||| |||
||||||||________|||||||| p(x1 | C2), or H2(x)
</code></pre>
<p>I'm already aware that when modelling p(x1 | C1) I could have chosen a larger bin-width to smooth out the noise, but let's say this was an "optimal" bandwidth in the example above. I want to identify these kinds of cases and remove the feature.</p>
<p>What I looked at was the Kullback–Leibler divergence. But I can't see how to compute it, because $KL(H_2 \| H_1) = \sum_i H_2(i) \log\left( \frac{H_2(i)}{H_1(i)}\right)$ is not defined for the example histograms above: $H_1$ has zero bins where $H_2(x)$ is not zero, and vice versa for $KL(H_1 \| H_2)$.</p>
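<p>A tiny sketch of how the problem shows up numerically, using two made-up histograms (the pseudo-count workaround at the end is just an assumption on my part, not something I am sure is appropriate here):</p>
<pre><code>H1 <- c(0.0, 0.2, 0.3, 0.3, 0.2, 0.0)    # toy histogram for class 1
H2 <- c(0.1, 0.2, 0.25, 0.25, 0.1, 0.1)  # toy histogram for class 2
kl <- function(p, q) sum(p * log(p / q))
kl(H2, H1)    # Inf: H1 has zero bins where H2 is positive

eps <- 1e-6   # add a small pseudo-count to every bin and renormalize
kl_smoothed <- function(p, q) {
  p <- (p + eps) / sum(p + eps)
  q <- (q + eps) / sum(q + eps)
  sum(p * log(p / q))
}
kl_smoothed(H2, H1)
</code></pre>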
<p>Any suggestions, thoughts?
Thanks.</p>
| 74,106 |
<p>I have a data set which looks something like this (not real data):</p>
<pre><code>conc Resp
0 5
0.1 18
0.2 20
0.3 23
0.4 24
0.5 24.5
0 5
0.1 17
.. ..
</code></pre>
<p>which happens to fit perfectly to the Michaelis-Menten equation:</p>
<blockquote>
<p>Resp = max_value * conc / (conc_value_at_half_max + conc)</p>
</blockquote>
<p>Even though it is something else entirely, the important point is that the response increases quickly with "conc" and then reaches a ceiling or max value of sorts. </p>
<p>Anyway, I want to know how low I can go in "conc" before the value of "Resp" is not significantly lower than the max value. </p>
<p>Using a simple ANOVA accomplishes this nicely, but I was thinking: "should I not be exploiting the fact that the structure of the data is so nicely explained by a known equation?" Is there such a way?</p>
<p>I am using Minitab for this because it is easier, but I work in <code>R</code> all the time.</p>
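<p>To show what I mean by exploiting the functional form, here is a minimal sketch of a nonlinear fit in <code>R</code> (assuming the data are in a data frame <code>dat</code> with columns <code>conc</code> and <code>Resp</code>; the parameter names and starting values are my own guesses):</p>
<pre><code># fit the saturating curve by nonlinear least squares
fit <- nls(Resp ~ Vmax * conc / (K + conc), data = dat,
           start = list(Vmax = 25, K = 0.05))
summary(fit)    # estimates and standard errors for Vmax and K

# predicted response at a low concentration, for comparison with Vmax
predict(fit, newdata = data.frame(conc = 0.1))
</code></pre>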
| 74,107 |
<p>Suppose that, using a specific sampling method, I have generated a sample, but I now want the normalising factor so that I can calculate the probabilities. Can I use the sum of the values as the normalising factor? Could you please also give a source for that?</p>
| 74,108 |
<p>The question emerged while reading Ch. 3 of <a href="http://www.gaussianprocess.org/gpml/" rel="nofollow">Rasmussen & Williams</a>. At the end of this chapter, the authors give results for the problem of handwritten digit classification (16x16 greyscale pictures); the features are 256 pixel intensities + bias. I was surprised that in such a high-dimensional problem, 'metric' methods, like Gaussian processes with a squared exponential kernel, or an SVM with the same kernel, behave quite nicely without any preceding dimension reduction.</p>
<p>Also, I have sometimes heard that SVMs are good for [essentially bag-of-words] text classification. Why don't these methods suffer from the curse of dimensionality?</p>
| 74,109 |
<p>I am new to R and would like your help with lme formula for partially crossed random effect in a random-intercept, random-slope model. In the longitudinal data I have, each subject (barring some dropouts) was tested at 5 different occasions. The standardized tests were administered by 3 different examiners, with 2 of them present at all occasions, and the 3rd one administering tests only on the last two occasions. The subjects were randomly assigned to the examiners.</p>
<p>Mock data: </p>
<pre><code>data <- data.frame(subject=rep("A",5), time=1:5, examiner=c(2,1,2,2,3),
covariate=c(1,1.3,0.8,1,0.6), score=c(46,56,60,68,70))
</code></pre>
<p>I tested the following models:</p>
<pre><code>>model1 <- lme(score~time*covariate, random=~time|subject, method="REML",
na.action=na.omit, data=dat)
>model2 <- lme(score~time*covariate, random=list(examiner=~1,subject=~time),
method="REML", na.action=na.omit, data=dat)
>anova(model1, model2) #gives p<0.05 with better model fit for model2.
</code></pre>
<p>I would like to know if <code>model2</code> is the correct way to specify the partially
crossed random effect in the data I described.</p>
| 74,110 |
<p>I am new to machine learning and can't get my head around this problem. I have two patient datasets, the first ($D_1$) contains $Y,Z,X$ that convey blood-sample information and the second ($D_2$) contains $W,T,X$ that convey x-ray information. In both datasets, $X$ is the common diagnostic output. </p>
<p>Since there are two distinct datasets, I modelled a solution with two Naive Bayes models as below, where $X$ is the common dependent variable. ( I am aware I can use other techniques than Naive Bayes but that is not the aim of my question.)</p>
<ul>
<li>$P(X|Y,Z,D_1) \propto P(Y|X,D_1) \cdot P(Z|X,D_1) \cdot P(X,D_1)$ </li>
<li>$P(X|W,T,D_2) \propto P(W|X,D_2) \cdot P(T|X,D_2) \cdot P(X,D_2)$ </li>
</ul>
<p>I want to combine the posterior probabilities of X in these models: $P(X|Y,Z)$ and $P(X|W,T)$ to determine an overall outcome probability $P(X|Y,Z,W,T)$ (diagnostic outcome) of a new patient. </p>
<p><strong>How can I combine these probabilities?</strong> </p>
<p>Can it be done as below?</p>
<p>$P(X|Y,Z,W,T) \propto \frac{P(X|Y,Z) * P(X|W,T)}{P(X,D_{combined})}$ </p>
<p>If so, what is the relationship between the priors $P(X,D_1)$, $P(X,D_2)$ and $P(X,D_{combined})$?</p>
| 74,111 |
<p>I'm trying to figure out the practical meaning of fewer degrees of freedom for t-test results. For example: if I have two dependent samples and would like to compare their means using a t-test, and I use both the dependent-samples and independent-samples tests, I will have more degrees of freedom in the dependent test (i.e. with unknown mean and variance the independent-samples test will have <em>n-2</em> df, and the dependent one <em>n-1</em>).</p>
<p>Assuming that my basic knowledge is valid and the description above is correct, what will be the impact of using each test with fewer or more <em>df</em> on the results (p-values, probability of a false positive/negative)?</p>
<p>Does any of this make sense?</p>
| 74,112 |
<p>I have a dataset of fraudulent orders from some business. Each order has a bunch of features such as order_amount, address, state, city, phone_number, and name. Obviously a criminal would not be using his/her real name when making a fraudulent order. So I was wondering if there was any sort of machine learning strategy to identify fake names. I assume there must be some sort of underlying structure to how fake names are selected - so understanding this structure could allow me to identify them. Unless the fake names are completely randomly selected. Any thoughts on how to do this?</p>
| 74,113 |
<p>I have a set of 1000 datapoints of measured concentrations that may include up to 300 values which are censored (below the detection limit that the lab could reliably measure). The range of detection limit values varies, such as <2, <3, <7 etc. My data is neither normal nor log-normal, and I've used non-parametric tests to analyze it so far. </p>
<p>The information in the environmental-research literature about the use of kaplan-meier, or ROS estimators for real left-censored data primarily only compares overall statistics (i.e. mean, median, std. dev) between these different estimator methods.</p>
<p>I would like to use KM to generate <strong>individual values</strong> for those of my results which are below the detection limit. To date I've relied on whatever stats software may be available to me, but I cannot find this option presently. </p>
<p><em>Edit:</em> What are the steps to generate values for the left-censored data (using KM)?</p>
<p>Are there programs/software that I could apply to my dataset and then impute values based on KM? Ultimately I am interested in using these generated values for further multivariate analysis of my dataset, and thus need new individual values (as opposed to overall mean, median etc).</p>
<p>Any comments would be helpful. Thank you.</p>
| 33,569 |
<p>I am doing a leave-one-subject-out for a mixed-effects model for a longitudinal data analysis, in which the model is fitted to all subjects minus one at a time, and the left-out subject becomes validation data. </p>
<p>The original model is formulated as:</p>
<pre><code>model<-lme(score~time*covariate,random=~time|subject, data=fulldata,...)
</code></pre>
<p>Then I create a loop that fits training models to the dataset leaving one subject out at a time, and uses the time and covariate values of the left-out subject to collect predicted scores for that subject. </p>
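<p>For concreteness, this is an outline of the loop as I have set it up (variable names follow the model above; not a polished implementation):</p>
<pre><code>library(nlme)
subjects <- unique(fulldata$subject)
pred <- vector("list", length(subjects))
for (s in seq_along(subjects)) {
  train <- subset(fulldata, subject != subjects[s])
  test  <- subset(fulldata, subject == subjects[s])
  fit   <- lme(score ~ time * covariate, random = ~ time | subject,
               data = train, na.action = na.omit)
  pred[[s]] <- predict(fit, newdata = test, level = 0)  # population-level prediction
}
</code></pre>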
<p>The issue is that predict(newmodel, left-out-subject-predictors, level = 0:1) gives output only for level 0, i.e. a population-level prediction, not a subject-level prediction.<br>
The question has been partially answered here: </p>
<p><a href="http://stats.stackexchange.com/questions/18971/cross-validation-for-mixed-models">Cross Validation for mixed models?</a></p>
<p>wherein it is recommended to assume the random effect is 0 to get a level-1 prediction.
How can I specify the out-of-sample random effect as zero when using the predict function for the left-out subject?</p>
| 74,114 |
<p>I have built and refined a regression model using the <code>ordinal</code> package in <code>R</code>. The measure is $0>1>2>3>4>5$ (Yes/No questions) and is repeated every 10 minutes for an hour (episode) within the same person, twice a week for up to 6 weeks, but average 3 weeks. I have 6000 such observations on 1000 hours for 150 people.</p>
<p>When I built a univariate model using number of questions $(0-5)$ answered as the dependent variable and say, Gender as the independent variable, with random intercepts for person and episode I interpreted the coefficients as "Difference in the proportional log-odds of one extra question being answered on average over all timepoints".</p>
<p>My question is: Is that valid, or have I made unsupportable assumptions due to the nature of the mixed effects model for which I am not accounting.</p>
<p>Incidentally I can completely recommend the "ordinal" package, although it is in development, and has minor difficulties with convergence for some unstandardised parameters.</p>
| 74,115 |
<p>The univariate exponential Hawkes process is a self-exciting point process with an event arrival rate of:</p>
<p>$ \lambda(t) = \mu + \sum\limits_{t_i<t}{\alpha e^{-\beta(t-t_i)}}$</p>
<p>where $ t_1,..t_n $ are the event arrival times.</p>
<p>The log likelihood function is</p>
<p>$ - t_n \mu + \frac{\alpha}{\beta} \sum{( e^{-\beta(t_n-t_i)}-1 )} + \sum\limits_{i<j}{\ln(\mu+\alpha e^{-\beta(t_j-t_i)})} $</p>
<p>which can be calculated recursively:</p>
<p>$ - t_n \mu + \frac{\alpha}{\beta} \sum{( e^{-\beta(t_n-t_i)}-1 )} + \sum{\ln(\mu+\alpha R(i))} $</p>
<p>$ R(i) = e^{-\beta(t_i-t_{i-1})} (1+R(i-1)) $ </p>
<p>$ R(1) = 0 $ </p>
<p>What numerical methods can I use to find the MLE? What is the simplest practical method to implement?</p>
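<p>In case it helps frame the question, here is a rough R sketch of the recursive log-likelihood plugged into a general-purpose optimizer (parameters are optimized on the log scale to keep them positive; <code>event_times</code> is assumed to be the vector $t_1,\dots,t_n$, and the starting values are arbitrary):</p>
<pre><code># negative log-likelihood of the univariate exponential Hawkes process
hawkes_nll <- function(logpar, t) {
  mu <- exp(logpar[1]); alpha <- exp(logpar[2]); beta <- exp(logpar[3])
  n <- length(t)
  R <- numeric(n)                                   # R(1) = 0
  if (n > 1)
    for (i in 2:n) R[i] <- exp(-beta * (t[i] - t[i - 1])) * (1 + R[i - 1])
  ll <- -mu * t[n] +
        (alpha / beta) * sum(exp(-beta * (t[n] - t)) - 1) +
        sum(log(mu + alpha * R))
  -ll
}

fit <- optim(log(c(0.5, 0.5, 1)), hawkes_nll, t = event_times, method = "BFGS")
exp(fit$par)                                        # estimates of mu, alpha, beta
</code></pre>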
| 36,777 |
<p>(To see why I wrote this, check the comments below my answer to <a href="http://stats.stackexchange.com/questions/24588/quantiles-vs-highests-posterior-density-intervals">this question</a>.)</p>
<h2>Type III errors and statistical decision theory</h2>
<p>Giving the right answer to the wrong question is sometimes called a Type III error. Statistical decision theory is a formalization of decision-making under uncertainty; it provides a conceptual framework that can help one avoid type III errors. The key element of the framework is called the <em>loss function</em>. It takes two arguments: the first is (the relevant subset of) the true state of the world (e.g., in parameter estimation problems, the true parameter value $\theta$); the second is an element in the set of possible actions (e.g., in parameter estimation problems, the estimate $\hat{\theta})$. The output models the loss associated with every possible action with respect to every possible true state of the world. For example, in parameter estimation problems, some well known loss functions are:</p>
<ul>
<li>the absolute error loss $L(\theta, \hat{\theta}) = |\theta - \hat{\theta}|$</li>
<li>the squared error loss $L(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2$</li>
<li><a href="http://en.wikipedia.org/wiki/Hal_Varian">Hal Varian</a>'s LINEX loss $L(\theta, \hat{\theta}; k) = \exp(k(\theta - \hat{\theta})) - k(\theta - \hat{\theta}) - 1,\text{ } k \ne0$</li>
</ul>
<h2>Examining the answer to find the question</h2>
<p>There's a case one might attempt to make that type III errors can be avoided by focusing on formulating a correct loss function and proceeding through the rest of the decision-theoretic approach (not detailed here). That's not my brief – after all, statisticians are well equipped with many techniques and methods that work well even though they are not derived from such an approach. But the end result, it seems to me, is that the vast majority of statisticians don't know and don't care about statistical decision theory, and I think they're missing out. To those statisticians, I would argue that the reason they might find statistical decision theory valuable in terms of avoiding Type III error is that it provides a framework in which to ask of any proposed data analysis procedure: <em>what loss function (if any) does the procedure cope with optimally?</em> That is, in what decision-making situation, exactly, does it provide the best answer?</p>
<h2>Posterior expected loss</h2>
<p>From a Bayesian perspective, the loss function is all we need. We can pretty much skip the rest of decision theory -- almost by definition, the best thing to do is to minimize posterior expected loss, that is, find the action $a$ that minimizes $\tilde{L}(a) = \int_{\Theta}L(\theta, a)p(\theta|D)d\theta$.</p>
<p>(And as for non-Bayesian perspectives? Well, it is a theorem of frequentist decision theory -- specifically, Wald's <a href="http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177730345">Complete Class Theorem</a> -- that the <a href="http://en.wikipedia.org/wiki/Admissible_decision_rule">optimal action</a> will always be to <a href="http://en.wikipedia.org/wiki/Bayes_estimator">minimize Bayesian posterior expected loss</a> with respect to <em>some</em> (possibly improper) prior. The difficulty with this result is that it is an existence theorem that gives no guidance as to which prior to use. But it fruitfully restricts the class of procedures that we can "invert" to figure out exactly which question it is that we're answering. In particular, the first step in inverting any non-Bayesian procedure is to figure out which (if any) Bayesian procedure it replicates or approximates.)</p>
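<p>(For concreteness, here is a small numerical sketch of "minimize posterior expected loss", using simulated draws as a stand-in for a posterior sample: the minimizer over a grid of actions lands near the posterior mean under squared error loss and near the posterior median under absolute error loss. The posterior and grid here are arbitrary choices for illustration.)</p>
<pre><code>set.seed(1)
theta <- rgamma(1e4, shape = 2, rate = 1)   # draws standing in for a posterior sample

expected_loss <- function(a, loss) mean(loss(theta, a))
grid <- seq(0, 6, by = 0.01)                # candidate actions (point estimates)

# squared error loss: minimizer should be close to mean(theta)
grid[which.min(sapply(grid, expected_loss, loss = function(t, a) (t - a)^2))]
# absolute error loss: minimizer should be close to median(theta)
grid[which.min(sapply(grid, expected_loss, loss = function(t, a) abs(t - a)))]
c(mean = mean(theta), median = median(theta))
</code></pre>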
<h2>Hey Cyan, you know this is a Q&A site, right?</h2>
<p>Which brings me – finally – to a statistical question. In Bayesian statistics, when providing interval estimates for univariate parameters, two common credible interval procedures are the quantile-based credible interval and the highest posterior density credible interval. What are the loss functions behind these procedures?</p>
| 74,116 |
<p>In "R - project" I am trying to estimate the panel data <code>lm</code> model with <code>plm</code> function. When I include 3 dummy variables into the regression it doesn't appear in the summary of the model, but when I estimate a simple <code>lm</code> model it appears.</p>
<p>Why is it so? What should I do to estimate the statistics for those dummy variables?</p>
| 36,779 |
<p>Here is my situation. I am weighing a packet of material that has 10 individual units in it. At the end of the day I would like to know the average weight and variance of the individual units, but the problem is that I cannot weigh each unit individually, since I would have to destroy the packet to get to the individual units. So in lieu of this, I am trying to make an inference about the individual units from what I know about the packets. I weighed 10 packets (hence I have 100 individual units). I was able to figure out the average weight of the units but am having trouble with the variance. Here is what I have done so far:</p>
<p>$$
\begin{split}
\bar{y}&=\frac{1}{10}\sum^{10}_{i=1}y_i\\
&=\frac{1}{10}\sum^{10}_{i=1} (x_{i,1}+x_{i,2}+...+x_{i,10})~~[since~y_i=x_{i,1}+x_{i,2}+...+x_{i,10}]\\
&=\frac{1}{10}\sum^{100}_{j=1}x_j\\
&=\frac{1}{10}(100~\bar{x})=10~\bar{x}
\end{split}
$$</p>
<p>thus we have the average of $x$, $\bar{x}=\frac{\bar{y}}{10}.$ But now my challenge is: how do I find the variance of $x$ given the variance of $y$? Any suggestions? Thanks!</p>
<p>::::UPDATE::::</p>
<p>After some thought I came up with this reasoning:
$$
\begin{split}
\frac{1}{10}var(y)&=var(\bar{y})\\
&=var(10~\bar{x})\\
&=100~var(\bar{x})\\
&=100~\frac{1}{100}var(x)~~[assuming~that~all~x~are~i.i.d.]\\
&=var(x)
\end{split}
$$</p>
<p>thus we have $var(x)=\frac{1}{10}var(y).$ Am I correct that, if we assume all the individual units share the same common variance and are independent of each other, this result holds?</p>
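<p>A quick simulation sketch (with made-up unit weights) seems consistent with this, if I have set it up correctly:</p>
<pre><code>set.seed(1)
nsim <- 1e5
unit_mean <- 50; unit_sd <- 3                              # made-up unit-level parameters
y <- replicate(nsim, sum(rnorm(10, unit_mean, unit_sd)))   # simulated packet weights
var(y) / 10     # should be close to the unit-level variance ...
unit_sd^2       # ... which is 9 here
</code></pre>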
| 74,117 |
<p>It seems to me that normalized ERR (<a href="http://don-metzler.net/papers/metzler-cikm09.pdf" rel="nofollow">Expected Reciprocal Ranking</a>) scores (ERR scores of your ranking algorithm divided by ERR score calculated for the ground truth ranking) are more useful than the unscaled ERR scores, but I have not seen normalized scores being reported in the literature. Is there a good reason that the ERR scores are reported in raw rather than normalized format?</p>
| 74,118 |
<p>The scenario is like this:</p>
<p>I have a cohort with 2000 people, half of them taking DRUG, the other half not taking it. I would like to check interactions between DRUG and the other variables in the model:</p>
<ul>
<li><p><strong>Method 1:</strong></p>
<p>First, I fit an original model: <code>y1=a1*AGE+b1*BMI+c1*DRUG</code> [<code>DRUG</code> is binary: yes=1, no=0]; I got likelihood 1.</p>
<p>If I want to test the interaction of <code>AGE</code>, <code>BMI</code> and <code>DRUG</code>, I need another model: <code>y2=a2*AGE+b2*BMI+c2*DRUG+d*(DRUG*AGE)+e*(DRUG*BMI)</code>; I got a likelihood 2;</p>
<p>Then I compare the likelihoods of these two models using a chi-square test (df=2), and see whether the difference (likelihood 2 minus likelihood 1) is significant. </p></li>
<li><p><strong>Method 2:</strong></p>
<p>Stratify people into two groups according to DRUG status:</p>
<p>Group 1: for people taking DRUG (n=1000), model 1: <code>y1=a1*AGE+b1*BMI</code>, I got a likelihood 1 (L1);</p>
<p>Group 2: for people not taking DRUG (n=1000), model 2: <code>y2=a2*AGE+b2*BMI</code>, likelihood 2 (L2);</p>
<p>Then I use all the people (n=2000), model 3: <code>y3=a3*AGE+b3*BMI+d*(DRUG*AGE)+e*(DRUG*BMI)</code>, likelihood 3 (L3);</p></li>
</ul>
<p>So in order to test the interactions, chi-square=L3/(L1*L2). But the question is: What is the degree of freedom (df)?</p>
<p>Can anyone help? I cannot get the answer.</p>
| 74,119 |
<p>The convergence theorem for Gibbs sampling states:</p>
<p>Given a random vector $X$ with components $X_1,X_2,...,X_K$ and knowledge of the conditional distributions of the $X_k$, we can find the actual distribution by applying Gibbs sampling infinitely often.</p>
<p>The exact theorem as stated by the book (Neural Networks and Learning Machines): </p>
<blockquote>
<p>The random variable $X_k(n)$
converges in distribution to the true probability distributions of
$X_k$ for k=1,2,...,K as n approaches infinity</p>
<p>$\lim_{n \to \infty}P(x^{(n)}_k \leq x \mid x_k(0)) = P_{X_k}(x)$ for $k = 1,2,...,K$</p>
<p>Where $P_{X_k}(x)$ is the marginal cumulative distribution function
of $X_k$</p>
</blockquote>
<p>While doing research on this, for a deeper understanding, I ran across <a href="http://stats.stackexchange.com/a/10216/31349">this</a> answer. Which explains quite well how to pick a single sample using the Method, but I am not able to extend/modify it to fit the convergence theorem, as the result of the given example is one sample (spell) and not a final/actual probability distribution.</p>
<p><strong>Therefore, how do I have to modify that example to fit the convergence theorem?</strong></p>
| 36,782 |
<p>I ask this question because, while reading about the benefits of standardizing explanatory variables (or not), I came across good but contrasting opinions about standardizing when there are interactions in the model. </p>
<p>Some talk about how problems of collinearity are removed when standardizing (e.g. <a href="http://stats.stackexchange.com/questions/60476/collinearity-diagnostics-problematic-only-when-the-interaction-term-is-included#61022">Collinearity diagnostics problematic only when the interaction term is included</a>), which is basically the case of my GLMM. However, others claim that standard errors and p-values of interactions of standardized models are not reliable... (e.g.<a href="http://stats.stackexchange.com/questions/19216/variables-are-often-adjusted-e-g-standardised-before-making-a-model-when-is">Variables are often adjusted (e.g. standardised) before making a model - when is this a good idea, and when is it a bad one?</a> or <a href="http://quantpsy.org/interact/interactions.htm" rel="nofollow">http://quantpsy.org/interact/interactions.htm</a>)</p>
<p>So, any ideas on what is the right thing to do?</p>
| 49,325 |
<p>I am trying to do a Box-Cox transformation with a shift. I have a dependent variable, annual foreign sales of companies (in US\$ thousands), which contains zeros, for a set of panel data. I have been advised to add a small amount, for example 0.00001, to the annual foreign sales figures so that I can take the log, but I think a Box-Cox transformation will produce a more appropriate constant than 0.00001. I have done a Box-Cox transformation in R with the code below, but it has given me a very large lambda2 of 31162.8.</p>
<pre><code>library(geoR)
boxcoxfit(bornp$ForeignSales, lambda2 = TRUE)
#R output - Fitted parameters:
# lambda lambda2 beta sigmasq
# -1.023463e+00 3.116280e+04 9.770577e-01 7.140328e-11
</code></pre>
<p>My hunch is that the above value of lambda2 is very large, so I am not sure if I need to run the boxcoxfit with my independent variables like below:</p>
<pre><code>boxcoxfit(bornp$ForeignSales, cbind(bornp$family, bornp$roa, bornp$solvencyratio), lambda2=TRUE)
</code></pre>
<p>I am still trying to identify the best set of independent variables, so I am not sure if using the boxcoxfit with independent variables at this stage will work or is best.</p>
<p>Here's the description of the two lambda parameters from the help:</p>
<pre><code>lambda   numerical value(s) for the transformation parameter lambda. Used as the initial
         value in the function for parameter estimation. If not provided default values
         are assumed. If multiple values are passed the one with highest likelihood is
         used as initial value.
lambda2  logical or numerical value(s) of the additional transformation (see DETAILS
         below). Defaults to NULL. If TRUE this parameter is also estimated and the
         initial value is set to the absolute value of the minimum data. If a numerical
         value is provided it is used as the initial value. Multiple values are allowed
         as for lambda.
</code></pre>
<p>I would be very grateful for any advice on the above.</p>
| 74,120 |
<p>What is the expected length of a streak of heads or tails when flipping a coin? What distribution is this?</p>
<p>I'm pretty sure the answer is 2, but I don't know what distribution it is.</p>
<p>I did the following R code:</p>
<pre><code>l=10000
longest.stk = avg.stk = numeric(l)
for(i in 1:l){
x=sample(0:1, 1000, repl=T)
r = rle(x)
longest.stk[i] = max(r$lengths)
avg.stk[i] = mean(r$lengths) }
mean(longest.stk)
[1] 10.3001
mean(avg.stk)
[1] 2.000825
</code></pre>
| 74,121 |
<p>I have two spectra of the same astronomical object. The essential question is this: How can I calculate the relative shift between these spectra and get <strong>an accurate error estimate</strong> on that shift?</p>
<p>Some more details if you are still with me. Each spectrum will be an array with an x value (wavelength), y value (flux), and error. <strong>The wavelength shift is going to be sub-pixel.</strong> Assume that the pixels are regularly spaced and that there is only going to be a single shift applied to the entire spectrum. So the end answer will be something like: 0.35 +/- 0.25 pixels. </p>
<p>The two spectra are going to be a lot of featureless continuum punctuated by some rather complicated absorption features (dips) that do not model easily. I'd like to discover a method that directly compares the two spectra. </p>
<p>Everyone's first instinct is to do a cross-correlation, but with subpixel shifts, you're going to have to interpolate between the spectra (by smoothing first?) -- also, errors seem nasty to get right.</p>
<p>My current approach is to smooth the data by convolving with a gaussian kernel, then to spline the smoothed result, and compare the two splined spectra -- but I don't trust it (especially the errors).</p>
<p>Does anyone know of a way to do this properly?</p>
<p>Here is a short python program that will produce two toy spectra that are shifted by 0.4 pixels (written out in toy1.ascii and toy2.ascii) that you can play with. Even though this toy model uses a simple gaussian feature, assume that the actual data cannot be fit with a simple model. </p>
<pre><code>import numpy as np
import random as ra

arraysize = 1000       # number of pixels
fluxlevel = 100.0      # continuum level
noise = 2.0            # 1-sigma per-pixel noise
signal_std = 15.0      # width parameter of the absorption feature
signal_depth = 40.0    # depth of the absorption feature

# Gaussian absorption profile centred on mu (mu is set before each call below)
gaussian = lambda x: np.exp(-(mu - x)**2 / (2 * signal_std))

pix = np.arange(arraysize)
err = np.ones(arraysize) * noise

mu = 500.1
flux = np.array([ra.normalvariate(fluxlevel, noise) for _ in range(arraysize)]) - gaussian(pix) * signal_depth
np.savetxt('toy1.ascii', np.column_stack((pix, flux, err)))

mu = 500.5
flux = np.array([ra.normalvariate(fluxlevel, noise) for _ in range(arraysize)]) - gaussian(pix) * signal_depth
np.savetxt('toy2.ascii', np.column_stack((pix, flux, err)))
</code></pre>
| 74,122 |
<p>Random variables are usually denoted with upper-case letters. For example, there could be a random variable $X$. Now, because vectors are usually denoted with a bold lower-case letter (e.g. $\mathbf{z} = (z_0, \dots, z_{n})^{\mathsf{T}}$ and matrices with a bold upper-case letter (e.g. $\mathbf{Y}$), how should I denote a vector of random variables? I think $\mathbf{x} = (X_0, \dots, X_n)^\mathsf{T}$ looks a bit odd. On the other hand if I see $\mathbf{X}$ I would first think it is a matrix. What is the usual way to do this? Of course, I think it would be best to state my notation somewhere in the beginning of paper.</p>
| 74,123 |
<p>I am trying to run a discrete duration model for analyzing (monthly) unemployment using survey data. I have household-level data, and as such I would like to control for the household effects in my model. I thought to do this by either allowing for cluster effects in the estimation of the standard errors or by random effects (for households) - i.e., I think that fixed effects would not work because there are a lot of households and because of the incidental parameter problem. </p>
<p>My model will include both individual characteristics (e.g., age, school, occupation, since when the person has been unemployed - as they were asked retrospectively), and some other variables) as well as household characteristics (e.g. size, number of people unemployed).</p>
<p>Can anyone provide some comments on my proposed methodology? Are there any things I should be mindful of or are there any better ways of doing this?</p>
<p>Also I would highly appreciate any relevant references. </p>
| 74,124 |
<p>Suppose I have <code>N</code> correlation coefficient matrices, containing the Pearson correlation coefficients of <code>k</code> different time series of some quantity measured for every subject of an experiment.</p>
<p>The correlation matrices are then rescaled from the <code>[-1,1]</code> interval to <code>[0,1]</code> interval via a linear operation <code>C' = 0.5*(C+1.0)</code></p>
<p>The rescaling is done because I need to create a graph from those correlation coefficients and I need to set a threshold over which an edge between two nodes is added.</p>
<p>At the end I come up with <code>N</code> positive matrices and I want to extract from them some non-linear measures (i.e. graph-theoretical measures).</p>
<p>Is it correct to <strong>average</strong> the <code>N</code> matrices and then extract those measures from the averaged matrix?</p>
| 36,790 |
<p>Basically, I'm trying to fit a GARCH(1,1) model with the ARMA order taken from auto.arima.</p>
<pre><code>> assign(paste("spec.ret.fin.",colnames(base.name[1]),sep=""),
+ ugarchspec(variance.model = list(model = "fGARCH", garchOrder = c(1, 1),
+ submodel = "GARCH", external.regressors = NULL, variance.targeting = FALSE),
+ mean.model = list(armaOrder = c(2,3,4), include.mean = TRUE, archm = FALSE,
+ archpow = 1, arfima = FALSE, external.regressors = NULL, archex = FALSE),
+ distribution.model = "norm", start.pars = list(), fixed.pars = list()))
</code></pre>
<p>This gives the following result:</p>
<blockquote>
<p>spec.ret.fin.chn</p>
</blockquote>
<pre><code>*---------------------------------*
* GARCH Model Spec *
*---------------------------------*
Conditional Variance Dynamics
------------------------------------
GARCH Model : fGARCH(1,1)
fGARCH Sub-Model : GARCH
Variance Targeting : FALSE
Conditional Mean Dynamics
------------------------------------
Mean Model : ARFIMA(2,0,3)
Include Mean : TRUE
GARCH-in-Mean : FALSE
Conditional Distribution
------------------------------------
Distribution : norm
Includes Skew : FALSE
Includes Shape : FALSE
Includes Lambda : FALSE
</code></pre>
<p>But the same code with <code>arfima=TRUE</code> gives</p>
<blockquote>
<p>spec.ret.fin.chn</p>
</blockquote>
<pre><code>*---------------------------------*
* GARCH Model Spec *
*---------------------------------*
Conditional Variance Dynamics
------------------------------------
GARCH Model : fGARCH(1,1)
fGARCH Sub-Model : GARCH
Variance Targeting : FALSE
Conditional Mean Dynamics
------------------------------------
Mean Model : ARFIMA(2,d,3)
Include Mean : TRUE
GARCH-in-Mean : FALSE
Conditional Distribution
------------------------------------
Distribution : norm
Includes Skew : FALSE
Includes Shape : FALSE
Includes Lambda : FALSE
</code></pre>
<p>How does one replace that <code>d</code> with the integration order (d) of the arima?</p>
| 36,791 |
<p>Over at <a href="http://skeptics.stackexchange.com/a/11164/23">Skeptics.StackExchange</a>, an answer cites a study into electro-magnetic hypersensitivity:</p>
<ul>
<li>McCarty, Carrubba, Chesson, Frilot, Gonzalez-Toledo & Marino, <a href="http://www.national-toxic-encephalopathy-foundation.org/lsustudy.pdf" rel="nofollow">Electromagnetic Hypersensitivity: Evidence for a Novel Neurological Syndrome</a> International Journal of Neuroscience, 00, 1–7, 2011, DOI: 10.3109/00207454.2011.608139.</li>
</ul>
<p>I am dubious about some of the statistics used, and would appreciate some expertise in double-checking that they are used appropriately.</p>
<p>Figure 5a shows the results of a subject attempting to detect when an electromagnetic field generator was turned on.</p>
<p>Here is a simplified version:</p>
<pre><code> Actual: Yes No
Detected:
Yes 32 19
No 261 274
</code></pre>
<p>They claim to have used a chi-squared test, and found significance (p < 0.05, without stating what p is.)</p>
<blockquote>
<p>The frequencies of the somatic and behavioral responses in the presence and absence of the field were evaluated using the chi-square test (2 × 2 tables) or the Freeman–Halton extension of the Fisher exact probability test (2 × 3 tables; Freeman & Halton,
1951).</p>
</blockquote>
<p>I see several problems. </p>
<ul>
<li><p>They excluded some of the data - see Table 5b - where they left the device off for long periods. I cannot see the justification in separating that data.</p></li>
<li><p>They seem to be claiming the result is statistically significant when the actual device was on, but not when it wasn't. (I may be misreading this; it isn't clear.) That's not a result that the chi-squared test can give, is it? </p></li>
<li><p>When I have tried to reproduce this test with an <a href="http://graphpad.com/quickcalcs/contingency1/" rel="nofollow">on-line calculator</a> I have found it to be statistically insignificant.</p></li>
</ul>
<p><em>This is my real question:</em> Am I right in saying this?: <strong>A two-tailed, chi-squared test using Fisher's Exact Test is the right way to analyze this data AND it is NOT statistically significant.</strong></p>
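<p>For reference, this is how I would run the check in R on the 2×2 table above (counts as given; I make no claim about which variant the authors actually used):</p>
<pre><code>tab <- matrix(c(32, 261,    # detected yes / no when the field was actually on
                19, 274),   # detected yes / no when the field was actually off
              nrow = 2)
chisq.test(tab)                    # Pearson chi-squared (Yates correction by default)
chisq.test(tab, correct = FALSE)   # without the continuity correction
fisher.test(tab)                   # Fisher's exact test, two-sided by default
</code></pre>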
| 36,794 |
<p>What does "return rate" (<code>tasa de retorno</code> in spanish) mean in population ecology models? In context of capture-mark-recapture models. I have found many articles on this, but no definition!</p>
<p>I've also found the term "recapture rate". What is it? Is it the same as "return rate"?</p>
| 40,307 |
<p>In fitting a two-component Student's t mixture distribution to some data (standardized GARCH residuals) one of the components has an estimated degree of freedom of 0.6.
This means that even the first moment of the mixture distribution would not exist.</p>
<p>The gamlss.mx package in R is used for estimation. gamlss.control, glim.control and MX.control seem not to support this kind of option -- or I couldn't find out how... </p>
<p>Is there a way to restrict the degree of freedom parameter estimates to be larger than, say, 3?</p>
| 74,125 |
<p>I am working on a model and I have calculated the EVPI, EVPPI and the EVSI.</p>
<p>I need to interpret the graph of the Expected Value of Sample Information (EVSI), what should I say? The graph consists of the sample sizes as the x-axis and the EVSI as the y-axis.</p>
<p>I know that the EVSI calculation is part of the process of finding an optimum sample size for a future study, but I don't know how to explain the graph results.</p>
| 35,424 |
<p>I fit the logistic regression model for gender and drink for the data ihd using the following command </p>
<pre><code>model <- glm(ihd ~ as.factor(gender)*as.factor(drink), family='binomial', data=ihd)
</code></pre>
<p>My question is: how can I get the estimated log odds-ratio for: </p>
<ol>
<li>gender in non-drinker stratum, and</li>
<li>gender in drinker stratum</li>
</ol>
| 74,126 |
<p>I want to apply LDA in my study to classify my data. My data is an $M\times 20$ matrix, which means there are $20$ features. How can I apply LDA to my data?</p>
<p>And another question: Does LDA only work on data which have only two features ($M\times 2$ matrix)?</p>
| 74,127 |
<p>I am attempting to recreate the model in this paper:</p>
<p><a href="http://abdulpinjari.weebly.com/uploads/9/6/7/8/9678119/pinjari_etal_transportation_iatbr2009specialissue.pdf" rel="nofollow">Pinjari (2011)</a></p>
<p>in which the author uses a 4-equation system with discrete-choice dependent variables in each of the 4 equations. Does anyone know if R has a package that performs this type of estimation? If not, are there some key words I am missing to find out how others have performed these types of estimations?</p>
| 74,128 |
<p>I know that one can easily get the unconditional variance of a GARCH(r,s) process:
$$\sigma^2= \frac{\alpha_0}{1- \sum_{i=1}^r \alpha_i - \sum_{j=1}^s \beta_j}.$$</p>
<p>However, I am struggling to get an analytical expression for the unconditional variance when there is also an ARMA part, that is ARMA(p,q) + GARCH(r,s).</p>
<p>I will be grateful for any help. Thanks.</p>
<p>Regards
Dinakar</p>
| 74,129 |