Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4596 | 2 | null | 4580 | 6 | null | I am a biologist who models the effects of inter-annual climatic variation on the population dynamics of several migratory species. My datasets are very large (spatially intensive data), so I run my R code using `multicore` on Amazon EC2 servers. If my task is particularly resource-intensive, I choose a High Memory Quadruple Extra Large instance, which comes with 26 CPU units, 8 cores, and 68 GB of RAM. In this case I usually run 4-6 scripts simultaneously, each of which works through a fairly large data set. For smaller tasks, I choose servers with 4-6 cores and about 20 GB of RAM.
I launch these instances (usually spot instances, because they are cheaper but can be terminated whenever the current spot price exceeds what I have chosen to pay), run the script for several hours, and then terminate the instance once my script has finished. As for the machine image (Amazon Machine Image), I took someone else's Ubuntu install, updated R, installed my packages, and saved that as my private AMI on my S3 storage space.
My personal machine is a dual-core MacBook Pro, and it has a hard time forking `multicore` calls. Feel free to email me if you have other questions.
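For anyone wanting to reproduce this kind of setup, here is a minimal sketch of the per-core parallelism described above (the `multicore` package has since been folded into base R's `parallel` package; `analyze_chunk()` and the data paths are hypothetical stand-ins for the actual modelling scripts):
```
library(parallel)

# Hypothetical per-chunk analysis; stands in for the real modelling code.
analyze_chunk <- function(file) {
  dat <- readRDS(file)
  mean(dat)                              # placeholder computation
}

files <- list.files("data", pattern = "\\.rds$", full.names = TRUE)
# Fork one worker per core (8 on the instance described above).
results <- mclapply(files, analyze_chunk, mc.cores = 8)
```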
| null | CC BY-SA 2.5 | null | 2010-11-16T19:10:24.867 | 2010-11-16T19:10:24.867 | null | null | 1451 | null |
4597 | 1 | null | null | 4 | 1111 | I am trying to detect text in a scanned document by examining variations in the lightness of the scan collapsed vertically. Here's a sample of the input I would receive, with the lightness plot of each vertical pixel strip superimposed:

Note: I've applied a Gaussian smoothing function to the data ~ 10 times, but it seems to be pretty wiggly to begin with. It is easy to see that the left margin is really wiggly (i.e., has many extrema).
Problem: I want to generate a set of critical points of the image.
I've resorted to computing the number of extrema of the function within an interval (using the derivative and its proximity to zero) and dividing that by the length of the interval, but that isn't easy on the computer. (I use Python, and I couldn't find many low-pass filters for the data.)
Thanks!
| How to find text blocks in a scanned document? | CC BY-SA 3.0 | null | 2010-11-16T20:41:34.210 | 2012-05-03T23:14:21.130 | 2012-05-03T23:14:21.130 | 4479 | 2002 | [
"python",
"image-processing"
]
|
4598 | 2 | null | 4597 | 1 | null | What about Moving Averages?
Edit: For calculating moving standard deviations, this is a quick and dirty way to do it in R:
```
# Simulate a random walk to play with.
n.x <- 1000
x <- cumsum(rnorm(n.x))
plot(x, type = "l")
# Rolling standard deviation over a 20-point window.
win <- 20
roll.sd <- rep(NA_real_, n.x)
for (i in 1:(n.x - win)) roll.sd[i] <- sd(x[i:(i + win)])
```
I think quantmod has a built-in function for it. In a similar way, you could build a moving average of the first differences of the time series.
| null | CC BY-SA 2.5 | null | 2010-11-16T21:05:45.083 | 2010-11-17T12:15:25.700 | 2010-11-17T12:15:25.700 | 1766 | 1766 | null |
4599 | 2 | null | 3194 | 9 | null | Nobody has suggested a Bayesian approach yet? I know the question has been answered already, but what the heck. Below is for only a 3-sided die, but I'm guessing it's obvious how to fix it for $n=37$ sides.
First, in line with what @Glen_b said, a Bayesian is not actually interested in whether or not the die is exactly fair - it isn't. What (s)he cares about is whether it's close enough, whatever "enough" means in the context - say, within 5% of fair for each side.
If $p_1$, $p_2$, and $p_3$ represent the probabilities of rolling 1, 2, and 3, respectively, then we represent our prior knowledge about $p=(p_1,p_2,p_3)$ with a prior distribution, and to make the math easy we could choose a [Dirichlet distribution](http://en.wikipedia.org/wiki/Dirichlet_distribution). Note that $p_1+p_2+p_3=1$. For a non-informative prior we might pick prior parameters, say, $\alpha_0=(1,1,1)$.
If $X=(X_1,X_2,X_3)$ represents the observed counts of 1,2,3 then of course $X$ has a [multinomial distribution](http://en.wikipedia.org/wiki/Multinomial_distribution) with parameters $p=(p_1,p_2,p_3)$, and the theory says that the posterior is also a Dirichlet distribution with parameters $\alpha=(x_1+1,x_2+1,x_3+1)$. (Dirichlet is called a [conjugate prior](http://en.wikipedia.org/wiki/Conjugate_prior), here.)
We observe data, find the posterior with Bayes' rule, then ALL inference is based on the posterior. Want an estimate for $p$? Find the mean of the posterior. Want confidence intervals (no, rather [credible intervals](http://en.wikipedia.org/wiki/Credible_interval))? Calculate some areas under the posterior. For complicated problems in the real world we usually simulate from the posterior and get simulated estimates for all of the above.
Anyway, here's how (with R):
First, get some data. We roll the die 500 times.
```
set.seed(1)
y <- rmultinom(1, size = 500, prob = c(1,1,1))
```
(we're starting with a fair die; in practice these data would be observed.)
Next, we simulate 5000 observations of $p$ from the posterior and take a look at the results.
```
library(MCMCpack)
A <- MCmultinomdirichlet(y, alpha0 = c(1,1,1), mc = 5000)
plot(A)
summary(A)
```
Finally, let's estimate our posterior probability (after observing the data) that the die is within 0.05 of fair in each coordinate.
```
B <- as.matrix(A)
f <- function(x) all((x > 0.28)*(x < 0.38))
mean(apply(B, MARGIN = 1, FUN = f))
```
The result is about 0.9486 on my machine. (Not a surprise, really. We started with a fair die after all.)
Quick remark: it probably isn't reasonable for us to have used a non-informative prior in this example. Since the question is even being asked, presumably the die appears approximately balanced in the first place, so it may be better to pick a prior that is concentrated closer to 1/3 in all coordinates. Here, that would simply have made our estimated posterior probability of "close to fair" even higher.
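For instance, reusing the objects defined above, re-running the posterior simulation under a prior concentrated near 1/3 in each coordinate only requires changing the Dirichlet parameters (the prior weights of 20 below are purely illustrative):
```
# Same posterior computation as above, but with an informative Dirichlet(20,20,20) prior.
A2 <- MCmultinomdirichlet(y, alpha0 = c(20, 20, 20), mc = 5000)
B2 <- as.matrix(A2)
mean(apply(B2, MARGIN = 1, FUN = f))   # posterior probability of "within 0.05 of fair"
```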
| null | CC BY-SA 2.5 | null | 2010-11-16T21:24:33.410 | 2010-11-16T21:24:33.410 | null | null | null | null |
4600 | 1 | null | null | 3 | 3067 | I have run a factorial-type experiment in a processing plant and have run forward and backward stepwise regression in R.
How can I use the regression results and the ANOVA table created from the regression to determine what percentage of the measured variation in the dependent variable was caused by the purposeful manipulation of the independent variables?
| Explaining variation in a dependent variable based on a factorial experiment | CC BY-SA 3.0 | null | 2010-11-16T22:11:41.093 | 2011-06-06T04:50:30.013 | 2011-06-06T04:50:30.013 | 183 | null | [
"regression",
"anova",
"interpretation"
]
|
4602 | 2 | null | 4595 | 3 | null | The [travelling salesman problem](http://en.wikipedia.org/wiki/Travelling_salesman_problem) is surely an archetypal hard optimization problem. To quote Wikipedia, "it is used as a benchmark for many optimization methods".
| null | CC BY-SA 2.5 | null | 2010-11-16T22:29:48.383 | 2010-11-16T22:29:48.383 | null | null | 449 | null |
4603 | 1 | 4614 | null | 5 | 3323 | Are there any open-source Java implementations of lasso or least angles regression?
Pure Java code would be best, but clean implementations in other languages would also be of interest. I am already aware of the existence of a variety of R packages that can do lasso/LAR fits.
Thanks.
| Java implementations of the lasso | CC BY-SA 2.5 | null | 2010-11-16T22:31:23.040 | 2011-06-07T00:09:52.463 | 2010-11-16T22:53:34.213 | 439 | 439 | [
"regression",
"lasso",
"java"
]
|
4604 | 1 | 4613 | null | 5 | 416 | I'm doing some analysis of an arbitrary string of text, modelling it as a Markov chain where the state is simply the value of the previous character. Call the current character $c$ and the previous character $p$; then it is trivial to calculate $P(c\ |\ p)$ for the given sample text. However, there may not be many cases of a certain $(p, c)$ pair, or in general not many samples used. I know qualitatively that the fewer the samples, the higher the uncertainty/error in this conditional probability is. However, how can I quantify this?
| Uncertainty of conditional probability evaluated from sample | CC BY-SA 4.0 | null | 2010-11-16T22:57:41.137 | 2021-01-14T17:12:00.303 | 2021-01-14T17:12:00.303 | 1810 | 1810 | [
"markov-process",
"conditional-probability"
]
|
4605 | 2 | null | 4580 | 2 | null | I use snow and snowfall for coarse parallelization on HPC clusters and CUDA for fine-grained data-parallel processing. I'm in epidemiology doing disease transmission modeling, so I use both.
| null | CC BY-SA 2.5 | null | 2010-11-17T01:34:10.317 | 2010-11-17T01:34:10.317 | null | null | 1364 | null |
4606 | 1 | null | null | 3 | 535 | I am thinking of using [this code](http://www.mathworks.com/matlabcentral/fileexchange/14034-kernel-density-estimator) in a Monte Carlo routine to generate Kernel Density Estimates for subsequent use in a Naive Bayes Classifier [(see this earlier post)](https://stats.stackexchange.com/questions/4298/use-of-kernel-density-estimate-in-naive-bayes-classifier).
The author of the code states on the above linked page that it will "recognise that the data you have provided is perfectly discrete and since discrete data does not need smoothing, the selected bandwidth should be zero" in the case of ties. However, I do not want this, and I can envisage that, given the parameters I intend to use, multiple repetitions in a large Monte Carlo routine will very likely produce ties in the generated data. To prevent this I am thinking of limiting the number of iterations in the MC routine, employing checks to ensure non-discrete generated data for use in the Kernel Density Estimate code, and then repeating. Perhaps this is more clearly explained thus: instead of a 100,000-iteration MC, which is highly likely to produce unwanted ties, followed by one KDE, do a 100-iteration MC and one KDE, and repeat this 100-and-1 routine 1000 times. This will result in 1000 separate but similar continuous KDEs, which can then be averaged to produce a single "unified" continuous KDE.
Is this a valid approach? Are averaged KDEs a statistically sound methodology? The discussion section [here](http://en.wikipedia.org/wiki/Naive_Bayes_classifier) would seem to imply that even if there are deficiencies in this approach, they wouldn't necessarily be too troublesome.
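For what it's worth, here is a rough R sketch of the proposed "many small runs, one KDE each, then average" scheme (R's `density()` stands in for the linked MATLAB estimator, `rgen()` is a hypothetical placeholder for one small Monte Carlo pass, and all sizes are illustrative):
```
# Average many KDEs evaluated on a common grid.
rgen <- function(n) rnorm(n)                     # hypothetical single MC pass
lo <- -4; hi <- 4; grid_n <- 512                 # common evaluation grid
kdes <- replicate(1000, density(rgen(100), from = lo, to = hi, n = grid_n)$y)
avg_kde <- rowMeans(kdes)                        # the "unified" averaged estimate
plot(seq(lo, hi, length.out = grid_n), avg_kde, type = "l")
```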
| Averaged continuous Kernel Density Estimates in lieu of a discrete Kernel Density Estimate in Monte Carlo Procedure | CC BY-SA 2.5 | null | 2010-11-17T02:04:27.900 | 2010-11-18T16:35:12.947 | 2017-04-13T12:44:44.530 | -1 | 226 | [
"kde",
"discrete-data",
"continuous-data",
"monte-carlo"
]
|
4607 | 2 | null | 2171 | 1 | null | Another good alternative is the [Protovis](http://vis.stanford.edu/protovis/) library.
It is a very well crafted JavaScript library that can create some beautiful visualizations if you have the time and ability to write the modest amount of JavaScript code needed.
I also highly recommend Tableau [http://www.tableausoftware.com](http://www.tableausoftware.com). It is great for rapidly exploring data sets and creating many different visualizations.
Both products have roots at the Stanford Visualization Group.
| null | CC BY-SA 3.0 | null | 2010-11-17T02:44:28.000 | 2014-11-15T13:45:50.507 | 2014-11-15T13:45:50.507 | 22047 | 1246 | null |
4608 | 1 | 4621 | null | 37 | 67009 | I'm trying to implement basic gradient descent and I'm testing it with a hinge loss function i.e. $l_{\text{hinge}} = \max(0,1-y\ \boldsymbol{x}\cdot\boldsymbol{w})$. However, I'm confused about the gradient of the hinge loss. I'm under the impression that it is
$$
\frac{\partial }{\partial w}l_{\text{hinge}} =
\begin{cases}
-y\ \boldsymbol{x} &\text{if } y\ \boldsymbol{x}\cdot\boldsymbol{w} < 1 \\
0&\text{if } y\ \boldsymbol{x}\cdot\boldsymbol{w} \geq 1
\end{cases}
$$
But doesn't this return a matrix the same size as $\boldsymbol{x}$? I thought we were looking to return a vector of the same length as $\boldsymbol{w}$? Clearly, I've got something confused somewhere. Can someone point me in the right direction here?
I've included some basic code in case my description of the task was not clear
```
#Run standard gradient descent
gradient_descent<-function(fw, dfw, n, lr=0.01)
{
#Data to be used
x<-t(matrix(c(1,3,6,1,4,2,1,5,4,1,6,1), nrow=3))
y<-c(1,1,-1,-1)
w<-matrix(0, nrow=ncol(x))
print(sprintf("loss: %f,x.w: %s",sum(fw(w,x,y)),paste(x%*%w, collapse=',')))
#update the weights 'n' times
for (i in 1:n)
{
w<-w-lr*dfw(w,x,y)
print(sprintf("loss: %f,x.w: %s",sum(fw(w,x,y)),paste(x%*%w,collapse=',')))
}
}
#Hinge loss
hinge<-function(w,x,y) max(1-y%*%x%*%w, 0)
d_hinge<-function(w,x,y){ dw<-t(-y%*%x); dw[y%*%x%*%w>=1]<-0; dw}
gradient_descent(hinge, d_hinge, 100, lr=0.01)
```
Update:
While the answer below helped my understanding of the problem, the output of this algorithm is still incorrect for the given data. The loss function decreases by 0.25 each time but converges too fast, and the resulting weights do not give a good classification. Currently the output looks like
```
#y=1,1,-1,-1
"loss: 1.000000, x.w: 0,0,0,0"
"loss: 0.750000, x.w: 0.06,-0.1,-0.08,-0.21"
"loss: 0.500000, x.w: 0.12,-0.2,-0.16,-0.42"
"loss: 0.250000, x.w: 0.18,-0.3,-0.24,-0.63"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
...
```
| Gradient of Hinge loss | CC BY-SA 3.0 | null | 2010-11-17T03:15:58.023 | 2017-05-29T14:31:53.233 | 2014-05-03T15:02:22.587 | 27403 | 2023 | [
"loss-functions"
]
|
4609 | 1 | 9919 | null | 1 | 1285 | From what I can tell, PASW v.18 (the new version of SPSS) only gives you the p value for nonparametric tests. I am calculating Kruskal-Wallis and Mann-Whitney tests, and need to report the test statistic, not just p. Can someone please help?
Thanks!
| How do I get non-parametric test values (not just p) in PASW 18? | CC BY-SA 2.5 | null | 2010-11-17T03:19:10.763 | 2011-04-24T04:00:58.850 | null | null | 2025 | [
"nonparametric",
"spss"
]
|
4610 | 1 | null | null | 3 | 1696 |
Statistics were never my strong point, and it's my first question, so please be gentle :) I'm doing some research using Computational Fluid Dynamics (CFD) to model the flow of an oil aerosol through a fibrous filter. The aerosol has a droplet size distribution that is log-normal. The existing code only allows the specification of a single size; I have to add the log-normal support.
What I need to be able to do is this: if someone describes an aerosol flow as having, say, 10,000 droplets, an average droplet size of 425nm and a standard deviation of 15, I must be able to calculate, within my code, that there are:
aaa particles of size 100nm
bbb particles of size 200nm
ccc particles of size 300nm
ddd particles of size 400nm
...
...
...
iii particles of size 900nm
jjj particles of size 1000nm
when we are looking at particles in the range 100nm - 1000nm with 100nm step sizes.
In turn I need to push this information, the particle counts, along with a lot of other stuff into the CFD solver and see what happens to the droplets. The big question is: how do I calculate the number of particles for each step size? As I said, my stats knowledge is quite limited, so I don't even know if saying the average is 425nm... is the right way to frame the input data.
My Master's supervisor has told me one thing (to use the pdf), but my reading of the definition of log-normal (yes, on Wikipedia) leads me to think he is wrong. Any thoughts on how I can work this out are greatly appreciated.
Andrew
Updated:
Thanks for the answers, but this is where my minimal stats shines through. OK, so we rephrase: we have a GM of, say, 328nm and a GSD of 14.8nm. I then look at the definition of the cumulative distribution function and see:
$$\mathrm{cdf} = \frac{1}{2} + \frac{1}{2}\operatorname{erf}\!\left(\frac{\ln(x) - \mu}{\sqrt{2\sigma^2}}\right) \quad \text{(from Wikipedia)}$$
Am I correct in that $\mu$ = GM = 328nm and $\sigma$ = GSD = 14.8nm? Then in turn, what do I use for my x value? If I need to get values at 100nm, 200nm, 300nm, ..., 900nm, 1000nm, do I use these values (let's just drop the "nm" part for now) as x, one at a time, or do I use a ln/log of them or some other magic number based upon the value in question? I don't need to actually code the erf() as C++ has a function call for it; it's just how I calculate the actual value that I pass into the C++ function that is causing me angst.
Am I correct that once I can calculate the input to the erf function, I calculate the cdf values, which will be in the range 0 -> 1, and which I then multiply by my sample size to get the number of particles below the point in question? I then just subtract successive values to get the particles in a range - correct/sort of correct/wrong???
Once again, thanks for the help,
Andrew
| Log normal distributions - particle sizes in an aerosol | CC BY-SA 2.5 | null | 2010-11-17T03:33:13.613 | 2010-11-18T01:22:50.987 | 2010-11-18T01:22:50.987 | null | null | [
"distributions",
"lognormal-distribution"
]
|
4611 | 2 | null | 4368 | 2 | null | Note that
$\text{Var}(\epsilon | x_i) = E[\epsilon^2 | x_i] - \left(E[\epsilon | x_i]\right)^2.$
Recall a typical Gauss-Markov assumption that $E[\epsilon | x_i]=0$. Hence,
$\text{Var}(\epsilon | x_i) = E[\epsilon^2 | x_i].$
Since we don't observe $\epsilon^2$, we have to use some estimate. The best estimate that we have is $\hat{e}^2$, where $\hat{e}$ is our residual. The variance is the expectation of the squared error and our best estimate of this is the squared residuals; hence, they are the outcome of our auxiliary regression.
Now, we suppose that $\sigma^2_i$ varies across units $i$. One way to check this is to determine whether the squared residuals are correlated with other stuff that varies across units, namely, the variables in the regression that we care about, plus their squares and their interactions, as suggested by White. If the squared residuals can't be predicted by stuff that we observe that varies across units, then we cannot reject the null hypothesis that the expected squared residuals don't vary across units, which suggests that the variance doesn't vary across units and the errors are homoskedastic.
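To make this concrete, here is a small R sketch of such an auxiliary (White-style) regression on simulated data (the variable names and the heteroskedasticity pattern are illustrative; under the null of homoskedasticity, $nR^2$ from the auxiliary regression is approximately chi-squared with degrees of freedom equal to the number of auxiliary regressors):
```
set.seed(1)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + x1 - x2 + rnorm(n, sd = exp(0.3 * x1))     # heteroskedastic errors

fit <- lm(y ~ x1 + x2)
e2  <- residuals(fit)^2                              # squared residuals stand in for epsilon^2
aux <- lm(e2 ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2)  # regressors, squares, interactions
stat <- n * summary(aux)$r.squared                   # White's n * R^2 statistic
pchisq(stat, df = 5, lower.tail = FALSE)             # small p-value -> heteroskedasticity
```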
| null | CC BY-SA 2.5 | null | 2010-11-17T04:00:28.513 | 2010-11-17T04:00:28.513 | null | null | 401 | null |
4612 | 1 | 4615 | null | 26 | 34462 | Which good econometrics textbooks would you recommend?
Edit: there are quite a few books out there, with varying levels of mathematical sophistication. It would be good to get some idea of how technical the book you're recommending is.
| Econometrics textbooks? | CC BY-SA 3.0 | null | 2010-11-17T07:28:49.070 | 2021-02-14T19:01:17.233 | 2015-11-02T01:19:35.897 | 22468 | 439 | [
"econometrics",
"references"
]
|
4613 | 2 | null | 4604 | 4 | null | You can use the [Hoeffding inequality](http://en.wikipedia.org/wiki/Hoeffding%27s_inequality):
$$ P(|\hat{p}_n-p|\geq t)\leq 2e^{-2nt^2}$$
($\hat{p}_n$ is your estimated probability).
For small $n$, Chebyshev's inequality (i.e. Markov's inequality applied to the squared deviation) may be more efficient:
$$P(|\hat{p}_n-p|\geq t)\leq \frac{p(1-p)}{nt^2} $$.
For example, with the second inequality, if $n=50$ the probability that the difference between your estimated probability and the truth is greater than $0.25$ is lower than $\frac{1}{4\cdot 50\cdot 0.25^2}=0.08$ ($p(1-p)\le 1/4$, but it can be much smaller...). Is an 8% chance of a difference larger than $0.25$ small enough for you?
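To make that concrete, here is a tiny R helper that evaluates both bounds (a sketch; the second bound plugs in the worst case $p(1-p)=1/4$):
```
# Upper bounds on P(|p_hat - p| >= t) for a sample of size n.
prob_bounds <- function(n, t) {
  c(hoeffding = 2 * exp(-2 * n * t^2),
    chebyshev = 1 / (4 * n * t^2))     # worst case p = 1/2
}
prob_bounds(n = 50, t = 0.25)          # reproduces the 0.08 figure quoted above
```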
| null | CC BY-SA 2.5 | null | 2010-11-17T07:30:42.387 | 2010-11-23T14:37:19.163 | 2010-11-23T14:37:19.163 | 223 | 223 | null |
4614 | 2 | null | 4603 | 5 | null | Regarding clean implementations in Python, there is the [scikit.learn](http://scikit-learn.sourceforge.net/) toolkit. The [L1/L2 regularization scheme](http://scikit-learn.sourceforge.net/modules/glm.html) (incl. elastic net) works great with GLMs (LARS and coordinate descent algorithms are available). I don't know of a Java implementation.
| null | CC BY-SA 2.5 | null | 2010-11-17T07:34:31.840 | 2010-11-17T07:34:31.840 | null | null | 930 | null |
4615 | 2 | null | 4612 | 14 | null | Definitely [Econometric Analysis](http://pages.stern.nyu.edu/~wgreene/Text/econometricanalysis.htm), by Greene. I'm not an econometrician, but I found this book very useful and well written.
| null | CC BY-SA 2.5 | null | 2010-11-17T07:40:10.787 | 2010-11-17T07:40:10.787 | null | null | 930 | null |
4616 | 2 | null | 4610 | 3 | null | Saying 'the average is 425nm' is probably not the best way to frame the input. You're better parameterising a [log-normal distribution](http://en.wikipedia.org/wiki/Log-normal_distribution) by its [geometric mean](http://en.wikipedia.org/wiki/Geometric_mean) and [geometric standard deviation](http://en.wikipedia.org/wiki/Geometric_standard_deviation). Having done so, calculating the probability of an observation between two given values is just a matter of taking the difference between the two corresponding values of the cumulative distribution function.
| null | CC BY-SA 2.5 | null | 2010-11-17T07:44:20.333 | 2010-11-17T07:44:20.333 | null | null | 449 | null |
4617 | 2 | null | 4612 | 9 | null | It depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is:
Wooldridge, Jeffrey M. Econometric Analysis of Cross Section and Panel Data. MIT Press, 2001. ISBN [9780262232197](http://en.wikipedia.org/w/index.php?title=Special%3ABookSources&isbn=9780262232197)
Most of what little I know about econometrics I learnt from this book. 776 pages without a single graph.
| null | CC BY-SA 2.5 | null | 2010-11-17T07:56:14.970 | 2010-11-17T07:56:14.970 | null | null | 449 | null |
4618 | 2 | null | 4612 | 9 | null | It depends on what you really want (GMM, time series, panel...), but I can recommend these two books:
- Fumio Hayashi's "Econometrics" and
- Davidson and McKinnon "Econometric Theory and Methods".
For a course in econometric time series, Hamilton's ["Time Series Analysis"](https://press.princeton.edu/books/hardcover/9780691042893/time-series-analysis) is great.
| null | CC BY-SA 4.0 | null | 2010-11-17T08:02:48.417 | 2021-02-14T17:18:42.730 | 2021-02-14T17:18:42.730 | 53690 | 2028 | null |
4619 | 1 | null | null | 2 | 253 | I have two surveys of two separate populations (I don't know that they are necessarily distinct, but they are from two different databases) that ask a similar set of questions. Some questions are basic demographics (e.g. age, income), while other questions are a bit more detailed or about their opinions (e.g. brand preferences, spending habits).
How do I prove statistically that the two populations are "the same," or at least comparable? I know that I can do a t-test for individual questions, but is there a way to establish similarity on more than one dimension?
The goal is to combine the surveys from these two populations into one series of survey data. For example, we may run survey A every six months, but we run survey B every month, except when we run survey A. I would then like to combine the results from survey A and survey B to have a monthly series of survey data.
| Establishing that the population sampled of two separate surveys is the same | CC BY-SA 2.5 | 0 | 2010-11-17T08:08:19.927 | 2010-11-18T13:06:58.253 | 2010-11-17T14:37:06.837 | 930 | 1195 | [
"psychometrics",
"survey"
]
|
4620 | 1 | null | null | 3 | 4977 | Say, for example, that I want to determine the market share or relative popularity of coffee houses within a certain population through a survey. What is the best way to write a question that accurately measures this?
Some issues that I am thinking of:
I don't want to ask a single choice question ("Which coffee house do you go to?"), because the coffee houses are not mutually exclusive. I may enjoy more than one equally often.
If I ask a multiple choice question, then I can't really get a true "market share," since the proportions of people who go to each coffee house sum to over 100%. I can only say that "x% of people in this population go to this coffee house."
Is it possible to ask a series of questions ("Which coffee house do you prefer the most?" "Which coffee house do you prefer the second most?", or "Rank the following coffee houses")?
Can I ask a multiple choice question then rebase the proportion to the total number of responses? For example, if I have 100 respondents, but they selected 200 coffee houses (because each respondent said they went to two coffee houses, maybe), can I calculate a frequency table based on 200, the number of selections, instead of the number of respondents?
| Determining market share from multiple choice questions on a survey | CC BY-SA 2.5 | null | 2010-11-17T08:36:48.267 | 2010-11-17T18:47:25.040 | null | null | 1195 | [
"survey"
]
|
4621 | 2 | null | 4608 | 43 | null | To get the gradient we differentiate the loss with respect to $i$th component of $w$.
Rewrite hinge loss in terms of $w$ as $f(g(w))$ where $f(z)=\max(0,1-y\ z)$ and $g(w)=\mathbf{x}\cdot \mathbf{w}$
Using chain rule we get
$$\frac{\partial}{\partial w_i} f(g(w))=\frac{\partial f}{\partial z} \frac{\partial g}{\partial w_i} $$
The first derivative term is evaluated at $g(w)=\mathbf{x}\cdot \mathbf{w}$, becoming $-y$ when $\mathbf{x}\cdot \mathbf{w}<1$, and 0 when $\mathbf{x}\cdot \mathbf{w}>1$. The second derivative term becomes $x_i$. So in the end you get
$$
\frac{\partial f(g(w))}{\partial w_i} =
\begin{cases}
-y\ x_i &\text{if } y\ \mathbf{x}\cdot \mathbf{w} < 1 \\
0&\text{if } y\ \mathbf{x}\cdot \mathbf{w} > 1
\end{cases}
$$
Since $i$ ranges over the components of $x$, you can view the above as a vector quantity, and write $\frac{\partial}{\partial w}$ as shorthand for $(\frac{\partial}{\partial w_1},\frac{\partial}{\partial w_2},\ldots)$
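For reference, here is a vectorized R sketch of this (sub)gradient summed over the rows of $\mathbf{x}$ (my own translation of the formula above into code, not an official fix of the code in the question):
```
# Hinge loss summed over observations, and its subgradient with respect to w.
hinge <- function(w, x, y) sum(pmax(0, 1 - y * (x %*% w)))
d_hinge <- function(w, x, y) {
  margin <- as.vector(y * (x %*% w))               # one margin per observation
  active <- margin < 1                             # rows contributing -y_i * x_i
  -colSums(y[active] * x[active, , drop = FALSE])  # vector of length ncol(x) = length(w)
}
```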
| null | CC BY-SA 3.0 | null | 2010-11-17T09:25:37.863 | 2013-01-29T22:28:16.393 | 2013-01-29T22:28:16.393 | -1 | 511 | null |
4622 | 2 | null | 4612 | 6 | null | I would definitely recommend M. Verbeek's [A Guide to Modern Econometrics](https://rads.stackoverflow.com/amzn/click/com/0471899828).
Wooldridge is too wordy (and this long-windedness loses the reader's focus too early in the chapters). Greene (I'm referring to the 5th edition) often gets lost in minutiae, i.e. it strives to catalog formulae that are orthogonal to the main subject of the chapter (good for reference, but again, not ideal for learning).
I've not read the Hayashi (though I suspect it's a bit outdated now). Hamilton is really focused on time series analysis, so it's a bit off the mark for general econometrics.
| null | CC BY-SA 4.0 | null | 2010-11-17T09:41:47.610 | 2021-02-14T19:01:17.233 | 2021-02-14T19:01:17.233 | 603 | 603 | null |
4623 | 1 | null | null | 6 | 841 | I have a 3-dimensional sample $(X_k,Y_k,Z_k), k=1, \ldots, N$ which I suspect to be uniform on some parallelepiped in $\mathbb{R}^3$ (i.e. a set of the form $[a;b]\times[c;d]\times[e;f]$, where the numbers $a,b,c,d,e,f$ are unknown).
- How should I estimate the numbers $a, b, c, d, e, f$? Obviously I can try MLE, but then my estimates are biased. Do unbiased estimates exist?
- How can I check that my sample is indeed uniform?
| How to check that a sample suits multi-dimensional uniform distribution? | CC BY-SA 2.5 | null | 2010-11-17T10:16:58.917 | 2010-11-17T13:14:30.247 | 2010-11-17T12:24:02.010 | 8 | null | [
"distributions",
"hypothesis-testing",
"estimation"
]
|
4624 | 1 | 4626 | null | 12 | 994 | I am fitting an L1-regularized linear regression to a very large dataset (with n>>p.) The variables are known in advance, but the observations arrive in small chunks. I would like to maintain the lasso fit after each chunk.
I can obviously re-fit the entire model after seeing each new set of observations. This, however, would be pretty inefficient given that there is a lot of data. The amount of new data that arrives at each step is very small, and the fit is unlikely to change much between steps.
Is there anything I can do to reduce the overall computational burden?
I was looking at the LARS algorithm of Efron et al., but would be happy to consider any other fitting method if it can be made to "warm-start" in the way described above.
Notes:
- I am mainly looking for an algorithm, but pointers to existing software packages that can do this may also prove insightful.
- In addition to the current lasso trajectories, the algorithm is of course welcome to keep other state.
>
Bradley Efron, Trevor Hastie, Iain
Johnstone and Robert Tibshirani,
Least Angle Regression, Annals
of Statistics (with discussion)
(2004) 32(2), 407--499.
| Updating the lasso fit with new observations | CC BY-SA 2.5 | null | 2010-11-17T10:33:27.813 | 2016-02-17T15:13:51.587 | 2010-11-17T15:53:54.630 | 439 | 439 | [
"regression",
"lasso"
]
|
4625 | 2 | null | 4609 | 4 | null | PASW 18 does give you the test statistic in addition to the p value.
For example, if you have selected the Mann-Whitney test, the output from SPSS will include a Test Statistics box that shows the Mann-Whitney U statistic.
The same thing applies for the Kruskal-Wallis test, although note that SPSS labels the statistic Chi-Square, rather than H.
| null | CC BY-SA 2.5 | null | 2010-11-17T10:47:04.823 | 2010-11-17T10:47:04.823 | null | null | 2030 | null |
4626 | 2 | null | 4624 | 7 | null | The lasso is fitted through LARS (an iterative process that starts at some initial estimate $\beta^0$). By default $\beta^0=0_p$, but you can change this in most implementations (and replace it with the optimal $\beta^*_{old}$ you already have). The closer $\beta^*_{old}$ is to $\beta_{new}^*$, the smaller the number of LARS iterations you will have to take to get to $\beta_{new}^*$.
# EDIT:
Due to the comments from `user2763361` I add more details to my original answer.
From the comments below, I gather that user2763361 suggests complementing my original answer to turn it into one that can be used directly (off the shelf) while also being very efficient.
To do the first part, I will illustrate the solution I propose step by step on a toy example. To satisfy the second part, I will do so using a recent, high-quality interior point solver.
This is because it is easier to obtain a high-performance implementation of the solution I propose using a library that can solve the lasso problem by the interior point approach than by trying to hack the LARS or simplex algorithm to start the optimization from a non-standard starting point (though that second avenue is also possible).
Note that it is sometimes claimed (in older books) that the interior point approach to solving linear programs is slower than the simplex approach. That may have been true a long time ago, but it's generally not true today and certainly not true for large-scale problems (this is why most professional libraries like `cplex` use the interior point algorithm), and the question is at least implicitly about large-scale problems.
Note also that the interior point solver I use fully handles sparse matrices, so I don't think there will be a large performance gap with LARS (an original motivation for using LARS was that many popular LP solvers at the time did not handle sparse matrices well, and these are a characteristic feature of the LASSO problem).
A (very) good open source implementation of the interior point algorithm is `ipopt`, in the [COIN-OR](http://www.coin-or.org/) library. Another reason I will be using `ipopt` is that it has an R interface, `ipoptr`. You will find a more exhaustive installation guide [here](http://www.ucl.ac.uk/~uctpjyy/downloads/ipo); below I give the standard commands to install it in `ubuntu`.
In `bash`, do:
```
sudo apt-get install gcc g++ gfortran subversion patch wget
svn co https://projects.coin-or.org/svn/Ipopt/stable/3.11 CoinIpopt
cd ~/CoinIpopt
./configure
make
make install
```
Then, as root, in `R` do (I assume `svn` has copied the subversion file in `~/` as it does by default):
```
install.packages("~/CoinIpopt/Ipopt/contrib/RInterface",repos=NULL,type="source")
```
From here, I'm giving a small example (mostly from the toy example given by Jelmer Ypma as part of his `R` wrapper to `ipopt`):
```
library('ipoptr')
# Experiment parameters.
lambda <- 1 # Level of L1 regularization.
n <- 100 # Number of training examples.
e <- 1 # Std. dev. in noise of outputs.
beta <- c( 0, 0, 2, -4, 0, 0, -1, 3 ) # "True" regression coefficients.
# Set the random number generator seed.
ranseed <- 7
set.seed( ranseed )
# CREATE DATA SET.
# Generate the input vectors from the standard normal, and generate the
# responses from the regression with some additional noise. The variable
# "beta" is the set of true regression coefficients.
m <- length(beta) # Number of features.
A <- matrix( rnorm(n*m), nrow=n, ncol=m ) # The n x m matrix of examples.
noise <- rnorm(n, sd=e) # Noise in outputs.
y <- A %*% beta + noise # The outputs.
# DEFINE LASSO FUNCTIONS
# m, lambda, y, A are all defined in the ipoptr_environment
eval_f <- function(x) {
# separate x in two parts
w <- x[ 1:m ] # parameters
u <- x[ (m+1):(2*m) ]
return( sum( (y - A %*% w)^2 )/2 + lambda*sum(u) )
}
# ------------------------------------------------------------------
eval_grad_f <- function(x) {
w <- x[ 1:m ]
return( c( -t(A) %*% (y - A %*% w),
rep(lambda,m) ) )
}
# ------------------------------------------------------------------
eval_g <- function(x) {
# separate x in two parts
w <- x[ 1:m ] # parameters
u <- x[ (m+1):(2*m) ]
return( c( w + u, u - w ) )
}
eval_jac_g <- function(x) {
# return a vector of 1 and minus 1, since those are the values of the non-zero elements
return( c( rep( 1, 2*m ), rep( c(-1,1), m ) ) )
}
# ------------------------------------------------------------------
# rename lambda so it doesn't cause confusion with lambda in auxdata
eval_h <- function( x, obj_factor, hessian_lambda ) {
H <- t(A) %*% A
H <- unlist( lapply( 1:m, function(i) { H[i,1:i] } ) )
return( obj_factor * H )
}
eval_h_structure <- c( lapply( 1:m, function(x) { return( c(1:x) ) } ),
lapply( 1:m, function(x) { return( c() ) } ) )
# Sparsity structure of the constraint Jacobian: constraint rows i and m+i each
# involve w_i (column i) and u_i (column m+i). This object is required by the
# ipoptr() call below.
eval_jac_g_structure <- c( lapply( 1:m, function(i) { return( c(i, m + i) ) } ),
                           lapply( 1:m, function(i) { return( c(i, m + i) ) } ) )
# The starting point.
x0 = c( rep(0, m),
rep(1, m) )
# The constraint functions are bounded from below by zero.
constraint_lb = rep( 0, 2*m )
constraint_ub = rep( Inf, 2*m )
ipoptr_opts <- list( "jac_d_constant" = 'yes',
"hessian_constant" = 'yes',
"mu_strategy" = 'adaptive',
"max_iter" = 100,
"tol" = 1e-8 )
# Set up the auxiliary data.
auxdata <- new.env()
auxdata$m <- m
auxdata$A <- A
auxdata$y <- y
auxdata$lambda <- lambda
# COMPUTE SOLUTION WITH IPOPT.
# Compute the L1-regularized maximum likelihood estimator.
print( ipoptr( x0=x0,
eval_f=eval_f,
eval_grad_f=eval_grad_f,
eval_g=eval_g,
eval_jac_g=eval_jac_g,
eval_jac_g_structure=eval_jac_g_structure,
constraint_lb=constraint_lb,
constraint_ub=constraint_ub,
eval_h=eval_h,
eval_h_structure=eval_h_structure,
opts=ipoptr_opts,
ipoptr_environment=auxdata ) )
```
My point is, if you have new data in, you just need to
- update (not replace) the constraint matrix and objective function vector to account for the new observations.
- change the starting points of the interior point from
x0 = c( rep(0, m),
rep(1, m) )
to the solution vector you found previously (before the new data were added). The logic here is as follows. Denote by $\beta_{new}$ the new vector of coefficients (the one corresponding to the data set after the update) and by $\beta_{old}$ the original one. Also denote by $\beta_{init}$ the vector x0 in the code above (this is the usual start for the interior point method). Then the idea is that if:
$$|\beta_{init}-\beta_{new}|_1>|\beta_{new}-\beta_{old}|_1\quad(1)$$
then, one can get $\beta_{new}$ much faster by starting the interior point from
$\beta_{old}$ rather than the naive $\beta_{init}$. The gain will be all the more important when the dimensions of the data set ($n$ and $p$) are larger.
As for the conditions under which inequality (1) holds, they are:
- when $\lambda$ is large compared to $|\beta_{OLS}|_1$ (this is usually the case when $p$, the number of design variables, is large compared to $n$, the number of observations)
- when the new observations are not pathologically influential, e.g. for example when they are consistent with the stochastic process that has generated the existing data.
- when the size of the update is small relative to the size of the existing data.
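Putting those two bullet points into code, here is a minimal sketch reusing the objects defined in the example above (`A_new` and `y_new` are hypothetical containers for the newly arrived observations, and I am assuming the `ipoptr()` result exposes the optimum as `$solution`):
```
fit_old <- ipoptr( x0 = x0, eval_f = eval_f, eval_grad_f = eval_grad_f,
                   eval_g = eval_g, eval_jac_g = eval_jac_g,
                   eval_jac_g_structure = eval_jac_g_structure,
                   constraint_lb = constraint_lb, constraint_ub = constraint_ub,
                   eval_h = eval_h, eval_h_structure = eval_h_structure,
                   opts = ipoptr_opts, ipoptr_environment = auxdata )

# Update (rather than rebuild) the data used by the evaluation functions.
auxdata$A <- rbind(auxdata$A, A_new)
auxdata$y <- c(auxdata$y, y_new)

# Restart the interior point from the previous optimum instead of the naive x0.
fit_new <- ipoptr( x0 = fit_old$solution, eval_f = eval_f, eval_grad_f = eval_grad_f,
                   eval_g = eval_g, eval_jac_g = eval_jac_g,
                   eval_jac_g_structure = eval_jac_g_structure,
                   constraint_lb = constraint_lb, constraint_ub = constraint_ub,
                   eval_h = eval_h, eval_h_structure = eval_h_structure,
                   opts = ipoptr_opts, ipoptr_environment = auxdata )
```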
| null | CC BY-SA 3.0 | null | 2010-11-17T10:59:33.970 | 2016-02-17T15:13:51.587 | 2016-02-17T15:13:51.587 | 603 | 603 | null |
4628 | 2 | null | 4612 | 6 | null | I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of application and finally going into theoretical details, although the theoretical parts are a bit superficial.
| null | CC BY-SA 2.5 | null | 2010-11-17T11:51:36.763 | 2010-11-17T11:51:36.763 | null | null | 1766 | null |
4629 | 2 | null | 4551 | 7 | null | In psychology, the cardinal sin (for me) is the use of principal components analysis to examine the hypothesised latent structure underlying a psychometric test.
Not testing for normality before using tests which assume this.
| null | CC BY-SA 2.5 | null | 2010-11-17T12:03:29.180 | 2010-11-17T12:03:29.180 | null | null | 656 | null |
4630 | 2 | null | 4623 | 2 | null |
- For the 1D continuous uniform distribution U(a,b), the uniformly minimum variance unbiased (UMVU) estimates of a and b can be obtained in closed form as a straightforward example of maximum spacing estimation. I can't see any reason that applying this separately for each dimension wouldn't give you UMVU estimates of all parameters of your multivariate uniform distribution.
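A quick sketch of that per-coordinate recipe, using the standard closed-form unbiased endpoint estimators based on the sample minimum and maximum (the simulated box below is purely illustrative):
```
# Per-coordinate unbiased endpoint estimates for a uniform box:
# for U(a,b), a_hat = (n*min - max)/(n-1) and b_hat = (n*max - min)/(n-1).
estimate_box <- function(X) {
  apply(X, 2, function(v) {
    n <- length(v)
    c(a = (n * min(v) - max(v)) / (n - 1),
      b = (n * max(v) - min(v)) / (n - 1))
  })
}
X <- cbind(runif(200, 1, 4), runif(200, -2, 0), runif(200, 5, 9))   # simulated sample
estimate_box(X)
```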
| null | CC BY-SA 2.5 | null | 2010-11-17T13:14:30.247 | 2010-11-17T13:14:30.247 | null | null | 449 | null |
4631 | 2 | null | 4620 | 1 | null | A frequency table shows the values that a variable takes in a sample.
In other words, it shows the number of individuals who said they prefer coffee house A. That count would be the frequency of coffee house A.
Axes:
X: coffee houses
Y: number of individuals
| null | CC BY-SA 2.5 | null | 2010-11-17T13:42:36.827 | 2010-11-17T13:42:36.827 | null | null | 1746 | null |
4632 | 2 | null | 4612 | 3 | null | One at a somewhat lower level of mathematical sophistication than Wooldridge (less dense, more pictures), but a bit more up to date on some of the fast-moving areas:
Murray, Michael P. Econometrics: A Modern Introduction. Addison Wesley, 2006. 976 pp. ISBN [9780321113610](http://en.wikipedia.org/w/index.php?title=Special%3ABookSources&isbn=9780321113610)
Seems that it's not available for preview on the web and the publisher is out of stock, but you can view [pdfs of 11 web extensions](http://wps.aw.com/aw_murray_economtrcs_1/37/9551/2445250.cw/index.html) to get an idea of its style.
| null | CC BY-SA 2.5 | null | 2010-11-17T13:49:06.863 | 2010-11-17T13:49:06.863 | null | null | 449 | null |
4633 | 2 | null | 4610 | 5 | null | As @onestop writes, the GM and GSD are natural parameters for a lognormal distribution. However, they can be estimated from the arithmetic mean ($\mu$) and (usual) SD ($\sigma$) just by solving the [formulas](http://en.wikipedia.org/wiki/Log-normal_distribution)
$$\mu = \exp(\nu+ \tau^2/2) \text{ and}$$
$$\sigma^2 = \left( \exp(\tau^2)-1\right) \exp(2 \nu+ \tau^2) = \left( \exp(\tau^2)-1\right) \mu^2$$
for $\nu$ (the logarithm of the GM) and $\tau$ (the logarithm of the GSD). Evidently
$$\tau^2 = \log{\frac{\sigma^2 + \mu^2}{\mu^2}} \text{ and}$$
$$\nu = \frac{1}{2} \log{\frac{\mu^4}{\sigma^2 + \mu^2}}.$$
The distribution of the logarithms of the particle sizes is Normal with mean $\nu$ and variance $\tau^2$, reducing your problem to the elementary one of computing (or looking up) values of the cumulative normal distribution.
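For example, a small R sketch of this conversion and of the 100nm binning the question asks for (the mean, SD, and droplet count are the question's own illustrative numbers):
```
mu <- 425; sigma <- 15; n_droplets <- 10000       # arithmetic mean, SD, total count
tau2 <- log((sigma^2 + mu^2) / mu^2)              # variance of log(size)
nu   <- 0.5 * log(mu^4 / (sigma^2 + mu^2))        # mean of log(size)
edges  <- seq(100, 1000, by = 100)                # bin edges in nm
cdf    <- plnorm(edges, meanlog = nu, sdlog = sqrt(tau2))
counts <- round(n_droplets * diff(c(0, cdf)))     # droplets between successive edges
cbind(upper_nm = edges, count = counts)
```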
| null | CC BY-SA 2.5 | null | 2010-11-17T14:02:40.130 | 2010-11-17T14:02:40.130 | null | null | 919 | null |
4634 | 2 | null | 4612 | 4 | null | "[Applied Econometrics with R](http://www.springer.com/economics/econometrics/book/978-0-387-77316-2)" (Kleiber, Zeileis 2008) is a good introduction using R, and is accompanied by the [AER package](http://cran.r-project.org/web/packages/AER/index.html).
| null | CC BY-SA 2.5 | null | 2010-11-17T14:15:22.657 | 2010-11-17T14:15:22.657 | null | null | 5 | null |
4635 | 2 | null | 4612 | 9 | null | "[Mostly Harmless Econometrics: An Empiricist's Companion](http://rads.stackoverflow.com/amzn/click/0691120358)" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well worth reading once you understand the basics.
| null | CC BY-SA 2.5 | null | 2010-11-17T14:17:55.447 | 2010-11-17T16:30:37.287 | 2010-11-17T16:30:37.287 | 5 | 5 | null |
4636 | 2 | null | 4620 | 1 | null | You can ask consumers a question along the following lines:
>
Out of every 100 visits to a coffee house how many times do you visit each one of the following coffee house? Please ensure that the total adds up to 100.
A. Option 1
B. Option 2.. etc
You can then normalize to get the percentage of times each consumer goes to each one of the coffee houses. A simplistic analysis would then assume that each consumer visits a coffee house the same number of times during a year and simply compute the average of the percentages across all consumers to get an estimate of market shares. A more sophisticated analysis would compute weighted averages, with the weights being the number of times that a consumer goes to coffee shops. You can get the weights as well via a survey question by asking them how many times they visit coffee shops in a year, etc.
There are other ways to estimate market shares but the above seems to be a simple yet reasonable approach.
| null | CC BY-SA 2.5 | null | 2010-11-17T14:21:11.723 | 2010-11-17T14:21:11.723 | 2020-06-11T14:32:37.003 | -1 | null | null |
4637 | 2 | null | 4620 | 3 | null | I'd suggest several Qs along these lines:
- Which is the one coffee house you go to most often?
- What other coffee houses do you visit more than once a month (say)? [Probe to negative, i.e. keep asking ".. and which others do you visit more than once a month?" ".. and which others?" until interviewee answers "none"/"that's it" / "no others" ]
- What other coffee houses can you recall having ever visited? [Probe to negative]
- [For each coffee house mentioned in (1) or (2)]: "How many times have you visited Cafe Y in the last month?"
Tabulating responses to (1) gives "X% of people said they go to Cafe Y most often" for each Y, with sum <=100% (can be <100% as some people never visit coffee houses).
Combining responses to (1) & (2) gives "X% of people go to Cafe Y more than once a month" for each Y, with sum > 100% but still with denominator N for each coffee house, where N is number of respondents. Similarly combining (1), (2) & (3) gives "X% report ever having visited Cafe Y".
Taking the mean of (4) for each coffee house gives you "Cafe Y was visited an average of m times in the preceeding month". Denominator is N again (remember to include those respondents who didn't mention Cafe Y as zeroes).
In principle you could refine things further, but asking more Qs than this may increase your respondent quit rate.
| null | CC BY-SA 2.5 | null | 2010-11-17T14:21:31.443 | 2010-11-17T14:21:31.443 | null | null | 449 | null |
4638 | 2 | null | 4619 | 3 | null | I think that for subject-specific characteristics, like demographic data, you can proceed the usual way (t-test, etc.). This will help show that your samples don't differ on these variables. Regarding self-reported attitude data: if you have very few items, skip to step 2; otherwise step 1 might be appropriate.
1. Assessing measurement equivalence
Rather than saying that the two populations (or actually, samples) are "the same", I would say you have to show that your two questionnaires are assessing the same construct(s). This is what is done in cross-cultural surveys or international clinical trials where health-related quality of life is used as a secondary endpoint, for example. In each case, we have a set of items that purports to assess different dimensions, and we want to demonstrate whether we are measuring individuals in the same way irrespective of their country. When dealing with uni- or multidimensional scales, this is known as measurement invariance in psychometrics; that is, you want to show that the factorial structure is comparable between the two groups. But the same remark would apply as well if we were considering longitudinal data (I interpret your question as involving different samples at each time point). A multi-group confirmatory factor analysis is appropriate in this case. Standard references include:
- Meredith, W (1993). Measurement invariance, factor analysis, and factorial invariance. Psychometrika, 58, 525-543.
- Vandenberg, RJ and Lance, CE (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3, 4-70.
In R, the [lavaan](http://cran.r-project.org/web/packages/lavaan/index.html) package provides facilities for that kind of analysis, but see the documentation: [lavaan: an R package for structural equation modeling and more](http://users.ugent.be/~yrosseel/lavaan/lavaan_usersguide_0.3-1.pdf) (§6.2). Otherwise, you have to resort to [Mplus](http://www.statmodel.com/) or other good software for SEMs. [Studying Measurement Invariance Using Confirmatory Factor Analysis](http://www.unc.edu/~rcm/psy236/measinv.pdf) provides an illustration with [LISREL](http://www.ssicentral.com/lisrel/) syntax.
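For orientation, a minimal multi-group CFA sketch in lavaan might look like the following (it uses lavaan's built-in HolzingerSwineford1939 example data; in your case the grouping variable would distinguish the samples from survey A and survey B, and the model would list your own items):
```
library(lavaan)

model <- 'visual =~ x1 + x2 + x3'   # one latent factor measured by three items
fit.config <- cfa(model, data = HolzingerSwineford1939, group = "school")
fit.metric <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = "loadings")
fit.scalar <- cfa(model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts"))
anova(fit.config, fit.metric, fit.scalar)   # chi-square difference tests across invariance levels
```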
You may want to consider data from 6 to 12 months (to collect 1 or 2 waves for survey A). After that, I think you can just pool your data.
2. Assessing group comparability
Now, if you cannot define a clear construct common to those two questionnaires, or if you have so few items that it would make no sense to consider a scale, then you can rely on basic group statistics for each item (using e.g., t-test, trend test for ordinal data, tests for nominal data, etc.). In this case, you are essentially studying between-group differences. This basically tells you whether (aggregated) scores differ, but not whether items are perceived as having the same meaning (or underlying the same construct) across the two groups.
| null | CC BY-SA 2.5 | null | 2010-11-17T14:36:48.460 | 2010-11-17T18:22:13.840 | 2010-11-17T18:22:13.840 | 930 | 930 | null |
4639 | 1 | 4644 | null | 21 | 61614 | In R, the `drop1` command outputs something neat.
These two commands should get you some output:
`example(step)#-> swiss`
`drop1(lm1, test="F")`
Mine looks like this:
```
> drop1(lm1, test="F")
Single term deletions
Model:
Fertility ~ Agriculture + Examination + Education + Catholic +
Infant.Mortality
Df Sum of Sq RSS AIC F value Pr(F)
<none> 2105.0 190.69
Agriculture 1 307.72 2412.8 195.10 5.9934 0.018727 *
Examination 1 53.03 2158.1 189.86 1.0328 0.315462
Education 1 1162.56 3267.6 209.36 22.6432 2.431e-05 ***
Catholic 1 447.71 2552.8 197.75 8.7200 0.005190 **
Infant.Mortality 1 408.75 2513.8 197.03 7.9612 0.007336 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
What does all of this mean? I'm assuming that the "stars" help in deciding which input variables are to be kept.
Looking at the output above, I want to throw away the "Examination" variable and focus on the "Education" variable; is this interpretation correct?
Also, for the AIC value, lower is better, yes?
Ed. Please note the Community Wiki answer below and add to it if you see fit, to clarify this output.
| Interpreting the drop1 output in R | CC BY-SA 2.5 | null | 2010-11-17T15:59:25.153 | 2019-08-01T17:16:21.440 | 2010-11-18T10:53:37.813 | 1994 | 1994 | [
"r",
"regression",
"self-study",
"stepwise-regression"
]
|
4640 | 1 | null | null | 11 | 38886 | In R, the `step` command is supposedly intended to help you select the input variables to your model, right?
The following comes from
`example(step)#-> swiss` &
`step(lm1)`
```
> step(lm1)
Start: AIC=190.69
Fertility ~ Agriculture + Examination + Education + Catholic +
Infant.Mortality
Df Sum of Sq RSS AIC
- Examination 1 53.03 2158.1 189.86
<none> 2105.0 190.69
- Agriculture 1 307.72 2412.8 195.10
- Infant.Mortality 1 408.75 2513.8 197.03
- Catholic 1 447.71 2552.8 197.75
- Education 1 1162.56 3267.6 209.36
Step: AIC=189.86
Fertility ~ Agriculture + Education + Catholic + Infant.Mortality
Df Sum of Sq RSS AIC
<none> 2158.1 189.86
- Agriculture 1 264.18 2422.2 193.29
- Infant.Mortality 1 409.81 2567.9 196.03
- Catholic 1 956.57 3114.6 205.10
- Education 1 2249.97 4408.0 221.43
Call:
lm(formula = Fertility ~ Agriculture + Education + Catholic + Infant.Mortality, data = swiss)
Coefficients:
(Intercept) Agriculture Education
62.1013 -0.1546 -0.9803
Catholic Infant.Mortality
0.1247 1.0784
```
Now, when I look at this, I guess the last Step table is the model which we should use? The last few lines include the "Call" function, which describes the actual model and what input variables it includes, and the "Coefficients" are the actual parameter estimates for these values, right? So this is the model I want, right?
I'm trying to extrapolate this to my project, where there are more variables.
| Interpreting the step output in R | CC BY-SA 3.0 | null | 2010-11-17T16:24:18.900 | 2017-03-08T11:22:22.217 | 2012-11-13T05:51:04.160 | 16705 | 1994 | [
"r",
"self-study",
"stepwise-regression"
]
|
4641 | 2 | null | 4639 | 13 | null | `drop1` gives you a comparison of models based on the AIC criterion, and when using the option `test="F"` you add a "type II ANOVA" to it, as explained in the help files. As long as you only have continuous variables, this table is exactly equivalent to `summary(lm1)`, as the F-values are just those T-values squared. P-values are exactly the same.
So what to do with it? Interpret it in exactly that way: it expresses, in a sense, whether the model without that term is "significantly" different from the model with that term. Mind the quotes around "significantly", as the significance here cannot be interpreted the way most people think (the multiple-testing problem and all...).
And regarding the AIC: yes, the lower the better. The AIC is a value that applies to the model, not to a variable. So the best model from that output would be the one without the variable Examination.
Mind you, the calculations of both the AIC and the F statistic differ from those of the R functions `AIC(lm1)` and `anova(lm1)`, respectively. For `AIC()`, that information is given on the help page of `extractAIC()`. For the `anova()` function, it's rather obvious that type I and type II SS are not the same.
I'm trying not to be rude, but if you don't understand what is explained in the help files there, you shouldn't be using the function in the first place. Stepwise regression is incredibly tricky, jeopardizing your p-values in a most profound manner. So again, do not base your decisions on the p-values. Your model should reflect your hypothesis and not the other way around.
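As a quick check of the F-versus-t equivalence mentioned at the top of this answer, here is a sketch reusing the `lm1` object created by `example(step)` (as in the question):
```
example(step)                                         # creates lm1 from the swiss data
t_vals <- summary(lm1)$coefficients[-1, "t value"]    # drop the intercept row
f_tab  <- drop1(lm1, test = "F")[-1, ]                # drop the <none> row
cbind(F = f_tab[["F value"]], t_squared = t_vals^2)   # the two columns should agree
```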
| null | CC BY-SA 2.5 | null | 2010-11-17T16:28:20.420 | 2010-11-17T17:07:15.003 | 2010-11-17T17:07:15.003 | 1124 | 1124 | null |
4642 | 1 | 4654 | null | 9 | 1890 | I am familiar with meta-analysis and meta-regression techniques (using the R package `metafor` from Viechtbauer), but I recently stumbled on a problem I can't easily solve. Say we have a disease that can go from mother to unborn child, and it has been studied already a number of times. Mother and child were tested for the virus right after birth. As an unborn child cannot get the virus other than from the mother, one would expect cross-tabulations like:
```
           | neg kid | pos kid
mother neg |    A    |   C=0
-----------|---------|--------
mother pos |    B    |    D
```
Obviously using odds ratios (OR) gives errors as one would be dividing by 0. Same for relative risks :
$\frac{A/(A+B)}{0/(0+D)}$
Now the researchers want to test the (senseless) hypothesis of whether infection of the child is related to infection of the mother (which seems very, very obvious). I'm trying to reformulate the hypothesis and come up with something that makes sense, but I can't really find anything.
To complicate things, some kids with negative moms actually are positive, probably due to infection in the first week. So I only have a number of studies where C = 0.
Does anybody have an idea on how to statistically summarize the data of different studies following such a pattern? Links to scientific papers are also more than welcome.
| Meta analysis on studies with 0-frequency cells | CC BY-SA 2.5 | null | 2010-11-17T17:17:21.297 | 2012-07-29T13:50:27.307 | 2010-11-17T23:12:37.027 | 1124 | 1124 | [
"meta-analysis",
"odds-ratio",
"relative-risk"
]
|
4643 | 1 | 4646 | null | 5 | 214 | I originally asked this on a machine learning site, but one of the responses made me think that maybe this site is more suitable.
Suppose you have two weighted coins, and every day you flip each one a number of times and record the total number of heads. So on the tenth day you might have flipped coin A 106 times, coin B 381 times, and recorded 137 heads. Supposing your goal is to figure out the weights of each coin, is it reasonable to just regress the number of heads on the number of flips for each coin? E.g., something along the lines of:
num_heads ~ num_flips_A + num_flips_B + intercept
However, it doesn't seem to make sense to have an intercept term in this scenario (it is negative for my data, which is confusing), so I tried adding -1 to the formula to drop the intercept, and it seemed to yield reasonable results. My first question is whether this approach is a good one.
Now, suppose that you suspect the existence of a third coin, C, that someone else is flipping unbeknownst to you, and the heads for that coin are getting mixed up in your count. The number of flips for this coin is not recorded, but you do not particularly care about its weight - it's more of a confounding factor. Then would it be reasonable to fit a similar regression, but constrain the intercept to be positive?
Thanks for any help
| Toy regression question with latent variables | CC BY-SA 2.5 | null | 2010-11-17T18:06:56.853 | 2010-11-18T14:25:57.537 | 2010-11-18T14:25:57.537 | 919 | 1777 | [
"regression",
"latent-variable"
]
|
4644 | 2 | null | 4639 | 6 | null | For reference, these are the values that are included in the table:
`Df` refers to [Degrees of freedom](http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)), "the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary."
The `Sum of Sq` column refers to the [sum of squares](http://en.wikipedia.org/wiki/Sum_of_squares) (or more precisely [sum of squared deviations](http://en.wikipedia.org/wiki/Squared_deviations)). In short this is a measure of the amount that each individual value deviates from the overall mean of those values.
`RSS` is the [Residual Sum of Squares](http://en.wikipedia.org/wiki/Residual_sum_of_squares). These are a measure of how much the predicted value of the dependent (or output) variable varies from the true value for each data point in the set (or more colloquially: each "line" in the data table).
`AIC` is the [Akaike information criterion](http://en.wikipedia.org/wiki/Akaike_information_criterion) which is generally regarded as "too complex to explain" but is, in short, a measure of the goodness of fit of an estimated statistical model. If you require further details, you will have to turn to dead trees with words on them (i.e., books). Or Wikipedia and the resources there.
The `F value` is used to perform what's called an [F-test](http://en.wikipedia.org/wiki/F-test) and from it is derived the `Pr(F)` value, which describes how likely (or Probable = Pr) that F value is. A Pr(F) value close to zero (indicated by `***`) is indicative of an input variable that is in some way important to include in a good model, that is, a model that does not include it is "significantly" different than the one that does.
All of these values are, in the context of the `drop1` command, calculated to compare the overall model (including all the input variables) with the model resulting from removing that one specific variable per each line in the output table.
Now, if this can be improved upon, please feel free to add to it or clarify any issues. My goal is only to clarify and provide a better "reverse lookup" reference from the output of an R command to the actual meaning of it.
| null | CC BY-SA 4.0 | null | 2010-11-17T18:07:33.793 | 2019-08-01T17:16:21.440 | 2019-08-01T17:16:21.440 | -1 | 1994 | null |
4645 | 2 | null | 4620 | 2 | null | I think there is a big difference (both practical and how you approach the problem) in popularity vis a vis market share. Since market share is more analytically challenging, I'll focus on that.
In my opinion, the best solution to this problem is going to involve a set of stated preference experiments, more formally known as [discrete choice modeling](http://en.wikipedia.org/wiki/Discrete_choice). DCM has applications in a variety of contexts all involving a consumer and some choice they need to make. That choice may be what coffee shop to go to, what computer to buy, or how to get to work in the morning. If you think about the important attributes that may impact consumer choice when picking a coffee shop to visit, you will be able to design experiments to capture this information. This could include price, offerings, proximity to work, brand loyalty, etc. You would want to design a set of experiments that tests each of these attributes independently of one another and then use one of the DCM techniques to develop coefficient estimates for those attributes. Turning that into a market share simulator is a relatively easy task, though model validation is always an issue.
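As a rough, illustrative sketch only (not from the original answer): a simple multinomial logit — one common DCM workhorse — can be fit in R along the following lines. The data frame `choices` and the attribute names are invented, and a proper alternative-specific (conditional logit) setup would need a dedicated choice-modelling package (for example, the mlogit package in R).

```
library(nnet)  # multinom() fits a multinomial logit
# 'choices' is a hypothetical data frame: one row per respondent/task,
# 'shop' is the chosen alternative, the remaining columns are attributes.
fit <- multinom(shop ~ price + distance + loyalty, data = choices)
summary(fit)   # the coefficient estimates would feed a simple share simulator
```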
In my work, the "seminal" reference text is by [Ben Akiva and Lerman](http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8271)
Ken Train's book is also a great [resource](http://www.listinet.com/bibliografia-comuna/Cdu339-267B.pdf)
Finally, [Sawtooth Software](http://www.sawtoothsoftware.com/) is a fairly large player in this market and they have a variety of tools that can assist in the design and analysis of DCM. I don't use their tools that often however.
Another method that you may be able to adapt is the [Van Westendorp method](http://en.wikipedia.org/wiki/Van_Westendorp%27s_Price_Sensitivity_Meter). We recently used a set of questions to validate our DCM for a high-end electronics study we performed. Van Westendorp was quick, easy, and provided instant feedback for our client. It certainly has its drawbacks, but that's for another discussion.
| null | CC BY-SA 2.5 | null | 2010-11-17T18:47:25.040 | 2010-11-17T18:47:25.040 | null | null | 696 | null |
4646 | 2 | null | 4643 | 5 | null | Let's take the second question first: in most cases the presence of C makes the parameters unidentifiable and there's no way to estimate anything. Constraining the intercept to be positive won't help at all.
The first problem can be solved by Maximum Likelihood. After all, writing $p$ for the probability that a flip of A is heads, $q$ for the probability that B is heads, and $k_i$, $n_i$, and $m_i$ for the numbers of heads, number of flips of A, and number of flips of B in the $i^\text{th}$ toss, respectively, we know
$$\Pr[k_i] = \sum_{j=0}^{k_i}{{{n_i}\choose{j}}{{m_i}\choose{k_i-j}}p^j(1-p)^{n_i-j}q^{k_i-j}(1-q)^{m_i-k_i+j}}$$
(taking ${r}\choose{s}$ to be zero whenever $r \lt s$ or $s \lt 0$). The product of these expressions (over $i$) is the likelihood to be maximized by varying $p$ and $q$ subject to $0 \le p, q \le 1$.
Maximizing this likelihood for large amounts of data does not look appetizing. If we assume both $p$ and $q$ are sufficiently far from $0$ and $1$ and that the numbers of coin flips are usually large enough that all of $n_i p$, $n_i(1-p)$, $m_i q$, and $m_i (1-q)$ exceed $5$ or so, we can apply the Normal approximation to the Binomial. This tells us that, to a good approximation, the distribution of $k_i$ is Normal with mean $p n_i + q m_i$ and variance $p(1-p) n_i + q(1-q) m_i$. Thus, weighted least squares regression (WLS) of $k$ on $(n,m)$
without any intercept term is justified. The weights are inversely proportional to these variances. However, the variances depend on the solution. Several approaches immediately come to mind:
- Use ML based on the Normal approximation. This likelihood is not as messy as the (more accurate) Binomial likelihood.
- Guess initial values for $(p, q)$, use these to compute the weights, and perform WLS to update the estimates of $(p, q)$. Replace the initial values with these estimates and iterate until convergence occurs (one hopes); a small sketch of this appears after the list.
- Obtain Bayesian estimates for the parameters.
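Here is a minimal R sketch of that second (iterated WLS) approach; the simulated data, starting values, and convergence tolerance are my own choices, not part of the original argument:

```
# Simulate daily flips of two coins with unknown head probabilities p and q
set.seed(1)
p <- 0.3; q <- 0.7                          # true (unknown) values
n <- rpois(200, 150); m <- rpois(200, 300)  # flips of A and B on each day
k <- rbinom(200, n, p) + rbinom(200, m, q)  # total heads recorded

# Iterated WLS: guess (p, q), compute the variance-based weights, refit, repeat
p.hat <- q.hat <- 0.5
for (iter in 1:100) {
  w   <- 1 / (p.hat * (1 - p.hat) * n + q.hat * (1 - q.hat) * m)
  fit <- lm(k ~ n + m - 1, weights = w)     # no intercept term
  new <- unname(pmin(pmax(coef(fit), 1e-6), 1 - 1e-6))
  if (max(abs(new - c(p.hat, q.hat))) < 1e-8) break
  p.hat <- new[1]; q.hat <- new[2]
}
c(p.hat, q.hat)   # should be close to (0.3, 0.7)
```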
---
## Edit
Assuming, as suggested in a comment, that C is flipped approximately the same number of times (say $l$) in each trial, let's analyze this situation using the Normal approximation. Letting the expectation of C be $r$, the expectation of $k_i$ becomes $p n_i + q m_i + r l$ with variance $p(1-p)n_i + q(1-q)m_i + r(1-r)l$. We can consider playing the iterated WLS game by estimating $p$, $q$, and $\rho = r l$ with WLS (constraining the intercept to be nonnegative as originally suggested) and updating the previous values of the parameters with the new estimates $\hat{p}$, $\hat{q}$, and $\hat{\rho} / l$. Thus, when $l$ is known, it appears that $p$, $q$, and $r$ are indeed identifiable and that--assuming the iterative method works--there is an effective algorithm for finding them. Note, too, that the Normal approximation should work well provided the expectation of the $k_i$ is typically between 5 and $n_i + m_i + l - 5$; that is, some of $p$, $q$, and $r$ can be quite small or even zero and this still should work provided that neither heads nor tails is a rare occurrence among the totals.
When $l$ is not known or varies we can still estimate $r l$ provided we have a reasonable estimate for the contribution of C to the variance, $r(1-r)l$. "Reasonable" means that when we vary $l$ and $r$ within meaningful ranges, the changes in the total variances $p(1-p)n_i + q(1-q)m_i + r(1-r)l$ do not alter the parameter estimates appreciably. This would be the case, for instance, when the contribution of C to the total number of heads is consistently relatively small. (We don't really care about $r l$--it's a nuisance parameter--but if it is not accurately estimated then there must be some bias in the estimation of $p$ and $q$.)
| null | CC BY-SA 2.5 | null | 2010-11-17T20:56:40.703 | 2010-11-18T14:22:23.357 | 2010-11-18T14:22:23.357 | 919 | 919 | null |
4647 | 2 | null | 4642 | 6 | null | Usually 0's imply that you have to use exact methods instead of relying on asymptotic methods such as meta-analysis with odds ratios. If you are willing to assume that the study effect is fixed, an exact Mantel-Haenszel test is the way to go. For an exact random-effects analysis, you have to use a binomial regression model with a random study effect. I have done both in a recent applied paper, but the methods section there would not be much more helpful to you, as it essentially conveys this information.
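For illustration only (the counts below are invented, not from any study), the fixed-effect version can be carried out in R with the exact Mantel-Haenszel test; the random-effects version would be a binomial GLMM with a random study intercept (e.g. lme4::glmer).

```
# 2 x 2 x K array of counts, one (exposure x outcome) table per study
tab <- array(c(10, 2, 40, 48,   # study 1
                8, 0, 35, 57,   # study 2 (note the zero cell)
               12, 1, 30, 47),  # study 3
             dim = c(2, 2, 3),
             dimnames = list(mum = c("pos", "neg"),
                             kid = c("pos", "neg"),
                             study = 1:3))
mantelhaen.test(tab, exact = TRUE)  # exact conditional test, fixed study effect
```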
Edit
This paper is not applied, but this is where I got the idea from when confronted with the same issue:
[1] Hans C. van Houwelingen, Lidia R. Arends, and Theo Stijnen. Advanced methods in meta-analysis: multivariate approach and meta-regression. Statistics in Medicine, 2002; 21:589–624
Here is the paper where I used this approach (it is not apparent in the abstract, but is mentioned in the methods section):
[2] Trivedi H, Nadella R, Szabo A. Hydration with sodium bicarbonate for the prevention of contrast-induced nephropathy: a meta-analysis of randomized controlled trials. Clin Nephrol. 2010 Oct;74(4):288-96.
| null | CC BY-SA 2.5 | null | 2010-11-17T21:12:39.227 | 2010-11-23T21:43:38.687 | 2010-11-23T21:43:38.687 | 279 | 279 | null |
4648 | 2 | null | 4640 | 2 | null | The part of the printout at the end is the model you are left with. You can also get it if you capture the value of the `step` function:
```
final.mod <- step(lm1)
final.mod
```
| null | CC BY-SA 2.5 | null | 2010-11-17T21:16:43.250 | 2010-11-17T21:16:43.250 | null | null | 279 | null |
4649 | 1 | null | null | 1 | 3351 | If the outcome of a market could be expressed as a probability it might be:
Outcome - Description - Probability as a %
- Up a lot 20% (a move of say more than 10%)
- Down a lot 20%
- Up a bit 20% (a move of between 0 and 10%)
- Down a bit 20%
- Sideways 20%
So the probability of any single outcome is 1/5 or 20%.
Please could someone educate me on the math of adding another market, and subsequent markets?
| How to combine probabilities? | CC BY-SA 2.5 | null | 2010-11-17T21:17:44.603 | 2010-11-17T22:22:02.307 | 2010-11-17T21:53:57.087 | null | null | [
"probability"
]
|
4650 | 2 | null | 4649 | 2 | null | OK, let me say something about the multinomial distribution since I brought it up.
Suppose I have 2 dice both with 5 faces each. Assuming that the dice are fair, the probability of one particular face turning up with one die is indeed $1/5$ or 20%.
Now, let's ask what happens when we throw two dice and ask ourselves what the probability is of having them both show the same number. Two ones could come up with a probability of $1/25$, two twos could turn up with the same probability, etc., so the total probability would thus be $5 \times 1/25$ or $1/5$. A quicker way to see this: whatever comes out for the first die sets the bar for the second die, so the second die has a $1/5$ chance of coming out right.
Let's ask the same question for three dice. The probability will be $1/25$ by analogous reasoning.
The multinomial distribution allows us to compute much more. It can compute the probability of a particular number combination coming out if we don't care about the order in which the numbers come out. This probability for $n$ fair dice with $p$ sides each is
$$\mathcal{P}(n_1,\ldots,n_p)=\frac{n!}{n_1!n_2!\ldots n_p!} \frac{1}{p^n}$$
Take for instance the probability for throwing (1,1,2) with 3 5-sided dice, that is two ones and one two:
$$\mathcal{P}(2,1,0,0,0)=\frac{3!}{2!1!0!0!0!} \frac{1}{5^3} = \frac{3}{125}$$
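A one-line numerical check of this in R (my addition):

```
dmultinom(c(2, 1, 0, 0, 0), prob = rep(1/5, 5))  # 3/125 = 0.024
```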
---
What do you mean by market here? You mean something like the Dow Jones Industrial average?
If that's what you mean, then the way a probability would be expressed would be something like this. Denoting the Dow Jones Industrial Average as a function of time by $D(t)$, we can ask what the probability is of a rise of 10% over a certain time interval $\Delta t$. Say this probability is 5%. This would be expressed as:
$$ \mathbb{P}[D(t+\Delta t)-D(t) > 0.1 D(t)] = 0.05$$
Now, this is just notation. No actual information has been put in. If you're asking how can we model the Dow Jones Industrial Average, I'm afraid this is not something that can be given a quick answer to. I'd suggest you start reading up on the subject because it involves a lot of math. Maybe an easy start:
[http://en.wikipedia.org/wiki/Stock_market](http://en.wikipedia.org/wiki/Stock_market)
Expect the learning curve to be steep though.
The important thing to understand in your case, though, is that just because you can express something as a %, that doesn't mean it is a probability. It could correspond to a relative change, for instance a relative change in the value of a stock. That is not a probability. But there is a probability associated with the occurrence of that change.
| null | CC BY-SA 2.5 | null | 2010-11-17T21:36:42.957 | 2010-11-17T22:22:02.307 | 2010-11-17T22:22:02.307 | 2036 | 2036 | null |
4652 | 1 | null | null | 7 | 3559 | I'm trying to run a basic gradient descent algorithm with an absolute loss function. I can get it to converge to a good solution, but it requires a much lower step size and more iterations than had I used squared loss. Is this normal? Should I expect absolute loss to take longer to reach a good solution, or to oscillate around a solution more than, say, squared loss?
| Gradient descent oscillating a lot. Have I chosen my step direction incorrectly? | CC BY-SA 2.5 | null | 2010-11-17T23:10:31.310 | 2010-11-19T05:14:16.130 | 2010-11-19T05:03:38.713 | 2023 | 2023 | [
"optimization",
"loss-functions"
]
|
4654 | 2 | null | 4642 | 5 | null | Seems to me this is one of the rare situations where it might well be better to meta-analyse risk differences rather than risk ratios or odds ratios. The risk difference $P(Kid_+ | Mum_+) - P(Kid_+|Mum_-)$ is estimated in each study by $D/(B+D) - C/(A+C)$. That should be finite in all studies even when $C=0$, so there should be no problem meta-analysing it.
I agree it seems pretty pointless to consider testing the hypothesis that this risk difference is zero. But it's meaningful to estimate how large it is, i.e. how much more likely a kid is to have the virus when their mum has it than when their mum doesn't.
| null | CC BY-SA 2.5 | null | 2010-11-18T00:25:36.700 | 2010-11-18T00:36:46.683 | 2010-11-18T00:36:46.683 | 449 | 449 | null |
4655 | 1 | 4657 | null | 10 | 3082 | Background
notation: RV = random variable, $\mu =$ mean, $m =$ median
Jensen's Inequality considers the relationship between the mean of a function of an RV and the function of the mean of an RV.
If $f(x)$ is strictly convex:
$$\mu (f(x)) > f(\mu (x))\mathrm{\hspace{20mm}(1)}$$
Conversely, if $-f(x)$ is strictly convex:
$$\mu (f(x)) < f(\mu (x))$$
An analogous property of the median has been presented ([Merkle et al 2005](http://dx.doi.org/10.1016/j.spl.2004.11.010), [pdf](http://milanmerkle.com/documents/radovi/SPL-71.pdf)).
motivation
I have a nonlinear [function](http://www.esajournals.org/doi/full/10.1890/0012-9615%282001%29071%5B0557%3AAMFSVD%5D2.0.CO%3B2) of positive random variables.
In practice, I find that the function of the medians provides a much better estimate of the median of the function than does the estimate of the mean of the function from the function of the means. I am interested in learning the conditions for which this is true.
question
Under what conditions will the function of a median be closer to the median of a function than the mean of a function is to a function of the mean?
Specifically for what types of $f(x)$ and $x$ is
$$|\mu (f(x)) - f(\mu (x))| > |m (f(x)) - f(m (x))|$$
simulation results
I used an empirical approach (the one I know) to investigate this question for a function of a single variable:
Interestingly, for $x>0$,
$$m(x^2)\simeq m(x)^2$$
```
set.seed(1)
x<-cbind(rlnorm(100, 1), rbeta(100, 1, 5), rgamma(100,0.5,0.5))
quad <- function(x)x^2
median.x <- apply(x,2,quantile,0.5)
mean.x <- apply(x,2,mean)
colMeans(quad(x))
quad(mean.x)
apply(quad(x), 2, quantile, 0.5)
quad(median.x)
```
For a slightly more complicated function, my proposed inequality above is true
```
miscfn <- function(x) 1 + x + x^log(x^2) - exp(-2*x*5^x)
colMeans(miscfn(x))
miscfn(mean.x)
apply(miscfn(x), 2, quantile, 0.5)
miscfn(median.x)
abs(apply(miscfn(x),2,mean)-miscfn(mean.x)) > abs(apply(miscfn(x), 2, quantile, 0.5) - miscfn(median.x))
```
However, before I begin to use this observation in my work, I would like to know more about its conditions.
References
[Merkle et al 2005 Jensen's inequality for medians. Statistics & Probability Letters, Volume 71, Issue 3, 1 March 2005, Pages 277-281](http://dx.doi.org/10.1016/j.spl.2004.11.010)
| Is there a relationship between the median of a function of random variables and the function of the median of random variables? | CC BY-SA 2.5 | null | 2010-11-18T00:37:37.360 | 2010-12-01T05:56:23.050 | 2010-11-18T15:59:28.850 | 1381 | 1381 | [
"probability",
"mean",
"random-variable",
"mathematical-statistics",
"median"
]
|
4656 | 2 | null | 4652 | 5 | null | When you say 'an absolute loss function', do you mean you're using [least absolute deviations](http://en.wikipedia.org/wiki/Least_absolute_deviations) (LAD) instead of the more usual ordinary least squares (OLS)? As that Wikipedia article says, although LAD is more robust to outliers than OLS, it can be unstable and even have multiple solutions, so it doesn't seem that surprising if it's harder to find the minimum of the objective function even when there's only one.
If you're trying this because you're after some sort of [robust regression](http://en.wikipedia.org/wiki/Robust_regression), I think there are several more attractive alternatives than LAD.
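For what it's worth, a minimal R sketch (my addition) of both options on a built-in data set — the LAD fit via quantile regression and a common robust alternative:

```
library(quantreg)  # rq() with tau = 0.5 gives the least-absolute-deviations fit
library(MASS)      # rlm() gives a Huber-type robust fit
lad.fit    <- rq(stack.loss ~ Air.Flow + Water.Temp, tau = 0.5, data = stackloss)
robust.fit <- rlm(stack.loss ~ Air.Flow + Water.Temp, data = stackloss)
```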
| null | CC BY-SA 2.5 | null | 2010-11-18T00:49:28.300 | 2010-11-18T00:49:28.300 | null | null | 449 | null |
4657 | 2 | null | 4655 | 5 | null | Let the cdf of $x$ be denoted by $F_X(x)$. Thus, the median of $X$ denoted by $m_x$ satisfies:
$F_X(m_x)=0.5$
Consider $Y = X^2$. Thus, the cdf of $Y$ is given by:
$P(Y \le y) = P(X^2 \le y)$
In other words, the cdf of $Y$ is given by:
$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})$
The median for $Y$ denoted by $m_Y$ satisfies:
$F_Y(m_y)=0.5$
In other words, it should satisfy:
$F_X(\sqrt{m_y}) - F_X(-\sqrt{m_y}) = 0.5$
If $m_y = (m_x)^2$ then it must be that:
$F_X(m_x) - F_X(-m_x) = 0.5$
The above with the first equation suggests that the relationship $m(x^2) = m(x)^2$ will only hold if $F_X(-m_x) = 0$. Thus, the relationship holds only if the support of $X$ is positive.
The examples you examined in your code have a positive support and hence you find that $m(x^2) = m(x)^2$. If you try a uniform distribution (e.g., U(-1,1)) you will find that $m(x^2) \ne m(x)^2$
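A quick numerical check of this in R (my addition, using an odd sample size so the sample median is an observed value):

```
set.seed(1)
n <- 1e5 + 1
x.pos <- rlnorm(n)                    # positive support
x.mix <- runif(n, -1, 1)              # support straddles zero
c(median(x.pos^2), median(x.pos)^2)   # equal
c(median(x.mix^2), median(x.mix)^2)   # not equal
```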
| null | CC BY-SA 2.5 | null | 2010-11-18T01:09:48.170 | 2010-11-18T01:09:48.170 | null | null | null | null |
4658 | 1 | 4664 | null | 13 | 5869 | I have two sets of data that are roughly centered around zero, but I suspect that they have different tails.
I know a few tests to compare the distribution to a normal distribution, but I would like to compare directly the two distributions.
Is there a simple test to compare the fatness of tail of 2 distributions?
Thanks
fRed
| Comparison of the tails of two sample distributions | CC BY-SA 2.5 | null | 2010-11-18T01:52:10.523 | 2019-09-14T01:10:45.783 | 2019-09-14T01:10:45.783 | 7290 | 1709 | [
"hypothesis-testing",
"distributions",
"kurtosis",
"fat-tails"
]
|
4659 | 1 | 4684 | null | 45 | 40623 | I'm more of a programmer than a statistician, so I hope this question isn't too naive.
The question arises when sampling program executions at random times. If I take N=10 random-time samples of the program's state, I could see function Foo being executed in, for example, I=3 of those samples. I'm interested in what that tells me about the actual fraction of time F that Foo is in execution.
I understand that I is binomially distributed with mean F*N. I also know that, given I and N, F follows a beta distribution. In fact I've verified by program the relationship between those two distributions, which is
```
cdfBeta(I, N-I+1, F) + cdfBinomial(N, F, I-1) = 1
```
The problem is I don't have an intuitive feel for the relationship. I can't "picture" why it works.
EDIT: All the answers were challenging, especially @whuber's, which I still need to grok, but bringing in order statistics was very helpful. Nevertheless I've realized I should have asked a more basic question: Given I and N, what is the distribution for F? Everyone has pointed out that it's Beta, which I knew. I finally figured out from Wikipedia ([Conjugate prior](http://en.wikipedia.org/wiki/Conjugate_prior)) that it appears to be `Beta(I+1, N-I+1)`. After exploring it with a program, it appears to be the right answer. So, I would like to know if I'm wrong. And, I'm still confused about the relationship between the two cdfs shown above, why they sum to 1, and if they even have anything to do with what I really wanted to know.
| Relationship between Binomial and Beta distributions | CC BY-SA 2.5 | null | 2010-11-18T02:51:42.343 | 2020-05-13T06:45:12.307 | 2010-11-20T14:59:45.877 | 1270 | 1270 | [
"binomial-distribution",
"beta-binomial-distribution",
"beta-distribution"
]
|
4660 | 2 | null | 4658 | 2 | null | The Chi Square test (Goodness-of-Fit test) will be very good at comparing the tails of two distributions since it is structured to compare two distributions by buckets of values (graphically represented by a histogram). And, the tails will consist in the far most buckets.
Even though this test focuses on the whole distribution, not just the tails, you can readily observe how much of the Chi Square value (divergence) is driven by the difference in the tails' fatness.
Note that the underlying histogram may actually give you, visually, much more information regarding the respective fatness of the tails than any test-related statistical significance. It is one thing to state that tail fatness differs statistically; it is another to observe it visually. They say a picture is worth a thousand words. Sometimes it is also worth a thousand numbers (which makes sense, given that graphs encapsulate all the numbers).
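A rough R sketch of this idea (my addition, with simulated data): bin both samples on common cut points, compare the counts, and inspect where the divergence comes from.

```
set.seed(1)
a <- rnorm(500)                 # reference sample
b <- rt(500, df = 3)            # same centre, heavier tails
breaks <- quantile(c(a, b), probs = seq(0, 1, 0.1))
tab <- rbind(table(cut(a, breaks, include.lowest = TRUE)),
             table(cut(b, breaks, include.lowest = TRUE)))
chisq.test(tab)                 # overall test of homogeneity
chisq.test(tab)$residuals       # the outermost bins show the tail contribution
```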
| null | CC BY-SA 2.5 | null | 2010-11-18T03:06:39.537 | 2010-11-18T19:16:24.867 | 2010-11-18T19:16:24.867 | 1329 | 1329 | null |
4661 | 2 | null | 4658 | 2 | null | How about fitting the [generalized lambda distribution](http://tolstoy.newcastle.edu.au/~rking/gld/) and bootstrapping confidence intervals on the 3rd and 4th parameters?
| null | CC BY-SA 2.5 | null | 2010-11-18T03:47:12.950 | 2010-11-18T03:47:12.950 | null | null | 364 | null |
4662 | 2 | null | 4640 | 7 | null | The last step table is indeed the end result of the "stepwise regression". The caveat here is that you usually don't want to use this approach when there is a principled way to specify your model. The call is the `lm` call that would produce the equation used in the final step, and the coefficients are the actual parameter estimates. Note that because you did not define a scope or direction parameter, `step` defaulted to a 'backwards' approach, in which variable terms are evaluated for dropping at each step: if dropping the selected variable decreases the AIC, it is removed from the model, and the entire process repeats until no single variable can be dropped. In your example, at the final step Fertility ~ Agriculture + Education + Catholic + Infant.Mortality produced an AIC of 189.86, and dropping any one of those variables did not result in a lower AIC (indicative of a better model fit).
| null | CC BY-SA 3.0 | null | 2010-11-18T06:46:07.077 | 2017-03-08T11:22:22.217 | 2017-03-08T11:22:22.217 | 138249 | 196 | null |
4663 | 1 | 4790 | null | 41 | 20236 | Least-angle regression and the lasso tend to produce very similar regularization paths (identical except when a coefficient crosses zero.)
They both can be efficiently fit by virtually identical algorithms.
Is there ever any practical reason to prefer one method over the other?
| Least-angle regression vs. lasso | CC BY-SA 2.5 | null | 2010-11-18T07:28:22.207 | 2019-05-24T13:57:33.993 | null | null | 439 | [
"regression",
"lasso"
]
|
4664 | 2 | null | 4658 | 6 | null | This question seems to belong to the same family as [this earlier one about testing whether two samples have the same skew](https://stats.stackexchange.com/q/1853/449), so [you may like to read my answer to that](https://stats.stackexchange.com/questions/1853/testing-two-independent-samples-for-null-of-same-skew/1953#1953). I believe that L-moments would be useful here too for the same reasons (specifically the L-kurtosis in this case).
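A small sketch of what that looks like in practice (my addition, assuming the lmom package is available):

```
library(lmom)
set.seed(1)
x <- rnorm(500)          # light-tailed sample
y <- rt(500, df = 3)     # heavier-tailed sample
samlmu(x)                # t_4 is the sample L-kurtosis
samlmu(y)
# Bootstrapping the difference in t_4 between the samples gives a simple comparison.
```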
| null | CC BY-SA 2.5 | null | 2010-11-18T08:46:46.627 | 2010-11-18T12:45:27.967 | 2017-04-13T12:44:25.243 | -1 | 449 | null |
4666 | 2 | null | 4663 | -1 | null | In some contexts a regularized version of the least squares solution may be preferable. The LASSO (least absolute shrinkage and selection operator) algorithm, for example, finds a least-squares solution with the constraint that $\|\beta\|_1$, the $L_1$-norm of the parameter vector, is no greater than a given value. Equivalently, it may solve an unconstrained minimization of the least-squares penalty with $\alpha \|\beta\|_1$ added, where $\alpha$ is a constant (this is the Lagrangian form of the constrained problem). This problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm. The $L_1$-regularized formulation is useful in some contexts due to its tendency to prefer solutions with fewer nonzero parameter values, effectively reducing the number of variables upon which the given solution is dependent. For this reason, the LASSO and its variants are fundamental to the field of compressed sensing.
| null | CC BY-SA 2.5 | null | 2010-11-18T09:46:57.010 | 2010-11-18T09:46:57.010 | null | null | 1808 | null |
4667 | 1 | 4672 | null | 6 | 222 | Through a project I am now working on (which I won't link to, so as not to make this an ad question), I came to realize how difficult it is to find R resources that are not in English.
Thus my question is - what resources do you know of, recommend, for learning R in non-English languages? (tutorials, blogs, wiki's, forums, and so on)
| R resources in non-English languages | CC BY-SA 2.5 | null | 2010-11-18T09:59:09.933 | 2011-07-02T20:04:11.583 | 2011-07-02T20:04:11.583 | null | 253 | [
"r",
"references"
]
|
4669 | 2 | null | 4667 | 3 | null | All RSS feeds I follow are in English actually, so I'll just point to tutorials available in French, or made by French researchers.
Apart from the [Contributed Documentation](http://cran.r-project.org/other-docs.html) on CRAN, I often browse the R website hosted at the [bioinformatics lab](http://pbil.univ-lyon1.fr/R/enseignement.html) in Lyon (France); it is mostly in French, but it also includes english material. I also like [Philippe Besse](http://www.math.univ-toulouse.fr/~besse/enseignement.html) resources (SAS + R).
| null | CC BY-SA 2.5 | null | 2010-11-18T10:10:55.083 | 2010-11-18T10:10:55.083 | null | null | 930 | null |
4670 | 2 | null | 4667 | 4 | null | There doesn't appear to be much in Russian, but here are a couple of links:
- http://herba.msu.ru/shipunov/software/r/r-ru.htm contains pointers to a number of Russian-language R resources;
- http://voliadis.ru/taxonomy/term/18 is a blog with some R content.
| null | CC BY-SA 2.5 | null | 2010-11-18T10:36:54.880 | 2010-11-18T10:36:54.880 | null | null | 439 | null |
4671 | 1 | 4673 | null | 1 | 620 | I am using an automatic model selection procedure, "step". The starting model (the largest possible) is a polynomial, say of the 4th degree.
```
Depart<-lm(y~x+I(x^2)+I(x^3)+I(x^4))
Final<-step(Depart)
```
I need to transform the Final model into a corresponding function. How can I do this?
| R: How to create a function from a model? | CC BY-SA 2.5 | null | 2010-11-18T10:46:34.230 | 2010-11-18T12:00:38.503 | null | null | 2043 | [
"r",
"stepwise-regression"
]
|
4672 | 2 | null | 4667 | 5 | null | In german:
- A short introduction to R: very short, covers only the basics of R programming
- http://de.wikibooks.org/wiki/GNU_R teaches the basics of R programmming in detail and also contains some examples of producing graphics and statistics.
- cran.r-project.org/doc/contrib/Sawitzki-Einfuehrung.pdf: a lengthy introduction to statistics with R, with a smaller focus on programming.
| null | CC BY-SA 2.5 | null | 2010-11-18T10:57:43.987 | 2010-11-18T10:57:43.987 | null | null | 264 | null |
4673 | 2 | null | 4671 | 8 | null | Do you mean something like this:
`f<-function(newdata)predict(Final,data.frame(x=newdata))`
?
| null | CC BY-SA 2.5 | null | 2010-11-18T12:00:38.503 | 2010-11-18T12:00:38.503 | null | null | 439 | null |
4674 | 2 | null | 4667 | 3 | null | Here is a german blog with some posts on R:
[http://blog.berndweiss.net/tag/r/](http://blog.berndweiss.net/tag/r/)
Recently started, with no posts on R yet, but focused on open data, is this blog:
[http://blog.zeit.de/open-data](http://blog.zeit.de/open-data)
| null | CC BY-SA 2.5 | null | 2010-11-18T12:14:32.640 | 2010-11-18T12:14:32.640 | null | null | 573 | null |
4675 | 2 | null | 4667 | 4 | null | Some german blog entries:
[http://www.schockwellenreiter.de/blog/tag/r/](http://www.schockwellenreiter.de/blog/tag/r/)
and
[http://markheckmann.wordpress.com/category/r-r-code/](http://markheckmann.wordpress.com/category/r-r-code/)
edit: and one more:
[http://wagezudenken.blogspot.com/](http://wagezudenken.blogspot.com/)
| null | CC BY-SA 3.0 | null | 2010-11-18T12:42:26.417 | 2011-07-01T22:17:04.383 | 2011-07-01T22:17:04.383 | 1050 | 1050 | null |
4676 | 2 | null | 4619 | 2 | null | To add to chl's answer, another step you can take to ensure your data is representative of the population as a whole is to compare both samples to a third party data set. In the United States, there is the [American Community Survey](http://factfinder.census.gov/servlet/DatasetMainPageServlet?_program=ACS&_submenuId=datasets_2&_lang=en) which I often use to compare the data I work with to the population for a given region. The other thing to consider is whether or not your data should match up with ACS data (or whatever you are comparing to). For example, the data I work with often relates to estimating travel demand and so I am interested only in the "traveling" population for a region. The ACS data samples from a broader range than the "traveling" population, so some differences may be expected.
| null | CC BY-SA 2.5 | null | 2010-11-18T13:06:58.253 | 2010-11-18T13:06:58.253 | null | null | 696 | null |
4678 | 2 | null | 4454 | 4 | null | As you and Matt Parker both noted, there can be a big difference in the preparation of a survey script that is digested by your client and how you prepare the script for your programmers. In a professional setting, "client" friendly scripts generally win focus and the programmers are left to put the pieces together as they go along. However, there are some things you can do to make your programmer happy. The scripts we generally develop are Word based so my tips are related to Word, but should be fairly transferable to other platforms. A few tips or suggestions:
- Include "preamble" for your script that outlines what is conveyed in the document. This may include question text, question answers, branching logic, programmer notes, etc. How you differentiate that in the document is a style preference, but for example all of our notes to the programmer are in <> tags and reviewers are instructed to ignore them for their purposes.
- To the extent you can, try and list any intermediate variables or calculations that are needed for future questions.
- Refer to questions by some identifiable name and not something generic such as Q3, Q7, Q19. Not only does this prevent you from having to change a bunch of references downstream when you decide to move or add a question, but it is also generally more meaningful to say something like "Branch over this question if purpose is vacation" than "Branch over this question if Q4.A is XX".
- Try and include the programmer in the questionnaire design phase. If this is a complicated survey, having the programmer on board as a sounding board for different techniques may open your eyes to something you would have otherwise overlooked.
- Finally, tell the programmer up front they have the liberty to change flow / etc as it makes sense in a programming context. As long as they are able to develop the final product as the client envisions it, any intermediate changes they make to what they program will come out in the wash.
| null | CC BY-SA 2.5 | null | 2010-11-18T13:20:41.933 | 2010-11-18T13:20:41.933 | null | null | 696 | null |
4679 | 2 | null | 4659 | 14 | null | Look at the pdf of Binomial as a function of $x$: $$f(x) = {n\choose{x}}p^{x}(1-p)^{n-x}$$ and the pdf of Beta as a function of $p$: $$g(p)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}p^{a-1}(1-p)^{b-1}$$
You probably can see that with an appropriate (integer) choice for $a$ and $b$ these are the same. As far as I can tell, that's all there is to this relationship: the way $p$ enters into the binomial pdf just happens to be called a Beta distribution.
| null | CC BY-SA 2.5 | null | 2010-11-18T14:03:15.770 | 2010-11-18T14:03:15.770 | null | null | 279 | null |
4680 | 2 | null | 4454 | 4 | null | I wrote a post some time ago about how to use Google spreadsheets + google forms + R for easily collecting and sharing data. It might prove useful to you or others:
[http://www.r-statistics.com/2010/03/google-spreadsheets-google-forms-r-easily-collecting-and-importing-data-for-analysis/](http://www.r-statistics.com/2010/03/google-spreadsheets-google-forms-r-easily-collecting-and-importing-data-for-analysis/)
| null | CC BY-SA 2.5 | null | 2010-11-18T14:18:18.587 | 2010-11-18T14:18:18.587 | null | null | 253 | null |
4681 | 2 | null | 2988 | 4 | null | Following on from the post by Stephan Kolassa (I can't add this as a comment), I have some alternative code for a simulation. This uses the same basic structure, but is exploded a bit more, so perhaps it is a little easier to read. It also is based on the code by [Kleinman and Horton](http://sas-and-r.blogspot.com/2009/06/example-72-simulate-data-from-logistic.html) to simulate the logistic regression.
nn is the number in the sample. The covariate should be continuously normally distributed, and standardized to mean 0 and sd 1. We use rnorm(nn) to generate this. We select an odds ratio and store it in odds.ratio. We also pick a number for the intercept. Choice of this number governs what proportion of the sample experience the "event" (e.g. 0.1, 0.4, 0.5). You have to play around with this number until you get the right proportion. The following code gives you a proportion of 0.1 with a sample size of 950 and an OR of 1.5:
```
nn <- 950
runs <- 10000
intercept <- log(9)
odds.ratio <- 1.5
beta <- log(odds.ratio)
proportion <- replicate(
n = runs,
expr = {
xtest <- rnorm(nn)
linpred <- intercept + (xtest * beta)
prob <- exp(linpred)/(1 + exp(linpred))
runis <- runif(length(xtest),0,1)
ytest <- ifelse(runis < prob,1,0)
prop <- length(which(ytest <= 0.5))/length(ytest)
}
)
summary(proportion)
```
summary(proportion) confirms that the proportion is ~ 0.1
Then using the same variables, the power is calculated over 10000 runs:
```
result <- replicate(
n = runs,
expr = {
xtest <- rnorm(nn)
linpred <- intercept + (xtest * beta)
prob <- exp(linpred)/(1 + exp(linpred))
runis <- runif(length(xtest),0,1)
ytest <- ifelse(runis < prob,1,0)
summary(model <- glm(ytest ~ xtest, family = "binomial"))$coefficients[2,4] < .05
}
)
print(sum(result)/runs)
```
I think that this code is correct - I checked it against the examples given in Hsieh, 1998 (table 2), and it seems to agree with the three examples given there. I also tested it against the example on p 342 - 343 of Hosmer and Lemeshow, where it found a power of 0.75 (compared to 0.8 in Hosmer and Lemeshow). So it may be that in some circumstances this approach underestimates power. However, when I've run the same example in this [on-line calculator](http://biostat.hitchcock.org/MeasurementError/Analytics/PowerCalculationsforLogisticRegression.asp), I've found that it agrees with me and not the result in Hosmer and Lemeshow.
If anyone can tell us why this is the case, I'd be interested to know.
| null | CC BY-SA 2.5 | null | 2010-11-18T14:47:29.653 | 2010-12-03T11:40:53.517 | 2010-12-03T11:40:53.517 | 1991 | 1991 | null |
4684 | 2 | null | 4659 | 44 | null | Consider the order statistics $x_{[0]} \le x_{[1]} \le \cdots \le x_{[n]}$ of $n+1$ independent draws from a uniform distribution. Because [order statistics have Beta distributions](http://en.wikipedia.org/wiki/Order_statistic#The_order_statistics_of_the_uniform_distribution), the chance that $x_{[k]}$ does not exceed $p$ is given by the Beta integral
$$\Pr[x_{[k]} \le p] = \frac{1}{B(k+1, n-k+1)} \int_0^p{x^k(1-x)^{n-k}dx}.$$
(Why is this? Here is a non-rigorous but memorable demonstration. The chance that $x_{[k]}$ lies between $p$ and $p + dp$ is the chance that out of $n+1$ uniform values, $k$ of them lie between $0$ and $p$, at least one of them lies between $p$ and $p + dp$, and the remainder lie between $p + dp$ and $1$. To first order in the infinitesimal $dp$ we only need to consider the case where exactly one value (namely, $x_{[k]}$ itself) lies between $p$ and $p + dp$ and therefore $n - k$ values exceed $p + dp$. Because all values are independent and uniform, this probability is proportional to $p^k (dp) (1 - p - dp)^{n-k}$. To first order in $dp$ this equals $p^k(1-p)^{n-k}dp$, precisely the integrand of the Beta distribution. The term $\frac{1}{B(k+1, n-k+1)}$ can be computed directly from this argument as the multinomial coefficient ${n+1}\choose{k,1, n-k}$ or derived indirectly as the normalizing constant of the integral.)
By definition, the event $x_{[k]} \le p$ is that the $k+1^\text{st}$ value does not exceed $p$. Equivalently, at least $k+1$ of the values do not exceed $p$: this simple (and I hope obvious) assertion provides the intuition you seek. The probability of the equivalent statement is given by the Binomial distribution,
$$\Pr[\text{at least }k+1\text{ of the }x_i \le p] = \sum_{j=k+1}^{n+1}{{n+1}\choose{j}} p^j (1-p)^{n+1-j}.$$
In summary, the Beta integral breaks the calculation of an event into a series of calculations: finding at least $k+1$ values in the range $[0, p]$, whose probability we normally would compute with a Binomial cdf, is broken down into mutually exclusive cases where exactly $k$ values are in the range $[0, x]$ and 1 value is in the range $[x, x+dx]$ for all possible $x$, $0 \le x \lt p$, and $dx$ is an infinitesimal length. Summing over all such "windows" $[x, x+dx]$--that is, integrating--must give the same probability as the Binomial cdf.

| null | CC BY-SA 2.5 | null | 2010-11-18T15:51:22.313 | 2010-11-18T23:03:39.207 | 2010-11-18T23:03:39.207 | 919 | 919 | null |
4685 | 1 | 4728 | null | 4 | 6894 | What is the difference between soft and hard expectation maximization?
EDIT: OK, I've found this paper: [http://ttic.uchicago.edu/~dmcallester/ttic101-07/lectures/em/em.pdf](http://ttic.uchicago.edu/~dmcallester/ttic101-07/lectures/em/em.pdf)
which explains the situation quite well.
| Soft and Hard EM (Expectation Maximization) | CC BY-SA 2.5 | null | 2010-11-18T16:19:43.160 | 2013-11-21T22:27:54.580 | 2010-11-19T11:20:52.973 | 2046 | 2046 | [
"fitting",
"expectation-maximization",
"unsupervised-learning"
]
|
4686 | 1 | null | null | 5 | 307 | Suppose that I have a population, each represented by a bit $b_i$ for $i \in \{1,\ldots, n\}$. I would like to compute an estimate $\hat{B}$ of the parameter $B = \sum_{i=1}^nb_i$ so that with high probability, the error $|\hat{B}-B| \leq k$ for some fixed $k$. However, I have to pay a cost $c_i$ to sample bit $b_i$, and this cost may be different for each $i$. I want to find the minimum-cost sample that satisfies my accuracy constraint. Clearly, uniform sampling is not necessarily optimal.
Has this been studied? Is there a known optimal solution specifying the probability $p_i$ that I should sample each bit $b_i$ to compute $\hat{B}$?
| Sampling with non-uniform costs | CC BY-SA 2.5 | null | 2010-11-18T16:48:34.630 | 2010-11-22T15:43:17.257 | 2010-11-18T23:06:13.260 | 919 | null | [
"sample-size",
"sampling"
]
|
4687 | 1 | 5693 | null | 5 | 2436 | I am trying to get a deeper understanding of the various types of Bayesian networks.
Most of the literature/lectures I've come across use discrete random variables exclusively and only mention continuous random variables in passing.
It seems if you want to mix discrete and continuous variables in a hybrid network, then you have a few different scenarios to handle:
| Child is | Parent(s) are |
| --- | --- |
| Discrete | All Discrete |
| Discrete | All Continuous |
| Discrete | Hybrid (Discrete and Continuous) |
| Continuous | All Discrete |
| Continuous | All Continuous |
| Continuous | Hybrid (Discrete and Continuous) |
I understand how to specify conditional probabilities (i.e. $p(N=n_i|P)$) for the cases where the parents (P) are all discrete. The conditional probabilities are essentially a lookup table that maps a condition to either a probability mass function (when child is discrete) or probability density function (when child is continuous).
The other cases are numbing my brain. How does one go about specifying conditional probabilities of a given node when any of its parents are continuous?
Thanks
Edit Terminology has been (hopefully) corrected based on feedback.
| Specifying conditional probabilities in hybrid Bayesian networks | CC BY-SA 2.5 | null | 2010-11-18T17:32:43.533 | 2016-05-01T20:18:57.380 | 2016-05-01T20:18:57.380 | 7290 | 1474 | [
"bayesian",
"random-variable",
"graphical-model",
"conditional-probability",
"prior"
]
|
4688 | 2 | null | 4603 | 2 | null | I've just come across [mlpy](https://mlpy.fbk.eu/), which also has an implementation of the lasso (in Python.)
| null | CC BY-SA 2.5 | null | 2010-11-18T17:39:55.700 | 2010-11-18T17:39:55.700 | null | null | 439 | null |
4689 | 1 | 4690 | null | 27 | 19170 | What are the differences between generative and discriminative (discriminant) models (in the context of Bayesian learning and inference)?
And what does this have to do with prediction, decision theory, or unsupervised learning?
| Generative vs discriminative models (in Bayesian context) | CC BY-SA 3.0 | null | 2010-11-18T18:16:48.990 | 2017-06-26T09:17:14.060 | 2017-06-26T09:17:14.060 | 3277 | 2046 | [
"bayesian",
"predictive-models",
"unsupervised-learning"
]
|
4690 | 2 | null | 4689 | 39 | null | Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Bayes) explicitly models the joint probability distribution $p(x,y)$ and then uses the Bayes rule to compute $p(y|x)$. On the other hand, a discriminative model (e.g., logistic regression) directly models $p(y|x)$.
Some people argue that the discriminative model is better in the sense that it directly models the quantity you care about $(y)$, so you don't have to spend your modeling efforts on the input x (you need to compute $p(x|y)$ as well in a generative model). However, the generative model has its own advantages such as the capability of dealing with missing data, etc. For some comparison, you can take a look at this paper: [On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes](http://ai.stanford.edu/~ang/papers/nips01-discriminativegenerative.pdf)
There can be cases when one model is better than the other (e.g., discriminative models usually tend to do better if you have lots of data; generative models may be better if you have some extra unlabeled data). In fact, there exists hybird models too that try to bring in the best of both worlds. See this paper for an example: [Principled hybrids of generative and discriminative models](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.8245&rep=rep1&type=pdf)
| null | CC BY-SA 3.0 | null | 2010-11-18T19:01:08.403 | 2017-06-26T09:09:48.467 | 2017-06-26T09:09:48.467 | 73177 | 881 | null |
4691 | 1 | null | null | 4 | 7977 | My teacher is asking whether it is possible to use Cronbach's alpha to assess the internal reliability of an ordinal scale. She thinks it is not possible because the coefficient is based on means, but I've seen it used in previous research. I think it is possible, but I cannot give a technical explanation for this. Who can help me?
| Internal reliability for an ordinal scale | CC BY-SA 3.0 | null | 2010-11-18T20:40:16.053 | 2022-05-18T11:20:05.217 | 2011-10-04T07:02:14.510 | 930 | null | [
"self-study",
"reliability",
"scales",
"psychometrics"
]
|
4693 | 2 | null | 4686 | 0 | null | If the costs $c_i$ are known a priori, it seems like a greedy sampling would give you some guarantees. That is, sample the $n-2k$ bits in order of increasing cost. This gives a $k$-error guarantee on $B$ with probability $1$ in the obvious way. I am curious if this strategy is the limit strategy of some sane sequence of strategies that provide a guarantee with probability $1-\epsilon$.
If the algorithm is to be deterministic, and the $c_i$ are set by an adversary, I do not think you can do better than this.
| null | CC BY-SA 2.5 | null | 2010-11-18T21:14:38.350 | 2010-11-20T05:09:34.580 | 2010-11-20T05:09:34.580 | 795 | 795 | null |
4694 | 1 | 4705 | null | 2 | 1507 | [Here](http://www.ambion.com/techlib/tn/95/954.html) is an example of hierarchical clustering of genes in microarray data using the weighted pair gene method in `Spotfire`. I am not sure how to do this in `R`. In the `hclust` function, I see `"ward"`, `"single"`, `"complete"`, `"average"`, `"mcquitty"`, `"median"`, and `"centroid"` as the methods.
Also, let's say I have performed hierarchical clustering and found groups of genes using the `cutree` function. I want to plot the expression of the genes in a group across columns (which may represent treatment, time, etc.), and I want to do this for all the groups separately, in a way similar to how the [Mfuzz](http://www.bioconductor.org/packages/release/bioc/html/Mfuzz.html) package shows clusters.
Can any one please help me?
TIA for any pointers.
| How to do weighted pair hierarchical clustering in R? | CC BY-SA 2.5 | null | 2010-11-18T21:49:01.410 | 2010-11-19T07:43:58.327 | null | null | 1307 | [
"r",
"clustering",
"microarray"
]
|
4695 | 1 | null | null | 6 | 484 | Suppose I have time series observations from distributions drawn from some population. That is, I observe $X_{t,i}$ for $t=1,2,...,T,$ and $i=1,2,...,n$, where I believe that $X_{t,i}$ have pdf $f(\theta_i)$. (I have some idea about the distribution of the $\theta_i$, but that may not be important here.) I have some sample statistic which is a good estimator of $\theta_i$ given some observations.
However, there is the suspicion that, in fact, the $\theta_i$ are not stationary, rather the observations come from $f(\theta_{t,i})$, where the $\theta_{t,i}$ are changing slowly over time. How can I test this, either by a formal hypothesis test or an 'eyeball' test? The amount of data available in the time domain is not so great (i.e. $T$ is not so large), thus partitioning the time domain and computing the sample estimate on each partition would only be advisable for a small number (say 5) of partitions (because otherwise the standard error of the estimate is too great). However, the number of series, $n$, is largeish, say 10,000.
I realize there are a number of gaps in this question, e.g. how the $\theta_{t,i}$ might be varying with time, the standard error of the parameter estimator, etc. However, any hints would be appreciated.
To be concrete, one could think of the $X_{t,i}$ as being normally distributed with mean $\theta$ and standard deviation $1$, and the sample statistic is the sample mean.
| How to test for parameter stationarity? | CC BY-SA 2.5 | null | 2010-11-18T22:11:04.460 | 2010-11-21T04:25:37.277 | null | null | 795 | [
"time-series",
"estimation",
"stationarity"
]
|
4696 | 2 | null | 4691 | 6 | null | From a practical perspective, I don't see any obvious reason to not use Cronbach's alpha with ordinal items (e.g., Likert-type items), as is commonly done in most of the studies. It is a lower bound for reliability, and is essentially used as an indicator of internal consistency of a test or questionnaire. The usual assumptions pertaining to a correct interpretation of its value are as follows: (i) no residual correlations, (ii) items have identical loadings, and (iii) the scale is unidimensional. In fact, the sole case where alpha will be essentially the same as reliability is the case of uniformly high factor loadings, no error covariances, and unidimensional instrument (1).
However, we can speak of an ordinal reliability alpha. For instance, Zumbo et coll. (2) use a polychoric correlation matrix input to calculate alpha parallel to Cronbach. Their simulation studies lead them to conclude that ordinal reliability alpha provides "consistently suitable estimates of the theoretical reliability, regardless of the magnitude of the theoretical reliability, the number of scale points, and the skewness of the scale point distributions. In contrast, coefficient alpha is in general a negatively biased
estimate of reliability" for ordinal data (p. 21). Ordinal reliability alpha will normally be higher than the corresponding Cronbach’s alpha.
Otherwise, the usual Cronbach's $\alpha$ is influenced by the number of items in the test and interitem correlations (for a fixed sample size $N=300$, even with modest--albeit perfect--correlation between items, e.g. $\rho = 0.35$, Cronbach’s $\alpha$ would still be at 0.943 with 30 items, and 0.910 with 20 items). There're subtle issues with Cronbach's $\alpha$ and departure from the unidimensionality assumption (systematic errors can greatly inflate the estimate of alpha, especially with large sample sizes) or the presence of inconsistent responses (random responses may inflate Cronbach’s alpha when their mean differ from that of the true responses). If the variables being tested are all dichotomous, Cronbach’s alpha is the same as Kuder-Richardson coefficient (3).
Of note, there are alternative ways to estimate the reliability of test scores, see e.g., Zinbarg et al. (4).
A good review is
>
Bruce Thompson. Score Reliability.
Contemporary Thinking on Reliability
issues. Sage Publications, 2003.
References
- T Raykov. Scale reliability, Cronbach’s coefficient alpha, and violations of essential tau-equivalence for fixed congeneric components. Multivariate Behavioral Research, 32: 329-254, 1997.
- B D Zumbo, A M Gadermann, and C Zeisser. Ordinal versions of coefficients alpha and theta for likert rating scales. Journal of Modern Applied Statistical Methods, 6: 21-29, 2007.
- G F Kuder and M W Richardson. The theory of the estimation of test reliability. Psychometrika, 2: 151-160, 1937.
- R E Zinbarg, W Revelle, I Yovel, and W Li. Cronbach’s $\alpha$, Revelle’s $\beta$, and McDonald’s $\omega_h$: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70(1): 123-133, 2005.
| null | CC BY-SA 4.0 | null | 2010-11-18T22:15:45.467 | 2022-05-18T11:20:05.217 | 2022-05-18T11:20:05.217 | 79696 | 930 | null |
4697 | 2 | null | 4686 | 4 | null | Methods to find a solution are well known, but this is a messy problem. A tiny example reveals much, so consider the case $n = 2$. Let the cost of sampling bit 1 be $c_1 = 1$ and the cost of sampling bit 2 be $c_2 = c$. Without any loss of generality assume this is the expensive bit: $c \ge 1$.
Either we sample both bits at a cost of $1 + c$ because we have to in order to keep the error low, or else we will sample bit 2 with probability $\pi$ and bit 1 with probability $1 - \pi$. Let's assume the value of $k$ is large enough that we won't be compelled to sample both bits.
An unbiased estimator is $\hat{B} = b_1 / (1 - \pi)$ if we sample bit 1 and $\hat{B} = b_2 / \pi$ if we sample bit 2. (This is the [Horvitz-Thompson estimator](http://www.amstat.org/sections/srms/Proceedings/papers/1988_082.pdf).)
The error rate depends on the state of the population. I interpret the problem to require that the expected error size be assured of not exceeding the limit $k$ *no matter what the state of the population may be.* We cannot remove the word "expected" here, because (except for nearly exhaustive samples), the maximum error size can be arbitrarily close to 1 for large populations.
There are $2^2 = 4$ possible states, which can be fully enumerated in this small problem:
$$\eqalign{
\text{Prob.} &b_1 &b_2 &B &\text{Observation} &\hat{B} &\text{Error} \cr
1 - \pi &0 &0 &0 &0 &0 &0\cr
\pi &0 &0 &0 &0 &0 &0\cr
1 - \pi &0 &1 &1 &0 &0 &-1\cr
\pi &0 &1 &1 &1 &1/\pi &1/\pi - 1\cr
1 - \pi &1 &0 &1 &1 &1/(1-\pi) &1/(1-\pi) - 1\cr
\pi &1 &0 &1 &0 &0 &-1\cr
1 - \pi &1 &1 &2 &1 &1/(1-\pi) &1/(1-\pi) - 2\cr
\pi &1 &1 &2 &1 &1/\pi &1/\pi - 2
}$$
Taking expectations for each possible state $(b_1, b_2)$ condenses this into the following:
$$\eqalign{
b_1 &b_2 &\text{Error distribution} &\mathbb{E}[|\text{Error}|]\cr
0 &0 &(0, 0) &0\cr
0 &1 &(-1, 1/\pi-1) &2(1 - \pi)\cr
1 &0 &(1/(1-\pi)-1, -1) &2\pi \cr
1 &1 &(1/(1-\pi) - 2, 1/\pi - 2) &2 - 4\pi
}$$
In computing the expected absolute error I have assumed $\pi \le 1/2$: we will favor sampling the cheaper bit whenever possible.
Suppose, for example, $k = 3/2$. That is, we aim to find a sampling scheme that keeps the absolute error to $3/2$ or less with "high probability" while minimizing the expected cost. (I realize this choice of $k$ is artificial because we might attempt to improve the estimator--at risk of biasing it slightly--by constraining its estimates to 0, 1, or 2; but the purpose here is to look ahead to a situation with large $n$, where such improvements will be unlikely. The mathematical patterns are important in this example, not its (lack of) realism.) Evidently we would like to minimize the chance of paying for the expensive bit; that is, to make $\pi$ as small as possible. The final column in the previous table constrains $\pi$; it implies that
$$2(1-\pi) \le k,\quad 2\pi \le k,\quad 2 - 4\pi \le k.$$
For $k \ge 1$ all constraints can be satisfied provided
$$\max(1-k/2, 1/2 - k/4) \le \pi \le k/2.$$
Because the expected cost is
$$\mathbb{E}[\text{Cost}] = 1 + (c-1)\pi,$$
the unique cost-minimizing solution for $k=3/2$ is $\pi = 1/4$: regardless of the differences in expenses, we should sample the cheap bit with probability $3/4$ and the expensive bit with probability $1/4$, for an expected cost of $1 + (c-1)/4$.
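A quick numeric check of that example (my addition; the cost of the expensive bit is an arbitrary choice):

```
k  <- 3/2
c2 <- 4                                   # cost of the expensive bit
lower <- max(1 - k/2, 1/2 - k/4)          # constraints from the expected-error table
upper <- k/2
pi.opt <- lower                           # expected cost 1 + (c2 - 1) * pi rises with pi
c(lower = lower, upper = upper, cost = 1 + (c2 - 1) * pi.opt)
```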
This example reveals many things, including
- There can be solutions cheaper than simple random sampling (which in this case would select each bit with probability $1/2$ for an expected cost of $1 + (c-1)/2$).
- Finding a solution involves an optimization with an exponential number of (increasingly complicated) constraints in $n$.
- The selection probabilities will depend on the value of $k$.
- We cannot guarantee a fixed cost; all we can hope for--because randomization is essential--is an optimal expected cost.
- As always, the optimal sample size will depend on $k$ (the limit on the amount of error).
As a practical matter, I think most people would have more information than contained in this abstract problem. Even if they didn't, if $n$ were large and a substantial sample size were contemplated, it would make sense to devote part of the sampling budget to the purpose of modeling a relationship between the costs and the values (the $c_i$ and the $b_i$). With such a model in hand one could greatly simplify the analysis and identify an optimal or near-optimal program to spend the remaining sampling budget (or even, in some cases, to establish that the targeted error rate is unlikely to be achieved). For this reason, and because the exponential growth in the constraints is troublesome, I am reluctant to pursue a more detailed analysis of this problem.
| null | CC BY-SA 2.5 | null | 2010-11-18T22:38:15.733 | 2010-11-18T22:44:36.383 | 2010-11-18T22:44:36.383 | 919 | 919 | null |
4698 | 1 | 5024 | null | 3 | 203 | I have two models $M_1$ and $M_2$ that I am using to try and compare to observed data $D$. $M_1$ is an $n_1$-dimensional model, and $M_2$ is an $n_2$-dimensional problem. The Bayes factor $K$ to compare the models can be calculated using:
$K = P(D|M_1)/P(D|M_2) $
assuming no prior preference for either model. The numerator and denominator can be written as
$P(D|M_i) = \int P(D|\mathbf{w},M_i) P(\textbf{w}|M_i) d\mathbf{w}$
where $\mathbf{w}$ is the parameter vector, so the integral is over parameter space.
Now say that due to e.g. computational constraints, one can only compute $M_1$ and $M_2$ for a finite number of random samples of the parameter vector $\mathbf{w}$, where the number of samples is given by $s_1$ and $s_2$. Would it be acceptable to then say that the integral above becomes a summation over the random samples, and assuming the random samples are uniformly distributed through parameter space, $P(\textbf{w}|M_i)$ becomes $1/s_i$, so that:
$P(D|M_i) = \sum_{j=1}^{s_i} P(D|\mathbf{w_j},M_i) / s_i$
and so what is being compared in the Bayes factor $K$ is the ratio of the average probability over all the samples for each model?
| Bayesian model comparison for randomly sampled sets of models | CC BY-SA 2.5 | null | 2010-11-18T22:54:30.957 | 2010-11-30T09:33:50.900 | null | null | 2052 | [
"probability",
"bayesian",
"modeling"
]
|
4699 | 2 | null | 4695 | 2 | null | This problem is encountered in quality control/[statistical process control](http://en.wikipedia.org/wiki/Statistical_process_control) settings. There's a large literature, as you have hinted, because different parameters as estimated in various ways from different forms of sampling different distributions can be expected to vary in different ways. The purpose is to detect that variation on-line as soon as possible after it occurs without triggering too many false detections along the way. Consider using a control chart ([1](http://p://en.wikipedia.org/wiki/Control_chart), [2](http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc31.htm)). In your concrete situation a good choice is a combined Shewhart-CUSUM control chart.
| null | CC BY-SA 2.5 | null | 2010-11-18T23:18:18.273 | 2010-11-18T23:18:18.273 | null | null | 919 | null |
4700 | 1 | 4702 | null | 414 | 831780 | In simple terms, how would you explain (perhaps with simple examples) the difference between fixed effect, random effect and mixed effect models?
| What is the difference between fixed effect, random effect and mixed effect models? | CC BY-SA 2.5 | null | 2010-11-19T00:03:28.163 | 2023-03-06T12:29:41.600 | 2010-11-19T07:58:26.983 | 930 | 1991 | [
"mixed-model",
"random-effects-model",
"definition",
"fixed-effects-model"
]
|
4701 | 2 | null | 4659 | 5 | null | As you noted, the Beta distribution describes the distribution of the trial probability parameter $F$, while the binomial distribution describes the distribution of the outcome parameter $I$. Rewriting your question, what you asked about was why
$$P(F \le \frac {i+1} n)+P(I \le fn-1)=1$$
$$P(Fn \le i+1)+P(I+1 \le fn)=1$$
$$P(Fn \le i+1)=P(fn<I+1)$$
That is, the probability that the expected count $Fn$ (random, with the observed count $i$ fixed) falls below the observed count plus one is the same as the probability that the observed count plus one, $I+1$ (random, with the expected count $fn$ fixed), exceeds that expected count.
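A quick numerical check of the identity behind this, using R's standard parameterizations of the Beta and binomial CDFs (any values of $n$, $i < n$, and $f \in (0,1)$ will do):

```
n <- 20; i <- 7; f <- 0.4
pbeta(f, i + 1, n - i) + pbinom(i, n, f)  # equals 1, up to floating-point error
```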
I admit that this may not help intuit the original formulation of the problem, but maybe it helps to at least see how the two distributions use the same underlying model of repeated Bernoulli trials to describe the behavior of different parameters.
| null | CC BY-SA 2.5 | null | 2010-11-19T01:22:33.283 | 2010-11-19T03:52:36.887 | 2010-11-19T03:52:36.887 | 2456 | 2456 | null |
4702 | 2 | null | 4700 | 220 | null | Statistician Andrew Gelman [says that the terms 'fixed effect' and 'random effect' have variable meanings](http://www.stat.columbia.edu/%7Ecook/movabletype/archives/2005/01/why_i_dont_use.html) depending on who uses them. Perhaps you can pick out which one of the 5 definitions applies to your case. In general it may be better to either look for equations which describe the probability model the authors are using (when reading) or write out the full probability model you want to use (when writing).
>
Here we outline five definitions that we have seen:
1. Fixed effects are constant across individuals, and random effects vary. For example, in a growth study, a model with random intercepts $a_i$ and fixed slope $b$ corresponds to parallel lines for different individuals $i$, or the model $y_{it} = a_i + b t$. Kreft and De Leeuw (1998) thus distinguish between fixed and random coefficients.
2. Effects are fixed if they are interesting in themselves or random if there is interest in the underlying population. Searle, Casella, and McCulloch (1992, Section 1.4) explore this distinction in depth.
3. “When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small (i.e., negligible) part of the population the corresponding variable is random.” (Green and Tukey, 1960)
4. “If an effect is assumed to be a realized value of a random variable, it is called a random effect.” (LaMotte, 1983)
5. Fixed effects are estimated using least squares (or, more generally, maximum likelihood) and random effects are estimated with shrinkage (“linear unbiased prediction” in the terminology of Robinson, 1991). This definition is standard in the multilevel modeling literature (see, for example, Snijders and Bosker, 1999, Section 4.2) and in econometrics.
[Gelman, 2004, Analysis of variance—why it is more important than ever. The Annals of Statistics.]
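To make definition 1 concrete, here is a minimal sketch of a random-intercept, fixed-slope growth model fit with the lme4 package (the data are simulated and the variable names are made up):

```
library(lme4)

set.seed(1)
n.id <- 20; n.t <- 5
d <- data.frame(id = factor(rep(1:n.id, each = n.t)),
                t  = rep(0:(n.t - 1), times = n.id))
a <- rnorm(n.id, mean = 10, sd = 2)          # individual intercepts a_i
d$y <- a[d$id] + 1.5 * d$t + rnorm(nrow(d))  # common slope b = 1.5

fit <- lmer(y ~ t + (1 | id), data = d)  # fixed slope for t, random intercept per id
summary(fit)
```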
| null | CC BY-SA 3.0 | null | 2010-11-19T01:40:26.773 | 2016-09-27T04:40:45.413 | 2020-06-11T14:32:37.003 | -1 | 1146 | null |
4703 | 2 | null | 4700 | 48 | null | Fixed effect: Something the experimenter directly manipulates and is often repeatable, e.g., drug administration - one group gets drug, one group gets placebo.
Random effect: Source of random variation / experimental units, e.g., individuals drawn (at random) from a population for a clinical trial.
Random effects estimate the variability among those units.
Mixed effect: Includes both. The fixed effects in these cases estimate the population-level coefficients, while the random effects can account for individual differences in response to an effect. For example, each person receives both the drug and the placebo on different occasions; the fixed effect estimates the effect of the drug, while the random-effects terms allow each person to respond to the drug differently.
General categories of mixed effects - repeated measures, longitudinal, hierarchical, split-plot.
| null | CC BY-SA 2.5 | null | 2010-11-19T04:11:03.943 | 2010-11-19T04:11:03.943 | null | null | 966 | null |
4704 | 2 | null | 4652 | 2 | null | This is possibly a consequence of a known deficiency of steepest descent algorithms in general. Using a [conjugate gradient algorithm](http://en.wikipedia.org/wiki/Conjugate_gradient_method) may improve convergence.
| null | CC BY-SA 2.5 | null | 2010-11-19T05:14:16.130 | 2010-11-19T05:14:16.130 | null | null | 795 | null |
4705 | 2 | null | 4694 | 2 | null | About your first question, it seems that the `mcquitty` option corresponds to WPGMA clustering, while `average` is for UPGMA. This is just from looking at the [source code](http://svn.r-project.org/R/trunk/src/library/stats/R/hclust.R), so it is worth double-checking, but it is also referred to this way in the `upgma()` function from the [phangorn](http://cran.r-project.org/web/packages/phangorn/) package.
About your second question, I think you just have to subset your genes by the group labels found after `cutree`, and then plot expression profiles as usual.
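For instance, a minimal sketch along those lines (the expression matrix here is simulated; with real data you would use your own matrix, genes in rows):

```
set.seed(42)
expr <- matrix(rnorm(100 * 10), nrow = 100,
               dimnames = list(paste0("gene", 1:100), paste0("s", 1:10)))

hc  <- hclust(dist(expr), method = "average")  # UPGMA; "mcquitty" gives WPGMA
grp <- cutree(hc, k = 4)                       # group label for each gene

# Expression profiles of the genes assigned to cluster 1
matplot(t(expr[grp == 1, , drop = FALSE]), type = "l", lty = 1,
        xlab = "Sample", ylab = "Expression")
```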
| null | CC BY-SA 2.5 | null | 2010-11-19T07:43:58.327 | 2010-11-19T07:43:58.327 | null | null | 930 | null |