Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4706 | 2 | null | 4700 | 8 | null | Not really a formal definition, but I like the following slides: [Mixed models and why sociolinguists should use them](http://www.danielezrajohnson.com/johnson_panel.pdf), from Daniel Ezra Johnson. A brief recap is offered on slide 4. Although it mostly focuses on psycholinguistic studies, it is very useful as a first step.
| null | CC BY-SA 4.0 | null | 2010-11-19T07:50:50.317 | 2022-02-12T22:20:23.720 | 2022-02-12T22:20:23.720 | 919 | 930 | null |
4707 | 1 | null | null | 1 | 1957 | As we know, we get the same eigenvectors if we apply PCA to the same data. But is it possible to get the same eigenvectors after applying PCA to two totally different data sets (of the same dimension)?
| Can two different data sets get the same eigenvector in PCA? | CC BY-SA 2.5 | null | 2010-11-19T08:52:54.817 | 2011-12-05T08:31:49.897 | 2010-11-26T17:43:38.080 | 919 | null | [
"pca"
]
|
4708 | 1 | null | null | 12 | 22584 | I am looking for a method to test for the equality of two cumulative distribution functions.
| Comparing two cumulative distribution functions | CC BY-SA 2.5 | null | 2010-11-19T08:58:02.683 | 2010-11-19T13:29:47.970 | 2010-11-19T09:10:19.747 | 159 | null | [
"distributions",
"hypothesis-testing"
]
|
4709 | 2 | null | 4708 | 3 | null | Plot their inverses against one another, i.e. make a quantile-quantile plot:
[http://en.wikipedia.org/wiki/Q-Q_plot](http://en.wikipedia.org/wiki/Q-Q_plot)
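For example, a minimal sketch in R, assuming the two distributions are represented by samples `x` and `y` (made up here):
```
x <- rexp(200)
y <- rexp(200, rate = 1.2)
qqplot(x, y)       # empirical quantiles of y against those of x
abline(0, 1)       # points close to this line suggest the distributions agree
```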
| null | CC BY-SA 2.5 | null | 2010-11-19T09:01:45.890 | 2010-11-19T09:01:45.890 | null | null | 2036 | null |
4710 | 2 | null | 4708 | 3 | null | Take a look at the [Kolmogorov–Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) ([ks.test](http://sekhon.berkeley.edu/stats/html/ks.test.html) in R.)
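For two observed samples this is a one-liner; a minimal sketch with made-up data:
```
set.seed(1)
x <- rnorm(100)
y <- rnorm(100, mean = 0.5)
ks.test(x, y)   # two-sample test of H0: both samples come from the same distribution
```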
| null | CC BY-SA 2.5 | null | 2010-11-19T09:05:05.633 | 2010-11-19T09:05:05.633 | null | null | 439 | null |
4711 | 2 | null | 4707 | 6 | null | Any pure rotation of the dataset would also give the same set of principal components, in which case the answer would be trivially "yes", depending on the definition of "totally different" (arguably if two datasets have the same principal components then they can't be "totally" different).
| null | CC BY-SA 3.0 | null | 2010-11-19T09:08:43.633 | 2011-12-05T08:31:49.897 | 2011-12-05T08:31:49.897 | 930 | 887 | null |
4712 | 2 | null | 4551 | 34 | null | Something I see a surprising amount in conference papers and even journals is making multiple comparisons (e.g. of bivariate correlations) and then reporting all the p<.05s as "significant" (ignoring the rightness or wrongness of that for the moment).
I know what you mean about psychology graduates as well - I've finished a PhD in psychology and I'm still only just learning, really. It's quite bad; I think psychology needs to take quantitative data analysis more seriously if we're going to use it (which, clearly, we should).
| null | CC BY-SA 2.5 | null | 2010-11-19T09:49:52.633 | 2010-11-19T09:49:52.633 | null | null | 199 | null |
4713 | 1 | 4720 | null | 31 | 16510 | I have used the following R code to estimate confidence intervals for a binomial proportion, because I understand that this substitutes for a "power calculation" when designing receiver operating characteristic (ROC) studies looking at the detection of diseases in a population.
n is 150, and the disease, we believe, is 25% prevalent in the population. I have calculated the values for 75% sensitivity and 90% specificity (because that's what people seem to do).
```
binom.test(c(29,9), p=0.75, alternative=c("t"), conf.level=0.95)
binom.test(c(100, 12), p=0.90, alternative=c("t"), conf.level=0.95)
```
I have also visited this site:
[http://statpages.org/confint.html](http://statpages.org/confint.html)
This is a Java page that calculates binomial confidence intervals, and it gives the same answer.
Anyway, after that lengthy set-up, I want to ask why the confidence intervals are not symmetric, e.g. sensitivity is
```
95 percent confidence interval:
0.5975876 0.8855583
sample estimate probability: 0.7631579
```
Sorry if this is a basic question, but everywhere I look seems to suggest that they will be symmetric, and a colleague of mine seems to think they will be too.
| Binomial confidence interval estimation - why is it not symmetric? | CC BY-SA 4.0 | null | 2010-11-19T10:01:37.860 | 2019-12-17T05:01:49.850 | 2019-12-17T05:01:49.850 | 92235 | 199 | [
"confidence-interval",
"binomial-distribution"
]
|
4714 | 2 | null | 4713 | 7 | null | The binomial distribution is simply not symmetric; this becomes especially apparent for $p$ near $0$ or $1$ and for small $n$. Most people use it for $p\approx 0.5$, hence the confusion.
| null | CC BY-SA 2.5 | null | 2010-11-19T10:12:28.887 | 2010-11-19T11:01:47.507 | 2010-11-19T11:01:47.507 | 930 | null | null |
4716 | 2 | null | 4713 | 27 | null | To see why it should not be symmetric, think of the situation where $p=0.9$ and you get 9 successes in 10 trials. Then $\hat{p}=0.9$ and the 95% CI for $p$ is [0.554, 0.997]. The upper limit cannot be greater than 1 obviously, so most of the uncertainty must fall to the left of $\hat{p}$.
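In R this is one line (`binom.test` reports the exact Clopper-Pearson interval):
```
binom.test(9, 10)$conf.int   # roughly (0.55, 0.997): nearly all of the width lies below 0.9
```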
| null | CC BY-SA 2.5 | null | 2010-11-19T11:13:46.303 | 2010-11-19T11:13:46.303 | null | null | 159 | null |
4717 | 1 | null | null | 68 | 132198 | In the literature on hierarchical/multilevel models I have often read about "nested models" and "non-nested models", but what does this mean? Could anyone maybe give me some examples or tell me about the mathematical implications of this phrasing?
| What is the difference between a "nested" and a "non-nested" model? | CC BY-SA 3.0 | null | 2010-11-19T11:32:02.027 | 2017-04-15T22:15:37.157 | 2017-04-15T20:49:28.490 | 28666 | 1082 | [
"hypothesis-testing",
"terminology",
"nested-models",
"nested-data"
]
|
4719 | 2 | null | 4717 | 39 | null | Nested versus non-nested can mean a whole lot of things. You have nested designs versus crossed designs (see eg [this explanation](http://www.psychstat.missouristate.edu/introbook/sbk21m.htm)). You have nested models in model comparison. Nested means here that all terms of a smaller model occur in a larger model. This is a necessary condition for using most model comparison tests like likelihood ratio tests.
In the context of multilevel models I think it's better to speak of nested and non-nested factors. The difference is in how the different factors are related to one another. In a nested design, the levels of one factor only make sense within the levels of another factor.
Say you want to measure the oxygen production of leaves. You sample a number of tree species, and on every tree you sample some leaves on the bottom, in the middle and on top of the tree. This is a nested design. The difference for leaves in a different position only makes sense within one tree species. So comparing bottom leaves, middle leaves and top leaves over all trees is senseless. Or said differently: leaf position should not be modelled as a main effect.
Non-nested factors is a combination of two factors that are not related. Say you study patients, and are interested in the difference of age and gender. So you have a factor ageclass and a factor gender that are not related. You should model both age and gender as a main effect, and you can take a look at the interaction if necessary.
The difference is not always that clear. If, in my first example, the tree species are closely related in form and physiology, you could also consider leaf position a valid main effect. In many cases, the choice between a nested and a non-nested design is more a decision of the researcher than a fact about the data.
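To make this concrete, here is a rough sketch of how the two situations might be coded as model formulas (lme4-style syntax; the data frames `leaves` and `patients` and their variables are hypothetical):
```
library(lme4)

# Nested: leaf position is only meaningful within a tree species, so it enters
# through the species:position combination rather than as a main effect.
m1 <- lmer(oxygen ~ 1 + (1 | species) + (1 | species:position), data = leaves)
# the shorthand (1 | species/position) expands to exactly these two terms

# Non-nested ("crossed") factors: age class and gender are unrelated,
# so both appear as main effects, plus an interaction if needed.
m2 <- lm(outcome ~ ageclass * gender, data = patients)
```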
| null | CC BY-SA 2.5 | null | 2010-11-19T12:01:19.740 | 2010-11-19T12:01:19.740 | null | null | 1124 | null |
4720 | 2 | null | 4713 | 22 | null | They're believed to be symmetric because quite often a normal approximation is used, which works well enough when $p$ lies around 0.5. `binom.test`, on the other hand, reports "exact" Clopper-Pearson intervals, which are based on the F distribution (see [here](http://people.ee.duke.edu/~kst/ECE257/confbin.pdf) for the exact formulas of both approaches). If we were to implement the Clopper-Pearson interval in R ourselves, it would look something like this (see note):
```
Clopper.Pearson <- function(x, n, conf.level){
  alpha <- (1 - conf.level) / 2
  QF.l  <- qf(1 - alpha, 2*n - 2*x + 2, 2*x)   # F quantile for the lower limit
  QF.u  <- qf(1 - alpha, 2*x + 2, 2*n - 2*x)   # F quantile for the upper limit
  ll <- if (x == 0) 0 else x / (x + (n - x + 1) * QF.l)
  uu <- if (x == n) 1 else (x + 1) * QF.u / (n - x + (x + 1) * QF.u)
  c(ll, uu)
}
```
You can see, both in the link and in the implementation, that the formulas for the upper and lower limits are completely different. The only case of a symmetric confidence interval is when $\hat{p}=0.5$. Using the formulas from the link, and noting that in that case $n = 2x$, it is easy to derive this yourself.
I personally understood it better looking at the confidence intervals based on a logistic approach. Binomial data is generally modeled using a logit link function, defined as:
$${\rm logit}(x) = \log\! \bigg( \frac{x}{1-x} \bigg)$$
This link function "maps" the error term in a logistic regression to a normal distribution. As a consequence, confidence intervals in the logistic framework are symmetric around the logit values, much like in the classic linear regression framework. The logit transformation is used exactly to allow for using the whole normality-based theory around the linear regression.
After doing the inverse transformation:
$${\rm logit}^{-1}(x) = \frac{e^x}{1+e^{x}}$$
You get an asymmetric interval again. Now these confidence intervals are actually biased. Their coverage is not what you would expect, especially at the boundaries of the binomial distribution. Yet, as an illustration, they show you why it is logical that a binomial distribution has asymmetric confidence intervals.
An example in R:
```
logit <- function(x){ log(x/(1-x)) }
inv.logit <- function(x){ exp(x)/(1+exp(x)) }
x <- c(0.2, 0.5, 0.8)
lx <- logit(x)
upper <- lx + 2
lower <- lx - 2
logxtab <- cbind(lx, upper, lower)
logxtab # the confidence intervals are symmetric by construction
xtab <- inv.logit(logxtab)
xtab # back transformation gives asymmetric confidence intervals
```
Note: in fact, R uses the beta distribution, but this is completely equivalent and computationally a bit more efficient. The implementation in R is thus different from what I show here, but it gives exactly the same result.
| null | CC BY-SA 3.0 | null | 2010-11-19T12:44:10.127 | 2017-07-31T15:27:39.443 | 2017-07-31T15:27:39.443 | 7290 | 1124 | null |
4721 | 2 | null | 4708 | 4 | null | The QQ-plot and the Kolmogorov-Smirnov test are two widely used options. A QQ-plot requires some level of expertise, as the decision is based on your own judgement. See also the answers to [this question](https://stats.stackexchange.com/questions/2492/normality-testing-essentially-useless/2498#2498) for more discussion about both tests. I use the Shapiro-Wilk test for normality there, which can be seen as a parametric counterpart of the KS test when the comparison is made with a normal distribution.
For reference, I'd like to point out the book [Comparing Distributions by Olivier Thas](http://www.springer.com/statistics/book/978-0-387-92709-1). It gives a thorough overview of parametric, semi-parametric and non-parametric approaches to the topic.
| null | CC BY-SA 2.5 | null | 2010-11-19T12:53:56.033 | 2010-11-19T12:53:56.033 | 2017-04-13T12:44:37.583 | -1 | 1124 | null |
4722 | 2 | null | 4708 | 1 | null | Lately I've been playing with comparing distributions by computing the difference between their empirical CDFs and then bootstrapping intervals on this difference. Differences between the distributions in location, scale, and each tail all have different and rather noticeable effects on the DECDF function.
| null | CC BY-SA 2.5 | null | 2010-11-19T13:16:41.517 | 2010-11-19T13:16:41.517 | null | null | 364 | null |
4723 | 2 | null | 4708 | 5 | null | Might be worth looking at some variant of the [Anderson-Darling](http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test) or [Cramer-von Mises](http://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von-Mises_criterion) statistics. The latter is essentially a weighted least-squares distance between two CDFs.
| null | CC BY-SA 2.5 | null | 2010-11-19T13:29:47.970 | 2010-11-19T13:29:47.970 | null | null | 887 | null |
4724 | 2 | null | 4717 | 13 | null | Nested vs non-nested models come up in conjoint analysis and [IIA](http://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives). Consider the "red bus blue bus problem". You have a population where 50% of people take a car to work and the other 50% take the red bus. What happens if you add a blue bus which has the same specifications as the red bus to the equation? A [multinomial logit](http://en.wikipedia.org/wiki/Multinomial_logit) model will predict 33% share for all three modes. We intuitively know this is not correct as the red bus and blue bus are more similar to one another than to the car and will thus take more share from one another before taking share from the car. That is where a nesting structure comes in, which is typically specified as a lambda coefficient on the similar alternatives.
Ben Akiva has put together a nice set of slides outlining the theory on this [here](http://www.google.com/url?sa=t&source=web&cd=13&ved=0CCMQFjACOAo&url=http%3A%2F%2Focw.mit.edu%2Fcourses%2Fcivil-and-environmental-engineering%2F1-201j-transportation-systems-analysis-demand-and-economics-fall-2008%2Flecture-notes%2FMIT1_201JF08_lec04.pdf&rct=j&q=red%20blue%20blue%20bus%20problem%20nested%20logit&ei=rnvmTOimD8X6lwf0z6DiCw&usg=AFQjCNG2vENYDIIw0Q6WtIF3eU65uB1mXg&sig2=QNCugsgvgq4wezmhR1VYeA&cad=rja). He begins talking about nested logit around slide 23.
| null | CC BY-SA 2.5 | null | 2010-11-19T13:33:31.053 | 2010-11-26T15:10:17.447 | 2010-11-26T15:10:17.447 | 696 | 696 | null |
4725 | 2 | null | 4713 | 9 | null | @Joris mentioned the symmetric or "asymptotic" interval, that is most likely the one you are expecting. @Joris also mentioned the "exact" Clopper-Pearson intervals and gave you a reference which looks very nice. There is another confidence interval for proportions which you will likely encounter (note it is also not symmetric), the "Wilson" interval which is a type of asymptotic interval based on inverting the score test. The endpoints of the interval solve (in $p$) the equation
$$
\frac{\hat{p} - p}{\sqrt{p(1-p)/n}} = \pm z_{\alpha/2}
$$
Anyway, you can get all three in R with the following:
```
library(Hmisc)
binconf(29, 38, method = "asymptotic")
binconf(29, 38, method = "exact")
binconf(29, 38, method = "wilson")
```
Note that method "wilson" is the same confidence interval used by prop.test without Yates' continuity correction:
```
prop.test(29, 38, correct = FALSE)
```
See [here](https://home.comcast.net/~lthompson221/) for Laura Thompson's free SPLUS + R manual which accompanies Agresti's Categorical Data Analysis in which these issues are discussed in great detail.
| null | CC BY-SA 2.5 | null | 2010-11-19T13:37:37.487 | 2010-11-19T13:37:37.487 | null | null | null | null |
4726 | 2 | null | 4707 | 2 | null | An eigenvector is just a statistic. You can reproduce almost any statistic from a wide variety of different data sets, as long as they are similar in the relevant respects.
An eigenvector represents a particular linear relationship among the variables. As long as the variables have the same units and exhibit what appears to be the same linear relationship after normalization, you will get the same eigenvector.
| null | CC BY-SA 2.5 | null | 2010-11-19T14:12:05.453 | 2010-11-19T14:12:05.453 | null | null | 2456 | null |
4727 | 1 | null | null | 5 | 420 | I am interested in comparing a non-linear model with up to 12 parameters to many datasets. However, each instance of the model takes a significant amount of time to compute (~1 hour), so I am pre-computing instances of the model for various parameter values and then comparing these to all the different datasets.
There are various ways to sample parameter space - so far, I've come across regular grids (impossible here), sparse grids + interpolation, Monte-Carlo random sampling, and there are probably others. Which approach would be optimal for a fixed amount of computing resources, and therefore a fixed number of model instances?
| Non-linear model fitting in many dimensions | CC BY-SA 2.5 | null | 2010-11-19T14:30:36.020 | 2013-02-07T03:11:53.160 | null | null | 2052 | [
"modeling"
]
|
4728 | 2 | null | 4685 | 5 | null | Based on the paper you linked to, I would argue that the term EM usually refers to the "soft" version. The key distinction seems to be that instead of taking an expectation in the E-step, the "hard" version finds a mode.
A good explanation of the distinction is available in chapters 20-22 of David Mackay's book (which is [available online](http://www.inference.phy.cam.ac.uk/mackay/itila/)).
| null | CC BY-SA 2.5 | null | 2010-11-19T14:31:20.210 | 2010-11-19T14:31:20.210 | null | null | 495 | null |
4729 | 1 | 4746 | null | 6 | 483 | I am about to do a laboratory experiment in the scientific field of soil ecology and hydrology. Beforehand I want to make sure not to make any crucial mistakes, and therefore I would appreciate any hints and comments from your side. The main issue is how to deal with high natural variability (~25%) and a rather small sample size (total max=20).
Short description of the experiment:
We will put soil cores in cylinders and keep them under different moisture scenarios. Three different carbon forms will be measured for about one month.
We want to know if the variables change within groups and between groups.
The experimental design that was proposed is as follows:
There will be 2 different soil types, 2 different treatments + one control, and each treatment should be replicated three times. The total number of cylinders is thus 18 cylinders.
The variability in field measurements can be as high as 25% within one group. For practical reasons there cannot be more than 20 cylinders in total.
My questions:
- Would it make more sense to have only one treatment and one control, but each one is replicated five times?
- Will I be able to draw any reliable conclusions from this kind of experiment under these conditions?
- How should I set the parameters to make some power calculations using e.g. G*Power 3? Which test should I choose? What should I set the effect size to and what should the numbers for df be?
- How should I analyze the data after the experiment is done? Should I use ANOVA? Can I use a mixed effect model?
| How to setup a laboratory experiment in Ecological Research under high natural variability | CC BY-SA 3.0 | null | 2010-11-19T14:48:02.893 | 2016-08-12T17:32:40.960 | 2016-08-12T17:32:40.960 | 22468 | 2063 | [
"anova",
"mixed-model",
"experiment-design",
"statistical-power",
"degrees-of-freedom"
]
|
4730 | 2 | null | 4686 | 4 | null | Despite promising not to, I have thought about this problem further. This approach differs enough from the previous one I outlined that it seems worthwhile posting it as a separate reply.
---
Both @Aniko and @shabbychef are right: you need to "almost exhaust the population" with "greedy sampling." But there's a twist--on occasion you can get away with a small sample.
Let's first change the notation (only slightly) to provide a clear interpretation of the constraint in the question. Assume that a (small) threshold error probability $p$ and a maximum error size $\epsilon$ (in place of $k$, which will have uses elsewhere) have been specified, so that we require
$$\Pr[|\hat{B}-B| \leq \epsilon] \ge 1 - p$$
regardless of the (unknown) values of the $b_i$ (the "state" of the population).
Let $c_i$ be the cost of sampling element $i$ in the population. Suppose $A$ is a subset of the population that is sampled with probability $\pi_A$, at a cost of $c(A) = \sum_{i \in A}{c_i}$. Let $m$ be the number of elements of $A$ (written $m = |A|$) and let $k$ be the number of them that are 1's. This information tells us that $B$ surely lies between $k$ and $k + n - m$ no matter what the state of the population may be. Provided only that
$$k + n - m - \epsilon \le \hat{B} \le k + \epsilon,$$
we are assured that the error associated with the sample $A$ cannot exceed $\epsilon$. This is not possible whenever $m \lt n - 2 \epsilon$. Let's say that such a subset is "small" (with respect to $\epsilon$ and $n$) and otherwise is "large."
Here is perhaps the only subtlety: when a sample is small we still have a chance of not making an error, provided we use a randomized estimator. An example of the best ones I can find is
$$\hat{B} = k + (2j-1)\epsilon \text{ with probability } \frac{1}{l},\ j=1,2,\ldots, l$$
where $l = \left\lceil \frac{n-m}{2 \epsilon} \right\rceil$. No matter what the values of the unsampled data are, this procedure has at least a chance of $1/l$ of being within $\epsilon$ of the correct total $B$. Using such an estimator, the probability of an unacceptable error is bounded by the expected chance that the randomized estimate will have too great an error:
$$\Pr[|\hat{B}-B| \gt \epsilon] \le \sum_{A}{\pi_A\left(1 - \frac{1}{\left\lceil \frac{n-|A|}{2 \epsilon} \right\rceil}\right)}.$$
(The coefficient when $|A| = n$ appears to be undefined but actually is zero; the sum really needs to extend only over the small subsets where randomization is actually needed.)
We have obtained a linear program for the sample probabilities $\pi_A$; to wit,
Minimize the expected cost $\sum_{A}{\pi_A c(A)}$
subject to
- $\sum_{A}{\pi_A\left(1 - \frac{1}{\left\lceil (n-|A|)/(2 \epsilon) \right\rceil}\right)} \le p,$
- $\sum_{A}{\pi_A} = 1,$
- $\pi_A \ge 0$ for all subsets $A$.
This is a simple linear program (but with $2^n$ variables), easy to set up and easy to solve provided the population has about 16 or fewer bits. When some of the costs are the same, the number of variables can be substantially reduced. With larger populations, approximate methods would be needed to obtain a solution. Generally, the solution cannot include any small samples with appreciable probability: most of the probability must be concentrated on large samples. Among those, it will select the cheapest (which can be found with the greedy algorithm). These heuristics allow for simple, rapid approximations to good solutions.
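Here is a rough sketch of how the program might be set up for a small population, using the lpSolve package purely as an illustration (the function and variable names are made up). It enumerates all $2^n$ subsets, so it is only practical for populations of roughly 16 or fewer bits:
```
library(lpSolve)

sample.design <- function(cost, eps, p) {
  n <- length(cost)
  subsets <- as.matrix(expand.grid(rep(list(0:1), n)))   # all 2^n subsets as 0/1 rows
  m  <- rowSums(subsets)                                 # |A|
  cA <- as.vector(subsets %*% cost)                      # c(A)
  l  <- ceiling((n - m) / (2 * eps))
  err.coef <- ifelse(m >= n - 2 * eps, 0, 1 - 1 / l)     # bound on Pr[error > eps | A]
  sol <- lp(direction = "min",
            objective.in = cA,
            const.mat    = rbind(err.coef, rep(1, nrow(subsets))),
            const.dir    = c("<=", "="),
            const.rhs    = c(p, 1))
  list(pi = sol$solution, expected.cost = sol$objval, subsets = subsets)
}

# e.g. something comparable to the f((1,1,5,5), 1/2, 1/20) example below:
# sample.design(c(1, 1, 5, 5), eps = 0.5, p = 0.05)
```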
---
The solutions can be interesting. Here are some examples. As an abbreviation, let $f(c, \epsilon, p)$ indicate a solution for cost vector $c = (c_1, c_2, \ldots, c_n)$ and problem constraints $\epsilon$ and $p$.
- $f((1,1), 1/2, 1/20)$ samples each element with probability 9/20 and obtains no data with probability 1/10. The expected cost is 0.9.
Why is this? If we sample element 1 we observe $b_1$ and estimate $\hat{B}$ = $b_1 + 1/2$. This is certainly within $1/2$ of the correct value. When we take no sample we estimate that $B$ equals $1/2$ with 50% probability and otherwise estimate that $B$ equals $3/2$. No matter what the population is, this guessing will return the correct answer (within an error of $1/2$) 50% of the time. Thus we make an error greater than $1/2$ only 50% of $1/10$ of the time, which meets the targeted error rate of $1/20$.
- $f((1,1,5,5), 1/2, 1/20)$ elects to sample both cheap bits no matter what. In addition, there is a 45% chance it will also include bit 3 (but not bit 4) and a 45% chance it will also include bit 4 (but not bit 3). The reasoning is similar to the previous situation.
- $f((1,1,5,5), 1/4, 1/20)$ samples the entire population with a $33/35$ probability and otherwise obtains no data with $2/35$ probability.
- $f((1,2,3,4), 1/2, 1/20)$ samples bits 1, 2, and 3 with $14/15$ probability and otherwise obtains no data. The expected cost equals $84/15$ = 5.6.
| null | CC BY-SA 2.5 | null | 2010-11-19T15:56:00.497 | 2010-11-22T15:43:17.257 | 2010-11-22T15:43:17.257 | 919 | 919 | null |
4731 | 2 | null | 4727 | 1 | null | My understanding is that the optimal methodology is going to depend on the surface texture of parametrization errors. That is, how often do parameters reflect or manifest as interactions? If each parameter is an individual value that has its own distinct minimum independent of all other parameters, model fitting should proceed easily with sparse grids and interpolation. If instead there are lots of local minima you might find that sparse grids and interpolation will seldom fall on the same values. Given the time required to fit your model I doubt Monte-Carlo random sampling will be the optimal approach. Another approach which you haven't considered yet is genetic algorithms, but again convergence to a single answer may be difficult.
| null | CC BY-SA 2.5 | null | 2010-11-19T17:03:45.997 | 2010-11-19T17:03:45.997 | null | null | 196 | null |
4733 | 1 | null | null | 3 | 256 | Can you suggest a tutorial or book chapter on the basics of data analysis with exponentially distributed data / exponential noise, at undergraduate level?
By "basics" I mean:
- Is a given set of data exponentially distributed?
- Should one generally use the median instead of the mean for exponential data? Should one simply trim the top 10% (for some value of 10)?
- Least-squares curve fitting, where I believe the errors to be exponential?
(Ok, I'll be made to write out "it's not that simple" 100 times.)
| Basics of data analysis with exponential data/noise? | CC BY-SA 2.5 | null | 2010-11-19T17:53:49.017 | 2010-12-19T19:44:26.317 | 2010-11-19T18:07:59.837 | null | 557 | [
"robust",
"references"
]
|
4734 | 1 | 4760 | null | 4 | 5493 | I know what you're thinking, this is a duplicate of "[What are the differences between Factor Analysis and Principal Component Analysis](https://stats.stackexchange.com/questions/1576/what-are-the-differences-between-factor-analysis-and-principal-component-analysis)", but it isn't really.
That other question deals with Confirmatory Factor Analysis.
Either way, I would like to know what the difference is :)
Thanks!
| What is the difference between Exploratory Factor Analysis and Principal Components Analysis (PCA)? | CC BY-SA 2.5 | null | 2010-11-19T18:55:37.910 | 2010-12-13T04:50:07.943 | 2017-04-13T12:44:56.303 | -1 | 74 | [
"pca",
"factor-analysis"
]
|
4735 | 1 | 4738 | null | 17 | 11693 | These terms get thrown around together a lot, but I would like to know what you think the differences are, if any.
Thanks
| What are the differences among latent semantic analysis (LSA), latent semantic indexing (LSI), and singular value decomposition (SVD)? | CC BY-SA 3.0 | null | 2010-11-19T19:01:58.203 | 2012-07-16T09:24:10.440 | 2012-07-16T09:24:10.440 | null | 74 | [
"pca",
"text-mining",
"svd"
]
|
4736 | 2 | null | 4733 | 2 | null | One of the basic techniques for working with skewed positive data is to analyse it on the log-scale. However its appropriateness depends on what you are really trying to achieve.
| null | CC BY-SA 2.5 | null | 2010-11-19T19:28:52.357 | 2010-11-19T19:28:52.357 | null | null | 279 | null |
4737 | 1 | 4740 | null | 4 | 10331 | I have a class with a set of descriptive statistic functions (mean, median, kurtosis, etc...).
Now I need to include weights (an array) in my equations. My first thought was to just create weighted versions of the functions to which the additional weight array is passed.
However, I was wondering if there is a way to alter the data (array) so that the same functions can be used.
Question:
Is there a way to weight the data before calculating the descriptive statistics, so that the existing functions can be used?
If this is possible then I could add an additional function to weight the data before it is passed into the descriptive functions.
| How to add weight to data in descriptive statistics? | CC BY-SA 4.0 | null | 2010-11-19T19:33:02.400 | 2019-12-11T17:31:13.773 | 2019-12-11T17:31:13.773 | 18417 | 2060 | [
"descriptive-statistics",
"algorithms",
"weights"
]
|
4738 | 2 | null | 4735 | 13 | null | LSA and LSI are mostly used synonymously, with the information retrieval community usually referring to it as LSI. LSA/LSI uses SVD to decompose the term-document matrix A into a term-concept matrix U, a singular value matrix S, and a concept-document matrix V in the form: A = USV'. The wikipedia page has a detailed description of [latent semantic indexing](http://en.wikipedia.org/wiki/Latent_semantic_indexing).
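As a toy illustration (not production LSI code), the rank-k "concept space" is just a truncated SVD of the term-document matrix; `A` here is a made-up count matrix:
```
A <- matrix(rpois(12 * 8, 1), nrow = 12)   # toy term-document counts (12 terms, 8 documents)
s <- svd(A)
k <- 2
U.k <- s$u[, 1:k]                          # term-concept matrix
S.k <- diag(s$d[1:k])                      # top singular values
V.k <- s$v[, 1:k]                          # document-concept matrix
A.k <- U.k %*% S.k %*% t(V.k)              # rank-k approximation of A
```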
| null | CC BY-SA 2.5 | null | 2010-11-19T20:00:45.343 | 2010-11-19T20:00:45.343 | null | null | 881 | null |
4739 | 2 | null | 4659 | 1 | null | In Bayesian land, the Beta distribution is the conjugate prior for the p parameter of the Binomial distribution.
| null | CC BY-SA 2.5 | null | 2010-11-19T20:26:32.220 | 2010-11-19T20:26:32.220 | null | null | 1860 | null |
4740 | 2 | null | 4737 | 5 | null | It isn't entirely clear from your question what sort of 'weight' you are talking about. But I imagine it is a simple matter of wanting to count certain observations more than others...
If you wanted to, and your weights were integer values (or you can find a lowest common denominator to multiply by that will give you integer values), you could simply expand your data out to match the weights. That is, a data point with a weight of 2 could be represented twice in your data array. This is fine for descriptives such as the mean, median, kurtosis, and skew. However, it may be problematic for calculations such as a sample estimate of the standard deviation, where N matters and the difference between the original N-1 and the N-1 of the expanded array might be meaningful.
The only shortcut I can think of that might apply is to multiply your array by the weights and then analyze those results. For the mean, to renormalize the value, you will need to divide the result by sum(weights)/N_original.
However, for the median, kurtosis, and skew you won't be able to (readily) use this technique, and I think you will have to resort to altering your functions or producing new (expanded) data arrays.
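For instance, a small sketch in R (treating the weights as integer frequencies; the same idea carries over to whatever language your class is written in):
```
x <- c(2, 5, 7)
w <- c(3, 1, 2)                  # integer frequency weights

x.expanded <- rep(x, times = w)  # each value repeated w[i] times
mean(x.expanded)                 # weighted mean via expansion
median(x.expanded)               # weighted median via expansion

sum(x * w) / sum(w)              # the "multiply, then renormalize" shortcut for the mean
```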
Good luck.
| null | CC BY-SA 2.5 | null | 2010-11-19T20:34:51.880 | 2010-11-19T20:34:51.880 | null | null | 196 | null |
4741 | 1 | 4743 | null | 2 | 1764 | I do a counting experiment where I count observations as a function of two float parameters $x_1$ and $x_2$. This leads to a two-dimensional histogram where each bin corresponds to the number of observations with $x_1$ and $x_2$ in some range.
I now see a lot of bins with zero counts in them (even though more counts are in principle possible) and wonder how to best describe the uncertainty in these bins.
Knowing that I had $N$ observations which can fall into $k$ bins, I could assign an "expected rate" $n=\frac{N}{k}$ per bin and calculate the uncertainty with Poisson statistic. I however also know that my underlying process does not distribute events flat in $x_1$ and $x_2$ (but with a distribution I would like to extract from the measurement), how can I consistently assign an uncertainty in zero-count bins?
I do not want to rebin my data or drop these zero bins, so as not to lose information.
| Uncertainty on zero counts for binned result | CC BY-SA 2.5 | null | 2010-11-19T20:35:06.833 | 2010-11-20T04:59:03.040 | null | null | 56 | [
"confidence-interval",
"poisson-distribution"
]
|
4742 | 2 | null | 4737 | 7 | null | Weights can arise in data analysis through various mechanisms, each of which requires its own formulas:
- A dataset with many duplicate results can be summarized by listing each unique result together with its frequency of occurrence. This is the definition @drknexus assumes in order to provide a definite answer (after recognizing that other definitions are possible).
- When datasets represent averages or other statistics, their values have known (or at least pre-estimated) levels of uncertainty. The weights can represent those levels. (Typically the appropriate weight to use is the inverse of the variance.) These are incorporated in methods like weighted least squares regression.
- Many datasets obtained through observational studies in the social and biological sciences arise from complex sampling schemes in which units/subjects have differing chances of being selected. The appropriate weights to use in estimates are usually the inverses of the selection probabilities, as in the Hansen-Hurwitz Estimator and the Horvitz-Thompson Estimator.
- Various robust methods, such as IRLS regression, iteratively reweight data in order to de-emphasize atypical values. These weights can enter into formulas in ways that differ from (1) - (3) above.
Thus, you need first to decide what your weights mean and what the purpose of computing the weighted statistics might be.
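To make the differences concrete, here is a small sketch in R with made-up numbers for the first three cases (the weights vector is the same each time, but the appropriate formulas are not):
```
x <- c(2.0, 3.5, 5.0)
w <- c(3, 1, 2)

# (1) Frequency weights: equivalent to replicating each value w[i] times
mean(rep(x, times = w))          # same value as weighted.mean(x, w)

# (2) Precision (inverse-variance) weights: same point estimate, but the
#     standard error of the weighted mean is 1/sqrt(sum(w)) when w = 1/variance
weighted.mean(x, w)
1 / sqrt(sum(w))

# (3) Sampling weights (inverse selection probabilities): the
#     Horvitz-Thompson estimator of a population total
pi.sel <- c(0.5, 0.25, 0.4)      # hypothetical selection probabilities
sum(x / pi.sel)
```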
| null | CC BY-SA 2.5 | null | 2010-11-19T21:49:43.513 | 2010-11-19T21:49:43.513 | null | null | 919 | null |
4743 | 2 | null | 4741 | 2 | null | Nothing wrong with what you've done so far. The null model includes only a constant, i.e. a flat event rate. Fitting more complex [Poisson regression](http://en.wikipedia.org/wiki/Poisson_regression) models will allow the expected value to vary. It's hard to tell what forms the more complex models should take as you've told us nothing about the source of the data, how much data you have, or the question(s) you wish to answer. Binning the data may help suggest what models are appropriate, but you may be better keeping continuous covariates continuous and fitting some smoother form (splines, fractional polynomials, local regression...).
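A minimal sketch of such a comparison, assuming the binned counts and bin centres sit in a data frame `d` with columns `count`, `x1`, `x2` (names are placeholders):
```
library(splines)
m0 <- glm(count ~ 1, family = poisson, data = d)                      # flat rate (null model)
m1 <- glm(count ~ ns(x1, 3) + ns(x2, 3), family = poisson, data = d)  # smooth in x1 and x2
anova(m0, m1, test = "Chisq")                                         # likelihood-ratio comparison
```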
| null | CC BY-SA 2.5 | null | 2010-11-19T21:59:27.513 | 2010-11-19T21:59:27.513 | null | null | 449 | null |
4744 | 2 | null | 4713 | 10 | null | There are symmetric confidence intervals for the Binomial distribution: asymmetry is not forced on us, despite all the reasons already mentioned. The symmetric intervals are usually considered inferior in that
- Although they are numerically symmetric, they are not symmetric in probability: that is, their one-tailed coverages differ from each other. This--a necessary consequence of the possible asymmetry of the Binomial distribution--is the crux of the matter.
- Often one endpoint has to be unrealistic (less than 0 or greater than 1), as @Rob Hyndman points out.
Having said that, I suspect that numerically symmetric CIs might have some good properties, such as tending to be shorter than the probabilistically symmetric ones in some circumstances.
| null | CC BY-SA 2.5 | null | 2010-11-19T22:03:50.017 | 2010-11-19T22:03:50.017 | null | null | 919 | null |
4746 | 2 | null | 4729 | 3 | null | Whether you have a reasonable chance of obtaining (i.e. power to obtain) reliable conclusions depends on how big the effects are that you wish to be able to detect. With such small numbers they'll have to be very large. Clearly, having fewer treatments and more replications per treatment will give you at least a bit more power, or equivalently the ability to detect somewhat smaller effects with the same power.
To put some rough numbers on that, let's ignore the soil types for simplicity (including them will make things more gloomy) and do some standard power calculations for two-sample t-tests. If you compare one treatment vs control with 10 in each group (i.e. 20 in total) you'll have 80% power to detect a difference between treatment and control of 1.25 standard deviations (SDs). With two treatments + control, 6 in each group (18 in total), you have 80% power to detect a difference of 1.4 SDs between both treatments combined and control, or 1.6 SDs between either treatment by itself and control (or between the two treatments). It may well be sensible to log-transform (or perhaps otherwise transform) your data prior to analysis, in which case the SDs are the SDs of the transformed variables.
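Numbers of this sort can be reproduced in base R; note that `power.t.test` returns slightly larger detectable differences than the quick normal-approximation figures above because it uses the t rather than the normal distribution:
```
power.t.test(n = 10, power = 0.8, sig.level = 0.05)   # detectable delta of about 1.3 SD
power.t.test(n = 6,  power = 0.8, sig.level = 0.05)   # detectable delta of about 1.8 SD
```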
In the social sciences, [an effect of around 0.8 SDs or over would often be considered "large"](http://en.wikipedia.org/wiki/Effect_size#.22Small.22.2C_.22medium.22.2C_.22large.22), and designing a study to have decent power only to detect a bigger effect than this might be politely described as "optimistic". But remember that the SD here is the SD of the residual, unexplained variation. You can reduce this by either (1) making your experimental units more uniform or (2) explaining more of the variation by other means.
- The lower the uncontrolled variability the higher the power you'll have to detect effects due to the factors open to experimental manipulation. You say "variability in field measurements can be as high as 25% within one group". But this is a laboratory experiment; is there a reason the variability need be this high in the lab? Can you homogenise your soil before you start the experiment? I guess this may destroy the soil structure though?
- Can you take baseline measurements before the treatments are applied? Using these to explain some of the innate variability between units, by either analysing change since baseline or (better) adding them to the model as covariates (i.e. ANCOVA), may help a lot.
Sorry I haven't mentioned G*Power 3, but I've never heard of it, and from a quick look at the link you gave it looks considerably more sophisticated, and therefore complicated, than is necessary here.
| null | CC BY-SA 2.5 | null | 2010-11-19T22:16:04.327 | 2010-11-19T23:09:37.230 | 2010-11-19T23:09:37.230 | 449 | 449 | null |
4750 | 2 | null | 4655 | 5 | null | It seems to me that if $f$ is strictly monotonic, $m \circ f=f \circ m$, and the question reduces to $\mu\circ f>f\circ\mu$, which is covered by Jensen's inequality. So strict convexity and strict monotonicity together would be a sufficient condition.
| null | CC BY-SA 2.5 | null | 2010-11-20T03:29:37.537 | 2010-12-01T05:56:23.050 | 2010-12-01T05:56:23.050 | 2456 | 2456 | null |
4751 | 2 | null | 4741 | 1 | null | Try running a nonparametric smoother (also called a kernel density estimate) over your data to estimate the expected value (and therefore the expected proportion) in each bin.
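For example, in R, assuming `x1` and `x2` hold the raw observation coordinates (rather than the binned counts):
```
library(MASS)                      # for kde2d
dens <- kde2d(x1, x2, n = 50)      # bivariate kernel density estimate
image(dens)                        # smoothed picture of the event rate
contour(dens, add = TRUE)
```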
If you do have covariate data, see how the nonparametric smooth compares to the regression model that onestop recommends. A parametric model is usually a lot less wiggly than a simple smoothed estimate.
| null | CC BY-SA 2.5 | null | 2010-11-20T04:59:03.040 | 2010-11-20T04:59:03.040 | null | null | 5792 | null |
4752 | 2 | null | 4735 | 8 | null | Notably while LSA and LSI use SVD to do their magic, there is a computationally and conceptually simpler method called HAL (Hyperspace Analogue to Language) that sifts through text keeping track of preceding and subsequent contexts. Vectors are extracted from these (often weighted) co-occurrence matrices and specific words are selected to index the semantic space. In many ways I'm given to understand it performs as well as LSA without requiring the mathematically/conceptually complex step of SVD. See Lund & Burgess, 1996 for details.
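A toy sketch of the idea in R (unweighted counts over a 2-word forward window; real HAL implementations weight co-occurrences by distance):
```
words <- c("the", "cat", "sat", "on", "the", "mat")
vocab <- unique(words)
M <- matrix(0, length(vocab), length(vocab), dimnames = list(vocab, vocab))
win <- 2
for (i in seq_len(length(words) - 1)) {
  for (j in (i + 1):min(i + win, length(words))) {
    M[words[i], words[j]] <- M[words[i], words[j]] + 1   # word j follows word i within the window
  }
}
M   # rows/columns of M (or weighted versions of it) serve as word vectors
```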
| null | CC BY-SA 2.5 | null | 2010-11-20T07:05:49.040 | 2010-11-20T07:05:49.040 | null | null | 196 | null |
4753 | 1 | null | null | 8 | 2351 | I am interested in finding some practical (and reasonably well accepted) techniques for finding the underlying factors of a sparse matrix.
Specifically, I have a very large sparse matrix whose cells appear to be populated from an approximately geometric distribution. In its natural form the matrix is square. The values in the cells represent item x item co-occurrences, under case 1 above the diagonal and under case 2 below the diagonal. If necessary I can subset the matrix to particularly interesting items in order to make it rectangular. I believe that there are meaningful factors underlying this structure. However, my understanding is that because the matrix is sparse, factor analysis is not an appropriate approach. What approach can I take that will make it most likely that I can find interpretable patterns in the data?
I saw that there was [another question](https://stats.stackexchange.com/questions/4267/essential-papers-on-matrix-decompositions) asking for references on sparse variants of PCA, but I think I'm looking for something more akin to an obliquely rotated factor solution. I'm willing to dig into suggested readings somewhat, but my prior experience with factor analysis (and related techniques) is limited, and I prefer a relatively straightforward answer (one with R code is even better).
| How can one extract meaningful factors from a sparse matrix? | CC BY-SA 2.5 | null | 2010-11-20T07:22:34.117 | 2011-05-26T15:05:23.787 | 2017-04-13T12:44:20.840 | -1 | 196 | [
"r",
"pca",
"factor-analysis",
"matrix-decomposition"
]
|
4754 | 2 | null | 4597 | 2 | null | A moving standard deviation sounds like a reasonable thing to use... here is a toy example in plain, unoptimized C; you should get the general idea:
```
#include <stdio.h>
#include <math.h>

#define NPixelColumns 1000 /* the number of pixel columns */
#define WindowSize 10      /* the size of the moving window for the standard deviation */

double BrightnessVals[NPixelColumns]; /* someplace to store your data initially */

int main(void)
{
    int startIndex; /* where the moving window starts */
    int lcv;        /* generic loop control variable */

    /* ...fill BrightnessVals with your column brightness data here... */

    for (startIndex = 0; startIndex <= NPixelColumns - WindowSize; startIndex++)
    {
        int endIndex = startIndex + (WindowSize - 1);
        double sum = 0.0;  /* the sum of values in the window */
        double xbar;       /* the mean in the window */
        double SS = 0.0;   /* the sum of squared deviations from the mean */

        for (lcv = startIndex; lcv <= endIndex; lcv++)
            sum += BrightnessVals[lcv];
        xbar = sum / WindowSize;

        for (lcv = startIndex; lcv <= endIndex; lcv++)
            SS += pow(BrightnessVals[lcv] - xbar, 2);

        printf("At step %i the moving SD is: %f\n", startIndex, sqrt(SS / (WindowSize - 1)));
    }
    return 0;
}
```
In R this kind of thing is a snap:
```
sdwindow <- function(start,end,data)
{
return(sd(data[start:end]))
}
nsamp <- 1000 #The number of samples to look over
windowsize <- 10 #The size of the window to get the SD of
x <- rnorm(nsamp) #Sample data
start <- 1:(nsamp-windowsize+1) #starting points for the windows
end <- start + windowsize - 1 #ending points, so each window holds exactly windowsize values
doit <- Vectorize(sdwindow, vectorize.args = c("start","end")) #save me the trouble of figuring out mapply for the nth time.
doit(start,end,x) #generate the result
```
| null | CC BY-SA 2.5 | null | 2010-11-20T07:53:17.650 | 2010-11-20T08:07:46.623 | 2010-11-20T08:07:46.623 | 196 | 196 | null |
4756 | 1 | null | null | 60 | 41084 | I have a random sample of Bernoulli random variables $X_1 ... X_N$, where $X_i$ are i.i.d. r.v. and $P(X_i = 1) = p$, and $p$ is an unknown parameter.
Obviously, one can find an estimate for $p$: $\hat{p}:=(X_1+\dots+X_N)/N$.
My question is how can I build a confidence interval for $p$?
| Confidence interval for Bernoulli sampling | CC BY-SA 3.0 | null | 2010-11-20T12:05:26.533 | 2022-06-09T03:28:28.347 | 2022-06-09T03:28:28.347 | 11887 | null | [
"confidence-interval",
"binomial-distribution",
"bernoulli-distribution",
"faq"
]
|
4759 | 1 | 4782 | null | 8 | 1530 | Just out of curiosity...
What language is used most here?
R? MATLAB? Python? Java?
Which do you use for prototyping and which for production?
For example, I think MATLAB is mostly used for prototyping, and Python for both prototyping and production...
| What programming language for statistical inference? | CC BY-SA 2.5 | null | 2010-11-20T14:57:19.843 | 2016-06-12T06:52:39.940 | 2010-11-21T20:26:44.323 | null | 2046 | [
"r",
"matlab",
"python",
"java"
]
|
4760 | 2 | null | 4734 | 5 | null | Essentially, principal components analysis breaks the data down into components that account for the total variance of your matrix.
Factor analysis does the same, BUT it only examines the variance which is common to multiple items. Basically, EFA is a tool for determining latent structure, while PCA is a tool for reducing the number of items. Both are useful, but FA tends to be more useful (for me, at least).
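In R the distinction shows up in which function you reach for (a sketch only; `dat` stands for a numeric data frame):
```
pca <- princomp(dat, cor = TRUE)    # components account for the total variance
efa <- factanal(dat, factors = 2)   # factors account only for the shared (common) variance
```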
| null | CC BY-SA 2.5 | null | 2010-11-20T16:31:27.420 | 2010-11-20T16:31:27.420 | null | null | 656 | null |
4761 | 2 | null | 4759 | 6 | null | It should be clear by [looking at the most popular tags](https://stats.stackexchange.com/tags) that R is the most popular language on this site. Whether that makes it the most popular language for statistical analysis can't be inferred directly, but one might suppose as much.
| null | CC BY-SA 2.5 | null | 2010-11-20T19:19:08.980 | 2010-11-20T19:19:08.980 | 2017-04-13T12:44:53.777 | -1 | 5 | null |
4762 | 1 | 7383 | null | 2 | 8532 | Some googling revealed that doing the F-test for Lack-of-Fit in SPSS is not so trivial. It seems one has to “trick” SPSS to do that. See for example [this](http://www.math.umt.edu/olear/STAT458/Lab%205.pdf). Can anybody suggest a better source of information on how this can be done? I have SPSS 16. Of course I know it can be easily done using R but I am interested in the SPSS way.
Thanks
| F-test for Lack-of-Fit in SPSS | CC BY-SA 2.5 | null | 2010-11-20T20:22:06.530 | 2011-02-18T20:40:13.133 | null | null | 339 | [
"regression",
"anova",
"spss"
]
|
4763 | 2 | null | 4564 | 6 | null | GBP includes the junction tree algorithm as a special case, and since junction tree is exact, GBP will be exact whenever the region graph corresponds to a junction tree. This is the only general case where GBP is exact, as shown by Theorem 14 of [Pakzad and Anantharam (Neural Computation, 2005)](http://www.eecs.berkeley.edu/~ananth/2002+/Payam/submittedkikuchi.pdf).
| null | CC BY-SA 2.5 | null | 2010-11-20T21:51:48.870 | 2010-11-20T23:51:23.590 | 2010-11-20T23:51:23.590 | 2074 | 2074 | null |
4765 | 2 | null | 4759 | 8 | null | Well, you can PAY for MATLAB, and then either (1) program the stuff you really need from the ground up or (2) PAY MORE for MATLAB toolboxes. And discover that doing useful statistics in MATLAB was an afterthought handled in the increasingly less useful Statistics Toolbox. Or...you can download R for FREE and search for (and find!) the packages you need, which you can also download for FREE.
Lots of small scale production stuff can be done in R. If you're doing something really big (think US census), you probably need to go learn SAS--and get your employer to pay for it.
| null | CC BY-SA 2.5 | null | 2010-11-21T02:00:15.490 | 2010-11-21T02:00:15.490 | null | null | 5792 | null |
4766 | 1 | null | null | 7 | 3202 | Randomized SVD decomposes a matrix by extracting the first k singular values/vectors using k+p random projections. This works surprisingly well for large matrices.
My question concerns the singular values that are output by the algorithm. Why aren't they equal to the first k singular values of the full SVD?
Below I have a simple implementation in R. Any suggestions on improving the performance would be appreciated.
```
rsvd = function(A, k=10, p=5){
n = nrow(A)
y = A %*% matrix(rnorm(n * (k+p)), nrow=n)
q = qr.Q(qr(y))
b = t(q) %*% A
svd = svd(b)
list(u=q %*% svd$u, d=svd$d, v=svd$v)
}
> set.seed(10)
> A <- matrix(rnorm(500*500),500,500)
> svd(A)$d[1:15]
[1] 44.94307 44.48235 43.78984 43.44626 43.27146 43.15066 42.79720 42.54440 42.27439 42.21873 41.79763 41.51349 41.48338 41.35024 41.18068
> rsvd(A,10,5)$d
[1] 34.83741 33.83411 33.09522 32.65761 32.34326 31.80868 31.38253 30.96395 30.79063 30.34387 30.04538 29.56061 29.24128 29.12612 27.61804
> B <- matrix(rnorm(500*50),500,500) # rank 50
> rsvd(B,10,5)$d
[1] 86.48035 83.02114 81.03988 80.04358 77.24979 76.10945 74.47357 74.08382
[9] 72.85898 72.06897 69.59526 67.70750 66.53867 62.96446 61.50838
> svd(B)$d[1:15]
[1] 92.44779 91.47689 88.71948 88.08170 87.24533 85.13312 84.14741 83.71757
[9] 82.80832 81.43005 80.73903 79.92959 78.87421 78.33509 77.38431
```
As Joris pointed out, I have this posted on Stack Overflow as well. You can find the relevant conversation here:
[https://stackoverflow.com/questions/4224031/randomized-svd-singular-values](https://stackoverflow.com/questions/4224031/randomized-svd-singular-values)
Also see the relevant paper by Martinsson et al: [A randomized algorithm for the decomposition of matrices](http://dx.doi.org/10.1016/j.acha.2010.02.003)
| Randomized SVD and singular values | CC BY-SA 2.5 | null | 2010-11-21T04:17:20.387 | 2010-11-28T16:01:55.277 | 2017-05-23T12:39:26.203 | -1 | 2078 | [
"matrix-decomposition",
"svd"
]
|
4767 | 2 | null | 4695 | 2 | null | This is a pretty general problem in time series analysis. I'd probably start by looking at some descriptive statistics like the cross-correlation to see if the samples are roughly independent over time. You could also test whether the correlation between successive samples is significant.
Or you could go the model-fitting route, in which case one simple thing to do is to fit an auto-regressive model of some order k and then do a model comparison against the static model. If you assume that $\theta$ just follows a Gaussian random walk, then the model you're describing is exactly a Kalman filter. So that might be another thing to look at.
| null | CC BY-SA 2.5 | null | 2010-11-21T04:25:37.277 | 2010-11-21T04:25:37.277 | null | null | 2077 | null |
4768 | 1 | 4772 | null | 27 | 17434 | I was asked this question during an interview for a trading position with a proprietary trading firm. I would very much like to know the answer to this question and the intuition behind it.
Amoeba Question:
A population of amoebas starts with 1. After 1 period that amoeba can divide into 1, 2, 3, or 0 (it can die) with equal probability. What is the probability that the entire population dies out eventually?
| Amoeba Interview Question | CC BY-SA 4.0 | null | 2010-11-21T05:20:48.913 | 2020-10-05T21:42:31.727 | 2018-05-18T17:08:45.177 | 44269 | 2079 | [
"probability"
]
|
4769 | 2 | null | 4768 | 7 | null | This sounds related to the [Galton Watson](http://en.wikipedia.org/wiki/Galton%E2%80%93Watson_process) process, originally formulated to study the survival of surnames. The probability depends on the expected number of sub-amoebas after a single division. In this case that expected number is $3/2,$ which is greater than the critical value of $1$, and thus the probability of extinction is less than $1$.
By considering the expected number of amoeba after $k$ divisions, one can easily show that if the expected number after one division is less than $1$, the probability of extinction is $1$. The other half of the problem, I am not so sure about.
| null | CC BY-SA 2.5 | null | 2010-11-21T05:45:09.687 | 2010-11-21T18:05:47.807 | 2010-11-21T18:05:47.807 | 795 | 795 | null |
4770 | 2 | null | 4267 | 2 | null | Witten, Tibshirani - Penalized matrix decomposition
[http://www.biostat.washington.edu/~dwitten/Papers/pmd.pdf](http://www.biostat.washington.edu/~dwitten/Papers/pmd.pdf)
[http://cran.r-project.org/web/packages/PMA/index.html](http://cran.r-project.org/web/packages/PMA/index.html)
Martinsson, Rokhlin, Szlam, Tygert - Randomized SVD
[http://cims.nyu.edu/~tygert/software.html](http://cims.nyu.edu/~tygert/software.html)
[http://cims.nyu.edu/~tygert/blanczos.pdf](http://cims.nyu.edu/~tygert/blanczos.pdf)
| null | CC BY-SA 2.5 | null | 2010-11-21T09:44:03.770 | 2010-11-22T19:18:52.707 | 2010-11-22T19:18:52.707 | 2078 | 2078 | null |
4771 | 2 | null | 2806 | 12 | null | You could try a couple of options.
1- Penalized Matrix Decomposition. You apply penalty constraints on the u's and v's to get some sparsity. It is a quick algorithm that has been used on genomics data.
See Witten and Tibshirani, "A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis." They also have an R package.
2- Randomized SVD. Since SVD is a master algorithm, finding a very quick approximation might be desirable, especially for exploratory analysis. Using randomized SVD, you can do PCA on huge datasets.
See Martinsson, Rokhlin, and Tygert "A randomized algorithm for the decomposition of matrices". Tygert has code for a very fast implementation of PCA.
Below is a simple implementation of randomized SVD in R.
```
ransvd = function(A, k=10, p=5) {
n = nrow(A)
y = A %*% matrix(rnorm(n * (k+p)), nrow=n)
q = qr.Q(qr(y))
b = t(q) %*% A
svd = svd(b)
list(u=q %*% svd$u, d=svd$d, v=svd$v)
}
```
| null | CC BY-SA 2.5 | null | 2010-11-21T09:56:02.610 | 2010-11-21T09:56:02.610 | null | null | 2078 | null |
4772 | 2 | null | 4768 | 38 | null | Cute problem. This is the kind of stuff that probabilists do in their heads for fun.
The technique is to assume that there is such a probability of extinction, call it $P$. Then, looking at a one-deep decision tree for the possible outcomes we see--using the Law of Total Probability--that
$P=\frac{1}{4} + \frac{1}{4}P + \frac{1}{4}P^2 + \frac{1}{4}P^3$
assuming that, in the cases of 2 or 3 "offspring" their extinction probabilities are IID. This equation has two feasible roots, $1$ and $\sqrt{2}-1$. Someone smarter than me might be able to explain why the $1$ isn't plausible.
Jobs must be getting tight -- what kind of interviewer expects you to solve cubic equations in your head?
| null | CC BY-SA 2.5 | null | 2010-11-21T11:47:10.303 | 2010-11-21T18:41:24.833 | 2010-11-21T18:41:24.833 | null | 5792 | null |
4773 | 2 | null | 4768 | 22 | null | Some back-of-the-envelope calculation (literally - I had an envelope lying around on my desk) gives me a probability of 42/111 (38%) of never reaching a population of 3.
I ran a quick Python simulation, seeing how many populations had died off by 20 generations (at which point they usually either died out or are in the thousands), and got 4164 dead out of 10000 runs.
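For anyone who wants to check it, a comparable quick simulation in R (not the original Python code; the cap is an arbitrary shortcut, since populations that large essentially never die out):
```
set.seed(1)
sim_once <- function(generations = 20, cap = 1000) {
  pop <- 1
  for (g in seq_len(generations)) {
    if (pop == 0 || pop > cap) break                 # extinct, or effectively safe
    pop <- sum(sample(0:3, pop, replace = TRUE))     # each amoeba leaves 0-3 offspring
  }
  pop == 0
}
mean(replicate(10000, sim_once()))                   # roughly 0.41-0.42
```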
So the answer is 42%.
| null | CC BY-SA 2.5 | null | 2010-11-21T11:48:51.067 | 2010-11-21T11:48:51.067 | null | null | 1737 | null |
4774 | 2 | null | 4172 | 1 | null | First of all, a question of interest: if you want the measurements not to differ significantly, why change the tool at all? Simply to get more frequent measurements, or for economic reasons?
Now for the reply.
I do not entirely understand how you gathered the data. You say both instruments were colocated, and that instrument B gathered data more frequently. For comparison purposes, you would want those measurements of B that were made at the same place as A. As you cannot match them exactly, you need to interpolate. For this, I would use timestamps, but assume you don't have them, as you went with the GPS coordinates (although you did say "4-minute interval"). I'll assume the GPS coordinates are accurate for both instruments, and that the measured effect doesn't noticeably vary over very small distances, such as the instruments' locations on the vehicle. For interpolation, you need a model of the variability of the measured effect between datapoints. You say you used the nearest neighbour. That's perfectly acceptable, but I would probably go with a linear interpolant myself. Using all datapoints would require a very good model, which you may not yet have.
As for the comparisons:
- You found that the instruments do differ, depending on the true value. This means at least one of the instruments is biased, but you will not be able to tell from the measurements alone which one. You may be able to fit a generalized linear model to describe how the discrepancy depends on the measured level.
- The correlation you measured is actually not all that good for what is supposed to be 1. The Pearson correlation does not require normality, but it does assume the variables are stationary and independent - which yours aren't.
- Large N isn't a problem, it's a good thing. Your large t-value also tells you that if your variable is normally distributed, the instruments are not equivalent.
- In testing the reported categories, start with a simple crosstabulation to see if any particular mismatch is more likely than others. That the time series is autocorrelated doesn't matter when looking just at the differences. To test for independence of the categories, you first need to determine whether categories depend on the discretized variable, and if not, whether they nevertheless are ordered or merely unordered labels. The choice of test comes after you characterize how you think the data should behave.
| null | CC BY-SA 2.5 | null | 2010-11-21T15:17:02.290 | 2010-11-21T15:17:02.290 | null | null | 2456 | null |
4775 | 1 | 4776 | null | 27 | 13360 | I was wondering if it is possible to do symbolic computation in R?
For example,
I was hoping to get the inverse of a symbolic covariance matrix of 3D Gaussian distribution.
Also can I do symbolic integration and differentiation in R?
| Symbolic computation in R? | CC BY-SA 4.0 | null | 2010-11-21T16:24:16.763 | 2019-01-26T23:44:32.410 | 2019-01-26T23:44:32.410 | 11887 | 1005 | [
"r"
]
|
4776 | 2 | null | 4775 | 22 | null | Yes. There is the [Ryacas package](http://cran.r-project.org/web/packages/Ryacas/index.html) which is hosted on Google Code [here](http://code.google.com/p/ryacas/). Ryacas has recently been expanded/converted to the rMathpiper package which is hosted [here](http://code.google.com/p/rmathpiper/). I have used Ryacas and it is straightforward, but you will need to install [Yacas](http://yacas.sourceforge.net/) in order for it to work (Yacas does all the heavy lifting; Ryacas is just an R interface to Yacas).
There is also the rSymPy project hosted on Google Code [here](http://rsympy.googlecode.com/). I haven't tried this one. The idea is similar, though: it links to the sympy CAS, which does the symbolic work.
| null | CC BY-SA 2.5 | null | 2010-11-21T16:35:20.947 | 2010-11-21T16:35:20.947 | null | null | null | null |
4777 | 2 | null | 4775 | 19 | null | Some things are also in base R --- see `help(deriv)` or `help(D)`.
A simple example from that help page:
```
R> trig.exp <- expression(sin(cos(x + y^2)))
R> ( D.sc <- D(trig.exp, "x") )
-(cos(cos(x + y^2)) * sin(x + y^2))
R> all.equal(D(trig.exp[[1]], "x"), D.sc)
[1] TRUE
R>
```
| null | CC BY-SA 2.5 | null | 2010-11-21T17:29:00.630 | 2010-11-21T17:29:00.630 | null | null | 334 | null |
4778 | 2 | null | 4775 | 6 | null | It makes more sense to use a "real" CAS like [Maxima](http://en.wikipedia.org/wiki/Maxima_%28software%29).
| null | CC BY-SA 2.5 | null | 2010-11-21T17:51:35.997 | 2010-11-21T23:40:42.427 | 2010-11-21T23:40:42.427 | 449 | 1966 | null |
4779 | 2 | null | 4700 | 18 | null | The distinction is only meaningful in the context of non-Bayesian statistics. In Bayesian statistics, all model parameters are "random".
| null | CC BY-SA 2.5 | null | 2010-11-21T18:00:46.553 | 2010-11-21T18:00:46.553 | null | null | 1966 | null |
4780 | 2 | null | 4766 | 4 | null | I do not think the singular values should match those of the full matrix. You are computing an approximation of the input matrix by projection onto $k+p$ random vectors. For a rank $k+p$ matrix to approximate a rank $n \gg k+p$ matrix, the trace should probably be the same, but then if the first $k$ singular values are to overlap, you have to push a lot of 'variance' to the last $p$ singular values of the approximation (probably so many that they are no longer the least significant singular values).
Another way of looking at this is one is approximating $A = U\Sigma V'$ by another decomposition, $T \Gamma W'$. We should not expect a fast randomized algorithm to magically work such that $T$ is the first $k$ columns of $U$, $\Gamma$ is a submatrix of $\Sigma$, etc.
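For anyone who wants to see this empirically, here is a rough R sketch of the basic randomized range-finder; the toy matrix, `k` and `p` below are placeholders for whatever you are actually using:
```
set.seed(1)
A <- matrix(rnorm(200 * 100), 200, 100)                    # toy input matrix
k <- 5; p <- 5
Omega <- matrix(rnorm(ncol(A) * (k + p)), nrow = ncol(A))  # random test matrix
Q <- qr.Q(qr(A %*% Omega))                                 # orthonormal basis for the sampled range
B <- t(Q) %*% A                                            # small (k + p) x ncol(A) matrix
round(rbind(full = svd(A)$d[1:k], approx = svd(B)$d[1:k]), 2)
```
How close the two rows come depends on how quickly the spectrum of the input decays.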
| null | CC BY-SA 2.5 | null | 2010-11-21T18:37:03.600 | 2010-11-21T18:37:03.600 | null | null | 795 | null |
4781 | 2 | null | 4267 | 3 | null | Maybe, you can find interesting
- [Learning with Matrix Factorizations] PhD thesis by Nathan Srebro,
- [Investigation of Various Matrix Factorization Methods for Large Recommender Systems], Gábor Takács et.al. and almost the same technique described here
The last two links show how sparse matrix factorizations are used in Collaborative Filtering. However, I believe that SGD-like factorization algorithms can be useful somewhere else (at least they are extremely easy to code)
| null | CC BY-SA 2.5 | null | 2010-11-21T19:28:40.950 | 2010-11-23T09:02:09.487 | 2010-11-23T09:02:09.487 | 1725 | 1725 | null |
4782 | 2 | null | 4759 | 7 | null | I couldn't agree more with a vote for R. R is the "Lingua Franca" of the statistics world. It is the definition of cutting edge: new methods tend to appear in R first, while equivalent packages for MATLAB and SAS take several months. The language is very simple to understand compared to SAS. It also gives you the power to connect with C/C++/Python and databases.
Consider Revolution Analytics version of R for a bit more performance.
[http://www.revolutionanalytics.com/products/revolution-r.php](http://www.revolutionanalytics.com/products/revolution-r.php)
| null | CC BY-SA 2.5 | null | 2010-11-21T21:02:25.647 | 2010-11-21T21:02:25.647 | null | null | 2078 | null |
4783 | 1 | null | null | 21 | 15046 | I have some cumulative frequency data. A line $y=ax+b$ looks like it fits the data extremely well, but there is cyclic/periodic wiggle in the line. I would like to estimate when the cumulative frequency will reach a certain value $c$. When I plot the residuals vs. fitted values, I get a beautiful sinusoidal behavior.
Now, to add another complication, note that in the residuals plots

there are two cycles that have lower values than the others, which represents a weekend effect that also must be taken into account.
So, where do I go from here? How can I combine some cosine, sine, or cyclic term into a regression model to approx. estimate when the cumulative frequency will equal $c$?
| How to add periodic component to linear regression model? | CC BY-SA 2.5 | null | 2010-11-21T21:21:46.837 | 2016-09-30T16:01:15.320 | 2010-11-21T23:08:42.967 | null | 2083 | [
"time-series",
"regression"
]
|
4784 | 2 | null | 4783 | 9 | null | You could try the wonderful `stl()` method -- it decomposes (using iterated `loess()` fitting) into trend and seasonal and remainder. This may just pick up your oscillations here.
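A minimal sketch of what the call might look like, assuming the raw (non-cumulative) values are in `y` and you know how many observations you get per day (the `frequency = 24` below is just a placeholder for your actual sampling rate):
```
y_ts <- ts(y, frequency = 24)            # e.g. hourly observations, daily seasonality
fit  <- stl(y_ts, s.window = "periodic")
plot(fit)                                # trend, seasonal and remainder panels
```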
| null | CC BY-SA 2.5 | null | 2010-11-21T21:55:09.817 | 2010-11-21T21:55:09.817 | null | null | 334 | null |
4785 | 2 | null | 4783 | 8 | null | If you know the frequency of the oscillation, you can include two additional predictors, sin(2π w t) and cos(2π w t) -- set w to get the desired wavelength -- and this will model the oscillation. You need both terms to fit the amplitude and the phase angle. If there is more than one frequency, you will need a sine and cosine term for each frequency.
If you don't know what the frequencies are, the standard way to isolate multiple frequencies is to detrend the data (get the residuals from the linear fit, as you have done) and run a discrete Fourier transform against the residuals. A quick and dirty way to do this is in MS-Excel, which has a Fourier Analysis tool in the Data Analysis Add-In. Run the analysis against the residuals, take the absolute value of the transforms, and bar graph the result. The peaks will be your major frequency components that you want to model.
When you add these cyclic predictors, pay close attention to their p-values in your regression, and don't overfit. Use only those frequencies that are statistically significant. Unfortunately, this may make fitting the low frequencies a little difficult.
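If you prefer to stay in R rather than Excel, a sketch of the same workflow might look like this; `resid_lm` is assumed to hold the residuals of your linear fit, and `y` and `t` your data and time variable:
```
spec <- spectrum(resid_lm, log = "no")          # raw periodogram of the residuals
w    <- spec$freq[which.max(spec$spec)]         # dominant frequency (cycles per time step)

fit2 <- lm(y ~ t + sin(2 * pi * w * t) + cos(2 * pi * w * t))
summary(fit2)                                   # check the p-values of the cyclic terms
```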
| null | CC BY-SA 2.5 | null | 2010-11-21T22:57:47.480 | 2010-11-21T22:57:47.480 | null | null | 5792 | null |
4786 | 1 | null | null | 5 | 1202 | I am looking at using Interior Point method for optimizing a convex function. The convex function is basically the log-likelihood of a binary logistic regression model. Can I use this technique?
In general, is there anything that prevents applying a constrained optimization technique to an unconstrained problem? From what I can tell, an unconstrained problem is just a constrained problem without the constraints and thus should be solvable using these techniques.
| Can constrained optimization techniques be applied to unconstrained problems? | CC BY-SA 2.5 | null | 2010-11-21T23:09:28.083 | 2010-11-22T12:52:59.547 | 2010-11-21T23:10:19.877 | null | 2071 | [
"optimization"
]
|
4787 | 2 | null | 4786 | 3 | null | As far as I know, there is no reason to stop you from applying constrained optimization to an unconstrained problem. However, this may not be a great idea in terms of computational complexity and convergence. For example, fitting a logistic regression model can be done efficiently with the Newton-Raphson approach (or the Fisher scoring variant). I am not sure if there is much to gain with the interior point approach in this particular case.
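For illustration, here is a bare-bones Newton-Raphson/IRLS sketch for logistic regression (roughly what `glm(..., family = binomial)` does internally; `X` is assumed to be a design matrix with an intercept column and `y` a 0/1 response):
```
irls_logistic <- function(X, y, n_iter = 25) {
  beta <- rep(0, ncol(X))
  for (i in seq_len(n_iter)) {
    eta  <- drop(X %*% beta)
    p    <- 1 / (1 + exp(-eta))
    W    <- p * (1 - p)                                  # IRLS weights
    z    <- eta + (y - p) / W                            # working response
    beta <- solve(t(X) %*% (W * X), t(X) %*% (W * z))    # weighted least-squares step
  }
  drop(beta)
}
```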
| null | CC BY-SA 2.5 | null | 2010-11-22T00:09:23.823 | 2010-11-22T00:09:23.823 | null | null | 530 | null |
4788 | 2 | null | 4786 | 3 | null | The general sense in optimization is that if you have a convex function and no constraints, you want to use the "powerful stuff": gradient descent, Newton's method, etc. Without constraints, interior point methods are not very competitive.
In particular for the problem you're studying (binary logistic regression) you should consider trying simple stochastic gradient descent.
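A minimal sketch of that in R (the step size and number of passes are arbitrary illustrative choices; `X` is a design matrix and `y` a 0/1 response):
```
sgd_logistic <- function(X, y, lr = 0.1, epochs = 20) {
  beta <- rep(0, ncol(X))
  for (e in seq_len(epochs)) {
    for (i in sample(nrow(X))) {                 # one pass over the data in random order
      p    <- 1 / (1 + exp(-sum(X[i, ] * beta)))
      beta <- beta + lr * (y[i] - p) * X[i, ]    # per-observation log-likelihood gradient
    }
  }
  beta
}
```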
Nothing really stops you from applying constrained optimization techniques to unconstrained problems, the same way nothing stops you from pushing (instead of riding) your car to work. But you should definitely try interior point methods without constraints and convince yourself of this.
Finally, you mention that you want to try linear programming-based methods, presumably without constraints; I don't quite understand what you plan to do in this case.
| null | CC BY-SA 2.5 | null | 2010-11-22T05:03:20.090 | 2010-11-22T09:03:55.123 | 2010-11-22T09:03:55.123 | 1540 | 1540 | null |
4789 | 2 | null | 4759 | 4 | null | R and SAS each have their pros and cons. I think more statisticians need to embrace the fact that lots of great statistical software is available, rather than endlessly bicker about which is superior.
R is free. SAS is very expensive. R gives you the ability to do just about anything. SAS may or may not. R has amazing graphical abilities. Seeing SAS graphics makes it feel like 1985 all over again. SAS has great customer support. R support = hours of searching mailing list archives. Also with a name like "R", search engine results are often poor. R is extremely slow and does not deal well with large data sets. SAS does fine with large data sets. SAS tends to be more robust. In my experience, when it comes to mixed effects modeling or anything involving design of experiments (such as analyzing crossover designs), SAS is superior.
For large scale, brute force simulations, I use Fortran. I used to use C, but have found Fortran is much easier to use. I've never used MATLAB. If I need statistical power of R but the speed of Fortran, I will write the time-intensive operations (i.e. loops) in Fortran and call the subroutine from R.
| null | CC BY-SA 3.0 | null | 2010-11-22T05:44:40.397 | 2016-06-12T06:52:39.940 | 2016-06-12T06:52:39.940 | 22047 | null | null |
4790 | 2 | null | 4663 | 13 | null | The "no free lunch" theorems suggest that there are no a-priori distinctions between statistical inference algorithms, i.e. whether LARS or LASSO works best depends on the nature of the particular dataset. In practice then, it is best to try both and use some reliable estimator of generalisation performance to decide which to use in operation (or use an ensemble). As the differences between LARS and LASSO are rather slight, the differences in performance are likely to be rather slight as well, but in general there is only one way to find out for sure!
| null | CC BY-SA 2.5 | null | 2010-11-22T10:56:39.330 | 2010-11-22T11:25:15.910 | 2010-11-22T11:25:15.910 | 887 | 887 | null |
4793 | 2 | null | 4786 | 4 | null | As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in your parameters for the independents due to a bad model-specification. Pretty often a constraint is "needed" when the variance-covariance matrix is ill-structured, when there is a lot of (unaccounted) correlation between independents, when you have aliasing or near-aliasing in datasets, when you gave the model too many degrees of freedom, and so on. Basically, every condition that inflates the variance on the parameter estimates will cause an unconstrained method to behave poorly.
You can look at constrained optimization, but I reckon you should first take a look at your model if you believe constrained optimization is necessary. This is for two reasons:
- There's no way you can still rely on the inference, even on the estimated variances for your parameters
- You have no control over the amount of bias you introduce.
So depending on the goal of the analysis, constrained optimization can be a sub-optimal solution (purely estimating the parameters) or inappropriate (when inference is needed).
On a side note, penalized methods (in this case penalized likelihoods) are specifically designed for these cases, and introduce the bias in a controlled manner where it is accounted for (mostly). Using these, there is no need to go into constrained methods, as the classic optimization algorithms will do a pretty good job. And with the correct penalization, inference is still valid in many cases. So I'd rather go for such a method instead of putting arbitrary constraints that are not backed up with an inferential framework.
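As a concrete illustration of that route, a ridge-penalized logistic regression with the `glmnet` package might look like the sketch below (`x` is assumed to be a numeric predictor matrix and `y` a binary response):
```
library(glmnet)
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0)  # alpha = 0 gives the ridge penalty
coef(cvfit, s = "lambda.min")                             # coefficients at the CV-chosen penalty
```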
My 2 cents, YMMV.
| null | CC BY-SA 2.5 | null | 2010-11-22T12:52:59.547 | 2010-11-22T12:52:59.547 | null | null | 1124 | null |
4794 | 2 | null | 4783 | 4 | null | Let's begin by observing that ordinary least squares fitting for these data is likely inappropriate. If the individual data being accumulated are assumed, as usual, to have random error components, then the error in the cumulative data (not the [cumulative frequencies](http://en.wikipedia.org/wiki/Cumulative_frequency)--that's something different than what you have) is the cumulative sum of all the error terms. This makes the cumulative data heteroscedastic (they become more and more variable over time) and strongly positively correlated. Because these data are so regularly behaved, and there's so much of them, there's little problem with the fit you will get, but your estimates of errors, your predictions (which is what the question is all about), and especially your standard errors of prediction can be way off.
A standard procedure for analyzing such data starts with the original values. Take the day-to-day differences to remove the higher-frequency sinusoidal component. Take the weekly differences of those to remove a possible week-to-week cycle. Analyze what's left. [ARIMA](http://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) modeling is a powerful flexible approach, but start simply: graph those differenced data to see what's going on, then move on from there. Note, too, that with less than two weeks of data your estimates of the weekly cycle will be poor and this uncertainty will dominate the uncertainty in the predictions.
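A sketch of those first steps in R; `y` is assumed to be the raw (non-cumulative) series, e.g. `diff(cum_y)` if you only recorded the cumulative totals, and `obs_per_day` whatever your sampling rate is:
```
d_day  <- diff(y, lag = obs_per_day)            # remove the daily cycle
d_week <- diff(d_day, lag = 7 * obs_per_day)    # remove a possible weekly cycle
plot.ts(d_week)                                 # look at what's left before reaching for ARIMA
acf(d_week)
```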
| null | CC BY-SA 2.5 | null | 2010-11-22T15:13:43.460 | 2010-11-22T15:13:43.460 | null | null | 919 | null |
4795 | 2 | null | 4759 | 7 | null | "Popularity" depends on the community and the definition of "statistics". World-wide, taking a broad view of "statistical inference" as including any methods of drawing conclusions or taking actions based on quantitative data, there is little question that [Excel](http://www.pcworld.com/businesscenter/article/166123/forrester_microsoft_office_in_no_danger_from_competitors.html) [beats](http://blogs.technet.com/b/office2010/archive/2010/06/15/office-2010-availability.aspx) all other applications, including [R, SAS, Stata, SPSS, and S-Plus](http://www.kdnuggets.com/2010/06/software-popularity-of-data-analysis-software.html). (The links point to different kinds of statistics, but they are highly suggestive, to say the least.) Python and MATLAB aren't even blips in the statistics. I am not saying that this is a good thing or that we should like it: that's just how it is and that's how it's going to stay for a very long time.
We shouldn't draw any inferences from what may appear to be popular "here" in this forum. Commercial software vendors support their own forums, so naturally a place like SE will favor people using less actively supported software, especially free, open-source, and academic solutions.
| null | CC BY-SA 2.5 | null | 2010-11-22T15:36:31.757 | 2010-11-22T15:36:31.757 | null | null | 919 | null |
4796 | 1 | 4797 | null | 2 | 884 | To me, the two are similar in the sense that [slice sampling](http://en.wikipedia.org/wiki/Slice_sampling) is just [Gibbs sampling](http://en.wikipedia.org/wiki/Gibbs_sampling) for the uniform distribution over the area under the plot of the density function. Is that right?
I was wondering if someone can compare between [slice sampling and Gibbs sampling](http://www.stat.purdue.edu/~jianzhan/notes/Gibbs.pdf).
For example, in terms of rate of convergence, which one is better?
If you can think about other aspects, please feel free to reply.
Thanks and regards!
| Comparison of Slice sampling and Gibbs sampling | CC BY-SA 2.5 | null | 2010-11-22T15:42:04.420 | 2010-11-30T16:40:59.760 | 2010-11-30T16:40:59.760 | 8 | 1005 | [
"bayesian",
"markov-chain-montecarlo",
"gibbs",
"simulation"
]
|
4797 | 2 | null | 4796 | 1 | null | I am not sure if the question is well posed.
If you can use both the Gibbs sampler and slice sampling to sample from a posterior, I would use the Gibbs sampler, as the slice sampler seems unnecessary to me. Use of a slice sampler introduces additional variables, which at the very least increases the run time of the sampler. So I am not sure why one would use the slice sampler if we can use the Gibbs sampler. If you cannot use the Gibbs sampler but you can use the slice sampler, then your question seems irrelevant.
Thus, I am not sure why one would consider handicapping the sampler by using the slice sampler when a Gibbs sampler can be used.
| null | CC BY-SA 2.5 | null | 2010-11-22T15:54:00.333 | 2010-11-22T15:54:00.333 | null | null | null | null |
4798 | 2 | null | 4783 | 2 | null | Clearly the dominant oscillation has a period of one day. It looks like there are also lower-frequency components relating to the day of the week, so add a component with a period of one week (i.e. a frequency one-seventh of the daily one) and its first few harmonics. That gives a model of the form:
$$\mbox{E}(y) = c + a_0 \cos(2\pi t) + b_0 \sin(2\pi t) + a_1 \cos(2 \pi t/7) + b_1 \sin(2 \pi t/7) + a_2 \cos(4 \pi t/7) + b_2 \sin(4 \pi t/7) + \ldots $$
– assuming $t$ is measured in days. Here $y$ is the raw data, not its cumulative sum.
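A direct translation into an `lm()` call might look like the following sketch, with `y` the raw series and `t` the time in days:
```
fit <- lm(y ~ cos(2*pi*t) + sin(2*pi*t) +
              cos(2*pi*t/7) + sin(2*pi*t/7) +
              cos(4*pi*t/7) + sin(4*pi*t/7))
```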
| null | CC BY-SA 2.5 | null | 2010-11-22T17:43:59.917 | 2010-11-22T17:43:59.917 | null | null | 449 | null |
4799 | 1 | null | null | 8 | 3672 | Where can I find a good proof that CRF based models and logistic regression based models are convex? Is there a general trick to test/prove that a model or objective function is convex?
| Proof that CRF models and logistic models are convex functions | CC BY-SA 2.5 | null | 2010-11-22T18:41:41.273 | 2010-11-23T19:16:41.050 | 2010-11-23T15:24:42.790 | null | 2071 | [
"logistic",
"optimization"
]
|
4800 | 2 | null | 4799 | 7 | null | One trick is to rewrite objective functions in terms of functions which are known to be convex.
The objective function of an ML-trained log-linear model is a sum of negative log-likelihoods, so it's sufficient to show that the negative log-likelihood for each datapoint is convex.
Considering datapoint fixed, we can write its negative log-likelihood term as
$$-\langle \theta,\phi(y)\rangle+\log \sum_y \exp(\langle \theta,\phi(y)\rangle)$$
The first term is linear, so it's sufficient to show that the second term, known as the log-normalizer, is convex.
Write it as $f(\mathbf{g}(\mathbf{\theta}))$ where $f(\mathbf{y})=\log \sum_y \exp y$ and $g_y(\theta)=\langle \mathbf{\theta},\phi(y)\rangle$. Here $g$ is a linear function, and $f$ is a known convex function called log-sum-exp. See page 72 of Boyd's Convex Optimization [book](http://www.stanford.edu/~boyd/cvxbook/). Composition of a convex function and a linear function is convex, see section 3.2.2
Another approach is to use the fact that log-normalizer is the cumulant generating function. For instance see example 3.41 in Boyd's book, or Proposition 3.1 in Wainwright's "Graphical models, exponential families, and variational inference" [manuscript](http://www.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf). This means that second derivative is the covariance matrix of sufficient statistic $\phi$ which by definition is positive semi-definite, which means that Hessian of the log-normalizer is positive semi-definite. Positive semi-definite Hessian guarantees the function is convex, see section 3.1.4 of Boyd's book.
Technically, the log-normalizer is not the traditional cumulant generating function. CGF is $g(\phi)=\log(Z(\theta+\phi))-\log(Z(\theta))$. However, derivative of log-normalizer evaluated at $\theta$ is the same as the derivative of the CGF evaluated at $\mathbf{0}$, so it produces cumulants just like CGF.
I couldn't find a full proof of the equivalence; people usually omit it because it's just several steps of uninspiring algebra. A very terse derivation for continuous output space is on page 5 of Xinhua Zhang's "Graphical Models" [thesis](http://dspace.anu.edu.au/bitstream/1885/49340/1/01front.pdf). I believe I saw a full derivation in Lawrence D. Brown's "Fundamentals of Statistical Exponential Families".
| null | CC BY-SA 2.5 | null | 2010-11-22T19:30:36.430 | 2010-11-23T19:16:41.050 | 2010-11-23T19:16:41.050 | 511 | 511 | null |
4802 | 1 | null | null | 3 | 349 | I am trying to determine if I can model my system as a M/M/1 queue and if so do the numbers I get from it help me at all.
I can model my system like this:

System Description
A. I want to spec out the max number of clients I can have connect to my system. Each client can generate at max 40 messages/second that are placed in the buffer. If link B goes down it will buffer all messages. For the sake of my math I am going to assume that it is actually generating a fixed 40 messages/second.
B. I am specifying that this link should be able to go down for 3 days, but in normal operation it is a network connection that all clients will have to communicate on. This network connection may also have traffic unrelated to my system on it. The link is fast enough that it isn't a bottleneck at all. I have found that it can easily pass 500,000 messages in less than 10 seconds.
C. Allows for the clients to dump any data as fast as link B will allow. This queue can be as big as needed.
D. This isn't really a link per se; block E pulls directly from the queue.
E, F, and G. This is where the fun comes in. Initially, for every packet I received I issued a single SQL query to the database. I found that each query took about 5-15ms, with a few taking 1ms and a few taking 600+ms. I don't have a model for this delay and suspect I will need one.
I realized that this delay was going to drastically hurt my system's performance, so I started looking into how to improve it. I found that if I grouped multiple messages from C together and did a larger SQL query, I was able to improve performance. When I grouped 10 messages per query I was observing 10-20ms, again with a few very short and a few very long. So 10-20ms/10 messages gives me an effective 1-2ms per message instead of the 5-15ms I had before.
My Objectives
- Determine how to model E, F, and G. I figured the easiest and most accurate method would be by experimentation, but I am not sure what data I should be looking to collect.
- Determine how many messages I should group together in order to minimize latency while maximizing the number of clients in stage A that I can have. I figured I could have a variable number of grouped messages based on load, but I don't know how to come up with what this should look like.
- Finally, be able to list a max number of clients I can have in stage A and say with some amount of certainty that their messages can be stored to the database within x time once they get to stage C.
Any help would be great. Thanks!
| How to predict and optimize a queue? | CC BY-SA 2.5 | null | 2010-11-22T21:25:42.370 | 2011-05-13T11:45:54.837 | 2011-04-13T10:33:37.850 | 449 | 2095 | [
"modeling",
"experiment-design",
"queueing"
]
|
4803 | 1 | 4841 | null | 2 | 368 | I'm analyzing a noisy time series where the inter-event interval is known to follow a two-gamma mixture distribution. If there were a simple model that would generate that kind of thing, it would be pretty simple to implement in BUGS. But otherwise, I can't think of anything that wouldn't be prohibitively kludgey.
Can anyone think of a model that induces mixture-gamma waiting times?
| Modeling a gamma-mixture waiting model in BUGS | CC BY-SA 2.5 | null | 2010-11-22T21:32:18.330 | 2011-03-27T16:02:58.457 | 2011-03-27T16:02:58.457 | 919 | 996 | [
"bayesian",
"markov-chain-montecarlo",
"bugs",
"mixture-distribution"
]
|
4804 | 2 | null | 4799 | 2 | null | First, convexity is not a feature of a function alone, but rather of a function together with the domain over which it is defined.
To address your question more directly, another trick (or rather another formulation) is to compute the Hessian matrix of your likelihood function. As per [wiki](http://en.wikipedia.org/wiki/Convex_function), a continuous, twice differentiable function of several variables is convex on a convex set if and only if its Hessian matrix is positive semidefinite on the interior of the convex set.
Since the Hessian is real symmetric, diagonal [dominance](http://en.wikipedia.org/wiki/Diagonally_dominant_matrix) with a nonnegative diagonal is sufficient for it to be PSD (this is obvious to show for the logistic model).
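For the logistic model specifically, one can also verify positive semidefiniteness directly without appealing to diagonal dominance (a sketch, with $X$ the design matrix and $p_i$ the fitted probabilities): the Hessian of the negative log-likelihood is
$$ H = X^{\top} W X, \qquad W = \mathrm{diag}\bigl(p_i(1-p_i)\bigr), $$
and for any vector $v$,
$$ v^{\top} H v = \lVert W^{1/2} X v \rVert^2 \geq 0, $$
so $H$ is positive semidefinite everywhere and the objective is convex.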
| null | CC BY-SA 2.5 | null | 2010-11-22T21:38:15.220 | 2010-11-22T21:50:21.293 | 2010-11-22T21:50:21.293 | 603 | 603 | null |
4805 | 1 | 7469 | null | 5 | 1405 | SAS EM has a node that generates the transformation for a dataset that makes it best fit a normal distribution. I can't imagine the people at SAS are the only ones to figure out how to do this. Is there any documentation out there on methodologies that do the same? Or an open source implementation?
| Methodology for data transform that best fits normal | CC BY-SA 2.5 | null | 2010-11-22T21:43:45.823 | 2011-02-21T20:55:54.027 | 2010-11-22T21:58:02.020 | 919 | 1951 | [
"data-transformation",
"normal-distribution"
]
|
4806 | 2 | null | 4597 | 0 | null | You might try to find the correlation of the series of differences with the moving average of the series of differences using some scale for the moving average (for example, use a 10 point moving average as the scale). This way you can get an idea about how "wiggly" the series is at different scales. This tells you whether the series has a tendency to move back and forth or keep moving in the same direction.
If the correlation is reasonably negative, that implies the series is wiggly: it goes one way, then comes back near where it was before. If the correlation is zero then it's not wiggly: if it moves in one direction, it has no tendency to move back to where it came from. If it's positive then you might call it "anti-wiggly": it has a tendency to keep moving in the same direction it has been moving in (it has lots of trends).
Repeat this at several different scales, say 2, 4, 8, 16, 24 ... data points, and then you can look at the graph.
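A rough R sketch of this idea (the trailing moving average and the window lengths here are illustrative choices; `x` is assumed to hold the series):
```
wiggliness <- function(x, scale = 10) {
  d  <- diff(x)                                             # series of differences
  ma <- stats::filter(d, rep(1 / scale, scale), sides = 1)  # trailing moving average of the differences
  ok <- !is.na(ma)
  cor(d[ok], ma[ok])
}
sapply(c(2, 4, 8, 16, 24), function(s) wiggliness(x, s))    # profile across scales
```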
| null | CC BY-SA 2.5 | null | 2010-11-22T22:26:51.780 | 2010-11-22T22:26:51.780 | null | null | 1146 | null |
4807 | 1 | null | null | 7 | 349 | I study labor and delivery as an epidemiologist. It is well established that a large fetus has a higher risk of causing maternal birth trauma. But a large baby is also likely to be delivered by cesarean section due to difficult labor. Those delivered by cesarean section will not have any risk for maternal birth trauma and are, therefore, excluded from an analysis using trauma as the outcome. Thus, informative censoring is a real issue when estimating an unbiased relative risk, particularly when the cesarean rate is high. Are there any methods to adjust for such a bias?
| Informative censoring caused by cesarean section | CC BY-SA 2.5 | null | 2010-11-22T23:14:02.377 | 2013-07-25T12:09:30.270 | null | null | null | [
"censoring"
]
|
4808 | 2 | null | 4805 | 1 | null | Maybe the Box–Cox transformation or other sort of [power transform](http://en.wikipedia.org/wiki/Power_transform)?
| null | CC BY-SA 2.5 | null | 2010-11-22T23:51:15.713 | 2010-11-22T23:51:15.713 | null | null | 449 | null |
4809 | 1 | 15516 | null | 1 | 875 | I'm working on a meta-analysis and have generated a quirky question for which I'm at a bit of a loss. The MA is for a large set of factorial experiments. Calculating the Log Response Ratio (LRR) and variance in said ratio for the experimental data is a cinch, and we're comparing the effects of one type of treatment to the other (and any interactions).
However, my group is curious about examining the effect of HALF the level of one of the treatments (it's a continuous treatment, and we've been looking at the highest versus the lowest level of the treatment). We have fit nonlinear curves for each experiment that describe how the treatment affects the response over a wide range of treatment levels. We've got the coefficient error and residual error for each of these curve fits. And now we want to calculate log(full treatment) - log(half treatment) from the fitted curve for this half-treatment log response ratio. Easy.
But...how would we then calculate the variance in the half-treatment log response ratio? Would we use the SE estimates for the curve coefficients? The residual error? What would the sample size be for the variance calculation? Or is this unimportant? Thoughts?
| Determining variance of meta-analysis log-response ratio generated from fitted curve | CC BY-SA 2.5 | null | 2010-11-22T23:57:08.500 | 2011-09-13T20:28:24.587 | null | null | 101 | [
"variance",
"meta-analysis"
]
|
4810 | 1 | 4885 | null | 14 | 18931 | This may be too much of a general question but I hope I can find help here. I am starting a RA job in my university and my topic will be related to Internet Traffic Analysis. I am fairly new to the world of analysis but I guess in the world of research this is what I have to do a lot of.
I've been through a few papers and in a lot of them I find they use the Probability Density Function (PDF), CDF, CCDF, etc. to explain the results they've obtained. For example, the PDF of user session duration, the CDF of bytes transferred each day, etc. I have taken a probability and statistics class, so I understand what they are, but I am still confused about the cases in which such a representation would be chosen.
So, if there is anyone out there who does such graphs and analysis (on this topic or any other), could you tell me simply in what situations I would use one or the other of these representations?
| How to use CDF and PDF statistics for analysis | CC BY-SA 2.5 | null | 2010-11-23T01:18:41.480 | 2012-05-01T19:32:12.390 | null | null | 2101 | [
"mathematical-statistics"
]
|
4812 | 1 | 4813 | null | 13 | 4127 | I'm currently working as a teaching assistant at my university, in an introductory statistics course (for medical students).
Offline, there are many books available with information to aid the teacher. However, what I am interested to know is whether you might direct me to any (good) resources that provide exercises (with solutions) in statistics which are available online (e.g., teachers' notes).
The subject material can range between descriptive statistics, probability, and parametric/non-parametric statistical inference.
| Online resources for learning statistics, exercises (with solutions)? | CC BY-SA 3.0 | null | 2010-11-23T06:53:05.397 | 2016-12-20T18:21:34.180 | 2016-12-20T18:21:34.180 | 22468 | 253 | [
"probability",
"references",
"inference",
"descriptive-statistics",
"teaching"
]
|
4813 | 2 | null | 4812 | 9 | null | This is a totally amazing online resource for multi-level modelling, not sure if that's any good to you:
[http://www.cmm.bristol.ac.uk/](http://www.cmm.bristol.ac.uk/)
Even includes a test at the start to give you an idea of where to start.
And should this be community wiki?
| null | CC BY-SA 2.5 | null | 2010-11-23T07:50:07.560 | 2010-11-23T07:50:07.560 | null | null | 199 | null |
4814 | 1 | 4815 | null | 7 | 1797 | I have two LME models with the same interaction, one containing both main effects and one containing only one main effect, say :
$$ H\_CE = Season + Crownlevel + Season:Crownlevel , random = 1|CollectorID $$
and
$$ H\_CE = Season + Season:Crownlevel , random = 1|CollectorID $$
There are 4 levels of each, and every combination of Season, Crownlevel and CollectorID is present.
The AIC, BIC and log likelihood of both models are completely equal. Given the formula for AIC being
$$ \mathit{AIC} = 2k - 2\ln(L)\ $$
one would expect this to be different, even if the likelihoods are exactly the same. In the end, they have a different number of parameters. Or so I thought...
Trying this toy example in R :
```
library(nlme)
Season <- rep(as.factor(rep(letters[1:4],each=4)),4)
Crownlevel <-rep(as.factor(rep(letters[11:14],4)),4)
CollectorID <-rep(letters[20:23],each=16)
X <- model.matrix(~Season+Crownlevel+Season:Crownlevel)
B <- c(1,1,-2,2,0.3,0.4,0.4,2,3,1,-2,-3,-4,2,1,2)
H_CE <- X %*% B + rnorm(16*4)
KBM <- data.frame(Season,Crownlevel,H_CE,CollectorID)
model1 <- lme(H_CE~Season+Crownlevel+Season:Crownlevel,data=KBM,
method="ML",random=~1|CollectorID)
model1e <- lme(H_CE~Season+Season:Crownlevel,data=KBM,
method="ML",random=~1|CollectorID)
```
I get :
```
anova(model1,model1e)
Model df AIC BIC logLik
model1 1 18 174.1834 213.0433 -69.09168
model1e 2 18 174.1834 213.0433 -69.09168
```
What am I missing here? Why are the numbers completely equal? It has something to do with the model specification, but I can't really see what.
The model specification in itself is faulty, I know that. But I can't explain what makes it return a different set of parameters yet exactly the same residuals, likelihood and degrees of freedom:
```
> all.equal(residuals(model1),residuals(model1e))
[1] TRUE
```
---
As [fabians](https://stats.stackexchange.com/a/4815/615) rightfully pointed out, both models are perfectly equivalent. Yet, I fail to see why in the AIC calculation the same value for the number of parameters `k` is used.
The `k` in AIC uses the df, which explains everything.
| Why do I get equal AIC, BIC and log likelihood for different models in LME framework? | CC BY-SA 3.0 | null | 2010-11-23T09:38:30.733 | 2012-10-10T14:12:14.540 | 2017-04-13T12:44:48.343 | -1 | 1124 | [
"r",
"mixed-model",
"maximum-likelihood",
"aic",
"bic"
]
|
4815 | 2 | null | 4814 | 6 | null | The models are exactly equivalent. In both models you effectively specify one parameter for each combination of levels of Season and Crownlevel - the only difference is the parameterization:
In the first model, you fit main effects for Season and Crownlevel and an interaction effect to capture the combination-specific deviations from the main effects.
In the second model, you specify only the main effect of season, and the interaction effect then captures the deviations for each crownlevel within a season.
```
H_CE~Season:Crownlevel
```
would also yield an equivalent model, with one parameter for each combination of season and crownlevel (minus one that is non-identifiable because of the intercept, i.e. constitutes the reference category).
BTW: I don't think your model specification is faulty; which specification is better depends on the inference you want to do with your model.
| null | CC BY-SA 2.5 | null | 2010-11-23T10:00:56.627 | 2010-11-23T10:00:56.627 | null | null | 1979 | null |
4816 | 1 | 4821 | null | 19 | 14254 | Let $X \sim \chi^2_k$ be a chi-squared distributed random variable with $k$ degrees of freedom. What are the sharpest known bounds for the following probabilities
$$
\mathbb{P}[X > t] \leq 1 - \delta_1(t, k)
$$
and
$$
\mathbb{P}[X < z] \leq 1 - \delta_2(z, k)
$$
where $\delta_1$ and $\delta_2$ are some functions. Pointers to relevant papers would be appreciated.
| What are the sharpest known tail bounds for $\chi_k^2$ distributed variables? | CC BY-SA 2.5 | null | 2010-11-23T10:12:46.490 | 2022-08-12T20:27:22.900 | null | null | 168 | [
"probability",
"chi-squared-test"
]
|
4817 | 1 | 4956 | null | 6 | 1431 | I wasn't able to find one on Google, and was wondering if one exists...
| Is there an unpaired version of the sign test? | CC BY-SA 2.5 | null | 2010-11-23T10:48:00.267 | 2010-11-27T07:53:59.150 | 2010-11-23T16:27:15.573 | 919 | 253 | [
"hypothesis-testing",
"nonparametric",
"median",
"sign-test"
]
|
4818 | 1 | 4822 | null | 13 | 4188 | I just noticed that the non-exact McNemar's test uses the asymptotic $\chi^{2}$ distribution. But since the exact test (for the two-case table) relies on the binomial distribution, how come it is not common to suggest the normal approximation to the binomial distribution?
| Why does the McNemar's test use $\chi^{2}$ and not the normal distribution? | CC BY-SA 4.0 | null | 2010-11-23T10:51:09.000 | 2021-02-03T19:43:36.443 | 2021-02-03T19:43:36.443 | 11887 | 253 | [
"distributions",
"normal-distribution",
"binomial-distribution",
"mcnemar-test",
"chi-squared-distribution"
]
|
4819 | 2 | null | 4817 | 2 | null | The extension goes thorugh introducing rank to somewhat regulate the order of data and the result are Wilcoxon tests (Mann-Whitney in particular).
| null | CC BY-SA 2.5 | null | 2010-11-23T10:56:34.037 | 2010-11-23T10:56:34.037 | null | null | null | null |
4820 | 2 | null | 4817 | 3 | null | I'm not sure if such a test can exist conceptually. The sign test uses the pairing of the data to decide whether one value is bigger than the corresponding other value. But in an unpaired situation there is nothing like a corresponding other value (every value in the other group could be a potential counterpart for comparison). Please correct me if I'm not getting the point...
| null | CC BY-SA 2.5 | null | 2010-11-23T11:51:57.410 | 2010-11-23T11:51:57.410 | null | null | 1573 | null |