Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7347
|
2
| null |
7330
|
1
| null |
Yes, arbitrary rotations in an $n$-dimensional space can be written as compositions of Givens rotations. The other question (and I think what you're aiming at) is
>
Or perhaps a better question is: if a
prior distribution is invariant when
rotated about 2 arbitrary dimensions,
is it invariant when rotated about an
arbitrary number of dimensions?
The answer is yes, and moreover you need look no further, since this class of distributions has been completely characterized: it is the subset of the [elliptical distributions](http://en.wikipedia.org/wiki/Elliptical_distribution) with $\Sigma=I$. Elliptical distributions are exactly the distributions that are rotationally invariant after an affine transformation of their variables. The standard multivariate normal distribution is the best-known example.
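As a quick empirical check (a minimal R sketch of my own, not part of the original answer; it assumes the 2-D case with $\Sigma=I$), rotating a spherical normal sample by an arbitrary Givens angle leaves its distribution unchanged:
```
# Minimal sketch: rotation invariance of a spherical (Sigma = I) normal sample
set.seed(1)
n     <- 1e5
X     <- matrix(rnorm(2 * n), ncol = 2)     # standard bivariate normal
theta <- runif(1, 0, 2 * pi)                # arbitrary Givens rotation angle
R     <- matrix(c(cos(theta), -sin(theta),
                  sin(theta),  cos(theta)), 2, 2)
Y     <- X %*% t(R)                         # rotated sample
ks.test(X[, 1], Y[, 1])                     # small statistic, large p-value: same distribution
```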
| null |
CC BY-SA 2.5
| null |
2011-02-17T22:00:09.717
|
2011-02-17T22:00:09.717
| null | null |
30
| null |
7348
|
1
|
7349
| null |
35
|
24598
|
I find R can take a long time to generate plots when millions of points are present - unsurprising given that points are plotted individually. Furthermore, such plots are often too cluttered and dense to be useful. Many of the points overlap and form a black mass and a lot of time is spent plotting more points into that mass.
Are there any statistical alternatives to representing large $n$ data in a standard scatterplot? I have considered a density plot, but what other alternatives are there?
|
Statistical methods to more efficiently plot data when millions of points are present?
|
CC BY-SA 4.0
| null |
2011-02-17T22:29:30.310
|
2019-11-14T10:45:29.190
|
2018-06-27T21:45:14.067
|
8013
|
2726
|
[
"r",
"data-visualization"
] |
7349
|
2
| null |
7348
|
14
| null |
This is a hard task with no ready solutions (of course, because the density plot is such a tempting fallback that no one really cares). So, what can you do?
If the points really overlap (i.e. have exactly the same X and Y coordinates) and you are not using alpha, the best idea is simply to reduce the overlap using `unique` (with alpha, it may be summed over such groups).
If not, you may manually round the coordinates to the nearest pixels and then use the previous method (this is a dirty solution, though).
Finally, you can make a density plot only to use it to subsample the points in the densest areas. On the other hand, this will not produce exactly the same plot and may introduce artifacts if not precisely tuned.
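As a rough illustration of the rounding-plus-`unique` idea (my own sketch with simulated data, not from the original answer):
```
# Collapse millions of points onto a coarse grid before plotting
set.seed(1)
n  <- 1e6
x  <- rnorm(n)
y  <- x + rnorm(n)
px <- round(x, 2)                        # round to a grid roughly matching pixel size
py <- round(y, 2)
keep <- !duplicated(cbind(px, py))       # same effect as unique() on the coordinate pairs
plot(px[keep], py[keep], pch = ".",
     main = sprintf("%d of %d points plotted", sum(keep), n))
```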
| null |
CC BY-SA 2.5
| null |
2011-02-17T23:32:55.593
|
2011-02-17T23:32:55.593
| null | null | null | null |
7350
|
2
| null |
7
|
1
| null |
[http://www.ckan.net](http://www.ckan.net) has a number of datasets too.
[http://www.biotorrents.net/browse.php](http://www.biotorrents.net/browse.php) is also starting to have quite a large amount of BIG datasets.
| null |
CC BY-SA 2.5
| null |
2011-02-18T00:06:24.633
|
2011-02-18T00:06:24.633
| null | null |
3291
| null |
7351
|
1
|
7352
| null |
48
|
7976
|
I am trying to get up to speed in Bayesian statistics. I have a little bit of stats background (STAT 101) but not too much - I think I can understand prior, posterior, and likelihood :D.
I don't want to read a Bayesian textbook just yet.
I'd prefer to read from a source (website preferred) that will ramp me up quickly. Something like [this](http://www.stat.washington.edu/raftery/Research/PDF/bayescourse.pdf), but that has more details.
Any advice?
|
Bayesian statistics tutorial
|
CC BY-SA 2.5
| null |
2011-02-18T00:35:17.267
|
2022-07-16T16:57:20.707
|
2012-10-16T16:16:40.557
| null |
3301
|
[
"bayesian",
"references"
] |
7352
|
2
| null |
7351
|
19
| null |
Here's a place to start:
[ftp://selab.janelia.org/pub/publications/Eddy-ATG3/Eddy-ATG3-reprint.pdf](ftp://selab.janelia.org/pub/publications/Eddy-ATG3/Eddy-ATG3-reprint.pdf)
[http://blog.oscarbonilla.com/2009/05/visualizing-bayes-theorem/](http://blog.oscarbonilla.com/2009/05/visualizing-bayes-theorem/)
[http://yudkowsky.net/rational/bayes](http://yudkowsky.net/rational/bayes)
[http://www.math.umass.edu/~lavine/whatisbayes.pdf](http://www.math.umass.edu/~lavine/whatisbayes.pdf)
[http://en.wikipedia.org/wiki/Bayesian_inference](http://en.wikipedia.org/wiki/Bayesian_inference)
[http://en.wikipedia.org/wiki/Bayesian_probability](http://en.wikipedia.org/wiki/Bayesian_probability)
[Tutorial_on_Bayesian_Statistics_and_Clinical_Trials](http://gistsupport.medshelf.org/Marina%27s_Tutorial_on_Bayesian_Statistics_and_Clinical_Trials)
| null |
CC BY-SA 3.0
| null |
2011-02-18T01:04:21.207
|
2011-06-23T20:29:50.033
|
2011-06-23T20:29:50.033
|
2775
|
2775
| null |
7353
|
2
| null |
7348
|
17
| null |
I must admit that I do not fully understand your last paragraph:
>
"I am not looking for a density plot
(although those are often useful), I
would want the same output as a simple
plot call but much faster than
millions of overplots if possible."
It is also unclear what type of plot (function) you are looking for.
Given that you have metric variables, you might find hexagon-binned plots or sunflower plots useful. For further references, see
- Graphics of Large Datasets by Unwin/Theus/Hofmann
- Quick-R on "High Density Scatterplots"
- ggplot2's stat_hexbin
| null |
CC BY-SA 3.0
| null |
2011-02-18T01:13:27.283
|
2016-01-13T18:52:38.197
|
2016-01-13T18:52:38.197
|
36419
|
307
| null |
7354
|
2
| null |
7208
|
4
| null |
I find caracal's answer convincing, but I also believe Cohen's Kappa can only account for part of what constitutes interrater reliability. The simple % of ratings in agreement accounts for another part, and the correlation between ratings, a third. It takes all three methods to gain a complete picture. For details please see [http://pareonline.net/getvn.asp?v=9&n=4](http://pareonline.net/getvn.asp?v=9&n=4) :
>
"[...] the general practice of
describing interrater reliability as a
single, unified concept is at best
imprecise, and at worst potentially
misleading."
| null |
CC BY-SA 2.5
| null |
2011-02-18T01:26:44.263
|
2011-02-18T01:26:44.263
| null | null |
2669
| null |
7355
|
2
| null |
7351
|
5
| null |
Some more depth:
- http://math.tut.fi/~piche/bayes/notes01.pdf covers Bayes' theorem
- https://ccrma.stanford.edu/~jos/bayes/bayes.pdf and
- http://www-personal.une.edu.au/~jvanderw/Introduction_to_Bayesian_Statistics1.pdf are more about statistical applications
| null |
CC BY-SA 2.5
| null |
2011-02-18T01:33:51.137
|
2011-02-18T01:33:51.137
| null | null |
2958
| null |
7356
|
2
| null |
7348
|
45
| null |
Look at the [hexbin](http://cran.r-project.org/package=hexbin) package which implements paper/method by Dan Carr. The [pdf vignette](http://cran.r-project.org/web/packages/hexbin/vignettes/hexagon_binning.pdf) has more details which I quote below:
>
1 Overview
Hexagon binning is a form of bivariate histogram useful for visualizing the structure in datasets with large n. The underlying concept of hexagon binning is extremely simple:
- the xy plane over the set (range(x), range(y)) is tessellated by a regular grid of hexagons;
- the number of points falling in each hexagon is counted and stored in a data structure;
- the hexagons with count > 0 are plotted using a color ramp or by varying the radius of the hexagon in proportion to the counts.
The underlying algorithm is extremely fast and effective for displaying the structure of datasets with $n \ge 10^6$. If the size of the grid and the cuts in the color ramp are chosen in a clever fashion then the structure inherent in the data should emerge in the binned plots. The same caveats apply to hexagon binning as apply to histograms, and care should be exercised in choosing the binning parameters.
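A minimal usage sketch of the package described above (my own example with simulated data; assumes `hexbin` is installed):
```
library(hexbin)
set.seed(1)
x <- rnorm(1e6)
y <- x + rnorm(1e6)
bin <- hexbin(x, y, xbins = 50)   # tessellate the xy plane into hexagons and count points
plot(bin, main = "Hexagon-binned scatterplot of 10^6 points")
```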
| null |
CC BY-SA 2.5
| null |
2011-02-18T02:02:39.183
|
2011-02-18T02:02:39.183
| null | null |
334
| null |
7357
|
1
|
7359
| null |
44
|
21328
|
I know this is a fairly specific `R` question, but I may be thinking about proportion variance explained, $R^2$, incorrectly. Here goes.
I'm trying to use the `R` package `randomForest`. I have some training data and testing data. When I fit a random forest model, the `randomForest` function allows you to input new testing data to test. It then tells you the percentage of variance explained in this new data. When I look at this, I get one number.
When I use the `predict()` function to predict the outcome value of the testing data based on the model fit from the training data, and I take the squared correlation coefficient between these values and the actual outcome values for the testing data, I get a different number. These values don't match up.
Here's some `R` code to demonstrate the problem.
```
# use the built in iris data
data(iris)
#load the randomForest library
library(randomForest)
# split the data into training and testing sets
index <- 1:nrow(iris)
trainindex <- sample(index, trunc(length(index)/2))
trainset <- iris[trainindex, ]
testset <- iris[-trainindex, ]
# fit a model to the training set (column 1, Sepal.Length, will be the outcome)
set.seed(42)
model <- randomForest(x=trainset[ ,-1],y=trainset[ ,1])
# predict values for the testing set (the first column is the outcome, leave it out)
predicted <- predict(model, testset[ ,-1])
# what's the squared correlation coefficient between predicted and actual values?
cor(predicted, testset[, 1])^2
# now, refit the model using built-in x.test and y.test
set.seed(42)
randomForest(x=trainset[ ,-1], y=trainset[ ,1], xtest=testset[ ,-1], ytest=testset[ ,1])
```
|
Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
|
CC BY-SA 3.0
| null |
2011-02-18T02:32:48.823
|
2018-01-09T09:06:16.900
|
2018-01-09T09:06:16.900
|
128677
|
36
|
[
"r",
"correlation",
"predictive-models",
"random-forest",
"r-squared"
] |
7358
|
1
|
7377
| null |
23
|
12515
|
I've got a particular MCMC algorithm which I would like to port to C/C++. Much of the expensive computation is in C already via Cython, but I want to have the whole sampler written in a compiled language so that I can just write wrappers for Python/R/Matlab/whatever.
After poking around I'm leaning towards C++. A couple of relevant libraries I know of are Armadillo (http://arma.sourceforge.net/) and Scythe (http://scythe.wustl.edu/). Both try to emulate some aspects of R/Matlab to ease the learning curve, which I like a lot. Scythe squares a little better with what I want to do, I think. In particular, its RNG includes a lot of distributions, whereas Armadillo only has uniform/normal, which is inconvenient. Armadillo seems to be under pretty active development while Scythe saw its last release in 2007.
So what I'm wondering is if anyone has experience with these libraries -- or others I have almost surely missed -- and if so, whether there is anything to recommend one over the others for a statistician very familiar with Python/R/Matlab but less so with compiled languages (not completely ignorant, but not exactly proficient...).
|
C++ libraries for statistical computing
|
CC BY-SA 2.5
| null |
2011-02-18T02:40:12.390
|
2017-11-22T14:23:28.570
|
2017-11-22T14:23:28.570
|
11887
|
26
|
[
"markov-chain-montecarlo",
"software",
"c++",
"computational-statistics"
] |
7359
|
2
| null |
7357
|
66
| null |
The reason that the $R^2$ values are not matching is because `randomForest` is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that is perpetuated in textbooks. I even mentioned this on another thread the other day. If you want an example, see the (otherwise quite good) textbook Seber and Lee, Linear Regression Analysis, 2nd. ed.
A general definition for $R^2$ is
$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} .
$$
That is, we compute the mean-squared error, divide it by the variance of the original observations and then subtract this from one. (Note that if your predictions are really bad, this value can go negative.)
Now, what happens with linear regression (with an intercept term!) is that the average value of the $\hat{y}_i$'s matches $\bar{y}$. Furthermore, the residual vector $y - \hat{y}$ is orthogonal to the vector of fitted values $\hat{y}$. When you put these two things together, then the definition reduces to the one that is more commonly encountered, i.e.,
$$
R^2_{\mathrm{LR}} = \mathrm{Corr}(y,\hat{y})^2 .
$$
(I've used the subscripts $\mathrm{LR}$ in $R^2_{\mathrm{LR}}$ to indicate linear regression.)
The `randomForest` call is using the first definition, so if you do
```
> y <- testset[,1]
> 1 - sum((y-predicted)^2)/sum((y-mean(y))^2)
```
you'll see that the answers match.
| null |
CC BY-SA 2.5
| null |
2011-02-18T03:31:08.217
|
2011-02-18T04:21:19.807
|
2011-02-18T04:21:19.807
|
2970
|
2970
| null |
7360
|
2
| null |
7358
|
1
| null |
There are numerous C/C++ libraries out there, most focusing on a particular problem domain (e.g. PDE solvers). There are two comprehensive libraries I can think of that you may find especially useful because they are written in C but have excellent Python wrappers already written.
1) [IMSL C](http://www.roguewave.com/products/imsl-numerical-libraries/c-library.aspx) and [PyIMSL](http://www.roguewave.com/products/imsl-numerical-libraries/pyimsl-studio.aspx)
2) [trilinos](http://trilinos.sandia.gov/) and [pytrilinos](http://trilinos.sandia.gov/packages/pytrilinos/index.html)
I have never used trilinos as the functionality is primarily on numerical analysis methods, but I use PyIMSL a lot for statistical work (and in a previous work life I developed the software too).
With respect to RNGs, here are the ones in C and Python in IMSL
## DISCRETE
- random_binomial: Generates pseudorandom binomial numbers from a binomial distribution.
- random_geometric: Generates pseudorandom numbers from a geometric distribution.
- random_hypergeometric: Generates pseudorandom numbers from a hypergeometric distribution.
- random_logarithmic: Generates pseudorandom numbers from a logarithmic distribution.
- random_neg_binomial: Generates pseudorandom numbers from a negative binomial distribution.
- random_poisson: Generates pseudorandom numbers from a Poisson distribution.
- random_uniform_discrete: Generates pseudorandom numbers from a discrete uniform distribution.
- random_general_discrete: Generates pseudorandom numbers from a general discrete distribution using an alias method or optionally a table lookup method.
## UNIVARIATE CONTINUOUS DISTRIBUTIONS
- random_beta: Generates pseudorandom numbers from a beta distribution.
- random_cauchy: Generates pseudorandom numbers from a Cauchy distribution.
- random_chi_squared: Generates pseudorandom numbers from a chi-squared distribution.
- random_exponential: Generates pseudorandom numbers from a standard exponential distribution.
- random_exponential_mix: Generates pseudorandom mixed numbers from a standard exponential distribution.
- random_gamma: Generates pseudorandom numbers from a standard gamma distribution.
- random_lognormal: Generates pseudorandom numbers from a lognormal distribution.
- random_normal: Generates pseudorandom numbers from a standard normal distribution.
- random_stable: Sets up a table to generate pseudorandom numbers from a general discrete distribution.
- random_student_t: Generates pseudorandom numbers from a Student's t distribution.
- random_triangular: Generates pseudorandom numbers from a triangular distribution.
- random_uniform: Generates pseudorandom numbers from a uniform (0, 1) distribution.
- random_von_mises: Generates pseudorandom numbers from a von Mises distribution.
- random_weibull: Generates pseudorandom numbers from a Weibull distribution.
- random_general_continuous: Generates pseudorandom numbers from a general continuous distribution.
## MULTIVARIATE CONTINUOUS DISTRIBUTIONS
- random_normal_multivariate: Generates pseudorandom numbers from a multivariate normal distribution.
- random_orthogonal_matrix: Generates a pseudorandom orthogonal matrix or a correlation matrix.
- random_mvar_from_data: Generates pseudorandom numbers from a multivariate distribution determined from a given sample.
- random_multinomial: Generates pseudorandom numbers from a multinomial distribution.
- random_sphere: Generates pseudorandom points on a unit circle or K-dimensional sphere.
- random_table_twoway: Generates a pseudorandom two-way table.
## ORDER STATISTICS
- random_order_normal: Generates pseudorandom order statistics from a standard normal distribution.
- random_order_uniform: Generates pseudorandom order statistics from a uniform (0, 1) distribution.
## STOCHASTIC PROCESSES
- random_arma: Generates pseudorandom ARMA process numbers.
- random_npp: Generates pseudorandom numbers from a nonhomogeneous Poisson process.
## SAMPLES AND PERMUTATIONS
- random_permutation: Generates a pseudorandom permutation.
- random_sample_indices: Generates a simple pseudorandom sample of indices.
- random_sample: Generates a simple pseudorandom sample from a finite population.
## UTILITY FUNCTIONS
- random_option: Selects the uniform (0, 1) multiplicative congruential pseudorandom number generator.
- random_option_get: Retrieves the uniform (0, 1) multiplicative congruential pseudorandom number generator.
- random_seed_get: Retrieves the current value of the seed used in the IMSL random number generators.
- random_substream_seed_get: Retrieves a seed for the congruential generators that do not do shuffling that will generate random numbers beginning 100,000 numbers farther along.
- random_seed_set: Initializes a random seed for use in the IMSL random number generators.
- random_table_set: Sets the current table used in the shuffled generator.
- random_table_get: Retrieves the current table used in the shuffled generator.
- random_GFSR_table_set: Sets the current table used in the GFSR generator.
- random_GFSR_table_get: Retrieves the current table used in the GFSR generator.
- random_MT32_init: Initializes the 32-bit Mersenne Twister generator using an array.
- random_MT32_table_get: Retrieves the current table used in the 32-bit Mersenne Twister generator.
- random_MT32_table_set: Sets the current table used in the 32-bit Mersenne Twister generator.
- random_MT64_init: Initializes the 64-bit Mersenne Twister generator using an array.
- random_MT64_table_get: Retrieves the current table used in the 64-bit Mersenne Twister generator.
- random_MT64_table_set: Sets the current table used in the 64-bit Mersenne Twister generator.
## LOW-DISCREPANCY SEQUENCE
- faure_next_point: Computes a shuffled Faure sequence.
| null |
CC BY-SA 2.5
| null |
2011-02-18T04:08:36.080
|
2011-02-19T02:40:37.700
|
2011-02-19T02:40:37.700
|
1080
|
1080
| null |
7361
|
2
| null |
7358
|
7
| null |
Boost Random from the Boost C++ libraries could be a good fit for you. In addition to many types of RNGs, it offers a variety of different distributions to draw from, such as
- Uniform (real)
- Uniform (unit sphere or arbitrary dimension)
- Bernoulli
- Binomial
- Cauchy
- Gamma
- Poisson
- Geometric
- Triangle
- Exponential
- Normal
- Lognormal
In addition, [Boost Math](http://www.boost.org/doc/libs/1_45_0/libs/math/doc/sf_and_dist/html/index.html) complements the above distributions you can sample from with numerous density functions of many distributions. It also has several neat helper functions; just to give you an idea:
```
// Includes and main() added so the snippet compiles stand-alone (not in the original answer)
#include <boost/math/distributions/students_t.hpp>
#include <iostream>
using namespace boost::math;
using namespace std;

int main() {
    students_t dist(5);
    cout << "CDF at t = 1 is " << cdf(dist, 1.0) << endl;
    cout << "Complement of CDF at t = 1 is " << cdf(complement(dist, 1.0)) << endl;
    for (double i = 10; i < 1e10; i *= 10)
    {
        // Calculate the quantile for a 1 in i chance:
        double t = quantile(complement(dist, 1 / i));
        // Print it out:
        cout << "Quantile of students-t with 5 degrees of freedom\n"
                "for a 1 in " << i << " chance is " << t << endl;
    }
    return 0;
}
```
If you decided to use Boost, you also get to use its UBLAS library that features a variety of different matrix types and operations.
| null |
CC BY-SA 2.5
| null |
2011-02-18T04:25:52.087
|
2011-02-18T04:25:52.087
| null | null |
1537
| null |
7362
|
1
|
7368
| null |
3
|
4082
|
In Orwin's fail-safe N test, how does one decide the values of the criterion for a 'trivial' log odds ratio and of the mean log odds ratio in missing studies? I am a medical doctor, so please explain in simple English.
The data are
```
1. Classic fail-safe N
Z-value for observed studies 27.97543
P-value for observed studies 0.00000
Alpha 0.05000
Tails 2.00000
Z for alpha 1.95996
Number of observed studies 5.00000
Number of missing studies that would bring p-value to > alpha 1014.0000
2. Orwin's fail-safe N
Odds ratio in observed studies 5.7339
Criterion for a ‘trivial’ odds ratio ?
Mean odds ratio in missing studies ?
```
|
Orwin's fail safe N test
|
CC BY-SA 2.5
| null |
2011-02-18T05:58:42.340
|
2011-02-18T10:01:57.817
|
2011-02-18T09:45:21.120
|
307
|
2956
|
[
"meta-analysis",
"publication-bias"
] |
7363
|
2
| null |
7326
|
0
| null |
I believe a chi-squared test is what you are looking for. Because your dataset has a long tail, many tags will not be sampled well or will not end up in your sample at all. You may want to look into Yates' chi-square test, which attempts to correct for this by loosening the standards of what is significant for rare tags.
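For what it's worth, here is a minimal R sketch (my own, with a made-up 2x2 table, since the original data are not shown) of a chi-squared test with Yates' continuity correction:
```
# Hypothetical counts: rows = group A vs. group B, cols = tag present vs. absent
tab <- matrix(c(12, 488,
                30, 970), nrow = 2, byrow = TRUE)
chisq.test(tab, correct = TRUE)   # correct = TRUE applies Yates' continuity correction (2x2 tables)
```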
| null |
CC BY-SA 2.5
| null |
2011-02-18T06:07:14.057
|
2011-02-18T06:07:14.057
| null | null |
2965
| null |
7364
|
1
| null | null |
4
|
125
|
The standard factor model formulation is
$y=W x+\epsilon$
where $x \sim \mathcal{N}(0, I)$, $\epsilon \sim\mathcal{N}(0, \Sigma)$. $W$ and $\Sigma$ are typically estimated from MLE. The solution can be obtained numerically; in general there are no analytical solutions.
Question: assume that $\Sigma$ belongs to some class of psd matrices, such as matrices of bounded norm, or with bounded trace. Is there an analytical solution to factor models in which the noise is vanishing, i.e.
$y=W x+ n^{-1}\epsilon$
with $n\rightarrow \infty$? By this, I mean that the solutions $(W_n, \Sigma_n)$ converge and that the solution can possibly be expressed in closed form. I know the answer is in the affirmative when it is known a priori that $\Sigma = I$, but I would be happy to see that it holds more generally.
Pointers to literature welcome.
|
Factor models with small noises
|
CC BY-SA 2.5
| null |
2011-02-18T06:10:01.733
|
2011-02-18T06:10:01.733
| null | null |
30
|
[
"factor-analysis",
"maximum-likelihood",
"asymptotics"
] |
7365
|
2
| null |
7308
|
7
| null |
The quotation in full [can be found here](http://books.google.com/books?id=cdBPOJUP4VsC&lpg=PP1&dq=wooldridge%20econometrics&hl=fr&pg=PA357#v=onepage&q=wooldridge%20econometrics&f=false). The estimate $\hat{\theta}_N$ is the solution of minimization problem ([page 344](http://books.google.com/books?id=cdBPOJUP4VsC&lpg=PP1&dq=wooldridge%20econometrics&hl=fr&pg=PA357#v=onepage&q=wooldridge%20econometrics&f=false)):
\begin{align}
\min_{\theta\in \Theta}N^{-1}\sum_{i=1}^Nq(w_i,\theta)
\end{align}
If the solution $\hat{\theta}_N$ is an interior point of $\Theta$, the objective function is twice differentiable and the gradient of the objective function is zero, then the Hessian of the objective function (which is $\hat{H}$) is positive semi-definite.
Now what Wooldridge is saying is that for a given sample the empirical Hessian is not guaranteed to be positive definite or even positive semi-definite. This is true, since Wooldridge does not require that the objective function $N^{-1}\sum_{i=1}^Nq(w_i,\theta)$ has nice properties; he requires that there exists a unique solution $\theta_0$ for
$$\min_{\theta\in\Theta}Eq(w,\theta).$$
So for a given sample the objective function $N^{-1}\sum_{i=1}^Nq(w_i,\theta)$ may be minimized at a boundary point of $\Theta$, at which the Hessian of the objective function need not be positive definite.
Further in his book Wooldridge gives examples of estimates of the Hessian which are guaranteed to be numerically positive definite. In practice, non-positive definiteness of the Hessian should indicate that the solution is either a boundary point or that the algorithm failed to find the solution, which usually is a further indication that the model may be inappropriate for the given data.
Here is a numerical example. I generate a non-linear least squares problem:
$$y_i=c_1x_i^{c_2}+\varepsilon_i$$
I take $X$ uniformly distributed in interval $[1,2]$ and $\varepsilon$ normal with zero mean and variance $\sigma^2$. I generated a sample of size 10, in R 2.11.1 using `set.seed(3)`. Here is the [link to the values](http://mif.vu.lt/~zemlys/download/source/badhessian.csv) of $x_i$ and $y_i$.
I chose as the objective function the square of the usual non-linear least squares objective function:
$$q(w,\theta)=(y-c_1x_i^{c_2})^4$$
Here is the code in R for optimising function, its gradient and hessian.
```
##First set up the expressions for the optimised function, its gradient and hessian.
##I use R's symbolic differentiation to guard against human error
mt <- expression((y-c1*x^c2)^4)
gradmt <- c(D(mt,"c1"),D(mt,"c2"))
hessmt <- lapply(gradmt,function(l)c(D(l,"c1"),D(l,"c2")))
##Evaluate the expressions on data to get the empirical values.
##Note there was a bug in previous version of the answer res should not be squared.
optf <- function(p) {
res <- eval(mt,list(y=y,x=x,c1=p[1],c2=p[2]))
mean(res)
}
gf <- function(p) {
evl <- list(y=y,x=x,c1=p[1],c2=p[2])
res <- sapply(gradmt,function(l)eval(l,evl))
apply(res,2,mean)
}
hesf <- function(p) {
evl <- list(y=y,x=x,c1=p[1],c2=p[2])
res1 <- lapply(hessmt,function(l)sapply(l,function(ll)eval(ll,evl)))
res <- sapply(res1,function(l)apply(l,2,mean))
res
}
```
First, test that the gradient and hessian work as advertised.
```
set.seed(3)
x <- runif(10,1,2)
y <- 0.3*x^0.2
> optf(c(0.3,0.2))
[1] 0
> gf(c(0.3,0.2))
[1] 0 0
> hesf(c(0.3,0.2))
[,1] [,2]
[1,] 0 0
[2,] 0 0
> eigen(hesf(c(0.3,0.2)))$values
[1] 0 0
```
The hessian is zero, so it is positive semi-definite. Now for the values of $x$ and $y$ given in the link we get
```
> df <- read.csv("badhessian.csv")
> df
x y
1 1.168042 0.3998378
2 1.807516 0.5939584
3 1.384942 3.6700205
4 1.327734 -3.3390724
5 1.602101 4.1317608
6 1.604394 -1.9045958
7 1.124633 -3.0865249
8 1.294601 -1.8331763
9 1.577610 1.0865977
10 1.630979 0.7869717
> x <- df$x
> y <- df$y
> opt <- optim(c(1,1),optf,gr=gf,method="BFGS")
> opt$par
[1] -114.91316 -32.54386
> gf(opt$par)
[1] -0.0005795979 -0.0002399711
> hesf(opt$par)
[,1] [,2]
[1,] 0.0002514806 -0.003670634
[2,] -0.0036706345 0.050998404
> eigen(hesf(opt$par))$values
[1] 5.126253e-02 -1.264959e-05
```
The gradient is zero, but the hessian is not positive semi-definite.
Note: This is my third attempt to give an answer. I hope I finally managed to give precise mathematical statements, which eluded me in the previous versions.
| null |
CC BY-SA 2.5
| null |
2011-02-18T08:56:34.040
|
2011-02-25T11:23:38.733
|
2011-02-25T11:23:38.733
|
2116
|
2116
| null |
7366
|
1
|
7373
| null |
2
|
2311
|
I am a beginner in statistics, therefore I hope I can state my problem in a correct manner. I have a some instances or samples and I can collect below statistical parameters for classification and regression problem:
- Sample Size
- Minimum value
- Maximum value
- Standard deviation
- Variance
- Mean
And, I want to use z-scores to compare or classify samples, my question is: does using z-score make sense or what can I use instead of z-score to obtain meaningful classification parameter?
|
Is z-score meaningful in classification or regression?
|
CC BY-SA 2.5
| null |
2011-02-18T09:09:36.343
|
2011-02-18T11:08:11.973
|
2011-02-18T10:05:42.170
|
930
|
2170
|
[
"classification",
"z-statistic"
] |
7367
|
2
| null |
7358
|
9
| null |
I would strongly suggest that you have a look at the `Rcpp` and `RcppArmadillo` packages for `R`. Basically, you would not need to worry about the wrappers as they are already "included". Furthermore, the syntactic sugar is really sweet (pun intended).
As a side remark, I would recommend that you have a look at `JAGS`, which does MCMC and its source code is in C++.
| null |
CC BY-SA 2.5
| null |
2011-02-18T09:32:00.917
|
2011-02-18T09:32:00.917
| null | null |
1443
| null |
7368
|
2
| null |
7362
|
6
| null |
The criterion for a 'trivial' effect size (odds ratio in your example) should be decided based on the size of effect that would be considered 'trivial' in the particular scenario, rather than on statistical grounds. If you're looking at an intervention that may be given to a considerable segment of the population with few side-effects and may prevent early death in a few (statins are one example that come to my mind, but you're the medic), then a small reduction in death rates might still be important, so a trivial reduction could perhaps be 1% or less, i.e. an odds ratio of 0.99 or closer to 1. If you're looking at an invasive or costly intervention or one with severe side-effects, or a condition that is an irritation or of short duration, the trivial reduction would be very much larger.
Rosenthal's original fail-safe N based on statistical significance assumed the mean effect size in missing studies was the null effect size. Orwin's method allows you to choose this, but the null effect size remains the simplest choice.
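For reference (this formula is not quoted in the original answer, so treat it as my paraphrase of Orwin's 1983 method), Orwin's fail-safe N is usually written as
$$
N_{fs} = N_0\,\frac{\bar{d}_0 - d_c}{d_c - \bar{d}_{fs}},
$$
where $N_0$ is the number of observed studies, $\bar{d}_0$ the mean observed effect size, $d_c$ the chosen 'trivial' criterion, and $\bar{d}_{fs}$ the assumed mean effect size in the missing studies (often the null value); the same form applies on the log odds ratio scale.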
Having said all that, I don't like either Rosenthal's or Orwin's 'fail-safe N' myself (though I prefer Orwin's to Rosenthal's). As Rosenberg points out in the abstract of the paper below, they "are unweighted and are not based on the framework in which most meta-analyses are performed". He suggests a general, weighted fail-safe N using either the fixed- or random-effects frameworks that are far more commonly used for meta-analysis.
Michael S. Rosenberg. [The file-drawer problem revisited: a general weighted method for calculating fail-safe numbers in meta-analysis.](http://dx.doi.org/10.1111/j.0014-3820.2005.tb01004.x) Evolution 59 (2):464-468, 2005.
| null |
CC BY-SA 2.5
| null |
2011-02-18T10:01:57.817
|
2011-02-18T10:01:57.817
| null | null |
449
| null |
7369
|
2
| null |
7344
|
4
| null |
In addition to @mpiktas's comment, you can also have a look at the [rms](http://cran.r-project.org/web/packages/rms/index.html) package from Frank Harrell. The advantage is that it handles both LM and GLM for model fitting and prediction; see for example the `plot.Predict()` function. If you're planning to do serious work in regression modeling, this package and its companion [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) are really good.
| null |
CC BY-SA 2.5
| null |
2011-02-18T10:04:53.950
|
2011-02-18T10:04:53.950
| null | null |
930
| null |
7370
|
1
|
7372
| null |
2
|
136
|
I would like to check different gradient algorithms. For example:
```
fr <- function(x) { ## Rosenbrock Banana function
x1 <- x[1]
x2 <- x[2]
print(c(x1,x2))
100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
optim(c(-1.2,1),fr,method="BFGS")
```
prints to the screen the values at which the RBF has been evaluated.
How can I store these values in a matrix ? (instead of just printing them to the screen)
|
How to store checks of gradient algorithm in a matrix using R?
|
CC BY-SA 2.5
| null |
2011-02-18T10:21:17.707
|
2016-03-04T12:19:38.220
|
2016-03-04T12:19:38.220
|
603
|
603
|
[
"r"
] |
7372
|
2
| null |
7370
|
4
| null |
Use the function capture.output:
```
cc<-capture.output(vv<-optim(c(-1.2,1),fr,method="BFGS"))
t(sapply(strsplit(gsub(" +"," ",cc)," "),function(l)as.numeric(l[2:3])))
```
The variable `vv` is used so that the result of `optim` is not printed, only your calls to the function. Each call to the function results in one element of `cc`. Then I strip extra spaces so that I can split the strings on the space. Afterwards you simply select what is needed.
This solution will not work if `optim` produces errors, but I think it is not very hard to adapt the code to guard against that.
This is of course only a quick hack. For more complicated solutions you will need to delve into the code of `optim`.
@onestop suggested using `sink`, which is simpler:
```
sink("bu.txt")
vv<-optim(c(-1.2,1),fr,method="BFGS")
sink()
read.table("bu.txt")
```
Note the additional `sink`, so that the output is diverted again to the console.
| null |
CC BY-SA 2.5
| null |
2011-02-18T11:00:21.777
|
2011-02-19T06:50:00.900
|
2011-02-19T06:50:00.900
|
2116
|
2116
| null |
7373
|
2
| null |
7366
|
3
| null |
In order to perform linear regression you'd need not only the means and variances of the variables but also all their covariances (or equivalently their correlations).
If you can collect the means, variances and covariances/correlations separately in each of the classes you wish to classify, you can do [linear discriminant analysis](http://en.wikipedia.org/wiki/Linear_discriminant_analysis), which is a classification method, albeit a somewhat old-fashioned one with some rather restrictive normality assumptions.
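To make that concrete, here is a small R sketch of my own (simulated data, equal priors assumed) of the linear discriminant score built only from the class means and a pooled covariance matrix:
```
set.seed(1)
# Simulated training data for two classes with three predictors
X1 <- matrix(rnorm(300, mean = 0), ncol = 3)
X2 <- matrix(rnorm(300, mean = 1), ncol = 3)
m1 <- colMeans(X1)
m2 <- colMeans(X2)
# Pooled within-class covariance
S    <- ((nrow(X1) - 1) * cov(X1) + (nrow(X2) - 1) * cov(X2)) / (nrow(X1) + nrow(X2) - 2)
Sinv <- solve(S)
# Linear discriminant score for a new observation x (equal priors assumed)
lda_score <- function(x, m) drop(x %*% Sinv %*% m - 0.5 * t(m) %*% Sinv %*% m)
x_new <- c(0.8, 0.9, 1.1)
if (lda_score(x_new, m1) > lda_score(x_new, m2)) "class 1" else "class 2"
```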
| null |
CC BY-SA 2.5
| null |
2011-02-18T11:08:11.973
|
2011-02-18T11:08:11.973
| null | null |
449
| null |
7374
|
2
| null |
7351
|
2
| null |
You could try '[Teaching Bayesian Reasoning In Less Than Two Hours](https://www.apa.org/pubs/journals/releases/xge-1303380.pdf)'.
| null |
CC BY-SA 3.0
| null |
2011-02-18T12:10:29.613
|
2016-06-27T06:44:41.230
|
2016-06-27T06:44:41.230
|
22
|
22
| null |
7375
|
2
| null |
7351
|
8
| null |
If you'd like to try a few learn by examples, you may be interested in "[Bayesian Computation in R](http://bayes.bgsu.edu/bcwr/)" by Jim Albert.
Its related R package is called LearnBayes.
| null |
CC BY-SA 3.0
| null |
2011-02-18T12:59:46.510
|
2012-05-15T07:17:04.137
|
2012-05-15T07:17:04.137
|
582
|
3306
| null |
7376
|
1
|
7378
| null |
30
|
18639
|
Inter-market analysis is a method of modeling market behavior by finding relationships between different markets. Oftentimes, a correlation is computed between two markets, say the S&P 500 and 30-year US treasuries. These computations are more often than not based on price data, which obviously does not fit the definition of a stationary time series.
Possible solutions aside (using returns instead), is the computation of a correlation whose data are non-stationary even a valid statistical calculation?
Would you say that such a correlation calculation is somewhat unreliable, or just plain nonsense?
|
Does correlation assume stationarity of data?
|
CC BY-SA 2.5
| null |
2011-02-18T13:07:06.643
|
2016-08-05T18:06:20.690
| null | null |
3306
|
[
"correlation",
"stationarity"
] |
7377
|
2
| null |
7358
|
18
| null |
We have spent some time making the wrapping from C++ into [R](http://www.r-project.org) (and back for that matter) a lot easier via our [Rcpp](http://dirk.eddelbuettel.com/code/rcpp.html) package.
And because linear algebra is already such a well-understood and coded-for field, [Armadillo](http://arma.sf.net), a current, modern, pleasant, well-documented, small, templated, ... library was a very natural fit for our first extended wrapper: [RcppArmadillo](http://dirk.eddelbuettel.com/code/rcpp.armadillo.html).
This has caught the attention of other MCMC users as well. I gave a one-day workshop at the U of Rochester business school last summer, and have helped another researcher in the Midwest with similar explorations. Give [RcppArmadillo](http://dirk.eddelbuettel.com/code/rcpp.armadillo.html) a try -- it works well, is actively maintained (new Armadillo release 1.1.4 today, I will make a new RcppArmadillo release later) and supported.
And because I just luuv this example so much, here is a quick "fast" version of `lm()` returning coefficients and std. errors:
```
extern "C" SEXP fastLm(SEXP ys, SEXP Xs) {
try {
Rcpp::NumericVector yr(ys); // creates Rcpp vector
Rcpp::NumericMatrix Xr(Xs); // creates Rcpp matrix
int n = Xr.nrow(), k = Xr.ncol();
arma::mat X(Xr.begin(), n, k, false); // avoids extra copy
arma::colvec y(yr.begin(), yr.size(), false);
arma::colvec coef = arma::solve(X, y); // fit model y ~ X
arma::colvec res = y - X*coef; // residuals
double s2 = std::inner_product(res.begin(), res.end(),
res.begin(), double())/(n - k);
// std.errors of coefficients
arma::colvec std_err =
arma::sqrt(s2 * arma::diagvec( arma::pinv(arma::trans(X)*X) ));
return Rcpp::List::create(Rcpp::Named("coefficients") = coef,
Rcpp::Named("stderr") = std_err,
Rcpp::Named("df") = n - k);
} catch( std::exception &ex ) {
forward_exception_to_r( ex );
} catch(...) {
::Rf_error( "c++ exception (unknown reason)" );
}
return R_NilValue; // -Wall
}
```
Lastly, you also get immediate prototyping via [inline](http://cran.r-project.org/package=inline) which may make 'time to code' faster.
| null |
CC BY-SA 2.5
| null |
2011-02-18T15:41:38.457
|
2011-02-18T16:39:18.800
|
2011-02-18T16:39:18.800
|
334
|
334
| null |
7378
|
2
| null |
7376
|
42
| null |
Correlation measures a linear relationship. In an informal context, a relationship means something stable. When we calculate the sample correlation for stationary variables and increase the number of available data points, this sample correlation tends to the true correlation.
It can be shown that for prices, which usually are random walks, the sample correlation tends to a random variable. This means that no matter how much data we have, the result will always be different.
Note that I tried to express the mathematical intuition without the mathematics. From a mathematical point of view the explanation is very clear: sample moments of stationary processes converge in probability to constants, while sample moments of random walks converge to integrals of Brownian motion, which are random variables. Since a relationship is usually expressed as a number and not a random variable, the reason for not calculating the correlation of non-stationary variables becomes evident.
Update: Since we are interested in the correlation between two variables, assume first that they come from a stationary process $Z_t=(X_t,Y_t)$. Stationarity implies that $EZ_t$ and $cov(Z_t,Z_{t-h})$ do not depend on $t$. So the correlation
$$corr(X_t,Y_t)=\frac{cov(X_t,Y_t)}{\sqrt{DX_tDY_t}}$$
also does not depend on $t$, since all the quantities in the formula come from matrix $cov(Z_t)$, which does not depend on $t$. So the calculation of sample correlation
$$\hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^T(X_t-\bar{X})(Y_t-\bar{Y})}{\sqrt{\frac{1}{T^2}\sum_{t=1}^T(X_t-\bar{X})^2\sum_{t=1}^T(Y_t-\bar{Y})^2}}$$
makes sense, since we may have a reasonable hope that the sample correlation will estimate $\rho=corr(X_t,Y_t)$. It turns out that this hope is not unfounded, since for stationary processes satisfying certain conditions we have that $\hat{\rho}\to\rho$ in probability as $T\to\infty$. Furthermore $\sqrt{T}(\hat{\rho}-\rho)\to N(0,\sigma_{\rho}^2)$ in distribution, so we can test hypotheses about $\rho$.
Now suppose that $Z_t$ is not stationary. Then $corr(X_t,Y_t)$ may depend on $t$. So when we observe a sample of size $T$ we potentially need to estimate $T$ different correlations $\rho_t$. This is of course infeasible, so in the best-case scenario we can only estimate some functional of $\rho_t$, such as its mean or variance. But the result may not have a sensible interpretation.
Now let us examine what happens with the correlation of probably the most studied non-stationary process, the random walk. We call the process $Z_t=(X_t,Y_t)$ a random walk if $Z_t=\sum_{s=1}^t(U_s,V_s)$, where $C_t=(U_t,V_t)$ is a stationary process. For simplicity assume that $EC_t=0$. Then
\begin{align}
corr(X_t,Y_t)=\frac{EX_tY_t}{\sqrt{DX_tDY_t}}=\frac{E\sum_{s=1}^tU_s\sum_{s=1}^tV_s}{\sqrt{D\sum_{s=1}^tU_s\,D\sum_{s=1}^tV_s}}
\end{align}
To simplify matters further, assume that $C_t=(U_t,V_t)$ is a white noise. This means that all correlations $E(C_tC_{t+h})$ are zero for $h>0$. Note that this does not restrict $corr(U_t,V_t)$ to zero.
Then
\begin{align}
corr(X_t,Y_t)=\frac{tEU_tV_t}{\sqrt{t^2DU_tDV_t}}=corr(U_0,V_0).
\end{align}
So far so good: though the process is not stationary, the correlation makes sense, although we had to make some restrictive assumptions.
Now to see what happens to the sample correlation we will need to use the following fact about random walks, called the functional central limit theorem:
\begin{align}
\frac{1}{\sqrt{T}}Z_{[Ts]}=\frac{1}{\sqrt{T}}\sum_{t=1}^{[Ts]}C_t\to (cov(C_0))^{1/2}W_s,
\end{align}
in distribution, where $s\in[0,1]$ and $W_s=(W_{1s},W_{2s})$ is a bivariate [Brownian motion](http://en.wikipedia.org/wiki/Wiener_process) (two-dimensional Wiener process). For convenience introduce the definition $M_s=(M_{1s},M_{2s})=(cov(C_0))^{1/2}W_s$.
Again for simplicity let us define sample correlation as
\begin{align}
\hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^TX_{t}Y_t}{\sqrt{\frac{1}{T}\sum_{t=1}^TX_t^2\frac{1}{T}\sum_{t=1}^TY_t^2}}
\end{align}
Let us start with the variances. We have
\begin{align}
E\frac{1}{T}\sum_{t=1}^TX_t^2=\frac{1}{T}E\sum_{t=1}^T\left(\sum_{s=1}^tU_s\right)^2=\frac{1}{T}\sum_{t=1}^Tt\sigma_U^2=\sigma_U^2\frac{T+1}{2}.
\end{align}
This goes to infinity as $T$ increases, so we hit the first problem: the sample variance does not converge. On the other hand, the [continuous mapping theorem](http://en.wikipedia.org/wiki/Continuous_mapping_theorem) in conjunction with the functional central limit theorem gives us
\begin{align}
\frac{1}{T^2}\sum_{t=1}^TX_t^2=\sum_{t=1}^T\frac{1}{T}\left(\frac{1}{\sqrt{T}}\sum_{s=1}^tU_s\right)^2\to \int_0^1M_{1s}^2ds
\end{align}
where convergence is convergence in distribution, as $T\to \infty$.
Similarly we get
\begin{align}
\frac{1}{T^2}\sum_{t=1}^TY_t^2\to \int_0^1M_{2s}^2ds
\end{align}
and
\begin{align}
\frac{1}{T^2}\sum_{t=1}^TX_tY_t\to \int_0^1M_{1s}M_{2s}ds
\end{align}
So finally for sample correlation of our random walk we get
\begin{align}
\hat{\rho}\to \frac{\int_0^1M_{1s}M_{2s}ds}{\sqrt{\int_0^1M_{1s}^2ds\int_0^1M_{2s}^2ds}}
\end{align}
in distribution as $T\to \infty$.
So although the correlation is well defined, the sample correlation does not converge towards it, as it does in the stationary case. Instead it converges to a certain random variable.
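A small simulation (my own sketch, not part of the original derivation) makes this visible: across replications, the sample correlation of two independent random walks stays widely dispersed no matter how long the series, while for two independent white noise series it concentrates around zero:
```
set.seed(1)
samp_cor <- function(T, walk = TRUE) {
  u <- rnorm(T); v <- rnorm(T)
  if (walk) { x <- cumsum(u); y <- cumsum(v) } else { x <- u; y <- v }
  cor(x, y)
}
sd(replicate(500, samp_cor(100)))                   # random walks, T = 100
sd(replicate(500, samp_cor(10000)))                 # random walks, T = 10000: still large
sd(replicate(500, samp_cor(10000, walk = FALSE)))   # white noise: small, roughly 1/sqrt(T)
```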
| null |
CC BY-SA 3.0
| null |
2011-02-18T15:46:08.050
|
2016-08-05T18:06:20.690
|
2016-08-05T18:06:20.690
|
31363
|
2116
| null |
7379
|
1
|
14670
| null |
4
|
195
|
I'm reading through someone else's code for plotting the results of a psychology experiment, and (according to the code comments) they calculate the accuracy error of their behavioral paradigm as follows:
$\textit{accuracy error} = \sqrt{\frac{(\textit{accuracy}) (1-\textit{accuracy})}{\textit{total trials}}}$
Its output seems to be very similar to the original accuracy. What is this? Is this some sort of multiple comparison correction? Why would they do this?
|
What is this measure of error?
|
CC BY-SA 2.5
| null |
2011-02-18T16:40:42.403
|
2011-08-23T00:38:20.873
|
2011-02-18T16:54:52.120
|
919
|
2019
|
[
"binomial-distribution",
"error"
] |
7380
|
5
| null | null |
0
| null |
Econometrics is a field of statistics dealing with applications to economics.
For econometrics resources, refer to the following questions:
- Free econometrics textbooks
- Introductory statistics and econometrics in R
- Good econometrics textbooks?
| null |
CC BY-SA 3.0
| null |
2011-02-18T17:52:05.987
|
2013-09-02T13:46:59.017
|
2013-09-02T13:46:59.017
|
27581
|
2116
| null |
7381
|
4
| null | null |
0
| null |
Econometrics is a field of statistics dealing with applications to economics.
| null |
CC BY-SA 2.5
| null |
2011-02-18T17:52:05.987
|
2011-02-18T20:29:10.600
|
2011-02-18T20:29:10.600
|
2116
|
2116
| null |
7382
|
2
| null |
7376
|
14
| null |
>
...is the computation of correlation whose data is non-stationary even a valid statistical calculation?
Let $W$ be a discrete random walk. Pick a positive number $h$. Define the processes $P$ and $V$ by $P(0) = 1$, $P(t+1) = -P(t)$ if $V(t) > h$, and otherwise $P(t+1) = P(t)$; and $V(t) = P(t)W(t)$. In other words, $V$ starts out identical to $W$ but every time $V$ rises above $h$, it switches signs (otherwise emulating $W$ in all respects).

(In this figure (for $h=5$) $W$ is blue and $V$ is red. There are four switches in sign.)
In effect, over short periods of time $V$ tends to be either perfectly correlated with $W$ or perfectly anticorrelated with it; however, using a correlation function to describe the relationship between $V$ and $W$ wouldn't be useful (a word that perhaps more aptly captures the problem than "unreliable" or "nonsense").
Mathematica code to produce the figure:
```
With[{h=5},
pv[{p_, v_}, w_] := With[{q=If[v > h, -p, p]}, {q, q w}];
w = Accumulate[RandomInteger[{-1,1}, 25 h^2]];
{p,v} = FoldList[pv, {1,0}, w] // Transpose;
ListPlot[{w,v}, Joined->True]]
```
| null |
CC BY-SA 2.5
| null |
2011-02-18T19:18:50.377
|
2011-02-18T19:18:50.377
| null | null |
919
| null |
7383
|
2
| null |
4762
|
1
| null |
To use SPSS for the Lack of fit test go to: Analyze>>Compare Means>>Means.
Then in the dialogue box that appears assign your Independent and Dependent Variables. Select Options and a new dialogue box will appear. Check the option at the bottom of the screen that says "Test for Linearity".
| null |
CC BY-SA 2.5
| null |
2011-02-18T20:40:13.133
|
2011-02-18T20:40:13.133
| null | null | null | null |
7384
|
2
| null |
7268
|
6
| null |
Aggregation also works without using `zoo` (with random data from 2 variables for 3 days and 4 hosts like from JWM). I assume that you have data from all hosts for each hour.
```
nHosts <- 4 # number of hosts
dates <- seq(as.POSIXct("2011-01-01 00:00:00"),
as.POSIXct("2011-01-03 23:59:30"), by=30)
hosts <- factor(sample(1:nHosts, length(dates), replace=TRUE),
labels=paste("host", 1:nHosts, sep=""))
x1 <- sample(0:20, length(dates), replace=TRUE) # data from 1st variable
x2 <- rpois(length(dates), 2) # data from 2nd variable
Data <- data.frame(dates=dates, hosts=hosts, x1=x1, x2=x2)
```
I'm not entirely sure if you want to average just within each hour, or within each hour over all days. I'll do both.
```
Data$hFac <- droplevels(cut(Data$dates, breaks="hour"))
Data$hour <- as.POSIXlt(dates)$hour # extract hour of the day
# average both variables over days within each hour and host
# formula notation was introduced in R 2.12.0 I think
res1 <- aggregate(cbind(x1, x2) ~ hour + hosts, data=Data, FUN=mean)
# only average both variables within each hour and host
res2 <- aggregate(cbind(x1, x2) ~ hFac + hosts, data=Data, FUN=mean)
```
The result looks like this:
```
> head(res1)
hour hosts x1 x2
1 0 host1 9.578431 2.049020
2 1 host1 10.200000 2.200000
3 2 host1 10.423077 2.153846
4 3 host1 10.241758 1.879121
5 4 host1 8.574713 2.011494
6 5 host1 9.670588 2.070588
> head(res2)
hFac hosts x1 x2
1 2011-01-01 00:00:00 host1 9.192308 2.307692
2 2011-01-01 01:00:00 host1 10.677419 2.064516
3 2011-01-01 02:00:00 host1 11.041667 1.875000
4 2011-01-01 03:00:00 host1 10.448276 1.965517
5 2011-01-01 04:00:00 host1 8.555556 2.074074
6 2011-01-01 05:00:00 host1 8.809524 2.095238
```
I'm also not entirely sure about the type of graph you want. Here's the bare-bones version of a graph for just the first variable with separate data lines for each host.
```
# using the data that is averaged over days as well
res1L <- split(subset(res1, select="x1"), res1$hosts)
mat1 <- do.call(cbind, res1L)
colnames(mat1) <- levels(hosts)
rownames(mat1) <- 0:23
matplot(mat1, main="x1 per hour, avg. over days", xaxt="n", type="o", pch=16, lty=1)
axis(side=1, at=seq(0, 23, by=2))
legend(x="topleft", legend=colnames(mat1), col=1:nHosts, lty=1)
```
The same graph for the data that is only averaged within each hour.
```
res2L <- split(subset(res2, select="x1"), res2$hosts)
mat2 <- do.call(cbind, res2L)
colnames(mat2) <- levels(hosts)
rownames(mat2) <- levels(Data$hFac)
matplot(mat2, main="x1 per hour", type="o", pch=16, lty=1)
legend(x="topleft", legend=colnames(mat2), col=1:nHosts, lty=1)
```
| null |
CC BY-SA 2.5
| null |
2011-02-18T20:53:58.767
|
2011-02-18T21:03:34.373
|
2011-02-18T21:03:34.373
|
1909
|
1909
| null |
7385
|
1
|
7418
| null |
14
|
5082
|
This is my first post. I'm truly grateful for this community.
I am trying to analyze longitudinal count data that is zero-truncated (the probability that the response variable = 0 is 0), and the mean != variance, so a negative binomial distribution was chosen over a Poisson.
Functions/commands I've ruled out:
R
- gee() function in R does not account for zero-truncation nor the negative binomial distribution (not even with the MASS package loaded)
- glm.nb() in R doesn't allow for different correlation structures
- vglm() from the VGAM package can make use of the posnegbinomial family, but it has the same problem as Stata's ztnb command (see below) in that I can't refit the models using a non-independent correlation structure.
Stata
- If the data weren't longitudinal, I could just use the Stata package ztnb to run my analysis, BUT that command assumes that my observations are independent.
I've also ruled out GLMM for various methodological/philosophical reasons.
For now, I've settled on Stata's xtgee command (yes, I know that xtnbreg also does the same thing) that takes into account both the nonindependent correlation structures and the neg binomial family, but not the zero-truncation. The added benefit of using xtgee is that I can also calculate qic values (using the qic command) to determine the best fitting correlation structures for my response variables.
If there is a package/command in R or Stata that can take 1) nbinomial family, 2) GEE and 3) zero-truncation into account, I'd be dying to know.
I'd greatly appreciate any ideas you may have. Thank you.
-Casey
|
R/Stata package for zero-truncated negative binomial GEE?
|
CC BY-SA 2.5
| null |
2011-02-18T21:20:51.227
|
2013-08-13T14:53:07.500
|
2011-02-19T03:52:51.513
|
3309
|
3309
|
[
"r",
"stata",
"count-data",
"panel-data",
"truncation"
] |
7386
|
6
| null | null |
0
| null |
I am reluctant to make this nomination because I have been happy with the moderators. I would be delighted to see them continue in their roles.
However, to date only two people have entered nominations. (Is everyone else waiting until just before the deadline?) As you might guess from my activity here, I value this forum and hope to see it attract many more participants.
As part of this self-nomination process, we're supposed to say a little about our qualifications. The statistics about my participation here are clear enough; there's no need to dwell on that. I have successfully nurtured technical online communities (via listservers--remember them?--and a Web magazine) and greatly enjoyed how they fostered collegial, productive interchanges. I have long believed strongly in contributing original content to the Web (rather than just copying bits and pieces of other stuff) and in the power of communities of collaborators. This site combines both those tenets, in a good way. Let's all keep contributing as much as we can to keep it growing and successful.
| null |
CC BY-SA 2.5
| null |
2011-02-18T22:29:42.007
|
2011-02-18T22:29:42.007
|
2011-02-18T22:29:42.007
|
919
|
919
| null |
7387
|
2
| null |
7202
|
2
| null |
Mixed models are usually used to take account of the correlation structure likely with a model like this. Look up Analyze>Mixed Models (MIXED) or the newer Mixed Models>Generalized Linear if you have the latest version.
HTH,
Jon Peck
| null |
CC BY-SA 2.5
| null |
2011-02-18T22:55:09.213
|
2011-02-18T22:55:09.213
| null | null | null | null |
7389
|
1
|
7649
| null |
8
|
2223
|
My question deals with how to assert that an "improved" evolutionary algorithm is indeed improved (at least from a statistical point of view) and not just random luck (a concern given the stochastic nature of these algorithms).
Let's assume I am dealing with a standard GA (before) and an "improved"
GA (after). And I have a suite of 8 test problems.
I run both of these algorithms repeatedly, for instance 10 times(?), through each of the 8 test problems, and record how many generations it took to come up with the solution. I would start out with the same initial random population (using the same seed).
Would I use a paired t-test for means to verify that any difference
(hopefully an improvement) between the averages for each test question
would be statistically significant? Should I run these algorithms more
than 10 times for each test/pair?
Any pitfalls I should be aware of? I assume I could use this approach
for any (evolutionary) algorithm comparison.
Or am I really on the wrong track here? I am basically looking for a
way to compare two implementations of an evolutionary algorithm and
report on how well one might work compared to the other.
Thanks!
|
How to check if modified genetic algorithm is significantly better than the original?
|
CC BY-SA 2.5
| null |
2011-02-18T23:34:23.113
|
2014-02-05T20:13:19.270
|
2011-02-20T23:52:35.640
| null |
10633
|
[
"t-test",
"genetic-algorithms",
"multiple-comparisons"
] |
7391
|
1
|
35542
| null |
6
|
353
|
I have seen it asserted that the problem of computing the null distribution of Kolmogorov's $D_n^+$ statistic for a finite sample size maps onto the problem of computing the number of lattice paths that stay below the diagonal, and thus can be solved by the [ballot theorem](http://en.wikipedia.org/wiki/Bertrand%27s_ballot_theorem). I am familiar with lattice paths and the ballot problem. I am also familiar with the expression of the distribution of $D_n^+$ as a series of integrals. But I don't see how one problem maps onto the other. Can someone explain or point me to an article or book that does?
I also see the claim that the null distribution of the Kolmogorov-Smirnov $D_n = \max(D_n^+,D_n^-)$ maps onto another lattice path problem that could be solved by a "two-sided ballot theorem". I don't know what a "two-sided" version of the ballot problem would be. Again, can someone explain or point me to an explanation?
Finally, is there a general framework around all of this? Can the Kuiper statistic be mapped to yet another lattice path problem? The two-sample KS test? The AD statistic?
|
Kolmogorov-Smirnov and lattice paths
|
CC BY-SA 2.5
| null |
2011-02-19T01:57:52.283
|
2023-01-03T18:35:41.610
|
2012-09-01T22:54:53.643
|
8413
|
21874
|
[
"kolmogorov-smirnov-test"
] |
7392
|
2
| null |
7385
|
9
| null |
Hmm, good first question! I don't know of a package that meets your precise requirements. I think Stata's [xtgee](http://www.stata.com/help.cgi?xtgee) is a good choice if you also specify the `vce(robust)` option to give Huber-White standard errors, or `vce(bootstrap)` if that's practical. Either of these options will ensure the standard errors are consistently estimated despite the model misspecification that you'll have by ignoring the zero truncation.
That leaves the question of what effect ignoring the zero truncation will have on the point estimate(s) of interest to you. It's worth a quick search to see if there is relevant literature on this in general, i.e. not necessarily in a GEE context -- I would have thought you can pretty safely assume any such results will be relevant in the GEE case too. If you can't find anything, you could always simulate data with zero-truncation and known effect estimates and assess the bias by simulation.
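A rough sketch of that simulation idea (my own, deliberately cross-sectional and ignoring the within-subject correlation, so it only gauges the effect of the truncation itself): generate zero-truncated negative binomial counts with a known covariate effect, fit an ordinary (untruncated) negative binomial model, and look at the bias in the coefficient.
```
library(MASS)   # rnegbin() and glm.nb()
set.seed(1)
n    <- 5000
x    <- rnorm(n)
beta <- 0.5                              # true covariate effect on the log scale
mu   <- exp(1 + beta * x)
y    <- rnegbin(n, mu = mu, theta = 2)
# Impose zero-truncation by redrawing zeros (simple rejection step)
while (any(y == 0)) {
  idx    <- y == 0
  y[idx] <- rnegbin(sum(idx), mu = mu[idx], theta = 2)
}
fit <- glm.nb(y ~ x)                     # ignores the truncation
coef(fit)["x"]                           # compare with beta = 0.5 to gauge the bias
```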
| null |
CC BY-SA 2.5
| null |
2011-02-19T10:01:05.533
|
2011-02-19T10:01:05.533
| null | null |
449
| null |
7393
|
2
| null |
7389
|
4
| null |
It might not be what you want to hear, but from what I've seen the new algorithm is just compared to the old one on benchmark functions.
E.g. as done here: [Efficient Natural Evolution Strategies, (Schaul, Sun Yi, Wierstra, Schmidhuber)](http://www.idsia.ch/~tom/publications/enes.pdf)
| null |
CC BY-SA 2.5
| null |
2011-02-19T10:03:01.937
|
2011-02-19T13:02:11.847
|
2011-02-19T13:02:11.847
|
2860
|
2860
| null |
7394
|
1
|
7395
| null |
6
|
8465
|
I have searched a lot, and I can only find tables that show critical values up to n=30. Can someone provide, or point me to, a simple method of estimating this value for different $\alpha$?
|
How to estimate a critical value of Spearman's correlation for n=100?
|
CC BY-SA 2.5
| null |
2011-02-19T18:41:52.773
|
2011-02-19T19:16:55.457
| null | null |
977
|
[
"hypothesis-testing",
"correlation",
"spearman-rho"
] |
7395
|
2
| null |
7394
|
3
| null |
For values over thirty the approximation (for a two-tailed test) is
$$\frac{\Phi^{-1}\left(1-\tfrac{\alpha}{2}\right)}{\sqrt{n-1}}$$
so for example with $\alpha = 0.05$ and $n=100$ the numerator is about 1.96 and the denominator about 9.95, giving a critical value of about 0.197.
This comes from $\rho$ having approximately a normal distribution for large $n$, with mean $0$ and variance $1/(n − 1)$, assuming independence of the observations.
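In R, for example, the approximation gives (my own one-liner, assuming the two-tailed setting above):
```
alpha <- 0.05
n     <- 100
qnorm(1 - alpha / 2) / sqrt(n - 1)   # approximate critical value, about 0.197
```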
| null |
CC BY-SA 2.5
| null |
2011-02-19T19:08:00.543
|
2011-02-19T19:16:17.797
|
2011-02-19T19:16:17.797
|
2958
|
2958
| null |
7396
|
2
| null |
7394
|
7
| null |
See Wikipedia: [Spearman's rank correlation coefficient#Determining significance](http://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient#Determining_significance):
"One can test for significance using
$$t = r \sqrt{\frac{n-2}{1-r^2}},$$
which is distributed approximately as Student's $t$ distribution with $n − 2$ degrees of freedom under the null hypothesis."
Here $r$ is the sample estimate of Spearman's rank correlation coefficient. The reason critical values often aren't tabulated for $n > 30$ is that this approximation gets better as $n$ gets larger, and is very good for $n > 30$. The Stata statistical software package uses this formula to calculate $p$-values for all values of $n$.
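For illustration, a minimal R sketch of this approximation, with a hypothetical observed $r = 0.25$ and $n = 100$:
```
r <- 0.25; n <- 100                      # hypothetical sample estimate and sample size
t_stat <- r * sqrt((n - 2) / (1 - r^2))  # t approximation
2 * pt(-abs(t_stat), df = n - 2)         # two-sided p-value
```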
| null |
CC BY-SA 2.5
| null |
2011-02-19T19:16:55.457
|
2011-02-19T19:16:55.457
| null | null |
449
| null |
7397
|
2
| null |
363
|
4
| null |
- Michael Oakes' Statistical Inference: A Commentary for the Social and Behavioral Sciences.
- Elazar Pedhazur's Multiple Regression in Behavioral Research, if you can stand the immense detail and the self-important tone.
In case you're interested, I've reviewed both on Amazon and at [https://yellowbrickstats.com/favorites.htm](https://yellowbrickstats.com/favorites.htm)
| null |
CC BY-SA 4.0
| null |
2011-02-19T19:25:19.687
|
2021-03-11T13:59:49.813
|
2021-03-11T13:59:49.813
|
2669
|
2669
| null |
7398
|
2
| null |
363
|
3
| null |
Rice: [Mathematical Statistics and Data Analysis](http://goo.gl/wKbcW)
| null |
CC BY-SA 2.5
| null |
2011-02-19T19:47:16.733
|
2011-02-19T19:47:16.733
| null | null |
609
| null |
7399
|
6
| null | null |
0
| null |
I am nominating myself in part because of friendly pressure and because an election is only really an election when there are more candidates than places to be filled.
I came to this site nearly three months ago and became instantly hooked. Moderating would not take a lot out of me, since I am already visiting the site daily, reading all the questions, trying to get clarifications, fixing formatting and of course answering the questions.
I think that the current moderators do a wonderful job, and if I become their replacement I intend to continue in the same spirit.
| null |
CC BY-SA 2.5
| null |
2011-02-19T19:49:46.760
|
2011-02-19T19:49:46.760
|
2011-02-19T19:49:46.760
|
2116
|
2116
| null |
7400
|
1
|
7405
| null |
53
|
81362
|
Given two histograms, how do we assess whether they are similar or not?
Is it sufficient to simply look at the two histograms?
A simple one-to-one (bin-by-bin) mapping has the problem that if one histogram is slightly different and slightly shifted, then we won't get the desired result.
Any suggestions?
|
How to assess the similarity of two histograms?
|
CC BY-SA 2.5
| null |
2011-02-19T18:52:26.557
|
2021-05-11T15:45:42.833
|
2011-02-21T06:40:35.937
|
183
|
3325
|
[
"histogram",
"image-processing"
] |
7401
|
2
| null |
7400
|
11
| null |
You're looking for the [Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test). Don't forget to divide the bar heights by the sum of all observations of each histogram.
Note that the KS-test is also reporting a difference if e.g. the means of the distributions are shifted relative to one another. If translation of the histogram along the x-axis is not meaningful in your application, you may want to subtract the mean from each histogram first.
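As a rough sketch of the computation (assuming `h1` and `h2` are raw counts over the same bins), the KS statistic is the largest gap between the normalized cumulative histograms; this gives the statistic only, not a p-value:
```
h1 <- c(5, 12, 20, 9, 4)        # hypothetical bin counts, same bins for both histograms
h2 <- c(3, 10, 25, 8, 6)
cdf1 <- cumsum(h1) / sum(h1)    # normalize so each histogram sums to 1
cdf2 <- cumsum(h2) / sum(h2)
D <- max(abs(cdf1 - cdf2))      # KS-style statistic on the binned data
D
```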
| null |
CC BY-SA 2.5
| null |
2011-02-19T19:22:04.820
|
2011-02-19T20:09:43.253
| null | null |
198
| null |
7402
|
1
|
7404
| null |
14
|
48821
|
I know that a Type II error is where H1 is true, but H0 is not rejected.
### Question
How do I calculate the probability of a Type II error involving a normal distribution, where the standard deviation is known?
|
How do I find the probability of a type II error?
|
CC BY-SA 2.5
| null |
2011-02-19T20:56:08.153
|
2018-11-19T08:55:24.940
|
2011-02-21T05:55:26.353
|
183
| null |
[
"probability",
"statistical-power",
"type-i-and-ii-errors"
] |
7404
|
2
| null |
7402
|
32
| null |
In addition to specifying $\alpha$ (probability of a type I error), you need a fully specified hypothesis pair, i.e., $\mu_{0}$, $\mu_{1}$ and $\sigma$ need to be known. $\beta$ (probability of type II error) is $1 - \textrm{power}$. I assume a one-sided $H_{1}: \mu_{1} > \mu_{0}$. In R:
```
> sigma <- 15 # theoretical standard deviation
> mu0 <- 100 # expected value under H0
> mu1 <- 130 # expected value under H1
> alpha <- 0.05 # probability of type I error
# critical value for a level alpha test
> crit <- qnorm(1-alpha, mu0, sigma)
# power: probability for values > critical value under H1
> (pow <- pnorm(crit, mu1, sigma, lower.tail=FALSE))
[1] 0.63876
# probability for type II error: 1 - power
> (beta <- 1-pow)
[1] 0.36124
```
Edit: visualization

```
xLims <- c(50, 180)
left <- seq(xLims[1], crit, length.out=100)
right <- seq(crit, xLims[2], length.out=100)
yH0r <- dnorm(right, mu0, sigma)
yH1l <- dnorm(left, mu1, sigma)
yH1r <- dnorm(right, mu1, sigma)
curve(dnorm(x, mu0, sigma), xlim=xLims, lwd=2, col="red", xlab="x", ylab="density",
main="Normal distribution under H0 and H1", ylim=c(0, 0.03), xaxs="i")
curve(dnorm(x, mu1, sigma), lwd=2, col="blue", add=TRUE)
polygon(c(right, rev(right)), c(yH0r, numeric(length(right))), border=NA,
col=rgb(1, 0.3, 0.3, 0.6))
polygon(c(left, rev(left)), c(yH1l, numeric(length(left))), border=NA,
col=rgb(0.3, 0.3, 1, 0.6))
polygon(c(right, rev(right)), c(yH1r, numeric(length(right))), border=NA,
density=5, lty=2, lwd=2, angle=45, col="darkgray")
abline(v=crit, lty=1, lwd=3, col="red")
text(crit+1, 0.03, adj=0, label="critical value")
text(mu0-10, 0.025, adj=1, label="distribution under H0")
text(mu1+10, 0.025, adj=0, label="distribution under H1")
text(crit+8, 0.01, adj=0, label="power", cex=1.3)
text(crit-12, 0.004, expression(beta), cex=1.3)
text(crit+5, 0.0015, expression(alpha), cex=1.3)
```
| null |
CC BY-SA 4.0
| null |
2011-02-19T21:13:06.140
|
2018-11-19T08:55:24.940
|
2018-11-19T08:55:24.940
|
1909
|
1909
| null |
7405
|
2
| null |
7400
|
11
| null |
A recent paper that may be worth reading is:
[Cao, Y. Petzold, L.](http://dx.doi.org/10.1016/j.jcp.2005.06.012) Accuracy limitations and the measurement of errors in the stochastic simulation of chemically reacting systems, 2006.
Although this paper's focus is on comparing stochastic simulation algorithms, essentially the main idea is how to compare two histogram.
You can access the [pdf](http://engineering.ucsb.edu/~cse/Files/distributiondistance042.pdf) from the author's webpage.
| null |
CC BY-SA 2.5
| null |
2011-02-19T22:11:05.970
|
2011-02-19T22:11:05.970
| null | null |
8
| null |
7406
|
2
| null |
7211
|
0
| null |
Andrew McCallum (UMass) has a few NLP related software projects available on his [webpage](http://www.cs.umass.edu/~mccallum/code.html). These are all in Java (I think) with source code available.
| null |
CC BY-SA 2.5
| null |
2011-02-19T22:26:45.590
|
2011-02-19T22:26:45.590
| null | null |
1913
| null |
7407
|
1
| null | null |
6
|
300
|
I recently stumbled upon the concept of [sample complexity](http://www.google.com/search?q=%22sample%20complexity%22), and was wondering if there are any texts, papers or tutorials that provide:
- An introduction to the concept (rigorous or informal)
- An analysis of the sample complexity of established and popular classification methods or kernel methods.
- Advice or information on how to measure it in practice.
Any help with the topic would be greatly appreciated.
|
Measuring and analyzing sample complexity
|
CC BY-SA 3.0
| null |
2011-02-19T22:41:23.000
|
2014-10-28T12:27:00.393
|
2012-05-02T14:19:06.373
|
2798
|
2798
|
[
"machine-learning"
] |
7408
|
1
|
7409
| null |
4
|
890
|
Here's a real basic question. I'm trying to teach myself a bit of stats with Verzani's Using R for Introductory Statistics.
In question 5.13 he asks: A sample of 100 people is drawn from a population of 600,000. If it is known that 40% of the population has a specific attribute, what is the probability that 35 or fewer in the sample have that attribute.
Now, I guess you're supposed to reason that the population is sufficiently large that assuming independent Bernoulli trials is close enough. Then, you get your answer like this:
```
> pbinom(35,100,0.4)
[1] 0.1794694
```
My question is this. How would you go about answering a question like that without assuming independence, say if the population was smaller?
I'm sure it'll become obvious after I read more. Just trying to make sure I'm not missing something. Sorry for the intro level question.
Thanks!
|
Sampling from a fixed population
|
CC BY-SA 2.5
| null |
2011-02-19T23:46:53.130
|
2011-02-20T00:44:16.023
| null | null |
3317
|
[
"self-study",
"sampling"
] |
7409
|
2
| null |
7408
|
9
| null |
When sampling without replacement, the distribution is a hypergeometric one. The problem is usually presented as follows: in an urn with $n$ (600,000) marbles, $m$ (40% = 240,000) are red, $n-m$ (60% = 360,000) are black. What is the probability of picking $r$ (35) red marbles in a sample of $k$ (100) marbles? The error from assuming sampling with replacement is really small when $n$ is very large, such as in your case (thanks Henry!).
$\begin{array}{r|ll|l}
~ & y_{1} & y_{2} & \Sigma \\\hline
x_{1} & r & m-r & m \\
x_{2} & k-r & ~ & n-m \\\hline
\Sigma & k & n-k & n
\end{array}$
In R: `dhyper(r, m, n-m, k)`. For the total probability of $0, \ldots, r$ marbles: `phyper(r, m, n-m, k)`:
```
> phyper(35, 240000, 360000, 100)
[1] 0.1794489
# check
> sum(dhyper(0:35, 240000, 360000, 100))
[1] 0.1794489
```
Google "finite population correction" for correcting the error when computing sample mean and variance with small populations.
| null |
CC BY-SA 2.5
| null |
2011-02-20T00:05:12.063
|
2011-02-20T00:44:16.023
|
2011-02-20T00:44:16.023
|
1909
|
1909
| null |
7410
|
2
| null |
7400
|
30
| null |
The standard answer to this question is the [chi-squared test](http://www.itl.nist.gov/div898/handbook/eda/section3/eda35f.htm). The KS test is for unbinned data, not binned data. (If you have the unbinned data, then by all means use a KS-style test, but if you only have the histogram, the KS test is not appropriate.)
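A minimal sketch in R, assuming `h1` and `h2` are vectors of raw counts over the same bins; stacking them into a two-row table and calling `chisq.test()` gives a chi-squared test of homogeneity:
```
h1 <- c(5, 12, 20, 9, 4)     # hypothetical bin counts (same bins for both histograms)
h2 <- c(3, 10, 25, 8, 6)
chisq.test(rbind(h1, h2))    # tests whether the two histograms come from the same distribution
```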
| null |
CC BY-SA 2.5
| null |
2011-02-20T06:29:59.583
|
2011-02-20T06:29:59.583
| null | null |
21874
| null |
7411
|
2
| null |
3
|
3
| null |
[Meta.Numerics](https://web.archive.org/web/20110123164637/http://www.meta-numerics.net/) is a .NET library with good support for statistical analysis.
Unlike R (an S clone) and Octave (a Matlab clone), it does not have a "front end". It is more like GSL, in that it is a library that you link to when you are writing your own application that needs to do statistical analysis. C# and Visual Basic are more common programming languages than C/C++ for line-of-business apps, and Meta.Numerics has more extensive support for statistical constructs and tests than GSL.
| null |
CC BY-SA 4.0
| null |
2011-02-20T07:03:34.513
|
2022-11-27T23:08:57.587
|
2022-11-27T23:08:57.587
|
362671
|
21874
| null |
7412
|
1
| null | null |
4
|
1926
|
I want to model how traffic will flow on real networks (not just the internet, also, say, Intel's internal LAN).
Is there a place I can get real network topologies data I can use?
|
Where can I get real data of big network topology?
|
CC BY-SA 2.5
| null |
2011-02-20T08:03:25.657
|
2012-06-04T09:15:27.240
| null | null |
3328
|
[
"networks",
"topologies"
] |
7413
|
2
| null |
7211
|
4
| null |
Here are two further integrated projects:
- Python Natural Language Toolkit (easy installation, good documentation)
- Java MALLET (no experience with it, but looks promising; included in the link given by @Nick)
Both are open-source software.
| null |
CC BY-SA 2.5
| null |
2011-02-20T09:20:16.843
|
2011-02-20T09:20:16.843
| null | null |
930
| null |
7414
|
2
| null |
6234
|
11
| null |
This won't compete with @Shane's answer because circular displays are really well suited for displaying complex relationships with high-dimensional datasets.
For Venn diagrams, I've been using the [venneuler](http://cran.r-project.org/web/packages/venneuler/index.html) R package. It has a simple yet intuitive interface and produces nifty diagrams with transparency, compared to the basic `venn()` function [described](http://www.jstatsoft.org/v11/c01/paper) in the Journal of Statistical Software. It does not handle more than 3 categories, though. Another project is [eVenn](http://cran.r-project.org/web/packages/eVenn/index.html), which deals with $K=4$ sets.
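As a minimal illustration of the `venneuler` interface, with made-up set sizes and overlap (the weights below are purely illustrative):
```
library(venneuler)
vd <- venneuler(c(A = 0.3, B = 0.3, "A&B" = 0.1))  # set sizes and their intersection
plot(vd)
```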
More recently, I came across a new package that deals with higher-order relation sets, and probably allows one to reproduce some of the Venn diagrams shown on Wikipedia or on this webpage, [What is a Venn Diagram?](http://www.combinatorics.org/Surveys/ds5/VennWhatEJC.html), but it is also limited to $K=4$ sets. It is called VennDiagram, but see the reference paper: [VennDiagram: a package for the generation of highly-customizable Venn and Euler diagrams in R](http://www.ncbi.nlm.nih.gov/pubmed/21269502) (Chen and Boutros, BMC Bioinformatics 2011, 12:35).
For further reference, you might be interested in
>
Kestler et al., Generalized Venn
diagrams: a new method of visualizing
complex genetic set relations,
Bioinformatics, 21(8), 1592-1595
(2004).
Venn diagrams have their limitations, though. In this respect, I like the approach taken by Robert Kosara in [Sightings: A Vennerable Challenge](http://eagereyes.org/blog/2008/sightings-a-vennerable-challenge.html), or with [Parallel Sets](http://kosara.net/papers/2010/Kosara_BeautifulVis_2010.pdf) (but see also [this discussion](http://www.stat.columbia.edu/~cook/movabletype/archives/2007/10/venn_diagram_ch.html) on Andrew Gelman weblog).
| null |
CC BY-SA 2.5
| null |
2011-02-20T09:40:05.920
|
2011-02-20T09:48:31.850
|
2011-02-20T09:48:31.850
|
930
|
930
| null |
7415
|
1
| null | null |
10
|
25072
|
What are the statistical techniques to create a sample set that is representative of the entire population (with a known confidence level)?
Also,
- How to validate whether the sample fits the overall dataset?
- Is it possible without parsing the entire dataset (which could be billions of records)?
|
How to make representative sample set from a large overall dataset?
|
CC BY-SA 2.5
| null |
2011-02-20T09:54:18.693
|
2011-02-20T19:28:51.147
|
2011-02-20T09:56:10.747
|
930
|
3292
|
[
"sampling",
"sample-size",
"validation"
] |
7416
|
2
| null |
7048
|
8
| null |
In addition to linking quantitative or qualitative data to spatial patterns, as illustrated by @whuber, I would like to mention the use of EDA, with brushing and the various ways of linking plots together, for longitudinal and high-dimensional data analysis.
Both are discussed in the excellent book, [Interactive and Dynamic Graphics for Data Analysis With R and GGobi](http://www.ggobi.org/book/), by Dianne Cook and Deborah F. Swayne (Springer UseR!, 2007), that you surely know. The authors have a nice discussion on EDA in Chapter 1, justifying the need for EDA to "force the unexpected upon us", quoting John Tukey (p. 13): The use of interactive and dynamic displays is neither [data snooping](http://en.wikipedia.org/wiki/Data_dredging), nor preliminary data inspection (e.g., purely graphical summaries of the data), but it is merely seen as an interactive investigation of the data which might precede or complement pure hypothesis-based statistical modeling.
Using GGobi together with its R interface ([rggobi](http://cran.r-project.org/web/packages/rggobi/index.html)) also solves the problem of how to generate static graphics for intermediate report or final publication, even with [Projection Pursuit](http://en.wikipedia.org/wiki/Projection_pursuit) (pp. 26-34), thanks to the [DescribeDisplay](http://cran.r-project.org/web/packages/DescribeDisplay/index.html) or [ggplot2](http://had.co.nz/ggplot2/) packages.
In the same line, [Michael Friendly](http://www.datavis.ca/) has long advocated the use of data visualization in Categorical Data Analysis, which has been largely exemplified in the vcd package, but also in the more recent [vcdExtra](http://cran.r-project.org/web/packages/vcdExtra/index.html) package (including dynamic viz. through the [rgl](http://cran.r-project.org/web/packages/rgl/index.html) package), which acts as a glue between the [vcd](http://cran.r-project.org/web/packages/vcd/index.html) and [gnm](http://cran.r-project.org/web/packages/gnm/index.html) packages for extending log-linear models. He recently gave a nice summary of that work during the [6th CARME](http://carme2011.agrocampus-ouest.fr/) conference, [Advances in Visualizing Categorical Data Using the vcd, gnm and vcdExtra Packages in R](http://www.datavis.ca/papers/adv-vcd-4up.pdf).
Hence, EDA can also be thought of as providing a visual explanation of data (in the sense that it may account for unexpected patterns in the observed data), prior to a purely statistical modeling approach, or in parallel to it. That is, EDA not only provides useful ways for studying the internal structure of the data at hand, but it may also help to refine and/or summarize statistical models applied on it. It is in essence what [biplots](http://en.wikipedia.org/wiki/Biplot) allow to do, for example. Although they are not multidimensional analysis techniques per se, they are tools for visualizing results from multidimensional analysis (by giving an approximation of the relationships when considering all individuals together, or all variables together, or both). Factor scores can be used in subsequent modeling in place of the original metric to either reduce the dimensionality or to provide intermediate levels of representation.
Sidenote
At the risk of being old-fashioned, I'm still using `xlispstat` ([Luke Tierney](http://www.stat.uiowa.edu/~luke/)) from time to time. It has simple yet effective functionalities for interactive displays, currently not available in base R graphics. I'm not aware of similar capabilities in Clojure+Incanter (+Processing).
| null |
CC BY-SA 2.5
| null |
2011-02-20T10:46:02.080
|
2011-02-20T10:46:02.080
| null | null |
930
| null |
7417
|
2
| null |
7389
|
5
| null |
I used a paired t-test to compare my algorithm to the GA, although I had about 200 test cases. You can use a non-parametric alternative such as the Wilcoxon signed-rank test. Regardless of what you use to test the statistical significance, bear in mind the "real-life" significance. If the performance improvement that your algorithm provides is below measurement limits, or below any practical interest, then even if it is statistically significant (i.e. a "good" p-value), it doesn't matter.
| null |
CC BY-SA 3.0
| null |
2011-02-20T11:18:24.227
|
2014-02-05T20:13:19.270
|
2014-02-05T20:13:19.270
|
35895
|
1496
| null |
7418
|
2
| null |
7385
|
12
| null |
For R two options spring to mind, both of which I am only vaguely familiar with at best.
The first is the `pscl` package, which can fit zero-truncated, zero-inflated and hurdle models in a very nice, flexible manner. The `pscl` package suggests the use of the `sandwich` package, which provides "Model-robust standard error estimators for cross-sectional, time series and longitudinal data". So you could fit your count model and then use the `sandwich` package to estimate an appropriate covariance matrix for the residuals, taking into account the longitudinal nature of the data.
The second option might be to look the `geepack` package which looks like it can do what you want but only for a negative binomial model with known theta, as it will fit any type of GLM that R's `glm()` function can (so use the family function from MASS).
A third option has raised its head: `gamlss` and its add-on package `gamlss.tr`. The latter includes a function `gen.trun()` that can turn any of the distributions supported by `gamlss()` into a truncated distribution in a flexible way - you can specify a negative binomial distribution left-truncated at 0, for example. `gamlss()` itself includes support for random effects, which should take care of the longitudinal nature of the data. It isn't immediately clear, however, whether you have to use at least one smooth function of a covariate in the model or can just model everything as linear functions as in a GLM.
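A rough sketch of the first option, assuming a data frame `d` with a zero-truncated count outcome `y` and a covariate `x` (these names are made up); the last step relies on `pscl` providing the estimating-function methods that `sandwich` expects:
```
library(pscl)      # hurdle and zero-inflated count models
library(sandwich)  # robust covariance estimators
library(lmtest)    # coeftest()

# 'd', 'y' and 'x' are placeholders for your own data
m <- hurdle(y ~ x, data = d, dist = "negbin")  # negative binomial hurdle model
coeftest(m, vcov = sandwich)                   # coefficients with model-robust standard errors
```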
| null |
CC BY-SA 3.0
| null |
2011-02-20T11:51:29.197
|
2012-01-21T18:35:40.003
|
2012-01-21T18:35:40.003
|
1390
|
1390
| null |
7419
|
1
|
7674
| null |
10
|
1588
|
I have been doing some casual internet research on biclusters. (I have read the Wiki article several times.) So far, it seems as if there are few definitions or standard terminology.
- I was wondering if there were any standard papers or books that anybody who is interested in algorithms for finding biclusters should read.
- Is it possible to say what is the state of the art in the field? I was intrigued by the notion of finding biclusters using genetic algorithms, so I would appreciate comments on that approach in particular in the context of other approaches.
- Usually in clustering, the goal is to partition the data-set into groups where each element is in some group. Do bicluster algorithms also seek to put all elements in a particular group?
|
Getting started with biclustering
|
CC BY-SA 2.5
| null |
2011-02-20T12:13:24.220
|
2011-12-21T08:32:30.810
|
2011-12-21T08:32:30.810
|
264
|
847
|
[
"clustering",
"data-mining"
] |
7420
|
2
| null |
7415
|
2
| null |
On your second question first, you might ask, "how was the data entered?" If you think that the data was entered in a relatively arbitrary fashion (i.e., independent of any observable or unobservable characteristics of your observations that might influence your ultimate analysis using the data), then you might consider the first 5 million, say, or however many you're comfortable working with, as representative of the full sample and select randomly from this group to create a sample that you can work with.
To compare two empirical distributions, you can use qq-plots and the two-sample Kolmogorov–Smirnov non-parametric test for differences in distributions (see, e.g., here: [http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test)). In this case, you would test the distribution of each variable in your sample against the distribution of that variable in your "full" data set (again, it could be just 5 million observations from your full sample). The KS test can suffer from low power (i.e., it's hard to reject the null hypothesis of no difference between the groups), but, with that many observations, you should be okay.
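For illustration, a minimal sketch in R, assuming `full_x` holds a variable from the (large) dataset and `samp_x` holds the same variable in your random sample (both names are made up):
```
ks.test(samp_x, full_x)    # two-sample KS test for a difference in distributions
qqplot(full_x, samp_x)     # visual check: points should lie close to a straight line
abline(0, 1)
```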
| null |
CC BY-SA 2.5
| null |
2011-02-20T16:16:58.580
|
2011-02-20T16:24:28.540
|
2011-02-20T16:24:28.540
|
401
|
401
| null |
7422
|
1
| null | null |
3
|
1663
|
I have network traffic data in the following for for each hour of a ten day period as follows in a R dataset.
```
Day Hour Volume Category
0 00 100 P2P
0 00 50 email
0 00 200 gaming
0 00 200 video
0 00 150 web
0 00 120 P2P
0 00 180 web
0 00 80 email
....
0 01 150 P2P
0 01 200 P2P
0 01 50 Web
...
...
10 23 100 web
10 23 200 email
10 23 300 gaming
10 23 300 gaming
```
As seen there are repetitions of Category within a single hour also. I need to calculate the volatility and the peak hour to average hour ratios of these different application categories.
Volatility: Standard deviation of hourly volumes divided by hourly average.
Peak hour to avg. hour ratio: Ratio of volume of the maximum hour to the volume of the average hour for that application.
So how do I aggregate and calculate these two statistics for each category? I am new to R and don't have much knowledge of how to aggregate and get the averages as mentioned.
So, the final result would look something like this
```
Category Volatility Peak to Avg. Ratio
Web 0.55 1.5
P2P 0.30 2.1
email 0.6 1.7
gaming 0.4 2.9
```
|
Calculating hourly volatility and peak-to-average ratio in R
|
CC BY-SA 2.5
| null |
2011-02-20T16:48:57.387
|
2011-02-21T14:28:56.023
|
2011-02-21T14:28:56.023
|
919
|
2101
|
[
"r",
"aggregation"
] |
7423
|
2
| null |
7422
|
2
| null |
Check out the [plyr package](http://cran.r-project.org/web/packages/plyr/index.html), which has [great documentation](http://had.co.nz/plyr/). While you could solve your problem with `aggregate()` function, I'd argue that learning the plyr family of functions will be worth it in the end.
For your specific problem, this would obtain what you want:
```
stats = ddply(
.data = my_data
, .variables = .( Hour , Category)
, .fun = function(x){
to_return = data.frame(
volatility = sd(x$Volume)/mean(x$Volume)
, pa_ratio = max(x$Volume)/mean(x$Volume)
)
return( to_return )
}
)
```
| null |
CC BY-SA 2.5
| null |
2011-02-20T18:24:48.317
|
2011-02-20T18:24:48.317
| null | null |
364
| null |
7424
|
2
| null |
7415
|
8
| null |
If you don't wish to parse the entire data set then you probably can't use [stratified sampling](http://en.wikipedia.org/wiki/Stratified_sampling), so I'd suggest taking a large [simple random sample](http://en.wikipedia.org/wiki/Simple_random_sample). By taking a random sample, you ensure that the sample will, on average, be representative of the entire dataset, and standard statistical measures of precision such as standard errors and confidence intervals will tell you how far off the population values your sample estimates are likely to be, so there's no real need to validate that a sample is representative of the population unless you have some concerns that is was truly sampled at random.
How large a simple random sample? Well, the larger the sample, the more precise your estimates will be. As you already have the data, conventional sample size calculations aren't really applicable -- you may as well use as much of your dataset as is practical for computing. Unless you're planning to do some complex analyses that will make computation time an issue, a simple approach would be to make the simple random sample as large as can be analysed on your PC without leading to [paging](http://en.wikipedia.org/wiki/Paging) or other memory issues. One rule of thumb is to limit the size of your dataset to no more than half your computer's RAM so as to have space to manipulate it and leave space for the OS and maybe a couple of other smaller applications (such as an editor and a web browser). Another limitation is that 32-bit Windows operating systems won't allow the address space for any single application to be larger than $2^{31}$ bytes = 2.1GB, so if you're using 32-bit Windows, 1GB may be a reasonable limit on the size of a dataset.
It's then a matter of some simple arithmetic to calculate how many observations you can sample given how many variables you have for each observation and how many bytes each variable takes up.
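For illustration, the arithmetic with some made-up numbers (a 1 GB working budget and 20 double-precision numeric variables):
```
budget_bytes <- 1 * 1024^3                     # 1 GB working limit (illustrative)
n_vars       <- 20                             # hypothetical number of variables
bytes_each   <- 8                              # double-precision numeric
floor(budget_bytes / (n_vars * bytes_each))    # roughly 6.7 million observations
```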
| null |
CC BY-SA 2.5
| null |
2011-02-20T18:49:31.990
|
2011-02-20T19:28:51.147
|
2011-02-20T19:28:51.147
|
449
|
449
| null |
7425
|
2
| null |
7175
|
7
| null |
I think the best quality measure for clustering is the cluster assumption, as given by Seeger in [Learning with labeled and unlabeled data](http://webcache.googleusercontent.com/search?q=cache:http://people.kyb.tuebingen.mpg.de/seeger/papers/review.pdf):
>
For example, assume X = R^d and the validity of the “cluster assumption”, namely that two points x, x′ should have the same label t if there is a path between them in X which passes only through regions of relatively high P(x).
Yes, this brings the whole idea of centroids and centers down. After all, these are rather arbitrary concepts if you think about the fact that your data might lie within a non-linear submanifold of the space you are actually operating in.
You can easily construct a synthetic dataset where mixture models break down.
Long story short: I'd measure the quality of a clustering algorithm in a minimax way. The best clustering algorithm is the one which minimizes the maximal distance of a point to its nearest neighbor of the same cluster while it maximizes the minimal distance of a point to its nearest neighbor from a different cluster.
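A rough sketch of how one might compute those two quantities for a given clustering in R (the k-means fit on a built-in dataset below is purely illustrative):
```
minimax_quality <- function(X, cl) {
  D <- as.matrix(dist(X))
  diag(D) <- Inf                              # exclude self-distances
  same <- outer(cl, cl, "==")
  within  <- sapply(seq_along(cl), function(i) min(D[i,  same[i, ]]))  # NN in own cluster
  between <- sapply(seq_along(cl), function(i) min(D[i, !same[i, ]]))  # NN in another cluster
  c(max_within_nn = max(within), min_between_nn = min(between))
}

X  <- iris[, 1:4]
cl <- kmeans(X, 3)$cluster
minimax_quality(X, cl)
```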
You might also be interested in [A Nonparametric Information Theoretic Clustering Algorithm](http://www.icml2010.org/papers/168.pdf).
| null |
CC BY-SA 3.0
| null |
2011-02-20T19:52:06.927
|
2013-07-02T04:33:56.040
|
2013-07-02T04:33:56.040
|
7290
|
2860
| null |
7426
|
1
|
7428
| null |
3
|
11200
|
How can I conduct an Egger's test using SPSS17? For each study included in the meta-analysis I know effect size and sample size of patients and controls groups.
|
Egger's test in SPSS
|
CC BY-SA 2.5
| null |
2011-02-20T21:27:02.557
|
2011-02-23T04:50:37.073
|
2011-02-20T23:42:57.017
| null |
3333
|
[
"spss",
"meta-analysis",
"funnel-plot",
"publication-bias"
] |
7427
|
2
| null |
7426
|
2
| null |
I don't use PASW anymore, but implementing Egger's test for asymmetry is quite simple. First, please look at Egger's [paper](http://goo.gl/6gnEj) where he proposes the "theory" behind the test.
Basically you have two variables: (i) the normalized effect estimate (your estimate divided by its standard error), and (ii) the precision (the reciprocal of the standard error of the estimate). Then you should fit a simple linear regression and test whether the intercept $\beta_0 = 0$.
| null |
CC BY-SA 2.5
| null |
2011-02-20T21:53:59.553
|
2011-02-23T04:50:37.073
|
2011-02-23T04:50:37.073
|
609
|
609
| null |
7428
|
2
| null |
7426
|
5
| null |
In order to conduct Egger's regression test you will also need the standard errors ($SE_i$) of your effect sizes ($ES_i$). Then generate the so called standard normal deviate (SND) which is defined as effect size divided by its standard error ($ES_i / SE_i$). Next, generate the precision which is $\frac{1}{SE_i}$. The regression model is: $SND = a + b \cdot precision$ (I know the error term is missing but let's keep it simple). Finally, estimate this regression model (unweighted) in SPSS/PASW (see [Egger et al 1997](http://www.bmj.com/content/315/7109/629.full): "Methods: Measures of funnel plot asymmetry").
The logic of Egger's regression test is explained in another CrossValidated thread: "[Egger’s linear regression method intercept in meta analysis](https://stats.stackexchange.com/questions/7040/eggers-linear-regression-method-intercept-in-meta-analysis)".
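For illustration, a sketch of the regression step in R rather than SPSS, with made-up vectors of effect sizes and standard errors:
```
es <- c(0.10, 0.35, 0.22, 0.41, 0.05)    # hypothetical effect sizes
se <- c(0.12, 0.20, 0.15, 0.25, 0.08)    # their standard errors
snd       <- es / se                     # standard normal deviate
precision <- 1 / se
summary(lm(snd ~ precision))             # test whether the intercept differs from 0
```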
| null |
CC BY-SA 2.5
| null |
2011-02-20T22:12:41.893
|
2011-02-20T22:12:41.893
|
2017-04-13T12:44:29.013
|
-1
|
307
| null |
7429
|
1
|
7431
| null |
4
|
5551
|
What should I do if the expected value in a Chi-square goodness-of-fit test is zero?
I know there's Fisher's test but I have a very large table!
|
Computing chi-square for large tables with some expected cell counts equal to zero
|
CC BY-SA 2.5
| null |
2011-02-21T00:53:30.507
|
2011-02-21T08:14:45.830
|
2011-02-21T08:08:14.360
|
449
|
3338
|
[
"chi-squared-test",
"goodness-of-fit"
] |
7430
|
1
|
7434
| null |
7
|
8274
|
A uniform prior for a scale parameter (like the variance) is uniform on the logarithmic scale.
What functional form does this prior have on the linear scale? And why so?
|
Creating a uniform prior on the logarithmic scale
|
CC BY-SA 2.5
| null |
2011-02-21T02:44:28.760
|
2019-06-30T03:10:08.347
|
2011-02-21T05:59:30.047
|
183
|
1098
|
[
"bayesian",
"prior"
] |
7431
|
2
| null |
7429
|
6
| null |
If the expected value of a cell is zero in a goodness of fit test (I'm assuming you really mean goodness of fit, where the fit is to a theoretical distribution, not another observed distribution) then there are two possibilities:
- You also observed this value zero times. Just discard the zero expected value and try again. Zero observations is the only value you should get with zero expected value.
- You observed this value more than zero times. In this case your null hypothesis is obviously wrong, because this value has a nonzero probability.
| null |
CC BY-SA 2.5
| null |
2011-02-21T02:54:03.923
|
2011-02-21T02:54:03.923
| null | null |
1347
| null |
7432
|
1
| null | null |
5
|
10621
|
In SPSS Version 19 there seems to be a new feature called Automatic Linear Modelling. It creates a 'Model' (which is new to me) and the function seems to combine a number of the functions that are typically required for prediction model development.
The functionality seems incomplete, with only a subset of prediction selection techniques; most notably, it's missing backwards stepwise.
### QUESTIONS
- Do people see this as good or evil?
- And if 'good' then are there ways to decompose what it is doing?
- Specifically how do I find the regression equation co-efficients when bagging or boosting?
To me it seems to hide a lot of steps and I'm not exactly sure how it's creating what it presents. So any pointers to tutorials or the like (as the SPSS documentation isn't great) are appreciated.
|
Is automatic linear modelling in SPSS a good or bad thing?
|
CC BY-SA 3.0
| null |
2011-02-21T03:21:45.580
|
2011-08-18T21:40:36.737
|
2011-08-18T18:13:53.687
| null |
3189
|
[
"regression",
"modeling",
"spss"
] |
7433
|
2
| null |
7432
|
5
| null |
I had a quick look at the [IBM SPSS advertising material](ftp://public.dhe.ibm.com/common/ssi/ecm/en/ytd03023usen/YTD03023USEN.PDF).
It sounds like it is part of a general move on the part of IBM/SPSS to get involved with predictive analytics.
Terms like automatic data preparation, boosting, bagging, and automated model selection are popular in data mining and predictive analytics communities.
In that sense you may see similarities with open source tools like [Rattle](http://rattle.togaware.com/) and [Weka](http://www.cs.waikato.ac.nz/ml/weka/).
You might find useful [this article by John Maindonald introducing data mining](http://maths.anu.edu.au/~johnm/dm/dmpaper.html).
In summary, if you have some combination of the following factors, then such tools may interest you:
- interested in building predictive models (as opposed to testing apriori hypotheses)
- you have lots of data
- you want some hand holding on the steps of data analysis
| null |
CC BY-SA 2.5
| null |
2011-02-21T03:36:42.370
|
2011-02-21T05:38:03.153
|
2011-02-21T05:38:03.153
|
183
|
183
| null |
7434
|
2
| null |
7430
|
15
| null |
It's just a standard change of variables; the (monotone & 1-1) transformation is $y = \exp(x)$ with inverse $x=\log(y)$ and Jacobian $\frac{dx}{dy} = \frac{1}{y}$.
With a uniform prior $p_x(x) \propto 1$ on $\mathbb{R}$ (i.e., on the log scale) we get $p_y(y) = p_x(x(y)) |\frac{dx}{dy}| \propto \frac{1}{y}$ on $(0, \infty)$.
Edit: Wikipedia has a bit on transformations of random variables: [http://en.wikipedia.org/wiki/Probability_density_function#Dependent_variables_and_change_of_variables](http://en.wikipedia.org/wiki/Probability_density_function#Dependent_variables_and_change_of_variables). Similar material will be in any intro probability book. Jim Pitman's "Probability" presents the material in a pretty distinctive way as well IIRC.
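A quick simulation check in R: draw $x$ uniformly on an interval of the log scale and look at the implied density of $y = \exp(x)$, which should be proportional to $1/y$ (the interval endpoints below are arbitrary):
```
set.seed(1)
x <- runif(1e6, log(0.1), log(10))   # uniform on the log scale
y <- exp(x)
hist(y, breaks = 200, freq = FALSE, main = "density of y = exp(x)")
curve(1 / (x * (log(10) - log(0.1))), add = TRUE, col = "red", lwd = 2)  # 1/y, normalized
```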
| null |
CC BY-SA 2.5
| null |
2011-02-21T05:27:07.727
|
2011-02-21T07:00:52.987
|
2011-02-21T07:00:52.987
|
26
|
26
| null |
7435
|
2
| null |
7402
|
1
| null |
To supplement caracal's answer, if you are looking for a user-friendly GUI option for calculating Type II error rates or power for many common designs including the ones implied by your question, you may wish to check out the free software, [G Power 3](http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/).
| null |
CC BY-SA 2.5
| null |
2011-02-21T06:37:28.657
|
2011-02-21T06:37:28.657
| null | null |
183
| null |
7436
|
2
| null |
7429
|
3
| null |
If you are using [Pearson's chi-square test as a test independence](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test#Test_of_independence) of two variables in a two-way [contingency table](http://en.wikipedia.org/wiki/Contingency_table), you'll get a zero expected value only if you have a whole row of zeroes or a whole column of zeroes. You can simply remove that row or column.
| null |
CC BY-SA 2.5
| null |
2011-02-21T08:14:45.830
|
2011-02-21T08:14:45.830
| null | null |
449
| null |
7438
|
1
| null | null |
1
|
10776
|
As a basic question in Regression Analysis, I wanted to ask how can I calculate Margin of Error when I fit a straight line to a set of data.
Assume that I have variation of parameter $A$ as a function of parameter $B$, then $A=mB \pm e$, where $m$ is the slope of the fitted straight line and $e$ is what I'm looking for!
Part of my data:
{{39.7678, 2320.3}, {30.8438, 1614.21}, {125.846, 3078.81}, {55.2345, \
1947.98}, {22.0671, 972.995}, {30.1827, 701.99}, {29.5734, 837.784}, \
{24.6913, 1134.23}, {27.2493, 918.887}, {62.7684, 4535.07}, {101.449, \
5499.83}, {125.248, 6513.04}, {187.409, 6257.72}, {174.138, 5243.63}, \
{120.747, 3768.02}, {84.178, 3453.12}, {60.2404, 3075.15}, {63.8622, \
3517.73}, {101.9, 7240.11}, {90.6265, 5706.74}, {100.897, 7353.84}, \
{159.316, 9867.36}, {109.798, 11471.2}, {104.311, 6924.54}, {82.7057, \
6339.06}, {140.205, 6555.52}, {173.469, 8644.27}, {138.432, 9655.86}, \
{95.2955, 5643.33}, {64.563, 3848.77}, {50.7936, 4733.24}, {34.776,
0. - 2707.89 I}, {25.3775, 6158.}}
|
How to calculate margin of error in linear regression?
|
CC BY-SA 2.5
| null |
2011-02-21T09:57:19.063
|
2011-02-21T19:18:51.447
|
2011-02-21T10:07:55.150
|
2116
| null |
[
"regression"
] |
7439
|
1
|
7444
| null |
33
|
61965
|
You can have data in wide format or in long format.
This is quite an important thing, as the useable methods are different, depending on the format.
I know you have to work with `melt()` and `cast()` from the reshape package, but there seems some things that I don't get.
Can someone give me a short overview how you do this?
|
How to change data between wide and long formats in R?
|
CC BY-SA 2.5
| null |
2011-02-21T10:27:05.680
|
2016-05-10T18:01:17.903
|
2011-02-21T16:30:39.317
|
8
|
3140
|
[
"data-transformation",
"r"
] |
7440
|
1
|
7449
| null |
143
|
176928
|
I need to determine the KL-divergence between two Gaussians. I am comparing my results to [these](http://allisons.org/ll/MML/KL/Normal/), but I can't reproduce their result. My result is obviously wrong, because the KL is not 0 for KL(p, p).
I wonder where I am doing a mistake and ask if anyone can spot it.
Let $p(x) = N(\mu_1, \sigma_1)$ and $q(x) = N(\mu_2, \sigma_2)$. From Bishop's
PRML I know that
$$KL(p, q) = - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx$$
where integration is done over all real line, and that
$$\int p(x) \log p(x) dx = -\frac{1}{2} (1 + \log 2 \pi \sigma_1^2),$$
so I restrict myself to $\int p(x) \log q(x) dx$, which I can write out as
$$-\int p(x) \log \frac{1}{(2 \pi \sigma_2^2)^{(1/2)}} e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx,$$
which can be separated into
$$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \log e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}} dx.$$
Taking the log I get
$$\frac{1}{2} \log (2 \pi \sigma_2^2) - \int p(x) \bigg(-\frac{(x-\mu_2)^2}{2 \sigma_2^2} \bigg) dx,$$
where I separate the sums and get $\sigma_2^2$ out of the integral.
$$\frac{1}{2} \log (2 \pi \sigma^2_2) + \frac{\int p(x) x^2 dx - \int p(x) 2x\mu_2 dx + \int p(x) \mu_2^2 dx}{2 \sigma_2^2}$$
Letting $\langle \rangle$ denote the expectation operator under $p$, I can rewrite this as
$$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\langle x^2 \rangle - 2 \langle x \rangle \mu_2 + \mu_2^2}{2 \sigma_2^2}.$$
We know that $var(x) = \langle x^2 \rangle - \langle x \rangle ^2$. Thus
$$\langle x^2 \rangle = \sigma_1^2 + \mu_1^2$$
and therefore
$$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2 - 2 \mu_1 \mu_2 + \mu_2^2}{2 \sigma_2^2},$$
which I can put as
$$\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}.$$
Putting everything together, I get to
\begin{align*}
KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\
&= \frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\
&= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2}.
\end{align*}
Which is wrong, since it is not $0$ for two identical Gaussians (it equals $\tfrac{1}{2}$).
Can anyone spot my error?
Update
Thanks to mpiktas for clearing things up. The correct answer is:
$KL(p, q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}$
|
KL divergence between two univariate Gaussians
|
CC BY-SA 4.0
| null |
2011-02-21T10:30:18.527
|
2022-12-27T09:52:24.333
|
2022-12-27T09:31:54.853
|
362671
|
2860
|
[
"normal-distribution",
"kullback-leibler"
] |
7441
|
2
| null |
7438
|
2
| null |
It seems that you want the residuals of a linear regression without the intercept term, where the dependent variable is $A$ and the independent variable is $B$. This can be done with various statistical packages. Here is the implementation in R.
```
aa <-"(39.7678, 2320.3}, {30.8438, 1614.21}, {125.846, 3078.81}, {55.2345, 1947.98}, {22.0671, 972.995}, {30.1827, 701.99}, {29.5734, 837.784}, {24.6913, 1134.23}, {27.2493, 918.887}, {62.7684, 4535.07}, {101.449, 5499.83}, {125.248, 6513.04}, {187.409, 6257.72}, {174.138, 5243.63}, {120.747, 3768.02}, {84.178, 3453.12}, {60.2404, 3075.15}, {63.8622, 3517.73}, {101.9, 7240.11}, {90.6265, 5706.74}, {100.897, 7353.84}, {159.316, 9867.36}, {109.798, 11471.2}, {104.311, 6924.54}, {82.7057, 6339.06}, {140.205, 6555.52}, {173.469, 8644.27}, {138.432, 9655.86}, {95.2955, 5643.33}, {64.563, 3848.77}, {50.7936, 4733.24}, {34.776, 2707.89 }, {25.3775, 6158)"
aa <- gsub(" *","",aa)
aa <- gsub("[}],[{]",");(",aa)
df<-t(sapply(strsplit(aa,";")[[1]],function(l)eval(parse(text=paste("c",l,sep="")))))
rownames(df) <-NULL
colnames(df)<-c("A","B")
```
All the code above is just to read your data into R. It would have helped a lot if you simply provided a link to a txt or csv file. Note that I fixed some errors in the last two sets of data.
Here are the residuals:
```
residuals(lsfit(y=df[,"A"],x=df[,"B"],intercept=FALSE))
[1] 1.4348447 4.1759382 74.9819367 23.0525277 5.9925519
[6] 18.5853413 15.7326309 5.9530386 12.0686534 -12.1540823
[11] 10.5880176 17.6480900 84.0271524 87.5096181 58.4967124
[16] 27.1300775 9.4368024 5.7468757 -17.7116076 -3.6527783
[21] -20.5935056 -3.6995884 -79.7141307 -10.0871737 -22.0199405
[26] 31.9032911 30.6597014 -21.0894627 2.0637986 0.9786668
[31] -27.4027873 -9.9602093 -76.3569045
```
Note that these residuals have non-zero mean.
```
res <- residuals(lsfit(y=df[,"A"],x=df[,"B"],intercept=FALSE))
mean(res)
[1] 6.779518
```
So clearly you need to include an intercept, since otherwise you cannot write $\pm e$. As @whuber pointed out, you probably need the root mean square error. For the model with the intercept you can calculate it as follows:
```
> res <- residuals(lsfit(y=df[,"A"],x=df[,"B"],intercept=TRUE))
> res
[1] -15.716076 -15.933997 61.009708 4.341324 -16.804527 -5.347439
[7] -7.631077 -16.168352 -10.955176 -20.023554 6.761571 18.067708
[13] 83.376799 82.609513 47.412760 14.726472 -4.550764 -6.385968
[19] -14.245054 -6.612126 -16.650343 10.776996 -58.516325 -7.943081
[25] -22.329422 32.500930 40.010676 -7.499212 -1.161282 -9.766886
[31] -34.441787 -25.486854 -77.425155
> sqrt(sum(res^2)/(length(res)-2))
[1] 34.70359
>
```
Note that you can use functions `lm` and `summary` to get all the proper statistics of linear regression:
```
> summary(lm(A~B,data=data.frame(df)))
Call:
lm(formula = A ~ B, data = data.frame(df))
Residuals:
Min 1Q Median 3Q Max
-77.425 -16.168 -7.499 10.777 83.377
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 26.874614 12.027882 2.234 0.0328 *
B 0.012330 0.002169 5.685 3.02e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 34.7 on 31 degrees of freedom
Multiple R-squared: 0.5104, Adjusted R-squared: 0.4946
F-statistic: 32.32 on 1 and 31 DF, p-value: 3.018e-06
```
| null |
CC BY-SA 2.5
| null |
2011-02-21T10:30:24.187
|
2011-02-21T19:18:51.447
|
2011-02-21T19:18:51.447
|
2116
|
2116
| null |
7443
|
2
| null |
7440
|
56
| null |
I did not have a look at your calculation but here is mine with a lot of details.
Suppose $p$ is the density of a normal random variable with mean $\mu_1$ and variance $\sigma^2_1$, and that $q$ is the density of a normal random variable with mean $\mu_2$ and variance $\sigma^2_2$.
The Kullback-Leibler distance from $q$ to $p$ is:
$$\int \left[\log( p(x)) - \log( q(x)) \right] p(x) dx$$
\begin{align}&=\int \left[ -\frac{1}{2} \log(2\pi) - \log(\sigma_1) - \frac{1}{2} \left(\frac{x-\mu_1}{\sigma_1}\right)^2 + \frac{1}{2}\log(2\pi) + \log(\sigma_2) + \frac{1}{2} \left(\frac{x-\mu_2}{\sigma_2}\right)^2 \right]\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\&=\int \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right] \right\}\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\&
=E_{1} \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right]\right\}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2\sigma_1^2} E_1 \left\{(X-\mu_1)^2\right\}\\ &=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2};\end{align}
(Now note that $(X - \mu_2)^2 = (X-\mu_1+\mu_1-\mu_2)^2 = (X-\mu_1)^2 + 2(X-\mu_1)(\mu_1-\mu_2) + (\mu_1-\mu_2)^2$)
\begin{align}&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2}
\left[E_1\left\{(X-\mu_1)^2\right\} + 2(\mu_1-\mu_2)E_1\left\{X-\mu_1\right\} + (\mu_1-\mu_2)^2\right] - \frac{1}{2}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}.\end{align}
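A quick numerical sanity check of this closed form (sketched in R; the parameter values are arbitrary):
```
kl_closed <- function(mu1, s1, mu2, s2)
  log(s2 / s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 1/2

kl_numeric <- function(mu1, s1, mu2, s2)
  integrate(function(x) dnorm(x, mu1, s1) *
              (dnorm(x, mu1, s1, log = TRUE) - dnorm(x, mu2, s2, log = TRUE)),
            -Inf, Inf)$value

kl_closed(0, 1, 1, 2);  kl_numeric(0, 1, 1, 2)
kl_closed(0, 1, 0, 1);  kl_numeric(0, 1, 0, 1)   # zero for identical Gaussians
```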
| null |
CC BY-SA 4.0
| null |
2011-02-21T10:58:18.427
|
2022-12-27T09:52:24.333
|
2022-12-27T09:52:24.333
|
362671
|
3019
| null |
7444
|
2
| null |
7439
|
29
| null |
There are several resources on Hadley Wickham's [website](http://had.co.nz/reshape/) for the package (now called `reshape2`), including a link to a [paper](http://www.jstatsoft.org/v21/i12) on the package in the Journal of Statistical Software.
Here is a brief example from the paper:
```
> require(reshape2)
Loading required package: reshape2
> data(smiths)
> smiths
subject time age weight height
1 John Smith 1 33 90 1.87
2 Mary Smith 1 NA NA 1.54
```
We note that the data are in the wide form. To go to the long form, we make the `smiths` data frame molten:
```
> melt(smiths)
Using subject as id variables
subject variable value
1 John Smith time 1.00
2 Mary Smith time 1.00
3 John Smith age 33.00
4 Mary Smith age NA
5 John Smith weight 90.00
6 Mary Smith weight NA
7 John Smith height 1.87
8 Mary Smith height 1.54
```
Notice how `melt()` chose one of the variables as the id, but we can state explicitly which to use via argument `'id'`:
```
> melt(smiths, id = "subject")
subject variable value
1 John Smith time 1.00
2 Mary Smith time 1.00
3 John Smith age 33.00
4 Mary Smith age NA
5 John Smith weight 90.00
6 Mary Smith weight NA
7 John Smith height 1.87
8 Mary Smith height 1.54
```
Here is another example from `?cast`:
```
#Air quality example
names(airquality) <- tolower(names(airquality))
aqm <- melt(airquality, id=c("month", "day"), na.rm=TRUE)
```
If we store the molten data frame, we can cast into other forms. In the new version of `reshape` (called `reshape2`) there are functions `acast()` and `dcast()` returning an array-like (array, matrix, vector) result or a data frame respectively. These functions also take an aggregating function (eg `mean()`) to provide summaries of data in molten form. For example, following on from the Air Quality example above, we can generate, in wide form, monthly mean values for the variables in the data set:
```
> dcast(aqm, month ~ variable, mean)
month ozone solar.r wind temp
1 5 23.61538 181.2963 11.622581 65.54839
2 6 29.44444 190.1667 10.266667 79.10000
3 7 59.11538 216.4839 8.941935 83.90323
4 8 59.96154 171.8571 8.793548 83.96774
5 9 31.44828 167.4333 10.180000 76.90000
```
There are really only two main functions in `reshape2`: `melt()` and the `acast()` and `dcast()` pairing. Look at the examples in the help pages for these two functions, see Hadley's website (link above) and look at the paper I mentioned. That should get you started.
You might also look into Hadley's [plyr package](http://had.co.nz/plyr/) which does similar things to `reshape2` but is designed to do a whole lot more besides.
| null |
CC BY-SA 2.5
| null |
2011-02-21T11:09:33.423
|
2011-02-21T11:09:33.423
| null | null |
1390
| null |
7446
|
2
| null |
7439
|
8
| null |
- Quick-R has a simple example of using the reshape package
- See also ?reshape (LINK) for the Base R way of moving between wide and long format.
| null |
CC BY-SA 2.5
| null |
2011-02-21T11:14:36.693
|
2011-02-21T11:14:36.693
| null | null |
183
| null |
7447
|
1
| null | null |
15
|
4577
|
Both [Root Mean Square](http://en.wikipedia.org/wiki/Root_mean_square) and [Average absolute deviation](http://en.wikipedia.org/wiki/Absolute_deviation#Average_absolute_deviation) seem like the measures of the magnitude of variability (especially when the variates are both +ve and -ve). What are the rules of thumb to choose one of them over the other?
|
Root mean square vs average absolute deviation?
|
CC BY-SA 2.5
| null |
2011-02-21T11:40:25.130
|
2011-02-21T11:52:42.560
|
2011-02-21T11:50:01.563
| null |
3292
|
[
"regression",
"standard-deviation"
] |
7448
|
2
| null |
7447
|
16
| null |
In theory, this should be determined by how important different sized errors are to you, or in other words, your loss function.
In the real world, people put ease of use first. So RMS deviations (or the related variances) are easier to combine, and easier to calculate in a single pass, while average absolute deviations are more robust to outliers and exist for more distributions. Basic linear regression and many of its offshoots are based on minimising RMS errors.
Another point is that the mean will minimise RMS deviations while the median will minimise absolute deviations, and you may prefer one of these.
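A small numerical illustration of that last point (the sample below is arbitrary): minimizing the two criteria recovers the mean and the median, respectively.
```
set.seed(42)
x <- rexp(1000)                                           # an arbitrary skewed sample
optimize(function(c) mean((x - c)^2), range(x))$minimum   # ~ mean(x)
optimize(function(c) mean(abs(x - c)), range(x))$minimum  # ~ median(x)
c(mean = mean(x), median = median(x))
```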
| null |
CC BY-SA 2.5
| null |
2011-02-21T11:52:42.560
|
2011-02-21T11:52:42.560
| null | null |
2958
| null |
7449
|
2
| null |
7440
|
103
| null |
OK, my bad. The error is in the last equation:
\begin{align}
KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\
&=\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\
&= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}
\end{align}
Note the missing $-\frac{1}{2}$. The last line becomes zero when $\mu_1=\mu_2$ and $\sigma_1=\sigma_2$.
| null |
CC BY-SA 3.0
| null |
2011-02-21T11:55:19.103
|
2022-12-27T08:40:09.397
|
2022-12-27T08:40:09.397
|
2116
|
2116
| null |
7450
|
1
| null | null |
12
|
5193
|
I have a problem with the estimation parameter for Zipf. My situation is the following:
I have a sample set (measured from an experiment that generates calls that should follow a Zipf distribution). I have to demonstrate that this generator really generates calls with a Zipf distribution.
I already read this Q&A [How to calculate Zipf's law coefficient from a set of top frequencies?](https://stats.stackexchange.com/questions/6780/how-to-calculate-zipfs-law-coefficient-from-a-set-of-top-frequencies) but I get bad results because I use a truncated distribution. For example, if I set the "s" value to 0.9 for the generation process and then try to estimate "s" as described in that Q&A, I obtain an estimate of about 0.2. I think this is due to the fact that I use a TRUNCATED distribution (I have to limit the Zipf with a truncation point; it is right-truncated).
How can I estimate the parameters of a truncated Zipf distribution from a data sample?
|
How to estimate parameters for Zipf truncated distribution from a data sample?
|
CC BY-SA 2.5
| null |
2011-02-21T13:51:50.393
|
2011-04-08T01:51:15.640
|
2017-04-13T12:44:41.980
|
-1
|
3342
|
[
"distributions",
"estimation",
"pareto-distribution",
"zipf"
] |
7452
|
2
| null |
7450
|
5
| null |
The paper
Clauset, A et al, [Power-law Distributions in Empirical Data](http://arxiv.org/abs/0706.1062). 2009
contains a very good description of how to go about fitting power law models. The associated [web-page](http://tuvalu.santafe.edu/~aaronc/powerlaws/) has code samples. Unfortunately, it doesn't give code for truncated distributions, but it may give you a pointer.
---
As an aside, the paper discusses the fact that many "power-law datasets" can be modelled equally well (and in some cases better) with the Log normal or exponential distributions!
| null |
CC BY-SA 2.5
| null |
2011-02-21T14:36:44.550
|
2011-02-21T14:36:44.550
| null | null |
8
| null |
7454
|
2
| null |
7439
|
7
| null |
You don't have to use `melt` and `cast`.
Reshaping data can be done in lots of ways. In your particular example on your site, using `recast` with `aggregate` was redundant because `aggregate` does the task fine all on its own.
```
aggregate(cbind(LPMVTUZ, LPMVTVC, LPMVTXC) ~ year, dtm, sum)
# or even briefer by first removing the columns you don't want to use
aggregate(. ~ year, dtm[,-2], sum)
```
I do like how, in your blog post, you explain what `melt` is doing. Very few people understand that and once you see it then it gets easier to see how `cast` works and how you might write your own functions if you want.
| null |
CC BY-SA 3.0
| null |
2011-02-21T15:25:54.473
|
2014-05-25T07:14:03.977
|
2014-05-25T07:14:03.977
|
601
|
601
| null |
7455
|
1
|
7473
| null |
16
|
4726
|
Can someone refer me to a good reference that explains the connection between Bayesian statistics and generative modeling techniques? Why do we usually use generative models with Bayesian techniques?
Why is it especially appealing to use Bayesian statistics in the absence of complete data, if at all?
Note that I come from a more machine learning oriented view, and I am interested in reading more about it from the statistics community.
Any good reference that discusses these points would be greatly appreciated.
Thanks.
|
The connection between Bayesian statistics and generative modeling
|
CC BY-SA 2.5
| null |
2011-02-21T16:34:03.293
|
2012-10-16T13:51:01.003
|
2011-02-22T07:53:42.247
|
2116
|
3347
|
[
"bayesian",
"generative-models"
] |
7456
|
2
| null |
7438
|
3
| null |
You are asking to estimate the typical size of the error. This is usually done by estimating the variance of the random variable $e$ and taking its square root. It is often called the "root mean square error".
These data look like they are formatted for Mathematica. A Mathematica 8 solution is
```
{a, b} = {{39.7678, 2320.3},...<31 values omitted>...{25.3775, 6158.}} // Transpose;
model = LinearModelFit[{{b} // Transpose, a}];
model["EstimatedVariance"] // Sqrt
```
This assumes the data are given in the sequence (a,b) and not (b,a)!
It is important to note that this model omits the usual constant term. If you intended that to be there, include it by inserting a column of 1's in the "design matrix":
```
m = {ConstantArray[1, Length[a]], b} // Transpose;
model = LinearModelFit[{m, a}];
model["EstimatedVariance"] // Sqrt
```
In either case the output is a single number: your "margin of error."
| null |
CC BY-SA 2.5
| null |
2011-02-21T16:44:01.187
|
2011-02-21T16:44:01.187
| null | null |
919
| null |