Dataset column schema (condensed from the flattened viewer header): Id (string, 1-6 chars); PostTypeId (7 classes); AcceptedAnswerId (string, 1-6 chars); ParentId (string, 1-6 chars); Score (string, 1-4 chars); ViewCount (string, 1-7 chars); Body (string, 0-38.7k chars); Title (string, 15-150 chars); ContentLicense (3 classes); FavoriteCount (3 classes); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars); LastEditorUserId (string, 1-6 chars); OwnerUserId (string, 1-6 chars); Tags (list).
7905
2
null
7891
2
null
A bit of a vague answer, but I'll give it a chance. I always felt that the size constraint is the central idea behind this method -- without it, the method seems to converge to other approaches: effectively to 2-means and ideologically to unsupervised SVM. The former rather invalidates the idea; the latter is more intriguing, since you may hope to save some pain by reusing the SVM optimization framework and kernel tricks.
null
CC BY-SA 2.5
null
2011-03-05T00:02:10.823
2011-03-05T00:02:10.823
null
null
null
null
7906
2
null
7900
7
null
While generating random data from regular expressions would be a convenient interface, it is not directly supported in R. You could try one level of indirection, though: generate random numbers and convert them into strings. For example, to convert a number into a character, you could use the following: ``` > rawToChar(as.raw(65)) [1] "A" ``` By carefully selecting the range of the random numbers you draw, you can restrict yourself to a desired set of ASCII characters that might correspond to a regular expression, e.g., to the character class `[a-zA-Z]`. Clearly, this is neither an elegant nor an efficient solution, but it is at least native and could give you the desired effect with some boilerplate.
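As a rough sketch of that indirection (the function name and the 8-character length are illustrative choices, not from the answer):

```r
# draw random ASCII codes for [a-zA-Z] and collapse them into one string
random_alpha_string <- function(len = 8) {
  codes <- sample(c(65:90, 97:122), len, replace = TRUE)  # A-Z, a-z
  rawToChar(as.raw(codes))
}
set.seed(1)
replicate(3, random_alpha_string())   # three random 8-letter strings
```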
null
CC BY-SA 2.5
null
2011-03-05T00:36:49.843
2011-03-05T00:36:49.843
null
null
1537
null
7907
2
null
7902
4
null
From a response to comment, we can adopt an urn model. The urn contains 100,000 balls representing all cases. An unknown number of these are black ("invalid"); they are of no interest. We are interested solely in the non-black balls in the urn. Of those, some are of color "A" and others of color "B". The main research question appears to be "what proportion of the balls of interest are A's?" This urn model says option (2) is the one to use. A simple random sample (without replacement) of 2,000 balls from this urn yielded 1000 black balls, 300 A's, and 700 B's, for n = 1000 A's & B's. The rest is routine. In particular, the distribution of A's (conditional on a non-black ball being drawn) is Binomial(p, 1000). A standard estimate of p is #A's / (Total A's & B's) = 30%. The estimated per-observation variance is p(1-p), whence the variance of the estimated proportion of A's equals p(1-p)/n = 0.00021. Its square root, 1.45%, is the standard error of the estimate of p. Because the numbers of A's and B's are large, yet are small compared to the expected number of non-black balls (about 50,000), it is appropriate to use normal-theory confidence intervals and to ignore the correction for sampling without replacement. (The correction shrinks the confidence interval to 0.99 times its width.) A 99% two-sided confidence interval therefore extends 2.58 * 1.45% = 3.73% to either side of the estimated proportion. E.g., a confidence interval for the proportion of A's (out of all the A's and B's in the urn) extends from 26.27% to 33.73%. If you are uncomfortable using conditional probabilities (which are at the root of this analysis), you can estimate the contents of the urn (i.e., the total numbers of black balls, A's, and B's) using the [multinomial distribution](http://en.wikipedia.org/wiki/Multinomial_distribution). You will get exactly the same results, because in the end you care only about the proportion of A's relative to the numbers of A's and B's, so estimates involving the number of black balls never enter the calculation. Another way to get some intuition is to recognize that (except for the tiny correction term being neglected here) the size of the confidence interval depends only on the observed numbers of A's and B's and not on the number of balls in the urn. That's why there's no concern here about whether the "population" is 50,000 or 100,000. An auxiliary research question seems to be to estimate the total number of A's and B's in the urn. For this purpose the urn contains only two kinds of balls, black ones and non-black ones, and we want to estimate the number of non-black balls. This is a standard binomial sampling situation. Without more ado, the estimated number of non-black balls equals 100,000 * (1000/2000) = 50,000 and the estimated proportion is 1/2, with standard error $\sqrt{(1/2)(1 - 1/2)/2000}$ = 1.1%. Therefore the estimate of 50,000 has a 99% two-sided confidence interval of 50,000 ± 2.58 * 1.1% * 100,000, or roughly 47,100 to 52,900.
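For readers who want to check the arithmetic, here is a small R sketch reproducing the calculations above (the counts are the ones quoted in the answer):

```r
n_AB <- 1000; n_A <- 300               # non-black balls in the sample, and A's among them
p    <- n_A / n_AB                     # estimated proportion of A's: 0.30
se_p <- sqrt(p * (1 - p) / n_AB)       # standard error, ~1.45%
z    <- qnorm(0.995)                   # ~2.58 for a 99% two-sided interval
round(100 * (p + c(-1, 1) * z * se_p), 2)    # ~26.3% to 33.7%

q    <- 1000 / 2000                    # estimated proportion of non-black balls
se_q <- sqrt(q * (1 - q) / 2000)       # ~1.1%
round(100000 * (q + c(-1, 1) * z * se_q))    # CI for the number of non-black balls
```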
null
CC BY-SA 2.5
null
2011-03-05T01:53:21.813
2011-03-05T01:53:21.813
null
null
919
null
7908
2
null
7903
4
null
## Ad hoc approach I'd assume that $\beta_i$ is reasonably reliable because it was estimated on many students, most of whom did not cheat on question $i$. For each student $j$, sort the questions in order of increasing difficulty, compute $\beta_i + q_j$ (note that $q_j$ is just a constant offset) and threshold it at some reasonable place (e.g. p(correct) < 0.6). This gives a set of questions which the student is unlikely to answer correctly. You can now use hypothesis testing to see whether this is violated, in which case the student probably cheated (assuming, of course, your model is correct). One caveat is that if there are few such questions, you might not have enough data for the test to be reliable. Also, I don't think it's possible to determine which question he cheated on, because he always has a 50% chance of guessing. But if you assume in addition that many students got access to (and cheated on) the same set of questions, you can compare these across students and see which questions got answered more often than chance. You can do a similar trick with questions, i.e. for each question, sort students by $q_j$, add $\beta_i$ (this is now a constant offset) and threshold at probability 0.6. This gives you a list of students who shouldn't be able to answer this question correctly, i.e. who have at most a 60% chance of getting it right. Again, do hypothesis testing and see whether this is violated. This only works if most students cheated on the same set of questions (e.g. if a subset of questions 'leaked' before the exam). ## Principled approach For each student, there is a binary variable $c_j$ with a Bernoulli prior with some suitable probability, indicating whether the student is a cheater. For each question there is a binary variable $l_i$, again with some suitable Bernoulli prior, indicating whether the question was leaked. Then there is a set of binary variables $a_{ij}$, indicating whether student $j$ answered question $i$ correctly. If $c_j = 1$ and $l_i = 1$, then the distribution of $a_{ij}$ is Bernoulli with probability 0.99. Otherwise $a_{ij}$ is Bernoulli with probability $\operatorname{logit}^{-1}(\beta_i + q_j)$. These $a_{ij}$ are the observed variables. $c_j$ and $l_i$ are hidden and must be inferred. You can probably do the inference by Gibbs sampling. But other approaches might also be feasible, maybe something related to biclustering.
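A small simulation sketch of the generative model in the "Principled approach"; every numeric value here (numbers of students and questions, prior probabilities) is a hypothetical illustration, not something stated in the answer:

```r
set.seed(42)
n_students <- 200; n_questions <- 50
q    <- rnorm(n_students)                       # student abilities q_j
beta <- rnorm(n_questions)                      # question parameters beta_i
c_j  <- rbinom(n_students, 1, 0.05)             # cheater indicators
l_i  <- rbinom(n_questions, 1, 0.10)            # leaked-question indicators
p    <- plogis(outer(beta, q, "+"))             # baseline P(correct), questions x students
p[l_i == 1, c_j == 1] <- 0.99                   # a cheater on a leaked question
a    <- matrix(rbinom(length(p), 1, p), nrow(p))  # observed responses a_ij
# c_j and l_i would then be inferred from a, e.g. by Gibbs sampling
```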
null
CC BY-SA 2.5
null
2011-03-05T03:21:10.143
2011-03-05T07:35:00.167
2011-03-05T07:35:00.167
3369
3369
null
7909
2
null
7868
1
null
This can be done using a relational database. R has a nice implementation of this (see this post on [sqldf](https://stackoverflow.com/questions/1169551/sql-like-functionality-in-r)). MS Access (or even Excel) will work just as well. The idea here is that you want to create a table that maps a number (as you say, of 5/6 digits) to a geographical region (75 or however many you have). Then you join your table of 10000 records onto your reference table. Let table `mydata` contain your 10000 records, holding at least 1 column: - ID - contains your 'cell number' Let table `myreftable` contain your reference table, which should have exactly 1 row for each geographic region and holds 2 columns: - ID - contains the relevant 5/6 digits of your 'cell number' - Geo - contains the description of the geographic region The table you'd want would be generated by the following SQL: ``` select m.ID as cell_number ,r.Geo as geo_region from mydata m inner join myreftable r on left(m.ID,6)=r.ID ``` ... where 'left()' is any function that takes the first 'n' characters of a string. Each database has different text/string functions you can use for this purpose.
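A sketch of the same join using R's sqldf package; the table and column names are the hypothetical ones above, and SQLite (sqldf's default backend) uses `substr()` rather than `left()`:

```r
library(sqldf)
# toy tables matching the layout described above
mydata     <- data.frame(ID = c("123456789", "654321000"))
myreftable <- data.frame(ID = c("123456", "654321"), Geo = c("Region A", "Region B"))
sqldf("select m.ID as cell_number, r.Geo as geo_region
       from mydata m inner join myreftable r
       on substr(m.ID, 1, 6) = r.ID")
```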
null
CC BY-SA 2.5
null
2011-03-05T05:12:14.717
2011-03-05T05:12:14.717
2017-05-23T12:39:26.593
-1
3551
null
7911
2
null
7899
30
null
Does the picture below look like what you want to achieve? ![enter image description here](https://i.stack.imgur.com/NDRc2.png) Here's the updated R code, following your comments:

```
do.it <- function(df, type="confidence", ...) {
  require(ellipse)
  lm0 <- lm(y ~ x, data=df)
  xc <- with(df, xyTable(x, y))
  df.new <- data.frame(x=seq(min(df$x), max(df$x), 0.1))
  pred.ulb <- predict(lm0, df.new, interval=type)
  pred.lo <- predict(loess(y ~ x, data=df), df.new)
  plot(xc$x, xc$y, cex=xc$number*2/3, xlab="x", ylab="y", ...)
  abline(lm0, col="red")
  lines(df.new$x, pred.lo, col="green", lwd=1.5)
  lines(df.new$x, pred.ulb[,"lwr"], lty=2, col="red")
  lines(df.new$x, pred.ulb[,"upr"], lty=2, col="red")
  lines(ellipse(cor(df$x, df$y), scale=c(sd(df$x),sd(df$y)),
                centre=c(mean(df$x),mean(df$y))), lwd=1.5, col="green")
  invisible(lm0)
}

set.seed(101)
n <- 1000
x <- rnorm(n, mean=2)
y <- 1.5 + 0.4*x + rnorm(n)
df <- data.frame(x=x, y=y)
# take a bootstrap sample
df <- df[sample(nrow(df), nrow(df), rep=TRUE),]
do.it(df, pch=19, col=rgb(0,0,.7,.5))
```

And here is the ggplotized version ![enter image description here](https://i.stack.imgur.com/MjLli.png) produced with the following piece of code:

```
xc <- with(df, xyTable(x, y))
df2 <- cbind.data.frame(x=xc$x, y=xc$y, n=xc$number)
df.ell <- as.data.frame(with(df, ellipse(cor(x, y),
                                         scale=c(sd(x),sd(y)),
                                         centre=c(mean(x),mean(y)))))
library(ggplot2)
ggplot(data=df2, aes(x=x, y=y)) +
  geom_point(aes(size=n), alpha=.6) +
  stat_smooth(data=df, method="loess", se=FALSE, color="green") +
  stat_smooth(data=df, method="lm") +
  geom_path(data=df.ell, colour="green", size=1.2)
```

It could be customized a little bit more by adding model fit indices, like Cook's distance, with a color shading effect.
null
CC BY-SA 2.5
null
2011-03-05T09:57:01.973
2011-03-06T15:50:56.043
2011-03-06T15:50:56.043
930
930
null
7912
1
7921
null
10
9382
The question is pretty much contained in the title. What is the Mahalanobis distance for two distributions with different covariance matrices? What I have found so far assumes the same covariance for both distributions, i.e., something of this sort: $$\Delta^T \Sigma^{-1} \Delta$$ What if I have two different $\Sigma$s? Note: the problem is this: there are two bivariate distributions that have the same dimensions but that are rotated and translated with respect to each other (sorry, I come from a pure mathematical background, not a statistics one). I need to measure their degree of overlap/distance. **Update:** What might or might not be implicit in what I'm asking is that I need a distance between the means of the two distributions. I know where the means are, but since the two distributions are rotated with respect to one another, I need to assign different weights to different orientations, and therefore a simple Euclidean distance between the means does not work. Now, as I have understood it, the Mahalanobis distance cannot be used to measure this information if the distributions are differently shaped (apparently it works with two multivariate normal distributions of identical covariances, but not in the general case). Is there a good measure that captures this wish to weight different orientations differently?
Mahalanobis distance between two bivariate distributions with different covariances
CC BY-SA 2.5
null
2011-03-05T10:48:05.390
2014-03-12T20:30:59.290
2011-03-07T14:24:08.353
223
3586
[ "normal-distribution", "multivariate-analysis", "distance-functions" ]
7913
2
null
7903
3
null
If you want to get into some more complex approaches, you might look at item response theory models. You could then model the difficulty of each question. Students who got difficult items correct while missing easier ones would, I think, be more likely to be cheating than those who did the reverse. It's been more than a decade since I did this sort of thing, but I think it could be promising. For more detail, check out psychometrics books
null
CC BY-SA 2.5
null
2011-03-05T11:33:10.813
2011-03-05T11:33:10.813
null
null
686
null
7914
2
null
7826
2
null
So you have a population each of whom can have zero or more conditions. To answer the question: How many hospital patients have A? It seems to me that the best you can do is take your favourite proportion estimator and offer it up with your favourite confidence interval. There are lots of choices, which will make a difference for very high or very low proportions. If you have such a situation, the estimator above may not be optimal. If you are interested in the population of just your hospital then you can, as SheldonCooper points out, dispense with the statistics altogether. I suspect however that you are interested in hospital patients more generally, so your standard errors and intervals might be interpreted relative to this population. In your suggested estimator the identity of the population will determine what 1-F is. Certainly hospital patients don't look like non-hospital patients with respect to the conditions you're counting, but that need not matter. Following Sheldon's second observation, it is probable that the conditions correlate. But as far as I can see this is only useful information if you are asking conditional questions, e.g. the prevalence of A among B sufferers. In probabilistic terms your question is about estimating marginals, and correlation information only tells you about estimating conditionals. If you were interested in these sorts of subgroups, you'd certainly want to model this information. You'd also want it if there were differential measurement errors or sample selection issues, etc. e.g. only getting tested for A if you have a B diagnosis... That might also make certain sample marginals problematic as estimates of population marginals. Thankfully, I don't know much about hospital populations, but I'd be willing to bet that there are some of these issues around. Finally, about reporting: If you in fact want to report confidence regions rather than condition-wise intervals, then again the correlation structure matters, and things get considerably trickier. I seem to remember that Agresti had a paper on simultaneous confidence intervals for multivariate Binomial proportions, which might be helpful for this approach.
null
CC BY-SA 2.5
null
2011-03-05T12:46:54.847
2011-03-05T13:02:13.523
2011-03-05T13:02:13.523
1739
1739
null
7915
1
7922
null
6
2427
I'm doing binary classification using an SVM classifier, libsvm, where roughly 95% of the data belongs to one class. The parameters C and gamma are to be set before the actual training takes place. I followed [the tutorial](http://www.csie.ntu.edu.tw/~cjlin/libsvm/) but still can't get any good results. There is a script that comes with the library that is supposed to help with choosing the right values for the parameters, but what this script does is basically maximize the accuracy metric (TP+TN)/ALL, so in my case it chooses the parameters so as to label all data with the prevailing class label. I would like to choose parameters with recall- and precision-based metrics. How could I approach this problem? Accuracy is a meaningless metric for what I'm doing. Also, I'm keen on changing the library from libsvm to any other one that can help me with this problem, as long as it takes data in the same format: 1 1:0.3 2:0.4 ... -1 1:0.4 2:0.23 and so on. Can anybody help? UPDATE: yes, I did try both grid.py and easy.py, but even though the grid search uses a logarithmic scale it is extremely slow. I mean, even if I run it on just a small chunk of my data it takes tens of hours to finish. Is this the most efficient way to use an SVM? I have also tried svmlight, but it does exactly the same thing: it labels all data with one label. UPDATE 2: I have reworded my question to better reflect what sort of issues I am facing.
Efficient way to classify with SVM
CC BY-SA 2.5
null
2011-03-05T13:09:10.763
2011-03-05T21:56:07.913
2011-03-05T15:49:16.910
1371
1371
[ "machine-learning", "svm" ]
7916
2
null
7915
1
null
By default, libSVM finds the optimal hyper-parameters for the SVM model using cross-validation, with accuracy (for classification) or mean squared error (for regression) as the evaluation measure. Weka has several other evaluation metrics for finding the optimal parameters (using gridSearch). If the metric you are interested in is not there, the simplest solution that comes to mind is to write a little program that performs cross-validation and optimizes the parameters of the model based on your measure of choice.
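A rough sketch of such a little program, assuming the e1071 package (an R wrapper around libsvm) and toy data; none of the variable names come from the thread, and the measure optimized here is the usual F1 score:

```r
library(e1071)   # R interface to libsvm
set.seed(1)
X <- matrix(rnorm(300 * 2), 300, 2)
y <- factor(ifelse(X[, 1] + rnorm(300) > 1.6, "1", "-1"))   # imbalanced toy labels

f1 <- function(truth, pred, positive = "1") {
  tp   <- sum(pred == positive & truth == positive)
  prec <- tp / max(sum(pred == positive), 1)
  rec  <- tp / max(sum(truth == positive), 1)
  if (prec + rec == 0) 0 else 2 * prec * rec / (prec + rec)
}

cv_f1 <- function(cost, gamma, k = 5) {           # k-fold CV returning mean F1
  folds <- sample(rep(1:k, length.out = nrow(X)))
  mean(sapply(1:k, function(i) {
    fit <- svm(X[folds != i, ], y[folds != i], kernel = "radial",
               cost = cost, gamma = gamma)
    f1(y[folds == i], predict(fit, X[folds == i, ]))
  }))
}

grid <- expand.grid(cost = 2^(0:4), gamma = 2^(-3:0))
grid$f1 <- mapply(cv_f1, grid$cost, grid$gamma)
grid[which.max(grid$f1), ]    # parameters with the best cross-validated F1
```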
null
CC BY-SA 2.5
null
2011-03-05T14:19:04.687
2011-03-05T14:19:04.687
null
null
21360
null
7917
2
null
7774
16
null
I'm providing this as a second answer since the analysis is completely elementary and provides exactly the desired result. Proposition For $c > 0$ and $n \geq 1$, $$ \mathbb{P}(T < n \log n - c n ) < e^{-c} \>. $$ The idea behind the proof is simple: - Represent the time until all coupons are collected as $T = \sum_{i=1}^n T_i$, where $T_i$ is the time that the $i$th (heretofore) unique coupon is collected. The $T_i$ are geometric random variables with mean times of $\frac{n}{n-i+1}$. - Apply a version of the Chernoff bound and simplify. Proof For any $t$ and any $s > 0$, we have that $$ \mathbb{P}(T < t) = \mathbb{P}( e^{-s T} > e^{-s t} ) \leq e^{s t} \mathbb{E} e^{-s T} \> . $$ Since $T = \sum_i T_i$ and the $T_i$ are independent, we can write $$ \mathbb{E} e^{-s T} = \prod_{i=1}^n \mathbb{E} e^{- s T_i} $$ Now since $T_i$ is geometric, let's say with probability of success $p_i$, then a simple calculation shows $$ \mathbb{E} e^{-s T_i} = \frac{p_i}{e^s - 1 + p_i} . $$ The $p_i$ for our problem are $p_1 = 1$, $p_2 = 1 - 1/n$, $p_3 = 1 - 2/n$, etc. Hence, $$ \prod_{i=1}^n \mathbb{E} e^{-s T_i} = \prod_{i=1}^n \frac{i/n}{e^s - 1 + i/n}. $$ Let's choose $s = 1/n$ and $t = n \log n - c n$ for some $c > 0$. Then $$ e^{s t} = n e^{-c} $$ and $e^s = e^{1/n} \geq 1 + 1/n$, yielding $$ \prod_{i=1}^n \frac{i/n}{e^s - 1 + i/n} \leq \prod_{i=1}^n \frac{i}{i+1} = \frac{1}{n+1} \> . $$ Putting this together, we get that $$ P(T < n \log n - c n) \leq \frac{n}{n+1} e^{-c} < e^{-c} $$ as desired.
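A quick simulation sketch (not part of the proof) comparing the empirical probability $\mathbb{P}(T < n \log n - cn)$ with the bound $e^{-c}$:

```r
set.seed(1)
coupon_time <- function(n) {                 # draws needed to see all n coupons
  seen <- logical(n); draws <- 0
  while (!all(seen)) { seen[sample.int(n, 1)] <- TRUE; draws <- draws + 1 }
  draws
}
n <- 50; c0 <- 1
Tn <- replicate(10000, coupon_time(n))
mean(Tn < n * log(n) - c0 * n)               # empirical probability
exp(-c0)                                     # the bound, ~0.368
```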
null
CC BY-SA 2.5
null
2011-03-05T14:28:38.093
2011-03-05T14:34:54.470
2011-03-05T14:34:54.470
2970
2970
null
7919
1
7924
null
8
10431
I am working on a linear regression with R and there are many 0 values in my predictor variables. How are these handled in R's `lm()` function? Should I remove this data for more accurate analysis? Any advice is appreciated. Thanks.
How are zero values handled in lm()?
CC BY-SA 2.5
null
2011-03-05T18:30:43.537
2014-01-20T15:50:06.813
2011-03-05T19:07:45.807
930
1422
[ "r", "regression" ]
7920
2
null
7919
2
null
What % of the predictor is 0, and what other values does it take on? The concern is whether a predictor with such little variation (vast majority being the value of 0) would be useful in a regression model. To approach this, you can first stratify and do one analysis with the subset of the data where predictor is 0, and another analysis where the predictor is != 0. Once you get a sense of the structure of the data, you can decide whether to proceed with analysis using the entire dataset, and whether the predictor variable should stay in the model.
null
CC BY-SA 2.5
null
2011-03-05T18:49:20.003
2011-03-05T18:49:20.003
null
null
812
null
7921
2
null
7912
6
null
There are many notions of distance between probability distributions. Which one to use depends on your goals. [Total variation distance](http://en.wikipedia.org/wiki/Total_variation_distance) is a natural way of measuring overlap between distributions. If you are working with multivariate Normals, the [Kullback-Leibler Divergence](http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Kullback.E2.80.93Leibler_divergence) is mathematically convenient. Though it is not actually a distance (as it fails to be symmetric and fails to obey the triangle inequality), it upper bounds the total variation distance — see [Pinsker’s Inequality](http://en.wikipedia.org/wiki/Pinsker%27s_inequality).
null
CC BY-SA 2.5
null
2011-03-05T20:38:55.867
2011-03-05T20:38:55.867
null
null
1670
null
7922
2
null
7915
5
null
I would do two things. First, to address your issue with accuracy due to imbalanced data, you need to set the cost of misclassifying positive and negative examples separately. A reasonable rule of thumb in your case would be to set the cost to 5 for the larger class and to 95 for the smaller class. This way misclassifying 10% of the smaller class will have the same cost as misclassifying 10% of the larger class even though the latter 10% is a much larger number of points. If you use the command line, the command is something like -w0 5 -w1 95. I feel this needs to be done anyway (even though you are using F score for now) because this is what SVM optimizes, so unless you do it, all your F scores will be poor. Second, to address the issue of speed, I would try pre-computing the kernel. For 26k points this is borderline infeasible, but if you are willing to subsample, you can precompute the kernel once for each gamma and reuse it across all C's.
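For reference, the same per-class weighting expressed through R's e1071 interface to libsvm might look roughly like this (toy data; the labels "-1"/"1" and all numbers are placeholders, not from the thread):

```r
library(e1071)
set.seed(1)
X <- matrix(rnorm(400 * 2), 400, 2)                    # toy features
y <- factor(ifelse(runif(400) < 0.95, "-1", "1"))      # ~95% in class "-1"
wts <- c("-1" = 5, "1" = 95)   # cost 5 for the large class, 95 for the small one
fit <- svm(X, y, kernel = "radial", cost = 1, gamma = 0.5, class.weights = wts)
table(predicted = predict(fit, X), truth = y)
```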
null
CC BY-SA 2.5
null
2011-03-05T21:56:07.913
2011-03-05T21:56:07.913
null
null
3369
null
7923
1
null
null
2
2817
In SPSS, I want to compare two clusters of management sciences department faculty members in two universities. - Which test should I use? - Can you explain how to do it in SPSS?
How can I compare Likert scale data of two clusters in SPSS?
CC BY-SA 2.5
null
2011-03-06T08:21:51.870
2011-03-30T18:33:50.403
2011-03-30T18:33:50.403
930
3570
[ "clustering", "spss", "scales", "likert" ]
7924
2
null
7919
5
null
The problem you described here is known as a limited dependent variable problem, usually represented by truncated or censored data (the former can be seen as a special case of the latter). In this case application of the `lm()` function would not be the best choice, since in general it will produce biased and inconsistent estimates of the true regression line. However, truncation (dropping the zeroes from the sample, as you suggested in the comment) will make this bias even larger. Luckily the problem is well known, and there are two common options to solve it: either use a [Tobit model](http://en.wikipedia.org/wiki/Tobit_model) or a [Heckman](http://en.wikipedia.org/wiki/Heckman_correction) two-step approach; it would be useful to study any common econometrics textbook on the topic (this Cross Validated [link](https://stats.stackexchange.com/questions/4612/good-econometrics-textbooks) will be useful). The difference between the two models is that Heckman's method allows either the explanatory variables or the parameter estimates to differ across the parts of the model that influence the zeros and the magnitude of the observed nonzero values. To implement the Tobit and Heckman models in R you will need the `sampleSelection` or `censReg` packages. There are also nice vignettes corresponding to these packages, so read them first.
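A minimal sketch of the Tobit option with the censReg package (the data are simulated placeholders, not the asker's):

```r
library(censReg)
set.seed(1)
# toy data: a latent linear response censored at zero
x1 <- rnorm(200); x2 <- rnorm(200)
ystar <- 1 + 0.5 * x1 - 0.7 * x2 + rnorm(200)
dat <- data.frame(y = pmax(ystar, 0), x1 = x1, x2 = x2)
fit <- censReg(y ~ x1 + x2, left = 0, data = dat)   # Tobit with left-censoring at 0
summary(fit)
# sampleSelection::heckit() would be the analogous two-step Heckman fit
```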
null
CC BY-SA 2.5
null
2011-03-06T09:56:37.253
2011-03-06T09:56:37.253
2017-04-13T12:44:33.977
-1
2645
null
7925
1
7927
null
17
3614
I am currently working on a project where I generate random values using [low discrepancy / quasi-random point sets](http://www.puc-rio.br/marco.ind/quasi_mc.html), such as Halton and Sobol point sets. These are essentially $d$-dimensional vectors that mimic $d$-dimensional uniform(0,1) variables, but have a better spread. In theory, they are supposed to help reduce the variance of my estimates in another part of the project. Unfortunately, I've been running into issues working with them and much of the literature on them is dense. I was therefore hoping to get some insight from someone who has experience with them, or at least figure out a way to empirically assess what is going on: If you have worked with them: - What exactly is scrambling? And what effect does it have on the stream of points that are generated? In particular, is there an effect when the dimension of the points that are generated increases? - Why is it that if I generate two streams of Sobol points with MatousekAffineOwen scrambling, I get two different streams of points? Why is this not the case when I use reverse-radix scrambling with Halton points? Are there other scrambling methods that exist for these point sets - and if so, is there a MATLAB implementation of them? If you have not worked with them: - Say I have $n$ sequences $S_1, S_2, \ldots,S_n$ of supposedly random numbers, what type of statistics should I use to show that they are not correlated to each other? And what number $n$ would I need to prove that my result is statistically significant? Also, how could I do the same thing if I had $n$ sequences $S_1, S_2, \ldots,S_n$ of $d$-dimensional random $[0,1]$ vectors? --- Follow-Up Questions on Cardinal's Answer - Theoretically speaking, can we pair any scrambling method with any low discrepancy sequence? MATLAB only allows me to apply reverse-radix scrambling to Halton sequences, and I am wondering whether that's simply an implementation issue or a compatibility issue. - I am looking for a way that will allow me to generate two (t,m,s) nets that are uncorrelated with one another. Will MatousekAffineOwen allow me to do this? How about if I used a deterministic scrambling algorithm and simply decided to choose every 'kth' value where k was a prime?
Scrambling and correlation in low discrepancy sequences (Halton/Sobol)
CC BY-SA 2.5
null
2011-03-06T10:43:38.317
2011-03-07T02:48:43.523
2011-03-07T02:48:43.523
3572
3572
[ "hypothesis-testing", "monte-carlo", "random-generation", "randomness" ]
7926
2
null
7873
3
null
It does sound like you are in a bit of a quandary because you only have 1 response variable for each individual measurement. I was initially going to recommend a multi-level approach. But in order for that to work you need to observe the response at the lowest level - which you do not - you observe your response at the individual level (which would be level 2 in an MLM). 1) Taking the average of x means losing information about the within-individual variability of x. You are losing variability of the covariate x, but this only matters if the other information contained in X is related to the response. There is nothing stopping you from putting the variance of X in as a covariate either. 2) The mean is itself a statistic, so by putting it in the model we end up doing statistics on statistics. A statistic is a function of the observed data. So any covariate is a "statistic". So you are already doing "statistics on statistics" whether you like it or not. However, it does make a difference to how you should interpret the slope coefficient - as an average value, and not a value in the individual birth. If you don't care about the individual births, then this matters little. If you do, then this approach can be misleading. 3) The number of offspring an individual had is in the model, but it is also used to calculate the mean of variable x, which I think could cause trouble. It would only matter if the mean of X was functionally/deterministically related to the number of offspring. One way this can happen is if the value of X is the same for each individual who had the same number of births. Usually this isn't the case. You could specify a model which includes each value of X as a covariate. But this would probably involve some new methodological research on your part, I would imagine. Your likelihood function would be different for different individuals, due to the different numbers of measurements within individuals. I don't think multi-level modeling applies in this case conceptually. This is simply because the births are not a subset or sample within individuals. Although the maths may be the same. One way you could incorporate this structure is to create a model like: $$(Y_{ij}|x_{ij}) \sim Bin(Y_{ij}|n_{ij},p_{ij})$$ where $Y_{ij}$ is the binomial response for individual $i$ and $j$ denotes the number of births, $x_{ij}$ is the vector of covariates, and $n_{ij}$ is the number of individuals who have the same covariate values and also had the same number of births. $p_{ij}$ is the probability, which you normally model as: $$g(p_{ij}) = x_{ij}^{T}\beta$$ for some monotonic/invertible function $g(.)$. The "tricky" part comes in because the dimension of $x_{ij}$ varies with $j$. The log-likelihood in this case is: $$L=L(\beta)=\sum_{j\in B}\Bigg[\sum_{i=1}^{N_{j}} \log[Bin(Y_{ij}|n_{ij},g^{-1}(x_{ij}^{T}\beta))]\Bigg]$$ where $B$ is just the set of numbers of births which you have available in your data set. Maximising it is likely to be a nontrivial task, and you probably won't get the usual IRLS equations from doing a Taylor series expansion about the current estimate. A Taylor series is the way I would go from here - I just don't have the energy to run through the process at this time. I would suggest you try to re-arrange your answer so that it looks like an "ordinary" binomial GLM. This will allow you to take advantage of the standard software available. What I can tell you is that when you differentiate with respect to a beta which depends on $j$ (e.g. the coefficient for the metabolic rate for the third birth), some of the terms in this summation will drop out. This is basically the likelihood "telling you" that certain observations contribute nothing to estimating certain parameters (e.g. individuals who give birth to two or fewer offspring contribute nothing to the estimated slope for the metabolic rate for the third birth). So in summary, your intuition is spot on when you suggest that something is being lost. However, the price for "purity" could be high - especially if you need to write your own algorithm to get your estimates.
null
CC BY-SA 2.5
null
2011-03-06T11:31:31.633
2011-03-06T11:31:31.633
null
null
2392
null
7927
2
null
7925
13
null
Scrambling is usually an operation applied to a $(t,m,s)$ digital net which uses some base $b$. Sobol' nets use $b = 2$, for example, while Faure nets use a prime number for $b$. The purpose of scrambling is to (hopefully) get an even more uniform distribution, especially if you can only use a small number of points. A good example to see why this works is to look at the Halton sequence in $d = 2$ and choose two "largish" primes, like 29 and 31. The square gets filled in very slowly using the standard Halton sequence. But, with scrambling, it is filled in more uniformly much more quickly. Here is a plot for the first couple hundred points using a deterministic scrambling approach. ![enter image description here](https://i.stack.imgur.com/TDvSO.png) The most basic forms of scrambling essentially permute the base $b$ digit expansions of the original $n$ points among themselves. For more details, here is a [clear exposition](http://www-stat.stanford.edu/~owen/reports/altscram.pdf). The nice thing about scrambling is that if you start with a $(t,m,s)$ net and scramble it, you get a $(t,m,s)$ net back out. So, there is a closure property involved. Since you want to use the theoretical benefits of a $(t,m,s)$ net in the first place, this is very desirable. Regarding types of scrambling, the reverse-radix scramble is a deterministic scrambling. The Matousek scrambling algorithm is a random scramble, done, again, in a way to maintain the closure property. If you set the random seed before you make the call to the scrambling function, you should always get the same net back. You might also be interested in the [MinT project](http://mint.sbg.ac.at/index.php).
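If you want to experiment outside MATLAB, here is a rough sketch with the R randtoolbox package (an assumption on my part; the thread itself uses MATLAB's generators) comparing unscrambled and Owen-type scrambled Sobol' points:

```r
library(randtoolbox)
u0 <- sobol(256, dim = 2)                             # unscrambled Sobol' points
u1 <- sobol(256, dim = 2, scrambling = 1, seed = 7)   # Owen-type scrambling
par(mfrow = c(1, 2))
plot(u0, main = "Sobol'", xlab = "", ylab = "")
plot(u1, main = "Scrambled Sobol'", xlab = "", ylab = "")
```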
null
CC BY-SA 2.5
null
2011-03-06T16:07:27.103
2011-03-06T20:52:24.660
2011-03-06T20:52:24.660
2970
2970
null
7928
2
null
7868
3
null
Ah! "Cell" apparently means "cell phone" (rather than a generic cell such as a square on a map grid). Thus, for each prefix, you would like to identify a geographic region in which that prefix is found. These regions are not predefined; rather, you would like to estimate their extents from the data you have. (This is why it's not a simple database tabulation problem.) If this is right, then you probably would like to apply a "[concave hull](https://gis.stackexchange.com/search?q=concave+hull)" algorithm with a GIS. Each prefix gives a set of points in your database which translates to a polygonal region (the concave hull). By applying this algorithm separately for each prefix, you would obtain a set of possibly overlapping geographic regions. [Statistical clustering](http://en.wikipedia.org/wiki/Cluster_analysis) algorithms look promising in this context, but they don't seem appropriate for several reasons. First, you already have the clusters: the cell phone number prefix gives the cluster identifiers. Second, the clusters might not be spatially separated, because sets of distinct ids can be geographically overlaid. If you want to pursue the concave hull idea, we can migrate this question to the GIS site.
null
CC BY-SA 2.5
null
2011-03-06T16:39:48.037
2011-03-06T16:39:48.037
2017-04-13T12:33:47.693
-1
919
null
7929
1
7930
null
16
30919
I played around with some unit root testing in R and I am not entirely sure what to make of the k lag parameter. I used the augmented Dickey Fuller test and the Philipps Perron test from the [tseries](http://cran.r-project.org/web/packages/tseries/index.html) package. Obviously the default $k$ parameter (for the `adf.test`) depends only on the length of the series. If I choose different $k$-values I get pretty different results wrt. rejecting the null: ``` Dickey-Fuller = -3.9828, Lag order = 4, p-value = 0.01272 alternative hypothesis: stationary # 103^(1/3)=k=4 Dickey-Fuller = -2.7776, Lag order = 0, p-value = 0.2543 alternative hypothesis: stationary # k=0 Dickey-Fuller = -2.5365, Lag order = 6, p-value = 0.3542 alternative hypothesis: stationary # k=6 ``` plus the PP test result: ``` Dickey-Fuller Z(alpha) = -18.1799, Truncation lag parameter = 4, p-value = 0.08954 alternative hypothesis: stationary ``` Looking at the data, I would think the underlying data is non-stationary, but still I do not consider these results a strong backup, in particular since I do not understand the role of the $k$ parameter. If I look at decompose / stl I see that the trend has strong impact as opposed to only small contribution from remainder or seasonal variation. My series is of quarterly frequency. Any hints?
Understanding the k lag in R's augmented Dickey Fuller test
CC BY-SA 2.5
null
2011-03-06T17:18:02.327
2011-08-24T10:29:44.190
2011-03-06T18:47:18.750
930
704
[ "r", "time-series", "trend" ]
7930
2
null
7929
6
null
It's been a while since I looked at ADF tests; however, I do remember at least two versions of the adf test. [http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/tseries/html/adf.test.html](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/tseries/html/adf.test.html) [http://cran.r-project.org/web/packages/fUnitRoots/](http://cran.r-project.org/web/packages/fUnitRoots/) The fUnitRoots package has a function called adfTest(). I think the "trend" issue is handled differently in those packages.

Edit
------

From page 14 of the following link, there were 4 versions (uroot discontinued) of the adf test: [http://math.uncc.edu/~zcai/FinTS.pdf](http://math.uncc.edu/~zcai/FinTS.pdf) One more link: read section 6.3 in the following link; it does a far better job than I could do in explaining the lag term: [http://www.yats.com/doc/cointegration-en.html](http://www.yats.com/doc/cointegration-en.html) Also, I would be careful with any seasonal model. Unless you're sure there's some seasonality present, I would avoid using seasonal terms. Why? Anything can be broken down into seasonal terms, even if it isn't seasonal. Here are two examples:

```
#First example: white noise
x <- rnorm(200)

#Use stl() to separate the trend and seasonal term
x.ts <- ts(x, freq=4)
x.stl <- stl(x.ts, s.window = "periodic")
plot(x.stl)

#Use decompose() to separate the trend and seasonal term
x.dec <- decompose(x.ts)
plot(x.dec)

#===========================================
#Second example: a random walk (cumulative sum of the white noise)
x1 <- cumsum(x)

#Use stl() to separate the trend and seasonal term
x1.ts <- ts(x1, freq=4)
x1.stl <- stl(x1.ts, s.window = "periodic")
plot(x1.stl)

#Use decompose() to separate the trend and seasonal term
x1.dec <- decompose(x1.ts)
plot(x1.dec)
```

The graph below is from the above plot(x.stl) statement. stl() found a small seasonal term in white noise. You might say that term is so small that it's really not an issue. The problem is, in real data, you don't know if that term is a problem or not. In the example below, notice that the trend data series has segments where it looks like a filtered version of the raw data, and other segments where it might be considered significantly different than the raw data. ![enter image description here](https://i.stack.imgur.com/wtW2G.jpg)
null
CC BY-SA 2.5
null
2011-03-06T19:07:39.157
2011-03-06T19:40:52.517
2011-03-06T19:40:52.517
2775
2775
null
7931
2
null
7815
6
null
I would also add that large-scale data introduces the problem of potential "bad data": not only missing data, but also data errors and inconsistent definitions introduced by every piece of a system that ever touched the data. So, in addition to statistical skills, you need to become an expert data cleaner, unless someone else is doing it for you. -Ralph Winters
null
CC BY-SA 2.5
null
2011-03-06T20:31:50.597
2011-03-06T20:31:50.597
null
null
3489
null
7932
2
null
7815
12
null
Good programming skills are a must. You need to be able to write efficient code that can deal with huge amounts of data without choking, and maybe be able to parallelize said code to get it to run in a reasonable amount of time.
null
CC BY-SA 2.5
null
2011-03-06T21:38:24.733
2011-03-06T21:38:24.733
null
null
1347
null
7933
2
null
7815
127
null
Good answers have already appeared. I will therefore just share some thoughts based on personal experience: adapt the relevant ones to your own situation as needed. For background and context--so you can account for any personal biases that might creep in to this message--much of my work has been in helping people make important decisions based on relatively small datasets. They are small because the data can be expensive to collect (10K dollars for the first sample of a groundwater monitoring well, for instance, or several thousand dollars for analyses of unusual chemicals). I'm used to getting as much as possible out of any data that are available, to exploring them to death, and to inventing new methods to analyze them if necessary. However, in the last few years I have been engaged to work on some fairly large databases, such as one of socioeconomic and engineering data covering the entire US at the Census block level (8.5 million records, 300 fields) and various large GIS databases (which nowadays can run from gigabytes to hundreds of gigabytes in size). With very large datasets one's entire approach and mindset change. There are now too much data to analyze. Some of the immediate (and, in retrospect) obvious implications (with emphasis on regression modeling) include - Any analysis you think about doing can take a lot of time and computation. You will need to develop methods of subsampling and working on partial datasets so you can plan your workflow when computing with the entire dataset. (Subsampling can be complicated, because you need a representative subset of the data that is as rich as the entire dataset. And don't forget about cross-validating your models with the held-out data.) Because of this, you will spend more time documenting what you do and scripting everything (so that it can be repeated). As @dsimcha has just noted, good programming skills are useful. Actually, you don't need much in the way of experience with programming environments, but you need a willingness to program, the ability to recognize when programming will help (at just about every step, really) and a good understanding of basic elements of computer science, such as design of appropriate data structures and how to analyze computational complexity of algorithms. That's useful for knowing in advance whether code you plan to write will scale up to the full dataset. Some datasets are large because they have many variables (thousands or tens of thousands, all of them different). Expect to spend a great deal of time just summarizing and understanding the data. A codebook or data dictionary, and other forms of metadata, become essential. - Much of your time is spent simply moving data around and reformatting them. You need skills with processing large databases and skills with summarizing and graphing large amounts of data. (Tufte's Small Multiple comes to the fore here.) - Some of your favorite software tools will fail. Forget spreadsheets, for instance. A lot of open source and academic software will just not be up to handling large datasets: the processing will take forever or the software will crash. Expect this and make sure you have multiple ways to accomplish your key tasks. - Almost any statistical test you run will be so powerful that it's almost sure to identify a "significant" effect. You have to focus much more on statistical importance, such as effect size, rather than significance. 
- Similarly, model selection is troublesome because almost any variable and any interaction you might contemplate is going to look significant. You have to focus more on the meaningfulness of the variables you choose to analyze. - There will be more than enough information to identify appropriate nonlinear transformations of the variables. Know how to do this. - You will have enough data to detect nonlinear relationships, changes in trends, nonstationarity, heteroscedasticity, etc. - You will never be finished. There are so much data you could study them forever. It's important, therefore, to establish your analytical objectives at the outset and constantly keep them in mind. I'll end with a short anecdote which illustrates one unexpected difference between regression modeling with a large dataset compared to a smaller one. At the end of that project with the Census data, a regression model I had developed needed to be implemented in the client's computing system, which meant writing SQL code in a relational database. This is a routine step but the code generated by the database programmers involved thousands of lines of SQL. This made it almost impossible to guarantee it was bug free--although we could detect the bugs (it gave different results on test data), finding them was another matter. (All you need is one typographical error in a coefficient...) Part of the solution was to write a program that generated the SQL commands directly from the model estimates. This assured that what came out of the statistics package was exactly what went into the RDBMS. As a bonus, a few hours spent on writing this script replaced possibly several weeks of SQL coding and testing. This is a small part of what it means for the statistician to be able to communicate their results.
null
CC BY-SA 2.5
null
2011-03-06T22:16:42.373
2011-03-07T00:07:36.283
2011-03-07T00:07:36.283
919
919
null
7935
1
7938
null
87
75939
From what I know, using lasso for variable selection handles the problem of correlated inputs. Also, since it is equivalent to Least Angle Regression, it is not slow computationally. However, many people (for example people I know doing bio-statistics) still seem to favour stepwise or stagewise variable selection. Are there any practical disadvantages of using the lasso that makes it unfavourable?
What are disadvantages of using the lasso for variable selection for regression?
CC BY-SA 2.5
null
2011-03-06T23:21:24.703
2021-02-08T18:35:51.057
2012-12-14T18:21:00.483
4856
2973
[ "regression", "feature-selection", "lasso" ]
7936
2
null
7935
6
null
One practical disadvantage of lasso and other regularization techniques is finding the optimal regularization coefficient, lambda. Using cross validation to find this value can be just as expensive as stepwise selection techniques.
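For concreteness, a minimal sketch of that cross-validation step with the glmnet package (toy data, not from the thread):

```r
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)      # 100 observations, 20 predictors
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
cvfit <- cv.glmnet(x, y, alpha = 1)        # alpha = 1 is the lasso penalty
cvfit$lambda.min                           # lambda minimizing CV error
coef(cvfit, s = "lambda.1se")              # sparser fit within 1 SE of the minimum
```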
null
CC BY-SA 2.5
null
2011-03-07T00:23:32.770
2011-03-07T00:23:32.770
null
null
2965
null
7937
2
null
7935
0
null
One big one is the difficulty of doing hypothesis testing. You can't easily figure out which variables are statistically significant with Lasso. With stepwise regression, you can do hypothesis testing to some degree, if you're careful about your treatment of multiple testing.
null
CC BY-SA 2.5
null
2011-03-07T00:38:38.003
2011-03-07T00:38:38.003
null
null
1347
null
7938
2
null
7935
39
null
There is NO reason to do stepwise selection. It's just wrong. LASSO/LAR are the best automatic methods. But they are automatic methods. They let the analyst not think. In many analyses, some variables should be in the model REGARDLESS of any measure of significance. Sometimes they are necessary control variables. Other times, finding a small effect can be substantively important.
null
CC BY-SA 2.5
null
2011-03-07T00:58:47.170
2011-03-07T00:58:47.170
null
null
686
null
7939
1
8310
null
6
2059
I am working on a problem from "The Elements of Statistical Learning" (prob. 6.8): > Suppose that for continuous response $Y$ and predictor $X$, we model the joint density of $X, Y$ using a multivariate Gaussian kernel estimator. Note that the kernel in this case would be the product kernel $\phi_{\lambda}(X) \phi_{\lambda}(Y)$. (a) Show that the conditional mean $E(Y|X)$ derived from this estimate is a Nadaraya-Watson estimator. (b) Extend this result to classification by providing a suitable kernel for the estimation of the joint distribution of a continuous $X$ and discrete $Y$. I know that the Nadaraya-Watson estimator is just the weighted average (equations 2.41 and 6.2 in ESL): > $$\hat f (x_0) = \frac{\sum_{i=1}^N K_{\lambda}(x_0, x_i) y_i}{\sum_{i=1}^N K_{\lambda}(x_0, x_i)}$$ Where $K$ in this case would be the multivariate Gaussian kernel function. I can think about how to extend this to a classification problem, but am not sure how to approach the first part of this question. Any pointers would be greatly appreciated!
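For part (a), one possible route (only a sketch under the product-kernel setup above, not an official solution) is to plug the kernel estimate of the joint density into the definition of the conditional mean:

$$\hat f(x, y) = \frac{1}{N}\sum_{i=1}^N \phi_{\lambda}(x - x_i)\,\phi_{\lambda}(y - y_i), \qquad \hat E(Y \mid X = x_0) = \frac{\int y\, \hat f(x_0, y)\, dy}{\int \hat f(x_0, y)\, dy} = \frac{\sum_{i=1}^N \phi_{\lambda}(x_0 - x_i)\, y_i}{\sum_{i=1}^N \phi_{\lambda}(x_0 - x_i)},$$

using $\int y\,\phi_{\lambda}(y - y_i)\,dy = y_i$ and $\int \phi_{\lambda}(y - y_i)\,dy = 1$; this is the Nadaraya-Watson form with $K_{\lambda}(x_0, x_i) = \phi_{\lambda}(x_0 - x_i)$.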
Gaussian kernel estimator as Nadaraya-Watson estimator?
CC BY-SA 2.5
null
2011-03-07T01:06:05.880
2015-04-23T05:57:31.650
2015-04-23T05:57:31.650
9964
988
[ "self-study", "classification", "kernel-smoothing" ]
7940
2
null
6225
3
null
Yes, it is possible to prove the null--in exactly the same sense that it is possible to prove any alternative to the null. In a Bayesian analysis, it is perfectly possible for the odds in favor of the null versus any of the proposed alternatives to it to become arbitrarily large. Moreover, it is false to assert, as some of the above answers assert, that one can only prove the null if the alternatives to it are disjoint (do not overlap with the null). In a Bayesian analysis every hypothesis has a prior probability distribution. This distribution spreads a unit mass of prior probability out over the proposed alternatives. The null hypothesis puts all of the prior probability on a single alternative. In principle, alternatives to the null may put all of the prior probability on some non-null alternative (on another "point"), but this is rare. In general, alternatives hedge, that is, they spread the same mass of prior probability out over other alternatives--either to the exclusion of the null alternative, or, more commonly, including the null alternative. The question then becomes which hypothesis puts the most prior probability where the experimental data actually fall. If the data fall tightly around where the null says they should fall, then it will be the odds-on favorite (among the proposed hypotheses) EVEN THOUGH IT IS INCLUDED IN (NESTED IN, NOT MUTUALLY EXCLUSIVE WITH) THE ALTERNATIVES TO IT. The belief that it is not possible for a nested alternative to be more likely than the set in which it is nested reflects a failure to distinguish between probability and likelihood. While it is impossible for a component of a set to be less probable than the entire set, it is perfectly possible for the posterior likelihood of a component of a set of hypotheses to be greater than the posterior likelihood of the set as a whole. The posterior likelihood of an hypothesis is the product of the likelihood function and the prior probability distribution that the hypothesis posits. If an hypothesis puts all of the prior probability in the right place (e.g., on the null), then it will have a higher posterior likelihood than an hypothesis that puts some of the prior probability in the wrong place (not on the null).
null
CC BY-SA 2.5
null
2011-03-07T01:52:33.090
2011-03-07T01:52:33.090
null
null
null
null
7941
1
7942
null
18
35826
I am not in statistics field. I have seen the word "tied data" while reading about Rank Correlation Coefficients. - What is tied data? - What is an example of tied data?
What is tied data in the context of a rank correlation coefficient?
CC BY-SA 2.5
null
2011-03-07T02:49:18.230
2019-07-17T22:46:26.403
2019-07-17T22:46:26.403
3277
3584
[ "correlation", "nonparametric", "ranks" ]
7942
2
null
7941
7
null
It means data that have the same value; for instance if you have 1,2,3,3,4 as the dataset then the two 3's are tied data. If you have 1,2,3,4,5,5,5,6,7,7 as the dataset then the 5's and the 7's are tied data.
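To see how software treats ties in that first example, base R's `rank()` exposes the usual tie-breaking protocols through its `ties.method` argument:

```r
x <- c(1, 2, 3, 3, 4)
rank(x, ties.method = "average")  # 1.0 2.0 3.5 3.5 5.0 -- the tied 3's share rank 3.5
rank(x, ties.method = "first")    # 1 2 3 4 5 -- ties broken by position
rank(x, ties.method = "random")   # ties broken at random
```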
null
CC BY-SA 2.5
null
2011-03-07T02:57:35.370
2011-03-07T02:57:35.370
null
null
3585
null
7943
2
null
7941
5
null
It's simply two identical data values, such as observing 7 twice in the same data set. This comes up in the context of statistical methods that assume the data have a continuous distribution, so that identical measurements are impossible (or, technically, have probability zero). Practical complications arise when these methods are applied to data that are rounded or clipped, so that identical measurements are not only possible but fairly common.
null
CC BY-SA 2.5
null
2011-03-07T02:58:09.177
2011-03-07T02:58:09.177
null
null
319
null
7944
2
null
7941
16
null
"Tied data" comes up in the context of rank-based non-parametric statistical tests. Non-parametric tests: testing that does not assume a particular probability distribution, eg it does not assume a bell-shaped curve. rank-based: a large class of non-parametric tests start by converting the numbers (eg "3 days", "5 days", and "4 days") into ranks (eg "shortest duration (3rd)", "longest duration (1st)", "second longest duration (2nd)"). A traditional parametric testing method is then applied to these ranks. Tied data is an issue since numbers that are identical now need to be converted into rank. Sometimes ranks are randomly assigned, sometimes an average rank is used. Most importantly, a protocol for breaking tied ranks needs to be described for reproducibility of the result.
null
CC BY-SA 2.5
null
2011-03-07T03:02:06.560
2011-03-07T03:49:19.673
2011-03-07T03:49:19.673
919
812
null
7945
2
null
7727
13
null
The question concerns calculating the correlation between two irregularly sampled time series (one-dimensional stochastic processes) and using that to find the time offset where they are maximally correlated (their "phase difference"). This problem is not usually addressed in time series analysis, because time series data are presumed to be collected systematically (at regular intervals of time). It is rather the province of [geostatistics](http://en.wikipedia.org/wiki/Geostatistics), which concerns the multidimensional generalizations of time series. The archetypal geostatistical dataset consists of measurements of geological samples at irregularly spaced locations. With irregular spacing, the distances among pairs of locations vary: no two distances may be the same. Geostatistics overcomes this with the [empirical variogram](http://en.wikipedia.org/wiki/Variogram#Empirical_variogram). This computes a "typical" (often the mean or median) value of $(z(p) - z(q))^2 / 2$--the "semivariance"--where $z(p)$ denotes a measured value at point $p$ and the distance between $p$ and $q$ is constrained to lie within an interval called a "lag". If we assume the process $Z$ is stationary and has a covariance, then the expectation of the semivariance equals the maximum covariance (equal to $Var(Z(p))$ for any $p$) minus the covariance between $Z(p)$ and $Z(q)$. This binning into lags copes with the irregular spacing problem. When an ordered pair of measurements $(z(p), w(p))$ is made at each point, one can similarly compute the empirical cross-variogram between the $z$'s and the $w$'s and thereby estimate the covariance at any lag. You want the one-dimensional version of the cross-variogram. The R packages [gstat](http://www.gstat.org/) and [sgeostat](http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/sgeostat/html/est.variogram.html), among others, will estimate cross-variograms. Don't worry that your data are one-dimensional; if the software won't work with them directly, just introduce a constant second coordinate: that will make them appear two-dimensional. With two million points you should be able to detect small deviations from stationarity. It's possible the phase difference between the two time series could vary over time, too. Cope with this by computing the cross-variogram separately for different windows spaced throughout the time period. @cardinal has already brought up most of these points in comments. The main contribution of this reply is to point towards the use of spatial statistics packages to do your work for you and to use techniques of geostatistics to analyze these data. As far as computational efficiency goes, note that the full convolution (cross-variogram) is not needed: you only need its values near the phase difference. This makes the effort $O(nk)$, not $O(n^2)$, where $k$ is the number of lags to compute, so it might be feasible even with out-of-the-box software. If not, the direct convolution algorithm is easy to implement.
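As a rough sketch of that workflow in R (simulated stand-in data; the constant second coordinate is the trick described above):

```r
library(sp)
library(gstat)
set.seed(1)
tm <- sort(runif(500, 0, 100))                    # irregular sampling times
d <- data.frame(t = tm, dummy = 0,
                z = sin(tm / 5) + rnorm(500, sd = 0.2),
                w = sin((tm - 2) / 5) + rnorm(500, sd = 0.2))  # w lags z by ~2
coordinates(d) <- ~ t + dummy                     # constant 2nd coordinate -> "spatial" data
g <- gstat(NULL, id = "z", formula = z ~ 1, data = d)
g <- gstat(g,    id = "w", formula = w ~ 1, data = d)
v <- variogram(g, cutoff = 10, width = 0.5)       # direct and cross (semi)variograms
plot(v)                                           # the "z.w" panel is the cross-variogram
```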
null
CC BY-SA 2.5
null
2011-03-07T04:29:43.617
2011-03-07T04:29:43.617
null
null
919
null
7946
1
8208
null
4
3774
I'm not in the statistics field. I conducted a case study and collected the data shown in the two tables below: ![enter image description here](https://i.stack.imgur.com/mH1tC.jpg) ![enter image description here](https://i.stack.imgur.com/ExRl1.jpg) I would like to find the correlation coefficients for these data (between NOA and HVOC, and between NOA and HVOL). I conducted the case study on source code: I measured the software metrics "NOA" and "HVOL" for every method/function before I modified the source code, and then, after modifying the code, I measured the same metrics again for every method. The NOA Diff field in the table is NOA (after modifying the code) minus NOA (before modifying the code), that is, "NOA Diff = NOA(after)-NOA(before)". The same applies to the HVOC metric: HVOC Diff = HVOC(after)-HVOC(before). My questions are - What type of correlation coefficient should I use? - What kind of graph should I create to illustrate my data? - The tables above contain all the data, i.e., the population rather than a sample; can I use a method that is meant for a sample? - Is Spearman's correlation for non-normally distributed data?
What correlation coefficient and graph is appropriate with this data?
CC BY-SA 2.5
null
2011-03-07T05:39:06.227
2011-11-07T14:35:15.457
2011-03-19T04:38:14.770
3584
3584
[ "data-visualization", "spearman-rho", "ranks" ]
7947
2
null
7923
1
null
As chl says, the general issue of what statistical test to use when the dependent variable is a scale based on likert items has been discussed [elsewhere on this site](https://stats.stackexchange.com/questions/203/group-differences-on-a-five-point-likert-item). For the pragmatic task of running such analyses in SPSS, typing "spss t-test" or "spss Mann-Whitney" into Google will give you many SPSS tutorial options. Check out for example, some of the following tutorial sites: - gsu - uofs
null
CC BY-SA 2.5
null
2011-03-07T06:58:33.697
2011-03-07T06:58:33.697
2017-04-13T12:44:20.840
-1
183
null
7948
1
7950
null
177
211155
I am running linear regression models and wondering what the conditions are for removing the intercept term. In comparing results from two different regressions where one has the intercept and the other does not, I notice that the $R^2$ of the function without the intercept is much higher. Are there certain conditions or assumptions I should be following to make sure the removal of the intercept term is valid?
When is it ok to remove the intercept in a linear regression model?
CC BY-SA 3.0
null
2011-03-07T09:14:00.487
2022-09-22T01:13:33.580
2022-09-22T01:13:33.580
11887
1422
[ "regression", "linear-model", "r-squared", "intercept", "faq" ]
7949
2
null
7912
3
null
### Intro

As @vqv mentioned, total variation and Kullback-Leibler are two interesting distances. The first one is meaningful because it can be directly related to type I and type II errors in hypothesis testing. The problem with the total variation distance is that it can be difficult to compute. The Kullback-Leibler distance is easier to compute and I will come to that later. It is not symmetric but can be made symmetric (somehow a little bit artificially).

### Answer

Something I mention [here](https://mathoverflow.net/questions/29054/l1-distance-between-gaussian-measures) is that if $\mathcal{L}$ is the log-likelihood ratio between your two Gaussian measures $P_0,P_1$ (say that for $i=0,1$, $P_i$ has mean $\mu_i$ and covariance $C_i$), then an error measure that is also interesting (in the Gaussian case I found it quite central actually) is $$ \|\mathcal{L}\|^2_{L_2(P_{1/2})} $$ for a well chosen $P_{1/2}$. In simple words:

- there might be different interesting "directions" (rotations), which are obtained using your formula with one of the "interpolated" covariance matrices $\Sigma=C_{i,1/2}$ ($i=1,2,3,4$ or $5$) defined at the end of this post (the number $5$ is the one you propose in your comment to your question).
- since your two distributions have different covariances, it is not sufficient to compare the means; you also need to compare the covariances.

Let me explain why this is my feeling, how you can compute this in the case $C_1\neq C_0$, and how to choose $P_{1/2}$.

### Linear case

If $C_1=C_0=\Sigma$, then $$\sigma= \Delta \Sigma^{-1} \Delta=\|2\mathcal{L}\|^2_{L_2(P_{1/2})}$$ where $P_{1/2}$ is the "interpolate" between $P_1$ and $P_0$ (Gaussian with covariance $\Sigma$ and mean $(\mu_1+\mu_0)/2$). Note that in this case the Hellinger distance and the total variation distance can both be written using $\sigma$.

### How to compute $\mathcal{L}$ in the general case

A natural question that arises from your question (and [mine](https://mathoverflow.net/questions/29054/l1-distance-between-gaussian-measures)) is what is a natural "interpolate" between $P_1$ and $P_0$ when $C_1\neq C_0$. Here the word natural may be user specific, but for example it may be related to the best interpolation to obtain a tight upper bound with another distance (e.g. the $L_1$ distance [here](https://mathoverflow.net/questions/29054/l1-distance-between-gaussian-measures)). Writing $$ \mathcal{L}= \phi (C^{-1/2}_i(x-\mu_i))-\phi (C^{-1/2}_j(x-\mu_j))-\frac{1}{2}\log \left ( C_iC_j^{-}\right ) $$ ($i=0,j=1$) may help to see where the interpolation task is, but: $$\mathcal{L}(x)=-\frac{1}{2}\langle A_{ij}(x-s_{ij}),x-s_{ij}\rangle_{\mathbb{R}^p}+\langle G_{ij},x-s_{ij}\rangle_{\mathbb{R}^p}-c_{ij}, \;[1]$$ with $$A_{ij}=C_i^{-}-C_j^{-},\;\; G_{ij}=S_{ij}m_{ij},\;\; S_{ij}=\frac{C_i^{-}+C_j^{-}}{2}, $$ $$ c_{ij}=\frac{1}{8}\langle A_{ij} m_{ij},m_{ij}\rangle_{\mathbb{R}^p}+\frac{1}{2}\log|\det(C_j^{-}C_i)| $$ and $$ m_{ij}=\mu_i-\mu_j \;\; and\;\; s_{ij}=\frac{\mu_i+\mu_j}{2}$$ is more relevant for computational purposes. For any Gaussian $P_{1/2}$ with mean $s_{01}$ and covariance $C$, the calculation of $\|\mathcal{L}\|^2_{L_2(P_{1/2})}$ from Equation $[1]$ is a bit technical but feasible. You might also use it to compute the Kullback-Leibler distance.

### What interpolation should we choose (i.e. how to choose $P_{1/2}$)

It is clear from Equation $[1]$ that there are many different candidates for $P_{1/2}$ (the interpolate) in the "quadratic" case. The candidates I found "most natural" (subjective :) ) arise from defining, for $t\in [0,1]$, a Gaussian distribution $P_t$ with mean $t\mu_1+(1-t)\mu_0$:

- $P^1_t$, the distribution of $$ \xi_t=t\xi_1+(1-t)\xi_0$$ (where $\xi_i$ is drawn from $P_i$, $i=0,1$), which has covariance $C_{t,1}=(tC_1^{1/2}+(1-t)C_0^{1/2})^2$.
- $P^2_t$ with inverse covariance $C_{t,2}^{-1}=tC_{1}^{-1}+(1-t)C_0^{-1}$
- $P^3_t$ with covariance $C_{t,3}=tC_1+(1-t)C_0$
- $P^4_t$ with inverse covariance $C_{t,4}^{-1}=(tC^{-1/2}_1+(1-t)C^{-1/2}_0)^{2}$

EDIT: The one you propose in a comment to your question could be $C_{t,5}=C_1^{t}C_0^{1-t}$, why not ... I have my favorite choice, which is not the first one :) but I don't have much time to discuss that here. Maybe I'll edit this answer later...
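As a purely numerical complement (my addition, not part of the argument above): the Kullback-Leibler divergence mentioned at the end has a well-known closed form for two Gaussians, which is easy to code and to symmetrize, for instance in R:

```
# KL(P0 || P1) for p-dimensional Gaussians N(mu0, C0) and N(mu1, C1)
kl_gauss <- function(mu0, C0, mu1, C1) {
  p <- length(mu0)
  C1inv <- solve(C1)
  dm <- mu1 - mu0
  as.numeric(0.5 * (sum(diag(C1inv %*% C0)) +     # tr(C1^{-1} C0)
                    t(dm) %*% C1inv %*% dm - p +  # Mahalanobis term
                    determinant(C1)$modulus -     # log det C1
                    determinant(C0)$modulus))     # minus log det C0
}

# symmetrized version (average of the two directions)
kl_sym <- function(mu0, C0, mu1, C1)
  0.5 * (kl_gauss(mu0, C0, mu1, C1) + kl_gauss(mu1, C1, mu0, C0))
```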
null
CC BY-SA 2.5
null
2011-03-07T09:19:14.240
2011-03-11T12:33:23.773
2017-04-13T12:58:32.177
-1
223
null
7950
2
null
7948
120
null
The shortest answer: never, unless you are sure that your linear approximation of the data generating process (the linear regression model) is forced, for theoretical or other reasons, to go through the origin. If not, the other regression parameters will be biased even if the intercept is statistically insignificant (strange but it is so; consult [Brooks](http://www.cambridge.org/features/economics/brooks/) Introductory Econometrics for instance). Finally, as I often explain to my students, by keeping the intercept term you ensure that the residual term is zero-mean. For your two-model case we need more context. It may happen that a linear model is not suitable here. For example, you need to log-transform first if the model is multiplicative. With exponentially growing processes it may occasionally happen that $R^2$ for the model without the intercept is "much" higher. Screen the data, and test the model with the RESET test or any other linear specification test; this may help to see whether my guess is true. And when building models, the highest $R^2$ is one of the last statistical properties I really care about, although it is nice to present to people who are not so familiar with econometrics (there are many dirty tricks to make the coefficient of determination close to 1 :)).
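A small simulated illustration of the point about $R^2$ (my sketch, not from the book): the no-intercept model measures variation around zero rather than around $\bar y$, so its $R^2$ is inflated while the slope is biased.

```
set.seed(1)
x <- runif(100, 50, 100)
y <- 30 + 0.2 * x + rnorm(100, sd = 5)   # true intercept far from 0

summary(lm(y ~ x))$r.squared       # modest R^2 with the intercept
summary(lm(y ~ x - 1))$r.squared   # much larger, but not comparable
coef(lm(y ~ x))                    # slope close to the true 0.2
coef(lm(y ~ x - 1))                # slope biased away from 0.2
```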
null
CC BY-SA 2.5
null
2011-03-07T10:16:46.850
2011-03-07T10:16:46.850
null
null
2645
null
7951
1
null
null
2
476
I would like to find the correlation between two variables. It was suggested to me that the two variables should be independent; otherwise it is not statistically meaningful to calculate a correlation. For example, one variable is $x$ and the other is $y$, which should not be calculated from $x$, e.g., $y=ab/c+x$, where $a$, $b$, and $c$ are constants that are the same for all $y$.

### Questions

- Is what I understand correct?
- My data set is $y = P+Q+x$. $P$ and $Q$ depend on each $x$, and the same $x$ value can have different values of $P$ and $Q$. Is it meaningful to find a correlation given this data?
Do two variables need to be independent in order to obtain a correlation?
CC BY-SA 3.0
null
2011-03-07T10:22:05.520
2015-03-21T16:32:58.547
2011-09-29T12:05:13.090
183
3584
[ "correlation", "dataset", "independence", "non-independent" ]
7952
1
7953
null
17
9338
I am using Sweave and xtable to generate a report. I would like to add some coloring on a table. But I have not managed to find any way to generate colored tables with xtable. Is there any other option?
How to create coloured tables with Sweave and xtable?
CC BY-SA 2.5
null
2011-03-07T11:15:38.937
2017-05-18T21:16:54.990
2017-05-18T21:16:54.990
28666
1709
[ "r", "reproducible-research" ]
7953
2
null
7952
20
null
Although I didn't try this explicitly from within R (I usually post-process the tables in LaTeX directly with `\rowcolor`, `\rowcolors`, or the [colortbl](http://ctan.org/pkg/colortbl) package), I think it would be easy to do this by playing with the `add.to.row` argument in `print.xtable()`. It basically expects two components (passed as a `list`): (1) row numbers, and (2) $\LaTeX$ commands. Please note that commands are added at the end of the specified row(s). It seems to work with the `colortbl` package. So, something like this

```
<<results=tex>>=
library(xtable)
m <- matrix(sample(1:10,10), nr=2)
print(xtable(m), add.to.row=list(list(1),"\\rowcolor[gray]{.8} "))
@
```

gives me

![enter image description here](https://i.stack.imgur.com/BkcUT.png)

(This is a customized Beamer template, but this should work with a standard document. With Beamer, you'll probably want to add the `table` option when loading the package.)

Update: Following @Conjugate's suggestion, you can also rely on [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) facilities for handling $\TeX$ output; see the many options of the `latex()` function. Here is an example of use:

```
library(Hmisc)
## print the second row in bold (including row label)
form.mat <- matrix(c(rep("", 5), rep("bfseries", 5)), nr=2, byrow=TRUE)
w1 <- latex(m, rownamesTexCmd=c("","bfseries"), cellTexCmds=form.mat,
            numeric.dollar=FALSE, file='/tmp/out1.tex')
w1  # call latex on /tmp/out1.tex
## highlight the second row in gray (as above)
w2 <- latex(m, rownamesTexCmd=c("","rowcolor[gray]{.8}"),
            numeric.dollar=FALSE, file='/tmp/out2.tex')
w2
```
null
CC BY-SA 2.5
null
2011-03-07T11:59:48.103
2011-03-08T10:30:23.020
2011-03-08T10:30:23.020
930
930
null
7954
2
null
7607
3
null
I think this is a good question, and I don't know much about implementations. Since wavelets are 'multi-resolution', you have two types of solutions (which are somehow connected):

- Modify your signal, for example by extending it beyond the actual boundary so as to obtain meaningful coefficients. Examples of that are: periodic wavelets on the interval; zero padding (extend the signal by zero outside its domain); and finer procedures that are extensions of zero padding with a smoothness condition at the boundary (a toy sketch of such extensions is given after the references).
- Modify the wavelet (somehow equivalent to thresholding or lowering the wavelet coefficients that are near the boundary).

More generally, there are such procedures; I know there has been much work since that of A. Cohen, I. Daubechies and P. Vial (1993). For example, in (Monasse and Perrier, 1995), wavelets that form a basis adapted to boundary conditions such as Dirichlet or Neumann are constructed. I guess some are implemented? If you find implementations, I am interested.

References:

Monasse, P. and Perrier, V. (1995). Ondelettes sur l'intervalle pour la prise en compte de conditions aux limites. CRAS.

Cohen, A., Daubechies, I., and Vial, P. (1993). Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis.
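As a toy illustration of the first option (extending the signal before transforming), here is a small R sketch (my addition; the function name and padding choices are arbitrary):

```
extend_signal <- function(x, pad, type = c("zero", "periodic", "symmetric")) {
  type <- match.arg(type)
  n <- length(x)
  left <- switch(type,
                 zero      = rep(0, pad),
                 periodic  = x[(n - pad + 1):n],   # wrap around from the end
                 symmetric = rev(x[1:pad]))        # mirror the start
  right <- switch(type,
                  zero      = rep(0, pad),
                  periodic  = x[1:pad],            # wrap around from the start
                  symmetric = rev(x[(n - pad + 1):n]))
  c(left, x, right)
}

x <- sin(seq(0, 3, length.out = 32))
length(extend_signal(x, 4, "periodic"))   # 40: the signal plus 2 * 4 padded values
```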
null
CC BY-SA 2.5
null
2011-03-07T13:17:28.140
2011-03-07T13:17:28.140
null
null
223
null
7955
1
null
null
2
277
Would you please give an intuitive illustration of Newton's method when we deal with nonlinear regression? Basically, I understand that if we use Taylor's theorem to expand the RSS as a function of the parameter $\beta$, we can approximate it by a quadratic form and minimize the RSS with respect to the parameter. Please give me a multivariate example using the gradient and the Hessian matrix. Sorry, I intended to include equations and functions here, but I have no idea how to use LaTeX code here.
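To make the question concrete, here is the kind of sketch I have in mind (my own toy example, using the numDeriv package for the derivatives; the model $y = \beta_1 e^{\beta_2 x}$ and all names are made up). Plain Newton is not globally convergent, so the starting values are taken reasonably close to the truth:

```
library(numDeriv)

set.seed(42)
x <- runif(50, 0, 2)
y <- 2 * exp(0.7 * x) + rnorm(50, sd = 0.1)

rss <- function(beta) sum((y - beta[1] * exp(beta[2] * x))^2)

beta <- c(1.8, 0.6)                    # starting values near the truth
for (it in 1:25) {
  g <- grad(rss, beta)                 # gradient of the RSS
  H <- hessian(rss, beta)              # Hessian of the RSS
  step <- solve(H, g)                  # Newton direction
  beta <- beta - step
  if (sqrt(sum(step^2)) < 1e-8) break  # stop once the update is negligible
}
beta                                   # should end up near c(2, 0.7)
```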
The usage of Newton's method in nonlinear regression
CC BY-SA 2.5
null
2011-03-07T14:10:05.547
2018-08-15T08:08:17.300
2018-08-15T08:08:17.300
11887
3525
[ "optimization", "econometrics", "nonlinear-regression" ]
7956
1
7957
null
7
16740
Starting out with arima models in R, I do not understand why fitted.values (of an AR(2) process for example) are not part of the output like they are in regressions. Did I miss them when running `str(result)` or did I get something completely wrong?
Why are fitted.values not part the R object returned from arima?
CC BY-SA 2.5
null
2011-03-07T14:24:12.427
2011-03-08T01:26:36.423
null
null
704
[ "r", "time-series", "arima" ]
7957
2
null
7956
13
null
Use the `fitted()` function from the `forecast` package. Since the arima object saves the residuals, it is easy to compute the fitted values from them.
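A minimal sketch of both routes (names illustrative), assuming `fit` comes from `arima()`:

```
library(forecast)

x   <- arima.sim(model = list(ar = c(0.5, -0.3)), n = 200)
fit <- arima(x, order = c(2, 0, 0))

f1 <- fitted(fit)          # fitted values via the forecast package
f2 <- x - residuals(fit)   # the same thing computed by hand
all.equal(f1, f2)
```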
null
CC BY-SA 2.5
null
2011-03-07T15:10:22.120
2011-03-08T01:26:36.423
2011-03-08T01:26:36.423
159
2645
null
7959
1
null
null
34
9045
I want to estimate a quantile of some data. The data are so huge that they cannot be accommodated in memory. And the data are not static: new data keep coming. Does anyone know an algorithm to monitor the quantiles of the data observed so far with very limited memory and computation? I find the [P2 algorithm](http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=742445C867F4FC0A1658981316B79CC3?doi=10.1.1.74.2672&rep=rep1&type=pdf) useful, but it does not work very well for my data, which have an extremely heavy-tailed distribution.
Algorithm to dynamically monitor quantiles
CC BY-SA 2.5
null
2011-03-07T15:53:27.493
2015-07-14T18:44:14.450
2015-07-14T18:44:14.450
919
3594
[ "algorithms", "quantiles" ]
7960
1
7967
null
25
598
The New York Times has a long comment on the 'value-added' teacher evaluation system being used to give feedback to New York City educators. The lede is the equation used to calculate the scores - presented without context. The rhetorical strategy appears to be intimidation via math: ![alt text](https://graphics8.nytimes.com/images/2011/03/07/education/07winerip_graphic/07winerip_graphic-articleLarge-v2.jpg) The full text of the article is available at: [http://www.nytimes.com/2011/03/07/education/07winerip.html](http://www.nytimes.com/2011/03/07/education/07winerip.html) The author, Michael Winerip, argues that the meaning of the equation is beyond the capacity of anyone other than, um, Matt Damon to understand, much less an average teacher: > "The calculation for Ms. Isaacson’s 3.69 predicted score is even more daunting. It is based on 32 variables — including whether a student was “retained in grade before pretest year” and whether a student is “new to city in pretest or post-test year.” Those 32 variables are plugged into a statistical model that looks like one of those equations that in “Good Will Hunting” only Matt Damon was capable of solving. The process appears transparent, but it is clear as mud, even for smart lay people like teachers, principals and — I hesitate to say this — journalists. Ms. Isaacson may have two Ivy League degrees, but she is lost. “I find this impossible to understand,” she said. In plain English, Ms. Isaacson’s best guess about what the department is trying to tell her is: Even though 65 of her 66 students scored proficient on the state test, more of her 3s should have been 4s. But that is only a guess." How would you explain the model to a layperson? FYI, the full technical report is at: [http://schools.nyc.gov/NR/rdonlyres/A62750A4-B5F5-43C7-B9A3-F2B55CDF8949/87046/TDINYCTechnicalReportFinal072010.pdf](http://schools.nyc.gov/NR/rdonlyres/A62750A4-B5F5-43C7-B9A3-F2B55CDF8949/87046/TDINYCTechnicalReportFinal072010.pdf) Update: Andrew Gelman offers his thoughts here: [http://www.stat.columbia.edu/~cook/movabletype/archives/2011/03/its_no_fun_bein.html](http://www.stat.columbia.edu/%7Ecook/movabletype/archives/2011/03/its_no_fun_bein.html)
Equations in the news: Translating a multi-level model to a general audience
CC BY-SA 2.5
null
2011-03-07T16:07:26.140
2016-09-15T00:27:59.043
2020-06-11T14:32:37.003
-1
3320
[ "regression", "multilevel-analysis", "statistics-in-media" ]
7961
2
null
6896
4
null
What are you trying to do with the nearest neighbor information? I would answer that question, and then compare the different distance measures in light of that. For example, say you are trying to classify poses based on the joint configuration, and would like joint vectors from the same pose to be close together. A straightforward way to evaluate the suitability of different distance metrics is to use each of them in a KNN classifier, and compare the out-of-sample accuracies of each of the resulting models.
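A rough sketch of that comparison (my own, with the built-in iris data standing in for the joint-configuration vectors): a tiny hand-rolled k-NN so that any precomputed distance matrix can be plugged in, then holdout accuracy under two different metrics.

```
set.seed(1)
X <- as.matrix(iris[, 1:4]); y <- iris$Species
test  <- sample(nrow(X), 50)
train <- setdiff(seq_len(nrow(X)), test)

knn_acc <- function(D, k = 5) {
  # D: distances, rows = test points, columns = training points
  pred <- apply(D, 1, function(d) {
    nn <- order(d)[1:k]                    # k nearest training points
    names(which.max(table(y[train][nn])))  # majority vote
  })
  mean(pred == y[test])
}

euclid <- as.matrix(dist(X))[test, train]
manhat <- as.matrix(dist(X, method = "manhattan"))[test, train]

knn_acc(euclid)   # out-of-sample accuracy with Euclidean distance
knn_acc(manhat)   # out-of-sample accuracy with Manhattan distance
```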
null
CC BY-SA 2.5
null
2011-03-07T17:08:51.720
2011-03-07T17:08:51.720
null
null
3595
null
7962
2
null
7959
2
null
This can be adapted from algorithms that determine the median of a dataset online. For more information, see this stackoverflow post - [https://stackoverflow.com/questions/1387497/find-median-value-from-a-growing-set](https://stackoverflow.com/questions/1387497/find-median-value-from-a-growing-set)
null
CC BY-SA 2.5
null
2011-03-07T17:16:58.733
2011-03-07T17:16:58.733
2017-05-23T12:39:26.593
-1
3595
null
7963
2
null
7959
19
null
The P2 algorithm is a nice find. It works by making several estimates of the quantile, updating them periodically, and using quadratic (not linear, not cubic) interpolation to estimate the quantile. The authors claim quadratic interpolation works better in the tails than linear interpolation and cubic would get too fussy and difficult. You do not state exactly how this approach fails for your "heavy-tailed" data, but it's easy to guess: estimates of extreme quantiles for heavy-tailed distributions will be unstable until a large amount of data are collected. But this is going to be a problem (to a lesser extent) even if you were to store all the data, so don't expect miracles! At any rate, why not set auxiliary markers--let's call them $x_0$ and $x_6$--within which you are highly certain the quantile will lie, and store all data that lie between $x_0$ and $x_6$? When your buffer fills you will have to update these markers, always keeping $x_0 \le x_6$. A simple algorithm to do this can be devised from a combination of (a) the current P2 estimate of the quantile and (b) stored counts of the number of data less than $x_0$ and the number of data greater than $x_6$. In this fashion you can, with high certainty, estimate the quantile just as well as if you had the entire dataset always available, but you only need a relatively small buffer. Specifically, I am proposing a data structure $(k, \mathbf{y}, n)$ to maintain partial information about a sequence of $n$ data values $x_1, x_2, \ldots, x_n$. Here, $\mathbf{y}$ is a linked list $$\mathbf{y} = (x^{(n)}_{[k+1]} \le x^{(n)}_{[k+2]} \le \cdots \le x^{(n)}_{[k+m]}).$$ In this notation $x^{(n)}_{[i]}$ denotes the $i^\text{th}$ smallest of the $n$ $x$ values read so far. $m$ is a constant, the size of the buffer $\mathbf{y}$. The algorithm begins by filling $\mathbf{y}$ with the first $m$ data values encountered and placing them in sorted order, smallest to largest. Let $q$ be the quantile to be estimated; e.g., $q$ = 0.99. Upon reading $x_{n+1}$ there are three possible actions: - If $x_{n+1} \lt x^{(n)}_{[k+1]}$, increment $k$. - If $x_{n+1} \gt x^{(n)}_{[k+m]}$, do nothing. - Otherwise, insert $x_{n+1}$ into $\mathbf{y}$. In any event, increment $n$. The insert procedure puts $x_{n+1}$ into $\mathbf{y}$ in sorted order and then eliminates one of the extreme values in $\mathbf{y}$: - If $k + m/2 \lt n q$, then remove $x^{(n)}_{[k+1]}$ from $\mathbf{y}$ and increment $k$; - Otherwise, remove $x^{(n)}_{[k+m]}$ from $\mathbf{y}$. Provided $m$ is sufficiently large, this procedure will bracket the true quantile of the distribution with high probability. At any stage $n$ it can be estimated in the usual way in terms of $x^{(n)}_{[\lfloor{q n}\rfloor]}$ and $x^{(n)}_{[\lceil{q n}\rceil]}$, which will likely lie in $\mathbf{y}$. (I believe $m$ only has to scale like the square root of the maximum amount of data ($N$), but I have not carried out a rigorous analysis to prove that.) At any rate, the algorithm will detect whether it has succeeded (by comparing $k/n$ and $(k+m)/n$ to $q$). Testing with up to 100,000 values, using $m = 2\sqrt{N}$ and $q=.5$ (the most difficult case) indicates this algorithm has a 99.5% success rate in obtaining the correct value of $x^{(n)}_{[\lfloor{q n}\rfloor]}$. For a stream of $N=10^{12}$ values, that would require a buffer of only two million (but three or four million would be a better choice). 
Using a sorted doubly linked list for the buffer requires $O(\log(\sqrt{N}))$ = $O(\log(N))$ effort while identifying and deleting the max or min are $O(1)$ operations. The relatively expensive insertion typically needs to be done only $O(\sqrt{N})$ times. Thus the computational costs of this algorithm are $O(N + \sqrt{N} \log(N)) = O(N)$ in time and $O(\sqrt{N})$ in storage.
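Here is a rough R transcription of the procedure (my sketch, not the author's code): a plain sorted vector stands in for the doubly linked list, so it does not achieve the stated costs, and the handling of the counter $n$ follows one possible reading of the update order.

```
streaming_quantile <- function(x, q, m) {
  y <- sort(x[1:m])   # buffer of m stored values
  k <- 0              # number of values known to lie below the buffer
  n <- m
  for (v in x[-(1:m)]) {
    n <- n + 1
    if (v < y[1]) {
      k <- k + 1                      # falls below the buffer
    } else if (v <= y[m]) {
      y <- sort(c(y, v))              # insert in sorted order
      if (k + m / 2 < n * q) {        # drop one extreme to keep length m
        y <- y[-1]; k <- k + 1
      } else {
        y <- y[-(m + 1)]
      }
    }                                 # v > y[m]: do nothing
  }
  idx <- floor(q * n) - k             # position of the target order statistic
  if (idx < 1 || idx > m) return(NA)  # bracketing failed; enlarge m
  y[idx]
}

set.seed(1)
x <- rcauchy(1e5)                                   # heavy-tailed stream
streaming_quantile(x, 0.99, m = 2 * ceiling(sqrt(length(x))))
quantile(x, 0.99, type = 1)                         # full-data comparison
```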
null
CC BY-SA 2.5
null
2011-03-07T17:29:15.047
2011-03-07T17:29:15.047
null
null
919
null
7964
2
null
7774
10
null
Although @cardinal has already given an answer that gives precisely the bound I was looking for, I have found a similar Chernoff-style argument that can give a stronger bound: Proposition: $$ Pr (T \leq n \log n - c n) \leq \exp(- \frac{3c^2}{\pi^2} ) \> . $$ (this is stronger for $c > \frac{\pi^2}{3}$ ) Proof: As in @cardinal's answer, we can use the fact that $T$ is a sum of independent geometric random variables $T_i$ with success probabilities $p_i = 1 - i/n$. It follows that $E[T_i] = 1/p_i$ and $E[T] = \sum_{i=1}^{n} E[T_i] = n \sum_{i=1}^n \frac{1}{i}\geq n \log n$. Define now new variables $S_i : = T_i - E[T_i]$, and $S : = \sum_i S_i$. We can then write $$ \Pr (T \leq n \log n - c n) \leq \Pr (T \leq E[T] - c n) = \Pr (S \leq - c n) $$ $$ = \Pr\left(\exp(-s S ) \geq \exp( s cn) \right) \leq e^{-s c n} E\left[ e^{-s S} \right] $$ Computing the averages, we have $$ E[e^{-s S}] = \prod_i E[e^{-s S_i}] = \prod_i \frac{e^{s / p_i} } {1 + \frac{1}{p_i} (e^s -1)} \leq e^{\frac{1}{2}s^2\sum_i p_i^{-2}} $$ where the inequality follows from the facts that $e^s - 1\geq s$ and also $\frac{e^z}{1+z}\leq e^{\frac{1}{2}z^2}$ for $z\geq 0$. Thus, since $\sum_i p_i ^{-2} = n^2 \sum_{i=1}^{n-1} \frac{1}{i^2} \leq n^2 \pi^2/6$, we can write \begin{align*} \Pr( T \leq n \log n - c n ) \leq e^{\frac{1}{12} (n \pi s)^2 - s c n}. \end{align*} Minimizing over $s>0$, we finally obtain $$ \Pr( T \leq n\log n -cn ) \leq e^{-\frac{3 c^2 }{\pi^2}} $$
null
CC BY-SA 2.5
null
2011-03-07T17:56:05.703
2011-03-08T01:58:42.140
2011-03-08T01:58:42.140
2970
3500
null
7965
2
null
7960
2
null
There is just nothing to understand here. Well, OK, it is just a standard linear regression model. It assumes that the score of a student can be described as a linear function of several factors, including school and teacher efficiency coefficients -- thus it shares all the standard problems of linear models, mainly the fact that it is an approximation of a nonlinear world and may work perfectly or embarrassingly badly depending on the situation and on how far one tries to extrapolate with it. (However, one should expect that the authors of the technical report checked it and found it to be OK ;-) ). But the real problem is that this is an analytical tool, and such tools shouldn't be used to assess people's achievements -- this way (totally regardless of whether the marks are fair or not), every evaluee trying to understand her/his mark (probably in the hope of optimizing it) will only meet hopeless confusion, as in this case.
null
CC BY-SA 2.5
null
2011-03-07T18:09:46.317
2011-03-07T19:35:16.397
2011-03-07T19:35:16.397
null
null
null
7966
2
null
7959
8
null
I think [whuber's suggestion](https://stats.stackexchange.com/questions/7959/algorithm-to-dynamically-monitor-quantiles/7963#7963) is great and I would try that first. However, if you find you really can't accomodate the $O(\sqrt N)$ storage or it doesn't work out for some other reason, here is an idea for a different generalization of P2. It's not as detailed as what whuber suggests - more like a research idea instead of as a solution. Instead of tracking the quantiles at $0$, $p/2$, $p$, $(1+p)/2$, and $1$, as the original P2 algorithm suggests, you could simply keep track of more quantiles (but still a constant number). It looks like the algorithm allows for that in a very straightforward manner; all you need to do is compute the correct "bucket" for incoming points, and the right way to update the quantiles (quadratically using adjacent numbers). Say you keep track of $25$ points. You could try tracking the quantile at $0$, $p/12$, $\dotsc$, $p \cdot 11/12$, $p$, $p + (1-p)/12$, $\dotsc$, $p + 11\cdot(1-p)/12$, $1$ (picking the points equidistantly in between $0$ and $p$, and between $p$ and $1$), or even using $22$ [Chebyshev nodes](http://en.wikipedia.org/wiki/Chebyshev_nodes) of the form $p/2 \cdot (1 + \cos \frac{(2 i - 1)\pi}{22})$ and $p + (1 - p)/2 \cdot (1 + \cos \frac{(2i-1)\pi}{22})$. If $p$ is close to $0$ or $1$, you could try putting fewer points on the side where there is less probability mass and more on the other side. If you decide to pursue this, I (and possibly others on this site) would be interested in knowing if it works...
null
CC BY-SA 2.5
null
2011-03-07T18:20:10.767
2011-03-07T18:20:10.767
2017-04-13T12:44:26.710
-1
2898
null
7967
2
null
7960
12
null
Here's one possibility. Assessing teacher performance has traditionally been difficult. One part of this difficulty is that different students have different levels of interest in a given subject. If a given student gets an A, this doesn't necessarily mean that teaching was excellent -- rather, it may mean that a very gifted and interested student did his best to succeed even despite poor teaching quality. Conversely, a student getting a D doesn't necessarily mean that the teaching was poor -- rather, it may mean that a disinterested student coasted despite the teacher's best efforts to educate and inspire. The difficulty is aggravated by the fact that student selection (and therefore the students' level of interest) is far from random. It is common for schools to emphasize one subject (or a group of subjects) over others. For example, a school may emphasize technical subjects over humanities. Students in such schools are probably so interested in technical areas that they will receive a passing grade even with the worst possible teacher. Thus the fraction of students passing math is not a good measure of teaching -- we expect good teachers to do much better than that with students who are so eager to learn. In contrast, those same students may not be interested at all in arts. It would be difficult to expect even from the best teacher to ensure all students get A's. Another difficulty is that not all success in a given class is attributable to that class's teacher directly. Rather, the success may be due to the school (or entire district) creating motivation and framework for achievement. To take into account all of these difficulties, researchers have created a model that evaluates teacher's 'added value'. In essence, the model takes into account the intrinsic characteristics of each student (overall level of interest and success in learning), as well as the school and district's contributions to student success, and predicts the student grades that would be expected with 'average' teaching in that environment. The model then compares the actual grades to the predicted ones and based on it decides whether teaching was adequate given all the other considerations, better than adequate, or worse. Although the model may seem complex to a non-mathematician, it is actually pretty simple and standard. Mathematicians have been using similar (and even more complex) models for decades. To summarize, Ms. Isaacson's guess is correct. Even though 65 of her 66 students scored proficient on the state test, they would have scored just the same even if a dog were their teacher. An actual good teacher would enable these students to achieve not merely 'proficient', but actually 'good' scores on the same test. --- At this point I could mention some of my concerns with the model. For example, the model developers claim it addresses some of the difficulties with evaluating teaching quality. Do I have enough reasons to believe them? Neighborhoods with lower-income population will have lower expected 'district' and 'school' scores. Say a neighborhood will have an expected score of 2.5. A teacher that will achieve an average of 3 will get a good evaluation. This may prompt teachers to aim for the score of 3, rather than for a score of, say, 4 or 5. In other words, teachers will aim for mediocrity rather than perfection. Do we want this to happen? Finally, even though the model is simple mathematically, it works in a way very different from how human intuition works. 
As a result, we have no obvious way to validate or dispute the model's decision. Ms. Isaacson's unfortunate example illustrates what this may lead to. Do we want to depend blindly on the computer in something so important? --- Note that this is an explanation to a layperson. I sidestepped several potentially controversial issues here. For example, I didn't want to say that school districts with low income demographics are expected to perform poorer, because this wouldn't sound good to a layperson. Also, I have assumed that the goal is actually to give a reasonably fair description of the model. But I'm pretty sure that this wasn't NYT's goal here. So at least part of the reason their explanation is poor is intentional FUD, in my opinion.
null
CC BY-SA 2.5
null
2011-03-07T19:26:04.770
2011-03-07T23:43:58.997
2011-03-07T23:43:58.997
3369
3369
null
7968
2
null
7960
12
null
"Your teaching score depends on how well your students did compared to a prediction made based on - What they knew beforehand, as measured by a pretest, - How well we think the students can learn based on what we know about them individually (their "characteristics"), - And how well students do on average in your district, school, and classroom (if there are other teachers in your classroom). "In other words, we are evaluating you based on the amount of learning that was measured, after factoring in the preparation and characteristics of your students and the typical performances of all students in settings like yours with the resources that were available to you. "In this way your score reflects what you contributed to the student performances, insofar as we can determine that. Of course we cannot know everything: we know you had unique and special students and that the situation you faced could never be duplicated. Therefore we know this score is only an estimate that imperfectly reflects how well you taught, but it is a fairer and more accurate estimate than one based solely on post testing or on raw test gains made by your class."
null
CC BY-SA 2.5
null
2011-03-07T19:31:19.560
2011-03-07T19:48:59.913
2011-03-07T19:48:59.913
919
919
null
7969
1
7971
null
14
1759
I have a simple sampling problem, where my inner loop looks like: ``` v = sample_gamma(k, a) ``` where `sample_gamma` samples from the Gamma distribution to form a Dirichlet sample. It works well, but for some values of k/a, some of the downstream computation underflows. I adapted it to use log space variables: ``` v = log(sample_gamma(k, a)) ``` After adapting all of the rest of the programme, it works correctly (at least it gives me the same exact results on test cases). However, it is slower than before. Is there a way to directly sample $X, \exp(X) \sim \text{Gamma}$ without using slow functions like $\log()$? I tried googling for this, but I do not even know if this distribution has a common name (log-gamma?).
How to quickly sample X if exp(X) ~ Gamma?
CC BY-SA 2.5
null
2011-03-07T19:44:05.827
2011-03-16T21:15:33.023
2011-03-08T08:19:12.243
2116
2067
[ "sampling", "gamma-distribution" ]
7970
2
null
5115
2
null
Bill James, for his work in statistics that evaluate MLB player performance. His work spawned the term sabermetrics. He has created numerous statistics that can be found throughout the baseball world. His ideas stem from how to capture a player's overall impact on a game through run production (offense) and runs saved (defense). His work has led to less emphasis on statistics that have low correlation with run production (batting average) and more on OPS (on-base plus slugging). He works as an advisor to the Boston Red Sox and is credited with contributing to their World Series championships in 2004 and 2007. His work has influenced the book and upcoming feature film Moneyball.
null
CC BY-SA 2.5
null
2011-03-07T20:52:49.007
2011-03-07T20:52:49.007
null
null
null
null
7971
2
null
7969
9
null
Consider a small shape parameter $\alpha$ near 0, such as $\alpha = 1/100$. In the range between 0 and $\alpha$, $e^{-\alpha}$ is approximately $1$, so the Gamma pdf is approximately $x^{\alpha-1}dx / \Gamma(\alpha)$. This can be integrated to an approximate CDF, $F_\alpha(x) = \frac{x^\alpha}{\alpha \Gamma(\alpha)}$. Inverting it, we see a $1/\alpha$ power: a huge exponent. For $\alpha = 1/100$ this causes some chance of underflow (a double precision value less than $10^{-300}$, more or less). Here is a plot of the chance of getting underflow as a function of the base-ten logarithm of $\alpha$: ![enter image description here](https://i.stack.imgur.com/AB5ML.png) One solution is to exploit this approximation for generating log(Gamma) variates: in effect, try to generate a Gamma variate and if it's too small, generate its logarithm from this approximate power distribution (as shown below). (Do this repeatedly until the log is within the underflow range, so that it is a valid substitute for the original underflowing variate.) For the Dirichlet calculation, subtract the maximum of all the logarithms from each of the log values: this implicitly rescales all the Gamma variates so it won't affect the Dirichlet values. Treat any resulting log that is too small (say, less than -100) as being the log of a true zero. Exponentiate the other logs. Now you can proceed without underflow. This is going to take even longer than before, but at least it will work! To generate an approximate log Gamma variate with shape parameter $\alpha$, precompute $C = \log(\Gamma(\alpha)) + \log(\alpha)$. This is easy, because there are algorithms to [compute values of log Gamma directly](http://en.wikipedia.org/wiki/Factorial#Computation). Generate a uniform random float between 0 and 1, take its logarithm, divide by $\alpha$, and add $C$ to it. Because the scale parameter merely rescales the variate, there is no problem accommodating it in these procedures. You don't even need it if all scale parameters are the same. ### Edit In another reply the OP describes a method in which the $1/\alpha$ power of a uniform variate (a $B(\alpha)$ variate) is multiplied by a $\Gamma(\alpha+1)$ variate. This works because the pdf of the joint distribution of these two variates equals $\left(\alpha x^{\alpha-1}\right) \left(y^{\alpha}e^{-y}dy/\Gamma(\alpha+1)\right)$. To find the pdf of $z = xy$ we substitute $y \to z/x$, divide by the Jacobean $x$, and integrate out $x$. The integral must range from $z$ to $\infty$ because $0 \le y \le 1$, whence $$\text{pdf}(z)=\frac{\alpha}{\Gamma(\alpha+1)}\int_z^{\infty}{\left(x^\alpha / x\right) e^{-x} (z/x)^{\alpha-1} dx} dz = \frac{1}{\Gamma(\alpha)}z^{\alpha-1}e^{-z}dz,$$ which is the pdf of a $\Gamma(\alpha)$ distribution. The whole point is that when $0 \lt \alpha \lt 1$, a value drawn from $\Gamma(\alpha+1)$ is unlikely to underflow and by summing its log and $1/\alpha$ times the log of an independent uniform variate we will have the log of a $\Gamma(\alpha)$ variate. The log is likely to be very negative, but we will have bypassed the construction of its antilog, which will underflow in a floating point representation.
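The construction in the Edit takes one line of R (my sketch): a log-Gamma($\alpha$) draw for a tiny shape $\alpha$, obtained without ever forming the underflowing Gamma($\alpha$) variate itself.

```
# log of a Gamma(a) variate: log(Gamma(a+1) variate) + log(Uniform)/a
rloggamma <- function(n, a) log(rgamma(n, shape = a + 1)) + log(runif(n)) / a

set.seed(1)
rloggamma(5, 1/100)   # very negative logs; exponentiating them may underflow, the logs do not
```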
null
CC BY-SA 2.5
null
2011-03-07T21:14:37.513
2011-03-16T21:15:33.023
2020-06-11T14:32:37.003
-1
919
null
7972
1
7978
null
6
298
I have a number of rasters of environmental data (~10) which may be important predictors for modelling species presence and abundance at ~10 different locations. I would like to know which of the rasters are important in explaining the variance in observed results. Is it appropriate to look for principal components of the rasters? Is this done across the entire extent, or only with respect to my observed sites? Any reference on this would be appreciated. EDIT: It turns out that PCs are not useful in this case. Even though two rasters may have very large correlation across the landscape, if they are different at the narrow range of sites occupied by the modelled species, this is what really matters.
Principal components of spatial variables
CC BY-SA 3.0
null
2011-03-07T21:32:50.493
2011-04-12T08:55:39.260
2011-04-12T08:55:39.260
2993
2993
[ "pca", "spatial" ]
7973
1
null
null
4
259
Imagine you have a set of four elements (A-D) with some numeric values of a measured property (several observations for each element):

```
A: 26 25 29 21
B: 24 17 16
C: 32 34 29 19 25 27 28
D: 23 29 26 20 14
```

I have to detect whether there are significant differences in the average levels. So I run a one-way ANOVA to determine whether differences are found. It works fine, but when I get a new sample, I need to execute the ANOVA again for just that one new sample. Is there any way to do an "incremental one-way ANOVA" that works without redoing the whole computation? I have a lot of data, and the process is expensive in terms of time and memory consumption.
Incremental one-way ANOVA
CC BY-SA 2.5
null
2011-03-07T21:52:23.480
2011-03-09T18:50:07.717
2011-03-08T16:45:09.477
3576
3576
[ "anova" ]
7974
2
null
5115
9
null
[Samuel S. Wilks](http://en.wikipedia.org/wiki/Samuel_S._Wilks) was a leader in the development of mathematical statistics. He developed the [theorem on the distribution of the likelihood ratio](http://en.wikipedia.org/wiki/Likelihood-ratio_test#Distribution%3a_Wilks.27_theorem), a fundamental result that is used in a wide variety of situations. He also helped found the Princeton statistics department, where he was Fred Mosteller's advisor, among others, and has a prestigious [ASA award](http://www.amstat.org/careers/samuelwilksaward.cfm) named after him.
null
CC BY-SA 2.5
null
2011-03-07T22:00:16.830
2011-03-07T22:00:16.830
null
null
3601
null
7975
1
8037
null
14
6642
Having worked mostly with cross-sectional data so far, and having very recently browsed, scanned and stumbled through a bunch of introductory time series literature, I wonder what role explanatory variables play in time series analysis. I would like to explain a trend instead of de-trending. Most of what I read as an introduction assumes that the series stems from some stochastic process. I read about AR(p) and MA processes as well as ARIMA modelling. Wanting to deal with more information than only autoregressive processes, I found VAR / VECM and ran some examples, but I still wonder whether there is some case that is related more closely to what explanatories do in cross sections. The motivation behind this is that decomposition of my series shows that the trend is the major contributor, while the remainder and the seasonal effect hardly play a role. I would like to explain this trend. Can / should I regress my series on multiple other series? Intuitively I would use gls because of serial correlation (I am not so sure about the correlation structure). I have heard about spurious regression and understand that this is a pitfall; nevertheless I am looking for a way to explain a trend. Is this completely wrong or uncommon? Or have I just missed the right chapter so far?
What to make of explanatories in time series?
CC BY-SA 2.5
null
2011-03-07T22:42:12.067
2019-11-25T10:35:58.813
2011-03-08T09:12:10.913
2116
704
[ "r", "time-series", "multivariate-analysis" ]
7976
1
7981
null
3
301
Can we do that? If yes, then what are the conditions which should be met?
Can we compute bivariate from marginal distributions?
CC BY-SA 4.0
0
2011-03-07T22:50:42.937
2019-01-30T01:07:32.923
2019-01-30T01:07:32.923
44269
null
[ "distributions", "multivariate-analysis", "marginal-distribution" ]
7977
1
7984
null
92
72907
I am wondering how to generate uniformly distributed points on the surface of the 3-d unit sphere? Also after generating those points, what is the best way to visualize and check whether they are truly uniform on the surface $x^2+y^2+z^2=1$?
How to generate uniformly distributed points on the surface of the 3-d unit sphere?
CC BY-SA 2.5
null
2011-03-07T22:57:20.690
2018-04-02T15:32:48.477
2011-03-07T23:42:01.520
930
3552
[ "random-generation" ]
7978
2
null
7972
4
null
Your idea about the "rasters" is not very clearly stated, but you might have a look at the paper by Borcard and Legendre (1994) and [their later works](http://www.bio.umontreal.ca/legendre/reprints/index.html) on spatial eigenvector-based analyses to see if one of the approaches will fit to your problem. [Borcard, D., Legendre, P., (1994) Environmental control and spatial structure in ecological communities: an example using oribatid mites (Acari, Oribatei). Environmental and Ecological Statistics 1, 37–61.](http://www.bio.umontreal.ca/legendre/reprints/B&L_EES1994.pdf)
null
CC BY-SA 2.5
null
2011-03-07T23:13:09.110
2011-03-07T23:28:39.240
2011-03-07T23:28:39.240
3467
3467
null
7979
1
null
null
13
1122
Suppose $X \sim \mathcal{N}(\mu_x, \sigma^2_x)$ and $Y \sim \mathcal{N}(\mu_y, \sigma^2_y)$ I am interested in $z = \min(\mu_x, \mu_y)$. Is there an unbiased estimator for $z$? The simple estimator of $\min(\bar{x}, \bar{y})$ where $\bar{x}$ and $\bar{y}$ are sample means of $X$ and $Y$, for example, is biased (though consistent). It tends to undershoot $z$. I can't think of an unbiased estimator for $z$. Does one exist?
Unbiased estimator for the smaller of two random variables
CC BY-SA 4.0
null
2011-03-07T23:15:04.000
2023-04-01T07:16:01.120
2023-04-01T07:16:01.120
362671
3602
[ "random-variable", "unbiased-estimator", "extreme-value", "point-estimation" ]
7980
2
null
7977
0
null
My best guess would be to first generate a set of uniformly distributed points in 2-dimensional space and then project those points onto the surface of a sphere using some sort of projection. You will probably have to mix and match the way you generate the points with the way that you map them. In terms of the 2D point generation, I think that scrambled low-discrepancy sequences would be a good place to start (i.e. a scrambled Sobol sequence), since they usually produce points that are not "clumped together". I'm not as sure about which type of mapping to use, but Wolfram popped up the [Gnomonic projection](http://mathworld.wolfram.com/GnomonicProjection.html)... so maybe that could work? MATLAB has a decent implementation of low-discrepancy sequences, which you can generate using `q = sobolset(2)` and scramble using `q = scramble(q)`. There is also a mapping toolbox in MATLAB with a bunch of different projection functions you could use in case you did not want to code the mapping and graphics yourself.
null
CC BY-SA 2.5
null
2011-03-07T23:32:00.477
2011-03-07T23:32:00.477
null
null
3572
null
7981
2
null
7976
6
null
Just knowing the marginal distributions of two variables isn't sufficient to specify their bivariate distribution. You need more information about their joint relationship. Simple example: two random normals can have any particular correlation with each other, but still have the same marginal distributions. It sounds like you may want to consider [copulas](http://en.wikipedia.org/wiki/Copula).
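A quick numerical illustration of that example (my sketch, using MASS::mvrnorm):

```
library(MASS)
set.seed(1)
s1 <- mvrnorm(1e4, mu = c(0, 0), Sigma = diag(2))                      # uncorrelated
s2 <- mvrnorm(1e4, mu = c(0, 0), Sigma = matrix(c(1, .9, .9, 1), 2))   # strongly correlated

apply(s1, 2, sd); apply(s2, 2, sd)   # the margins look identical (sd close to 1)
cor(s1)[1, 2];    cor(s2)[1, 2]      # but the joint behaviour is completely different
```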
null
CC BY-SA 2.5
null
2011-03-07T23:44:19.147
2011-03-07T23:44:19.147
null
null
1569
null
7982
1
8139
null
7
1388
I'm working on a binary classification problem, with about 1000 binary features in total. The problem is that for each datapoint, I only know the values of a small subset of the features (around 10-50), and the features in this subset are pretty much random. What's a good way to deal with the problem of the missing features? Is there a particular classification algorithm that handles missing features well? (Naive Bayes should work, but is there anything else?) I'm guessing I don't want to do some kind of variable imputation, since I have so many missing features.
Binary classification when many binary features are missing
CC BY-SA 2.5
null
2011-03-08T00:05:58.443
2018-12-29T00:11:05.583
2018-12-29T00:11:05.583
11887
1106
[ "classification", "missing-data", "semi-supervised-learning" ]
7983
2
null
7977
8
null
I had a similar problem (n-sphere) during my PhD, and one of the local 'experts' suggested rejection sampling from an n-cube! This, of course, would have taken the age of the universe, as I was looking at n on the order of hundreds. The algorithm I ended up using is very simple and published in:

W.P. Petersen and A. Bernasconi. Uniform sampling from an n-sphere: Isotropic method. Technical Report TR-97-06, Swiss Centre for Scientific Computing.

I also have this paper in my bibliography that I haven't looked at yet. You may find it useful.

Harman, R. & Lacko, V. On decompositional algorithms for uniform sampling from $n$-spheres and $n$-balls. Journal of Multivariate Analysis, 2010.
null
CC BY-SA 2.5
null
2011-03-08T00:22:36.827
2011-03-08T00:22:36.827
null
null
530
null
7984
2
null
7977
95
null
A standard method is to generate three standard normals and construct a unit vector from them. That is, when $X_i \sim N(0,1)$ and $\lambda^2 = X_1^2 + X_2^2 + X_3^2$, then $(X_1/\lambda, X_2/\lambda, X_3/\lambda)$ is uniformly distributed on the sphere. This method works well for $d$-dimensional spheres, too. In 3D you can use rejection sampling: draw $X_i$ from a uniform$[-1,1]$ distribution until the length of $(X_1, X_2, X_3)$ is less than or equal to 1, then--just as with the preceding method--normalize the vector to unit length. The expected number of trials per spherical point equals $2^3/(4 \pi / 3)$ = 1.91. In higher dimensions the expected number of trials gets so large this rapidly becomes impracticable. There are many ways to check uniformity. A neat way, although somewhat computationally intensive, is with [Ripley's K function](http://www.public.iastate.edu/~pcaragea/S40608/Notes/Dixon_Ripley_K.pdf). The expected number of points within (3D Euclidean) distance $\rho$ of any location on the sphere is proportional to the area of the sphere within distance $\rho$, which equals $\pi\rho^2$. By computing all interpoint distances you can compare the data to this ideal. General principles of constructing statistical graphics suggest a good way to make the comparison is to plot variance-stabilized residuals $e_i(d_{[i]} - e_i)$ against $i = 1, 2, \ldots, n(n-1)/2=m$ where $d_{[i]}$ is the $i^\text{th}$ smallest of the mutual distances and $e_i = 2\sqrt{i/m}$. The plot should be close to zero. (This approach is unconventional.) Here is a picture of 100 independent draws from a uniform spherical distribution obtained with the first method: ![100 uniform spherical points](https://i.stack.imgur.com/EZEcr.png) Here is the diagnostic plot of the distances: ![Diagnostic plot](https://i.stack.imgur.com/8y1H9.png) The y scale suggests these values are all close to zero. Here is the accumulation of 100 such plots to suggest what size deviations might actually be significant indicators of non-uniformity: ![Simulated values](https://i.stack.imgur.com/h4kec.png) (These plots look an awful lot like [Brownian bridges](http://en.wikipedia.org/wiki/Brownian_bridge)...there may be some interesting theoretical discoveries lurking here.) Finally, here is the diagnostic plot for a set of 100 uniform random points plus another 41 points uniformly distributed in the upper hemisphere only: ![Simulated non-uniform values](https://i.stack.imgur.com/d5z4u.png) Relative to the uniform distribution, it shows a significant decrease in average interpoint distances out to a range of one hemisphere. That in itself is meaningless, but the useful information here is that something is non-uniform on the scale of one hemisphere. In effect, this plot readily detects that one hemisphere has a different density than the other. (A simpler chi-square test would do this with more power if you knew in advance which hemisphere to test out of the infinitely many possible ones.)
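For reference, the first method above takes only a few lines of R (a sketch, not the code used for the figures):

```
n <- 100
g <- matrix(rnorm(3 * n), ncol = 3)   # three standard normals per point
p <- g / sqrt(rowSums(g^2))           # each row is now a point on the unit sphere

range(rowSums(p^2))                   # numerically all equal to 1
```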
null
CC BY-SA 3.0
null
2011-03-08T00:30:43.540
2011-11-10T15:32:58.263
2011-11-10T15:32:58.263
919
919
null
7985
2
null
7884
17
null
Following up to Steve's reply, there is a much faster way in data.table :

```
> # Preamble
> dx <- data.frame(
+     ID = sort(sample(1:7000, 400000, TRUE))
+     , AGE = sample(18:65, 400000, TRUE)
+     , FEM = sample(0:1, 400000, TRUE)
+ )
> dxt <- data.table(dx, key='ID')

> # fast self join
> system.time(ans2<-dxt[J(unique(ID)),mult="first"])
   user  system elapsed
  0.048   0.016   0.064

> # slower using .SD
> system.time(ans1<-dxt[, .SD[1], by=ID])
   user  system elapsed
 14.209   0.012  14.281

> mapply(identical,ans1,ans2)  # ans1 is keyed but ans2 isn't, otherwise identical
  ID  AGE  FEM
TRUE TRUE TRUE
```

If you merely need the first row of each group, it's much faster to join to that row directly. Why create the .SD object each time, only to use the first row of it? Compare the 0.064 of data.table to "Matt Parker's alternative to Chase's solution" (which seemed to be the fastest so far) :

```
> system.time(ans3<-dxt[c(TRUE, dxt$ID[-1] != dxt$ID[-length(dxt$ID)]), ])
   user  system elapsed
  0.284   0.028   0.310

> identical(ans1,ans3)
[1] TRUE
```

So ~5 times faster, but it's a tiny table at under 1 million rows. As size increases, so does the difference.
null
CC BY-SA 2.5
null
2011-03-08T00:53:05.203
2011-03-08T00:53:05.203
null
null
3589
null
7986
1
null
null
2
115
This problem is basically the classic asset selling problem but with imperfect state information. In the classical problem, we have an asset that we wish to sell, we receive offers w(0) to w(N-1). If we accept the offer at a given period, we can invest that money with a certain rate of interest r > 0. We suppose that at period N-1, we will accept the offer if we haven't accepted any already. In the problem I am trying to solve, the offers w(k) are independent and identically distributed. However the common distribution of the w(k) is unknown. Instead it is known that this distribution is one out of two given distributions F1 and F2, and that the a priori probability that F1 is the correct distribution is a given scalar q, with 0 < q < 1. a)Formulate this as an imperfect state information problem and identify the state, control, system disturbance, observation, and observation disturbance. b)Show that (x(k), q(k)), where q(k) = P(distribution is F1 | w(0), ... , w(k-1)) is a suitable sufficient statistic, write a corresponding DP algorithm, and derive the form of the optimal selling policy. The state would represent whether or not we have already sold our asset and if we haven't, the offer we have at this period. The control would be to either accept or refuse the offer at this period. The system disturbance would be the offer. As for the rest of the question, I'm not too sure.
Optimal stopping under partially observable state
CC BY-SA 2.5
null
2011-03-08T01:01:16.077
2011-03-08T01:01:16.077
null
null
null
[ "optimal-stopping" ]
7987
2
null
7979
9
null
This is just a couple of comments, not an answer (I don't have enough reputation points).

(1) There is an explicit formula for the bias of the simple estimator $\min(\bar{x},\bar{y})$ here: Clark, C. E. 1961, Mar-Apr. The greatest of a finite set of random variables. Operations Research 9 (2): 145–162. Not sure how this helps, though.

(2) This is just intuition, but I think such an estimator doesn't exist. If there were such an estimator, it would also have to be unbiased when $\mu_x=\mu_y=\mu$. Thus any 'downgrading' which makes the estimator less than, say, the weighted average of the two sample means makes the estimator biased in this case.
null
CC BY-SA 4.0
null
2011-03-08T01:06:59.107
2023-04-01T07:14:38.713
2023-04-01T07:14:38.713
362671
3036
null
7988
2
null
7977
20
null
Here is some rather simple R code

```
n <- 100000                  # large enough for meaningful tests
z <- 2*runif(n) - 1          # uniform on [-1, 1]
theta <- 2*pi*runif(n) - pi  # uniform on [-pi, pi]
x <- sin(theta)*sqrt(1-z^2)  # based on angle
y <- cos(theta)*sqrt(1-z^2)
```

It is very simple to see from the construction that $x^2+y^2 = 1- z^2$ and so $x^2+y^2+z^2=1$ but if it needs to be tested then

```
mean(x^2+y^2+z^2)  # should be 1
var(x^2+y^2+z^2)   # should be 0
```

and easy to test that each of $x$ and $y$ are uniformly distributed on $[-1,1]$ ($z$ obviously is) with

```
plot.ecdf(x)  # should be uniform on [-1, 1]
plot.ecdf(y)
plot.ecdf(z)
```

Clearly, given a value of $z$, $x$ and $y$ are uniformly distributed around a circle of radius $\sqrt{1-z^2}$ and this can be tested by looking at the distribution of the arctangent of their ratio. But since $z$ has the same marginal distribution as $x$ and as $y$, a similar statement is true for any pair, and this too can be tested.

```
plot.ecdf(atan2(x,y))  # should be uniform on [-pi, pi]
plot.ecdf(atan2(y,z))
plot.ecdf(atan2(z,x))
```

If still unconvinced, the next steps would be to look at some arbitrary 3-D rotation or how many points fell within a given solid angle, but that starts to get more complicated, and I think is unnecessary.
null
CC BY-SA 2.5
null
2011-03-08T01:19:47.993
2011-03-08T01:19:47.993
null
null
2958
null
7989
1
null
null
3
250
I have a general question. What kind of noise is additive, and what about multiplicative noise? How to determine the nature of noise? Thanks a lot for your help.
Determining the nature of noise
CC BY-SA 2.5
null
2011-03-08T01:48:03.257
2011-04-29T01:04:54.580
2011-04-29T01:04:54.580
3911
3552
[ "regression" ]
7990
1
8051
null
2
288
Suppose we have the following random process. We start with two vectors $a_1=(0)$ and $b_1=(0)$. In going from $i$ to $i+1$, we apply a perturbation to $a_i$ and $b_i$. With probability $p$ we perform Case 1, otherwise we perform Case 2.

- Case 1: We pick an element of $a_i$, say at coordinate $x$ (chosen uniformly at random from all coordinates $1,2,\ldots,|a_i|$, where $|a_i|$ is the length of $a_i$), and append it to the end of $a_i$ to form $a_{i+1}$. Similarly, to form $b_{i+1}$, append the element at coordinate $x$ from $b_i$ to the end of $b_i$. The remainder of $a_{i+1}$ and $b_{i+1}$ is the same as $a_i$ and $b_i$, respectively. [The copied element is at coordinate $x$ in both $a_i$ and $b_i$.]
- Case 2: Choose $x$ and $y$ to be two coordinates chosen uniformly at random from $1,2,\ldots,|a_i|$. If $x \neq y$ then set $a_{i+1}[x]=a_i[x]+1$ and $b_{i+1}[y]=b_i[y]+1$ (if $x=y$ then don't do anything [it's not clear at this point whether or not I want to enforce $x \neq y$]). Again, the remainder of $a_{i+1}$ and $b_{i+1}$ is the same as $a_i$ and $b_i$, respectively.

> Question: What is the expected dot product of $a_i$ and $b_i$? That is, what is $\mathrm{E}(\sum_x a_i[x] \cdot b_i[x])$?

So here's an example of what these vectors could look like:

```
i  a_i        b_i
1  (0)        (0)
2  (0,0)      (0,0)      [case 1 x=1]
3  (0,0,0)    (0,0,0)    [case 1 x=1]
4  (1,0,0)    (0,1,0)    [case 2 x=1 y=2]
5  (1,0,0,1)  (0,1,0,0)  [case 1 x=1]
6  (1,0,0,1)  (0,1,0,0)  [case 2 x=4 y=4]
7  (1,0,0,2)  (0,1,1,0)  [case 2 x=4 y=3]
```

I'm looking at a related problem studying evolving random graphs. In the random graph problem, the two vectors represent the in-degrees and out-degrees of an evolving network over time. In the graph, we can duplicate vertices or add or delete an edge (which is related to Cases 1 and 2 above). In this case, the dot product therefore counts the number of directed paths of length 3. However, the above problem differs from the one I'm considering since (a) it's not necessarily the case that $\sum_x a_i[x]=\sum_x b_i[x]$ and (b) there's nothing to stop parallel edges arising here. Although, I'm hoping that this simplified question will be more answerable and the techniques can be re-used.
What is the expected dot product of two evolving vectors?
CC BY-SA 2.5
null
2011-03-08T01:51:53.857
2011-03-09T10:28:37.683
2011-03-08T03:55:47.993
386
386
[ "stochastic-processes", "expected-value" ]
7994
1
null
null
3
866
This is problem 12.10 in ["The Elements of Statistical Learning"](http://www-stat.stanford.edu/~tibs/ElemStatLearn/): > Suppose you wish to carry out a linear discriminant analysis (two classes) using a vector of transformations of the input variables $h(x)$. Since $h(x)$ is high-dimensional, you will use a regularized within-class covariance matrix $W_h + \gamma I$. Show that the model can be estimated using only the inner products $K(x_i, x_{i'}) = \left < h(x_i), h(x_{i'}) \right >$. How can I go about showing that regularized linear discriminant analysis can be estimated using only inner products, as in the "kernel trick" that is often used with SVM's?
Linear discriminant analysis and the "kernel trick"?
CC BY-SA 2.5
null
2011-03-08T04:08:37.843
2015-04-19T20:47:08.097
2015-04-19T20:47:08.097
9964
988
[ "self-study", "classification", "kernel-trick" ]
7995
2
null
7989
4
null
- Do you know where the noise comes from? Before doing any statistical test, think about the origin of the noise you want to remove. Additive noise is independent of the level of the signal, whereas multiplicative noise is proportional to the level of the signal.
- If you don't know, or if you want to check, try for instance plotting the standard deviation of the noise as a function of the level of the signal, as in the sketch below.
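A sketch of that diagnostic with simulated data (my own example; in practice `signal` would be your fitted or smoothed series and `noise` the residuals):

```
set.seed(1)
signal     <- runif(2000, 1, 10)
noise_add  <- rnorm(2000, sd = 0.5)            # additive: sd unrelated to the signal
noise_mult <- signal * rnorm(2000, sd = 0.1)   # multiplicative: sd grows with the signal

sd_by_level <- function(noise) {
  bins <- cut(signal, breaks = 10)
  data.frame(level = tapply(signal, bins, mean), sd = tapply(noise, bins, sd))
}

a <- sd_by_level(noise_add)
m <- sd_by_level(noise_mult)
plot(a$level, a$sd, type = "b", ylim = c(0, 1.2))   # roughly flat
lines(m$level, m$sd, type = "b", col = 2)           # roughly proportional to the level
```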
null
CC BY-SA 2.5
null
2011-03-08T04:09:54.563
2011-03-08T13:51:49.943
2011-03-08T13:51:49.943
2116
1709
null
7996
1
8042
null
5
198
I am evaluating a scenario's output parameter's dependence on three parameters: A, B and C. For this, I am conducting the following experiments:

- Fix A+B, vary C: a total of four sets of (A+B), each having 4 variations of C
- Fix B+C, vary A: a total of four sets of (B+C), each having 3 variations of C
- Fix C+A, vary B: a total of four sets of (C+A), each having 6 variations of C

The output of any simulation is the value of a variable over time. For instance, A could be the area, B could be the velocity and C could be the number of vehicles. The output variable I am observing is the number of car crashes over time. I am trying to determine which parameter(s) dominate the outcome of the experiment. By dominate, I mean that sometimes the outcome just does not change when one of the parameters changes, but when some other parameter is changed even by a small amount, a large change in the output is observed. I need to capture this effect and produce some analysis from which I can understand the dependence of the output on the input parameters. A friend suggested sensitivity analysis, but I am not sure if there are simpler ways of doing it. Can someone please help me with a good (possibly easy, because I don't have a stats background) technique? It would be great if all this can be done in R.

Update: I used linear regression to obtain the following:

```
lm(formula = T ~ A + S + V)

Residuals:
     Min       1Q   Median       3Q      Max
-0.35928 -0.06842 -0.00698  0.05591  0.42844

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01606    0.16437  -0.098 0.923391
A            0.80199    0.15792   5.078 0.000112 ***
S           -0.27440    0.13160  -2.085 0.053441 .
V           -0.31898    0.14889  -2.142 0.047892 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1665 on 16 degrees of freedom
Multiple R-squared: 0.6563,  Adjusted R-squared: 0.5919
F-statistic: 10.18 on 3 and 16 DF,  p-value: 0.0005416
```

Does this mean that the output depends mostly on A and less on V?
What is a good way of estimating the dependence of an output variable on the input parameters?
CC BY-SA 2.5
null
2011-03-08T05:24:45.063
2011-03-09T12:46:51.747
2011-03-09T12:46:51.747
8
2164
[ "r", "regression", "experiment-design" ]
7997
2
null
7155
-2
null
Outliers are important only in the frequentist realm. If a single data point adds bias to your model, which is defined by an underlying distribution predetermined by your theory, then it is an outlier for that model. The subjectivity lies in the fact that if your theory posits a different model, you can end up with a different set of points as outliers.
null
CC BY-SA 2.5
null
2011-03-08T05:30:48.113
2011-03-08T05:30:48.113
null
null
null
null
7998
2
null
7979
-1
null
I'd be fairly sure an unbiased estimator does not exist. But unbiased estimators don't exist for most quantities, and unbiasedness is not a particularly desirable property in the first place. Why do you want one here?
null
CC BY-SA 2.5
null
2011-03-08T05:36:05.240
2011-03-08T05:36:05.240
null
null
null
null
7999
1
8002
null
5
8101
Let's say I have a function `funToRepeat`: ``` funToRepeat <- function(df){ rows <- nrow(df) cols <- ncol(df) err <- runif( rows * cols ) dfNew <- df + as.data.frame( matrix(err, nrow=rows, ncol=cols) ) dfNew } ``` I want to apply `funToRepeat` to `dfTest` (a toy data set) multiple times, say `n` times. ``` dfTest <- structure(list(V1 = 1:2, V2 = 3:4), .Names = c("V1", "V2"), row.names = c(NA,-2L), class = "data.frame") ``` How would I do this and average the `n` outputs of `funToRepeat` in `R`, efficiently? A very bad example with `n=10` may be: ``` dfAvg <- as.data.frame(list(V1=c(0, 0), V2=c(0, 0))) for(i in 1:10){ dfAvg <- dfAvg + funToRepeat(dfTest) } dfAvg <- dfAvg/10 ``` Please note that I am not trying to bootstrap. I want to apply `funToRepeat` to my data set and take the average of the `n` outputs. I want to do this because I am adding `err`, a random term, to my data.frame and want to report a reasonable answer. (I am not doing exactly this in real life, but something very similar.)
How to efficiently repeat a function on a data set in R?
CC BY-SA 2.5
null
2011-03-08T06:32:46.820
2011-03-08T17:04:35.610
null
null
1307
[ "r" ]
8000
1
8014
null
70
47794
Recurrent neural networks differ from "regular" ones in that they have a "memory" layer. Due to this layer, recurrent NNs are supposed to be useful in time series modelling. However, I'm not sure I understand correctly how to use them. Let's say I have the following time series (from left to right): `[0, 1, 2, 3, 4, 5, 6, 7]`, and my goal is to predict the `i`-th point using points `i-1` and `i-2` as inputs (for each `i>2`). In a "regular", non-recurrent ANN I would process the data as follows: > target| input 2| 1 0 3| 2 1 4| 3 2 5| 4 3 6| 5 4 7| 6 5 I would then create a net with two input nodes and one output node and train it with the data above. How does one need to alter this process (if at all) in the case of recurrent networks?
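For reference, a minimal R sketch of the sliding-window layout described in the question; this only reproduces the data preparation for the non-recurrent case, using base R's `embed`, and does not touch the recurrent-network part:
```
series <- 0:7
lagged <- embed(series, 3)    # each row: x_t, x_{t-1}, x_{t-2}
target <- lagged[, 1]
inputs <- lagged[, 2:3]       # the two lagged inputs
cbind(target, inputs)         # reproduces the target | input table above
```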
Proper way of using recurrent neural network for time series analysis
CC BY-SA 2.5
null
2011-03-08T07:16:01.813
2021-03-15T11:43:31.750
null
null
1496
[ "time-series", "machine-learning", "neural-networks" ]
8001
1
null
null
5
800
I want to discover which genes are expressed in only one of five treatments. This is my pipeline: - ANOVA between the five treatments - Holm multiple testing correction - Tukey for the significant genes discovered in step 2 My question is: Should I also correct the Tukey p-values, for example by multiplying each p-value by the number of significant ANOVA tests (a Bonferroni correction on the number of tests performed), or should I only correct at the ANOVA level? Thanks a lot in advance, Rossella
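For concreteness, a small R sketch of the pipeline described above for a single gene, with simulated expression values standing in for real data (the object names and the simulation are illustrative assumptions, not from the post); the Holm step would be applied across the vector of per-gene ANOVA p-values:
```
set.seed(1)
expr <- rnorm(50, mean = rep(c(0, 0, 0, 0, 2), each = 10))  # hypothetical gene, elevated in treatment 5
trt  <- factor(rep(paste0("T", 1:5), each = 10))
fit  <- aov(expr ~ trt)
summary(fit)                                   # step 1: per-gene ANOVA p-value
# step 2 (across genes): p.adjust(anova_pvalues, method = "holm")
TukeyHSD(fit)                                  # step 3: pairwise comparisons for a significant gene
```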
ANOVA and multiple testing correction in gene screening
CC BY-SA 2.5
null
2011-03-08T07:37:42.527
2011-10-23T14:57:11.427
2011-03-08T07:44:41.863
930
3614
[ "anova", "multiple-comparisons", "genetics" ]
8002
2
null
7999
6
null
I for one would put everything into a list and then use `Reduce`. ``` library(plyr)  # alply() comes from the plyr package funToRepeat <- function(dims) { err <- runif(prod(dims)) matrix(err, nrow=dims[1], ncol=dims[2]) } ll <- alply(cbind(rep(10,10), rep(20,10)), 1, funToRepeat) sum.ll <- Reduce("+", ll) ``` Then convert to the data.frame you need after dividing by the length of the list to get the average (see the sketch below). Note that your function does not need the whole `data.frame` to operate, only its dimensions. The conversion to a data.frame is an unnecessary cost here, so it is better to avoid it.
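A possible way to finish the averaging step described in prose above, assuming `ll` and `sum.ll` from the code block in this answer are in the workspace:
```
avg <- as.data.frame(sum.ll / length(ll))   # average of the 10 random matrices
head(avg)
```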
null
CC BY-SA 2.5
null
2011-03-08T08:37:39.897
2011-03-08T17:04:35.610
2011-03-08T17:04:35.610
2116
2116
null
8004
2
null
7975
7
null
The same intuition as in cross-section regression can be used in time-series regression. It is perfectly valid to try to explain the trend using other variables. The main difference is that it is implicitly assumed that the regressors are random variables. So in the regression model: $$Y_t=\beta_0+X_{t1}\beta_1+...+X_{tk}\beta_k+\varepsilon_t$$ we require $E(\varepsilon_t|X_{t1},...,X_{tk})=0$ instead of $E\varepsilon_t=0$ and $E(\varepsilon_t^2|X_{t1},...,X_{tk})=\sigma^2$ instead of $E\varepsilon_t^2=\sigma^2$. The practical part of regression stays the same: all the usual statistics and methods apply. The hard part is to show for which types of random variables, or in this case stochastic processes $X_{tk}$, we can use the classical methods. The usual central limit theorem cannot be applied, since it involves independent random variables, and time series processes are usually not independent. This is where the importance of stationarity comes into play. It can be shown that for a large class of stationary processes the central limit theorem still holds, so classical regression analysis can be applied. The main caveat of time-series regression is that it can fail massively when the regressors are not stationary. Then the usual regression methods can indicate that the trend is explained when in fact it is not. So if you want to explain a trend, you must check for non-stationarity before proceeding (a minimal sketch of such a check follows below). Otherwise you might arrive at false conclusions.
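One common way to run that check in R is a unit-root test; here is a minimal sketch with the augmented Dickey-Fuller test from the tseries package, on simulated data standing in for a real regressor (other tests, e.g. KPSS, are equally valid choices):
```
library(tseries)
set.seed(1)
x <- cumsum(rnorm(200))   # a random walk: non-stationary by construction
adf.test(x)               # typically a large p-value: the unit root is not rejected
adf.test(diff(x))         # differencing typically restores stationarity
```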
null
CC BY-SA 2.5
null
2011-03-08T09:35:09.840
2011-03-08T09:35:09.840
null
null
2116
null
8005
1
8054
null
5
129
I have a hypothetical experiment where I am comparing scores on some measure at 1 year to scores on the measure at 4 years. I use an unpaired t-test to see if there is a significant difference between the two. Then, I do: ``` (4 year mean) - (1 year mean) ``` Now, I want to take that value, a function of two different distributions that have different n and SD, and compare it to another value that was obtained in the same way from two similar distributions (1st & 4th year but under a different independent variable). How can I do this? And is it valid to do 4yrMean - 1yrMean to obtain the value, or should I do p-value4 - p-value1 or something?
Hypothesis testing on values that are functions of multiple distributions
CC BY-SA 2.5
null
2011-03-08T10:06:32.450
2011-03-09T02:52:53.240
2011-03-09T02:41:24.117
3443
3443
[ "hypothesis-testing" ]
8006
1
null
null
4
1488
How much data is needed to properly fit a GARCH(1,1) model?
Fitting a GARCH(1,1) model
CC BY-SA 2.5
null
2011-03-08T10:48:41.757
2017-08-06T09:43:22.013
2017-08-06T09:43:22.013
53690
3588
[ "time-series", "sample-size", "garch" ]
8007
1
null
null
6
210
Workers in a factory are assembling items built from several parts. For each item they fetch the parts from the warehouse and then assemble the item. I suppose they need a certain time per item for assembly plus a certain time per part for fetching, i.e. $D_{order} = n_{items} * D_{assemble} + n_{parts} * D_{fetch}$ I have a list of past orders which tells me - $n_{items}$: how many items there are in that order - $n_{parts}$: how many parts there are in that order and - $D_{order}$: how long it took the worker to complete the order. I want to estimate, based on the number of items and parts, how long a future order will take. So I need to find out $D_{fetch}$ and $D_{assemble}$. What is the best way to plot the data to estimate those two factors? Here is some sample data (order completion time on the Y-axis): This chart shows the time to complete an order vs. the overall number of parts in it: ![time_vs_parts](https://i.stack.imgur.com/CA5wM.jpg) This chart shows the time to complete an order vs. the number of assembled items in it: ![time_vs_items](https://i.stack.imgur.com/3qPbG.jpg) What I didn't mention before: usually several of the items are the same product, so maybe there is an overhead per product for getting/reading the appropriate blueprints. So this chart shows the time to complete an order vs. the number of unique products in it: ![time_vs_unique_products](https://i.stack.imgur.com/cgQFf.jpg)
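Not a verdict on which plot is best, but for illustration, one straightforward way to estimate the two per-unit times from the order list is an ordinary least-squares fit of duration on the two counts. A minimal R sketch with simulated orders standing in for the real data (all numbers and variable names here are made up for the example):
```
set.seed(1)
n_items  <- sample(1:20, 100, replace = TRUE)
n_parts  <- n_items * sample(2:6, 100, replace = TRUE)          # a few parts per item
duration <- 5 * n_items + 2 * n_parts + rnorm(100, sd = 3)      # true per-item time 5, per-part time 2
fit <- lm(duration ~ 0 + n_items + n_parts)   # no intercept: an empty order takes no time
coef(fit)                                     # estimates of the per-item and per-part times
```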
How to estimate time-per-product in a factory?
CC BY-SA 3.0
null
2011-03-08T10:50:34.163
2011-07-13T22:11:16.897
2011-04-14T19:27:35.580
null
3615
[ "data-visualization", "modeling" ]
8008
2
null
5926
3
null
Why not use Cohen's (1988, 1992) guidelines for effect size values? He defines a "small" $(0.1 \leq r \leq 0.23)$, "medium" $(0.24 \leq r \leq 0.36)$ and "large" $(r \geq 0.37)$ effect. This would suggest using MANOVA with variables whose $r$ is below $0.37$. ### References Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Routledge Academic, 567 pp. Cohen, J. (1992). A power primer. Psychological Bulletin 112, 155–159.
null
CC BY-SA 2.5
null
2011-03-08T11:07:54.290
2011-03-16T08:44:56.973
2011-03-16T08:44:56.973
3467
3467
null
8009
1
null
null
7
285
We are measuring conversion rates (% of visitors who bought) on an e-commerce site. The test applies to a segment of visitors who meet specific criteria (for example, people from a certain country). The people from the segment are divided into 2 groups. Some of them see a banner and the others don't (the control group). Usually the control group is 30% of visitors. The test begins after the banner has been shown to all users in the segment for a while, so the data for exposed people extends over a much longer period than the data for the control group. So at a given time we have, for example, X people exposed and Xb of them converted; likewise, of Y people who were not exposed, Yb of them converted. Y and Yb are much smaller than X and Xb. The conversion rate for X is Xb/X and for Y is Yb/Y. My first question is how to determine statistical validity. We used a chi-square test to do it, and got results similar to those from this [on-line calculator](http://www.usereffect.com/split-test-calculator) (implemented similarly to [table1 here](http://math.hws.edu/javamath/ryan/ChiSquare.html)). However, sometimes it looks like the number of purchases is extremely small yet the chi-square test says it is valid (>95%). Here is a real life example: ``` X=189 Xb=1 Y=93 Yb=3 Conversion X= 0.5% Conversion Y=3.2% Statistical confidence (chi-square based) 92.8% ``` Although the confidence is below 95%, it seems too close to significance to be trusted when there was only 1 conversion for X. My second question is then: should there also be a requirement for a minimal number of conversions for the confidence to be valid? If so, how do we calculate it? Thanks!
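For reference, a small R sketch reproducing the 2×2 calculation for the example counts given in the post (the `correct = FALSE` chi-square gives p ≈ 0.07, matching the quoted 92.8% confidence); with conversion counts this small, an exact test such as `fisher.test` is a commonly used alternative check:
```
tab <- matrix(c(1, 189 - 1,      # exposed:  1 converted, 188 not
                3,  93 - 3),     # control:  3 converted,  90 not
              nrow = 2, byrow = TRUE)
chisq.test(tab, correct = FALSE) # classical chi-square without continuity correction
fisher.test(tab)                 # exact test, better suited to very small counts
```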
Minimal number of samples/conversions for statistical validity
CC BY-SA 2.5
null
2011-03-08T11:16:35.207
2017-09-22T13:15:13.903
2011-03-08T11:31:35.020
930
3506
[ "confidence-interval", "chi-squared-test", "cross-validation", "validation" ]
8010
1
8016
null
4
1308
Here is the dual problem for L2 support vector machine: $$\max_{\alpha\in\mathbb{R}^{n}} 2\alpha^{T}y-\alpha^{T}\left(K+n\lambda Id_{\mathbb{R}^{n}}\right)\alpha$$ $$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\alpha_{i}y_{i} \geq0$$ However, using the Lagrangian formulation, I get the following. Is there an argument missing? $$\max_{\mu\in\mathbb{R}^{n},\nu\in\mathbb{R}^{n}} \sum_{i=1}^{n}\mu_{i}-\frac{1}{4\lambda}\sum_{i=1}^{n}\sum_{j=1}^{n}\mu_{i}y_{i}\mu_{j}y_{j}K\left(x_{i},x_{j}\right)-\frac{n}{4}\sum_{i=1}^{n}\left(\mu_{i}+\nu_{i}\right)^{2}$$ $$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\mu_i\geq0$$ $$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\nu_i\geq0$$ Edit: We want the max, which explains $\nu=0$.
Dual problem for L2 support vector machine
CC BY-SA 2.5
0
2011-03-08T12:01:10.703
2011-03-09T18:19:41.447
2011-03-09T10:09:17.420
930
1351
[ "self-study", "svm", "loss-functions", "proof" ]
8011
2
null
8006
5
null
It depends on the coefficients. A simple Monte-Carlo analysis suggests that you need a lot, about 1000 observations, which is quite surprising. ``` library(fGarch) # for garchFit() N <- 1000 n <- 1000+N a <- c(0.2, 0.3, 0.4) # GARCH(1,1) coefficients e <- rnorm(n) x <- double(n) s <- double(n) x[1] <- rnorm(1) s[1] <- 0 for(i in 2:n) # Generate GARCH(1,1) process { s[i] <- a[1]+a[3]*s[i-1]+a[2]*x[i-1]^2 x[i] <- e[i]*sqrt(s[i]) } x <- ts(x[1000+1:N]) # drop the burn-in x.garch <- garchFit(data=x) # Fit GARCH(1,1) summary(x.garch) ``` I modified the example code from `garch` in the tseries package, but I used `garchFit` from the fGarch package, since it seemed to give better results. I used 1000 values for burn-in.
null
CC BY-SA 2.5
null
2011-03-08T12:08:20.493
2011-03-08T12:08:20.493
null
null
2116
null
8012
1
null
null
3
132
I start with a presumably non-stationary time series. By some criterion $k$ I extract $n_k$ pairs of times $(t_{k,i,2},t_{k,i,1})$. Now I compute the following quantity: $\bar{\tau} = \frac{1}{K} \sum_k \frac{1}{n_k} \sum_i (t_{k,i,2}-t_{k,i,1})$ What I'm looking for is the error of that quantity. I first began by summing up the variances of the "inner" mean, but I guess that's far off because of the high correlations. So I guess the autocorrelation function somehow has to be incorporated. But as I'm not too familiar with time series, I don't even really know what to look for. Updated with info from comments: Here is an example of how the pairs of times are chosen: ![enter image description here](https://i.stack.imgur.com/nH0Dq.jpg) $A(t)$ is some observed quantity. $A_k$ is an upper threshold and $A_0$ some fixed lower one $(0\notin k)$. In physics we call the $y_{ki}=t_{k,i,2}-t_{k,i,1}$ first passage times. Edit: I've been thinking about this for a while and I've got some ideas, but I'm not sure. First I came up with this (acv is the autocovariance function): $\sigma^2(\bar\tau) = \frac{1}{K^2}\sum_{\forall t_{k_1,i_1,j_1}, \forall t_{k_2,i_2,j_2}} a(j_1,j_2) \frac{1}{n_{k_1} n_{k_2}} acv(|t_{k_1,i_1,j_1}-t_{k_2,i_2,j_2}|)$. $a(j_1, j_2)$ is supposed to be some function that is -1 if either of the arguments is 1 (meaning that the corresponding $t$ is subtracted in the formula in my original post). But I don't see that this reduces to $\sigma^2(\bar\tau) = \frac{1}{K^2}\sum_k\frac{1}{n_k^2}\sigma_k^2$, which was my original guess assuming no correlation. $\sigma_k^2$ denotes the variance of the $k$th "inner mean". So finally I thought about something like this: $\sigma^2(\bar\tau) = \frac{1}{K^2}\sum_k\frac{1}{n_k^2}\sigma_k^2 + \frac{1}{K^2}\sum_{\forall t_{k_1,i_1,j_1}, \forall t_{k_2,i_2,j_2}, t_{k_1,i_1,j_1}\neq t_{k_2,i_2,j_2}} a(j_1, j_2) \frac{1}{n_{k_1} n_{k_2}} acv(|t_{k_1,i_1,j_1}-t_{k_2,i_2,j_2}|)$ Can you help me? Am I thinking at least in the right direction? Thanks, Julius
Error of a quantity computed from a time series
CC BY-SA 2.5
null
2011-03-08T12:13:11.033
2011-03-10T15:51:47.273
2011-03-10T15:51:47.273
null
3104
[ "time-series", "correlation", "error", "error-propagation" ]