Id: string (1-6 chars)
PostTypeId: string (7 values)
AcceptedAnswerId: string (1-6 chars, nullable)
ParentId: string (1-6 chars, nullable)
Score: string (1-4 chars)
ViewCount: string (1-7 chars, nullable)
Body: string (0-38.7k chars)
Title: string (15-150 chars, nullable)
ContentLicense: string (3 values)
FavoriteCount: string (3 values)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars, nullable)
LastEditorUserId: string (1-6 chars, nullable)
OwnerUserId: string (1-6 chars, nullable)
Tags: list
5034
2
null
5025
9
null
You can test this hypothesis with a full versus reduced model test. Here is how you do it. First, fit the model $Z = aX + bY$ and get the residuals from that model. Square the residuals and sum them up; this is the sum of squared errors for the full model. Let's call this $SSE_f$. Next, calculate $Z - \hat{Z}$, where $\hat{Z} = \frac{1}{2}X + \frac{1}{2}Y$. These are your residuals under the null hypothesis. Square them and sum them up; this is the sum of squared errors for the reduced model. Let's call this $SSE_r$. Now compute $$F = \frac{(SSE_r - SSE_f)/2}{SSE_f/(n-2)},$$ where $n$ is the sample size. Under $H_0$, this F-statistic follows an F-distribution with $2$ and $n-2$ degrees of freedom. Here is an example using R:

```
n <- 100                        # pick a sample size
x <- rnorm(n)
y <- rnorm(n)
z <- 1/2*x + 1/2*y + rnorm(n)   # note I am simulating under H0 here

res <- lm(z ~ x + y - 1)
summary(res)
SSE.f <- sum(resid(res)^2)

zhat  <- 1/2*x + 1/2*y
SSE.r <- sum((z - zhat)^2)

F <- ((SSE.r - SSE.f) / 2) / (SSE.f / (n - 2))
pf(F, 2, n - 2, lower.tail = FALSE)   # this is the p-value
```

Reject the null if the p-value is below .05 (if your $\alpha$ is indeed .05). I assume you really meant for your model not to contain an intercept; in other words, I assume you are really working with the model $Z = aX + bY$ and not $Z = c + aX + bY$.
null
CC BY-SA 2.5
null
2010-11-30T17:14:08.923
2010-11-30T17:14:08.923
null
null
1934
null
5035
1
5036
null
1
267
I have a study which gives the hazard ratios for certain interventions, like:

- Intervention 1, HR: .9
- Intervention 2, HR: .8
- ...

Is there any way in which I could predict the relative risk for, e.g., someone who has interventions 1 and 2? I think if I assume that the interventions are independent, then the HR of doing both is just the product of the individual HRs, right?
Can covariates in a Cox Proportional Hazard model be combined in any meaningful way?
CC BY-SA 2.5
null
2010-11-30T19:19:28.500
2010-11-30T20:20:48.150
null
null
900
[ "survival", "hazard" ]
5036
2
null
5035
5
null
The HR of doing both is the product of the separate HRs if you assume that there's no [interaction](http://en.wikipedia.org/wiki/Interaction_%28statistics%29) between the two effects on the multiplicative scale. [Independence](http://en.wikipedia.org/wiki/Independence_%28probability_theory%29) is a separate issue from interaction.
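For example, with the HRs quoted in the question and under the assumption of no interaction on the multiplicative scale, the combined effect is just the product (a trivial check in R):

```
hr1 <- 0.9
hr2 <- 0.8
hr_both <- hr1 * hr2   # combined HR assuming no multiplicative interaction
hr_both                # 0.72
```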
null
CC BY-SA 2.5
null
2010-11-30T20:20:48.150
2010-11-30T20:20:48.150
null
null
449
null
5038
1
5039
null
8
1200
I'm experimenting with R and found that an anova() needs an object of type lm. But why should I continue with an anova after this:

```
> x <- data.frame(rand=rnorm(100), factor=sample(c("A","B","C"),100,replace=TRUE))
> head(x)
        rand factor
1  0.9640502      B
2 -0.5038238      C
3 -1.5699734      A
4 -0.8422324      B
5  0.2489113      B
6 -1.4685439      A
> model <- lm(x$rand ~ x$factor)
> summary(model)

Call:
lm(formula = x$rand ~ x$factor)

Residuals:
     Min       1Q   Median       3Q      Max
-2.74118 -0.89259  0.02904  0.59726  3.19762

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.1878     0.1845  -1.018    0.311
x$factorB    -0.1284     0.2689  -0.477    0.634
x$factorC     0.4246     0.2689   1.579    0.118

Residual standard error: 1.107 on 97 degrees of freedom
Multiple R-squared: 0.04345,    Adjusted R-squared: 0.02372
F-statistic: 2.203 on 2 and 97 DF,  p-value: 0.1160
```

This tells me everything I need, or does it not? I'm curious why you would want to continue with an anova(model).
What are the ANOVA's benefits over a normal linear model?
CC BY-SA 2.5
null
2010-11-30T21:01:07.213
2010-12-01T00:12:14.820
2010-12-01T00:12:14.820
null
2091
[ "r", "anova" ]
5039
2
null
5038
16
null
Let's look at what you get when you actually use the anova() function (the numbers are different than in your example, since I don't know what seed you used for generating the random numbers, but the point remains the same):

```
> anova(model)
Analysis of Variance Table

Response: x$rand
          Df  Sum Sq Mean Sq F value Pr(>F)
x$factor   2   4.142  2.0708  1.8948 0.1559
Residuals 97 106.009  1.0929
```

The F-test for the factor is testing simultaneously $H_0: \beta_1 = \beta_2 = 0$, i.e., the hypothesis that the factor in general is not significant. A common strategy is to first test this omnibus hypothesis before digging into which of the levels of the factor are different from each other.

Also, you can use the anova() function for full versus reduced model tests. For example:

```
> x <- data.frame(rand=rnorm(100), factor=sample(c("A","B","C"),100,replace=TRUE),
                  y1=rnorm(100), y2=rnorm(100))
> model1 <- lm(x$rand ~ x$factor + x$y1 + x$y2)
> model2 <- lm(x$rand ~ x$factor)
> anova(model2, model1)
Analysis of Variance Table

Model 1: x$rand ~ x$factor
Model 2: x$rand ~ x$factor + x$y1 + x$y2
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     97 105.06
2     95 104.92  2   0.13651 0.0618 0.9401
```

which is a comparison of the full model with the factor and two covariates (y1 and y2) and the reduced model, where we assume that the slopes of the two covariates are both simultaneously equal to zero.
null
CC BY-SA 2.5
null
2010-11-30T21:46:50.907
2010-11-30T21:46:50.907
null
null
1934
null
5040
2
null
5004
4
null
The Unscented Kalman Filter is a type of non linear Kalman filter. (ie when the transition and observation functions are non linear) If these functions are differentiable, one can simply use the Extended Kalman Filter (EKF). But when the functions are highly non linear, one might need to use an Unscented Kalman Filter (UKF), which is based on the [Unscented transform](http://en.wikipedia.org/wiki/Unscented_transform). The original paper introducing UKF is [here](http://www.cs.berkeley.edu/~pabbeel/cs287-fa13/optreadings/JulierUhlmann-UKF.pdf). Modify your question if you have something more precise to ask. What don't you understand in the UKF?
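To make the idea a bit more concrete, here is a minimal sketch of the unscented transform for a scalar state, using a common single-parameter ($\kappa$) weighting; this is an illustrative assumption on my part, not necessarily the exact parametrisation of the linked paper:

```
# Unscented transform: propagate a mean/covariance through a nonlinearity f
# via deterministically chosen sigma points instead of linearisation.
ut <- function(mu, P, f, kappa = 2) {
  n <- length(mu)
  S <- t(chol((n + kappa) * P))            # lower-triangular root: S %*% t(S) = (n+kappa)*P
  sigma <- cbind(mu, mu + S, mu - S)       # the 2n + 1 sigma points (as columns)
  w <- c(kappa, rep(0.5, 2 * n)) / (n + kappa)
  Y <- apply(sigma, 2, f)                  # push each sigma point through f
  m <- sum(w * Y)
  v <- sum(w * (Y - m)^2)
  c(mean = m, var = v)
}

# example: x ~ N(1, 0.1) pushed through f(x) = x^2; the exact mean is 1 + 0.1 = 1.1
ut(mu = 1, P = matrix(0.1, 1, 1), f = function(x) x^2)
```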
null
CC BY-SA 3.0
null
2010-12-01T01:59:00.507
2015-03-16T04:57:20.763
2015-03-16T04:57:20.763
805
1709
null
5041
1
5066
null
7
346
For classification, what theoretical results are between cross-validation estimate of accuracy and generalisation accuracy? I particularly asking about results in a PAC-like framework where no assumptions are made that your function class contains the "true" function. I would love to know if there are theorems of the form: If your leave-one-out cross validation error-rate is $\theta$ on $N$ examples, then your generalisation error rate is lower than $\theta+\varepsilon$ with probability $f(\theta, \varepsilon, N)$. If so, what are the general proof techniques to obtain them? What is the theoretical framework? If a fully general theorem is impossible, what extra conditions, if any, allow you to arrive at this type of conclusion?
Theoretical results for cross-validation estimation of classification accuracy?
CC BY-SA 2.5
null
2010-12-01T05:22:59.070
2017-04-08T15:35:36.667
2017-04-08T15:35:36.667
11887
2067
[ "classification", "cross-validation", "pac-learning" ]
5042
1
5044
null
10
11286
So, I have 16 trials in which I am trying to authenticate a person from a biometric trait using Hamming Distance. My threshold is set to 3.5. My data is below and only trial 1 is a True Positive:

```
Trial  Hamming Distance
 1     0.34
 2     0.37
 3     0.34
 4     0.29
 5     0.55
 6     0.47
 7     0.47
 8     0.32
 9     0.39
10     0.45
11     0.42
12     0.37
13     0.66
14     0.39
15     0.44
16     0.39
```

My point of confusion is that I am really unsure about how to make an ROC curve (FPR vs. TPR OR FAR vs. FRR) from this data. It doesn't really matter which one, but I'm just really confused about how to go about calculating it. Any help would be appreciated.
Calculate ROC curve for data
CC BY-SA 2.5
null
2010-12-01T06:49:03.427
2015-12-03T16:48:07.387
null
null
1224
[ "mathematical-statistics", "roc" ]
5043
1
null
null
9
2305
I'm currently looking at the unconstrained primal form of the one-vs-all classifier $$\sum\limits_{i=1}^{N_I} \sum\limits_{k=1,\atop k \neq y_i}^{N_K} L(1+ \mathbf{w_k}\cdot\mathbf{x_i}-\mathbf{w_{y_i}}\cdot\mathbf{x_i})$$ where $N_I$ is the number of instances, $N_K$ is the number of classes, $N_F$ is the number of features, $X$ is an $N_I \times N_F$ data matrix, $y$ is a vector of class labels, $W$ is an $N_K \times N_F$ matrix where each row corresponds to the weights for the hyperplane splitting one class from the rest, and $L$ is some arbitrary loss function.

My understanding is that the functional above tries to find a hyperplane for each class that maximizes the distance from the samples within the associated class to all other samples. If the hyperplanes are correctly positioned then $\mathbf{w_k}\cdot\mathbf{x_i}$ should always be negative, $\mathbf{w_{y_i}}\cdot\mathbf{x_i}$ should always be positive, and our loss function should come back fairly low.

I'm trying to implement this using the hinge loss, which I believe in this case will end up being $\max(0,1+\mathbf{w_k}\cdot\mathbf{x_i}-\mathbf{w_{y_i}}\cdot\mathbf{x_i})$. However, in the above couldn't we end up with a situation where the hyperplanes classify all samples as belonging to every class? For example, if we are looking at the hyperplane separating class 1 from all other classes, provided that $1+\mathbf{w_k}\cdot\mathbf{x_i}<\mathbf{w_{y_i}}\cdot\mathbf{x_i}$, the incurred loss will be 0 despite $\mathbf{x_i}$ being classified as the wrong class. Where have I gone wrong? Or does it not matter whether $\mathbf{w_k}\cdot\mathbf{x_i}$ is negative or positive, provided that $\mathbf{w_{y_i}}\cdot\mathbf{x_i}$ ends up with a higher score? I have a feeling that my use of the hinge function as I've described here is incorrect, but my use of Google today has only led to more confusion.

On a related note, why is there a 1 in the functional above? I would think that it would have little impact.
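As a tiny numeric illustration of the point raised above (the scores are hypothetical, chosen only to show that the per-instance hinge terms depend on score differences rather than on signs):

```
# per-instance multiclass hinge loss: sum over k != y_i of max(0, 1 + w_k.x_i - w_{y_i}.x_i)
hinge  <- function(u) pmax(0, u)
scores <- c(w1 = -0.2, w2 = -1.5, w3 = -2.0)   # hypothetical w_k . x_i for k = 1, 2, 3
yi <- 1                                        # true class of x_i
sum(hinge(1 + scores[-yi] - scores[yi]))       # 0: every score is negative, yet no loss
```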
Hinge loss with one-vs-all classifier
CC BY-SA 3.0
null
2010-12-01T06:57:43.170
2012-06-02T22:12:56.977
2012-06-02T22:12:56.977
null
2023
[ "classification", "loss-functions" ]
5044
2
null
5042
7
null
I suggest [ROC Graphs: Notes and Practical Considerations for Reasearchers](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.9777) by Tom Fawcett, really an excellent read. As far as I understand your question, you will find everything you need in this paper. Edit: Inspired by Adam I also want to recommend my favorite R-package for this task: [ROCR](http://rocr.bioinf.mpi-sb.mpg.de/).
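As an illustration of what ROCR does with the data from this question, here is a hedged sketch; it assumes a lower Hamming distance indicates a genuine match (so the negated distance is used as the score), and with only one true positive the resulting curve will be very coarse:

```
library(ROCR)

hd     <- c(0.34, 0.37, 0.34, 0.29, 0.55, 0.47, 0.47, 0.32,
            0.39, 0.45, 0.42, 0.37, 0.66, 0.39, 0.44, 0.39)
labels <- c(1, rep(0, 15))            # only trial 1 is a true positive

pred <- prediction(-hd, labels)       # higher score = more likely genuine
perf <- performance(pred, "tpr", "fpr")
plot(perf)                            # the ROC curve (TPR vs FPR)
performance(pred, "auc")@y.values     # area under the curve
```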
null
CC BY-SA 3.0
null
2010-12-01T07:32:11.380
2015-12-03T16:48:07.387
2015-12-03T16:48:07.387
264
264
null
5045
1
null
null
4
235
Consider a series like CPI (inflation), which I know is composed of a series of component prices (e.g. meat prices, grain prices, non-food prices, etc.), which in turn are also composed of a series of component prices (e.g. average meat prices are a combination of pork, beef, and chicken prices). If I wanted to use a regression to find the components of CPI and their weightings, then is it better to use the final components (regress CPI against pork, beef, and chicken prices), or is it better to create fitted version of the middle components, then regress against CPI (regress average meat prices against pork, beef, and chicken prices, then fit a meat price series, then regress CPI against the fitted meat price series)? Also, I should note that some series which were significant when fitting the interim components in the layered method - the latter method - lose their significance when the regression is flattened. So is is possible that the latter, layered method will return a better result because it includes the effect of more components? I have tried this with actual data and the latter method gives better results.
In a linear regression whose components can also be broken down, is it better to do multi-layered regression, or flatten to final components?
CC BY-SA 2.5
null
2010-12-01T07:50:58.993
2011-01-10T16:46:43.897
2010-12-01T09:39:25.810
1195
1195
[ "time-series", "regression" ]
5046
2
null
4556
-1
null
To find the steady-state gain of a continuous-time transfer function H(s), we let s = 0. Since z = exp(sT), to find the steady-state gain of H(z), let z = exp(0) = 1.
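For instance, for a hypothetical first-order discrete-time system (not taken from the original question):

```
# H(z) = 0.2 / (z - 0.8); its steady-state (DC) gain is H(1)
H <- function(z) 0.2 / (z - 0.8)
H(1)   # = 1
```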
null
CC BY-SA 2.5
null
2010-12-01T08:04:02.930
2010-12-01T08:04:02.930
null
null
null
null
5047
2
null
5042
4
null
Why do you want to make an ROC curve? Do you want to graph the curve for your dependent variables, or are you looking to use it as a test statistic in order to gauge the accuracy of your probability predictions (in which case you're looking for the AUC [area under the curve]). If you're familiar with R, the verification package in R has two functions that you will find useful: roc.plot(), which will allow you to plot your ROC curve, and roc.area() which will allow you to calculate the AUC.
null
CC BY-SA 2.5
null
2010-12-01T08:46:39.030
2010-12-01T08:46:39.030
null
null
2166
null
5048
1
5091
null
2
1257
I have a time serie that I want to analyse through a wavelet decomposition. I am using the R package WaveThres. I am interested in the wavelet autocorrelation, but I struggle to understand what does it mean precisely. I have from the book [Wavelet Methods in Statistics with R](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-75960-9) the following formula $\Psi_j(\tau)=\sum_{k}\phi_{j,k}(0)\phi_{j,k}(\tau)$ $\tau\in\mathbb{Z}$ being the lag of the autocorrelation and $\left \{\phi_{j,k}(t)=\phi_{j,k-t} \right \}_{j,k}$ a set of non decimated wavelets I would really appreciate to understand the meaning of this formula, and (why/if) it is different from performing a multi resolution analysis (MRA) and computing the Pearson autocorrelation coefficient on a detail. fRed
Wavelet auto correlation
CC BY-SA 2.5
null
2010-12-01T10:05:09.643
2010-12-02T22:25:41.843
2010-12-01T10:18:28.720
930
1709
[ "correlation", "autocorrelation", "wavelet" ]
5049
1
null
null
8
610
Let $A$ be a finite set and suppose we want to compute the size of some subset $X$.

Motivation: If we can generate elements $x$ of $A$ uniformly at random, then we can estimate the size of $X$ by random sampling. That is, we take $n$ random samples from $A$; if $m$ of them belong to $X$, then $|X|/|A| \approx m/n$. Unfortunately, for what I do, usually $|A|$ is massive and $|X|$ (while massive) is quite small with respect to $|A|$. So if I attempt to perform the above estimation, I'm likely to get $m=0$, which, while not useless, is not really all that satisfying.

So, I have an idea that I'm hoping will speed up this process. Instead of throwing darts at a massive dart board, why don't I throw balls? That is, instead of sampling elements $x \in A$, we sample subsets of $A$. Surely I should be able to infer something about the density of $X$ in $A$ from this experiment.

Suppose $A$ is equipped with a metric $d(x,y)$ (I have in mind the Hamming distance). For any $y \in A$ let $Y_k(y)=\{x \in A:d(x,y) \leq k\}$ be the closed ball of radius $k$ in $A$ centered at $y$. Since we can sample elements $y \in A$ uniformly at random, we can sample $k$-balls $Y_k(y)$ uniformly at random. Suppose (a) every $x \in A$ belongs to exactly the same number of $k$-balls and (b) every $k$-ball has the same size $r$. Now suppose I generate $k$-balls $Y_1,Y_2,\ldots,Y_n$ uniformly at random and let $m=\sum_{i=1}^n |Y_i \cap X|$. It seems we can estimate $|X|$ in a similar way, that is $|X|/|A| \approx \frac{m}{rn}$. So my questions are:

> Am I correct in that we can approximate $|X|$ this way? If so, I doubt I'm the first to think of this, so is there a name for this method? I did actually test this on some sets, and it seems to match what I claim.

> Are there any drawbacks to this approach? (e.g. is it less accurate? do I need more samples?)
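A small simulation along these lines (my own toy setup, not from the question): take $A=\{0,1\}^{20}$ with the Hamming distance and $X$ the strings with at least 18 ones, so that conditions (a) and (b) hold for radius-1 balls of size $r = 21$:

```
set.seed(1)
d <- 20; n <- 10000        # dimension of the cube, number of sampled balls
r <- d + 1                 # size of a radius-1 Hamming ball
inX <- function(v) sum(v) >= 18

m <- 0
for (i in 1:n) {
  y <- rbinom(d, 1, 0.5)   # a uniformly random centre
  ball <- rbind(y, t(sapply(1:d, function(j) { z <- y; z[j] <- 1 - z[j]; z })))
  m <- m + sum(apply(ball, 1, inX))                        # size of the overlap of this ball with X
}

m / (r * n)                                                # estimate of |X|/|A|
(choose(20, 18) + choose(20, 19) + choose(20, 20)) / 2^20  # exact value, about 2.0e-4
```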
Can we estimate the size of a subset X of a set A, by randomly sampling subsets of A?
CC BY-SA 2.5
null
2010-12-01T11:28:41.573
2010-12-01T16:29:30.777
2010-12-01T12:25:50.300
386
386
[ "estimation" ]
5050
2
null
5049
3
null
OK, try reading the wikipedia page for [Monte Carlo integration](http://en.wikipedia.org/wiki/Monte_Carlo_integration). You'll see they mention a stratified version. Stratification is the technical term in statistics for what you attempt: subdividing in subsets (subsamples). I guess the references can help you further.
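For what it's worth, here is a minimal toy example of the stratified idea for an ordinary integral (my own illustration, not tied to the poster's setting):

```
# plain vs stratified Monte Carlo estimate of E[g(U)] = integral of exp(u) over [0,1]
set.seed(1)
g <- function(u) exp(u)
n <- 1000
plain <- mean(g(runif(n)))                      # plain Monte Carlo
strat <- mean(g((0:(n - 1) + runif(n)) / n))    # one draw per equal-width stratum
c(plain = plain, stratified = strat, exact = exp(1) - 1)
```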
null
CC BY-SA 2.5
null
2010-12-01T12:23:22.067
2010-12-01T12:23:22.067
null
null
2036
null
5051
2
null
5049
2
null
I assume your measure is finite; WLOG it can be a probability. The first procedure you mention is the good old empirical probability estimate: $\hat{P}(Y\in X)= | \{ x_i \in X\} | /n $ (the Monte Carlo estimate of an integral is a good interpretation also). In high dimension it does not work, since $\{x_i\in X\}$ is likely to be empty for a typical $A$. As you have noticed, you need regularization. How sophisticated a regularization you need is related to the dimension of your space. An idea is to enlarge $X$, or even give a weight to an $x_i$ that is not in $X$ according to its distance to $X$; this is what I would call a kernel probability estimate (by analogy with the [kernel density estimate](http://en.wikipedia.org/wiki/Kernel_density_estimation)): $\hat{P}(Y\in X)= 1/(c(k) n)\sum_{i} K(d(x_i,X)/k) $ where $K$ is a kernel that integrates to $1$ (in your case it can be $K(x)=1\{x\leq 1\}$, but the Gaussian kernel has good properties) and $c(k)$ is a well chosen normalization constant (i.e. such that $\hat{P}(Y\in A)=1$).
null
CC BY-SA 2.5
null
2010-12-01T12:34:07.033
2010-12-01T16:08:12.340
2010-12-01T16:08:12.340
223
223
null
5052
2
null
4892
1
null
The answer is 6. See the edits above for details.
null
CC BY-SA 2.5
null
2010-12-01T13:10:35.207
2010-12-01T13:10:35.207
null
null
1351
null
5053
2
null
4983
0
null
```
sv_num: one added to the number of support vectors
b:      offset
alpha:  0, then "a" corresponding to the support vectors, then only zeros
index:  link index between "a" and alpha, "x" and supvec
supvec: 0, then "x" corresponding to the support vectors
a:      weights_i (between 0 and C) times y_i
```
null
CC BY-SA 2.5
null
2010-12-01T13:23:03.733
2010-12-01T13:23:03.733
null
null
1351
null
5054
1
null
null
4
290
Say we are studying Twitter hashtags over time. We monitor how popular they are day to day. Some hashtags may be volatile (e.g. "lunch", "Celtics", "Friday"): their popularity rises and falls frequently. Some hashtags may be in the process of becoming unpopular (e.g. "Gulf oil spill", "Transformers 2", "Christine O'Donnell"). Is there a mathematical model that can distinguish between a hashtag that has temporarily fallen in popularity but is likely to go up in popularity later and a hashtag that is sinking and likely to stay sunk in popularity? thanks
Is there a mathematical model that distinguishes between volatility and trend?
CC BY-SA 2.5
null
2010-12-01T16:14:58.283
2011-05-12T20:00:40.270
null
null
null
[ "volatility-forecasting" ]
5055
2
null
5049
3
null
For any subset $Y$ of $A$, let $\pi(Y)$ be the probability you will select it in your sampling. You have described a random variable $$f(Y) = |Y \cap X|.$$ The total of $f$ in the population of subsets of $A$ is $$\tau(X) = \sum_{Y \subset A}|Y \cap X| = 2^{|A|-1}|X|.$$ From a sample (with replacement) of subsets of $A$, say $Y_1, Y_2, \ldots, Y_m$, the [Hansen-Hurwitz Estimator](http://www.math.umt.edu/patterson/549/Unequalp.pdf) obtains an unbiased estimate of this total as $$\hat{f}_\pi = \sum_{i=1}^{m} \frac{|Y_i \cap X|}{\pi(Y_i)} .$$ Dividing this by $2^{|A|-1}|A|$ therefore estimates $|X|/|A|$. The variance of $\hat{f}_\pi$ is $$\text{Var}(\hat{f}_\pi) = \frac{1}{m} \sum_{Y \subset A} \pi(Y) \left( \frac{|Y \cap X|}{\pi(Y)} - 2^{|A|-1}|X| \right)^2\text{.}$$ Dividing this by $2^{2(|A|-1)}|A|^2$ yields the sampling variance of $|X|/|A|$. Given $A$, $X$, and a proposed sampling procedure (which specifies $\pi(Y)$ for all $Y \subset A$), choose a value of $m$ (the sample size) for which the estimation variance becomes acceptably small.
null
CC BY-SA 2.5
null
2010-12-01T16:29:30.777
2010-12-01T16:29:30.777
null
null
919
null
5056
1
5080
null
21
22620
Given the support vectors of a linear SVM, how can I compute the equation of the decision boundary?
Computing the decision boundary of a linear SVM model
CC BY-SA 2.5
null
2010-12-01T18:25:03.223
2017-11-08T12:10:41.377
2010-12-03T08:06:06.830
930
2221
[ "machine-learning", "svm" ]
5057
2
null
5056
4
null
The weight vector that defines the decision boundary is a linear combination of the support vectors, where the coefficients are given by the Lagrange multipliers (times the corresponding class labels) of those support vectors.
null
CC BY-SA 2.5
null
2010-12-01T20:04:56.970
2010-12-01T20:04:56.970
null
null
881
null
5058
1
null
null
6
1583
In a social experiment that I was conducting, I was trying to count the number of people each user contacted in a period of 10 days. The population size was 100 for the experiment. Based on the values that I calculated, I fit a negative binomial distribution to the data (the Q-Q plot is given below). Conventional wisdom says that most networks amongst humans follow a power law distribution. I am guessing that my population size is too small to make a full conclusion about anything but is there any kind of relation between a negative binomial distribution and a power law distribution? I am asking this because I read a few days back that Normal Distribution and Gamma distribution (whose discrete analogue is the negative binomial) have a special role in that many other distributions can be derived from the Gamma distribution. I am wondering if this is true even with the power law distribution. I am a beginner in statistics so kindly point me in the right direction if I am out of track. ![alt text](https://i.stack.imgur.com/MMla7.png)
Is there any relation between Power Law and Negative Binomial distribution?
CC BY-SA 2.5
null
2010-12-01T20:16:27.117
2010-12-02T04:00:56.167
null
null
2164
[ "r", "distributions", "probability", "modeling" ]
5059
2
null
4551
43
null
Interpreting `Probability(data | hypothesis)` as `Probability(hypothesis | data)` without the application of Bayes' theorem.
null
CC BY-SA 2.5
null
2010-12-01T20:17:08.233
2010-12-01T20:17:08.233
null
null
961
null
5060
2
null
4551
6
null
Probably not as applicable to psych stats (or is it? I'm not sure), but failing to account for a split-plot design in the analysis of an experiment. I've seen way too many people do this.
null
CC BY-SA 2.5
null
2010-12-01T21:09:50.050
2010-12-01T21:09:50.050
null
null
1028
null
5061
2
null
5026
9
null
I'd add some observations to what's been said... AI is a very broad term for anything that has to do with machines doing reasoning-like or sentient-appearing activities, ranging from planning a task or cooperating with other entities, to learning to operate limbs to walk. A pithy definition is that AI is anything computer-related that we don't know how to do well yet. (Once we know how to do it well, it generally gets its own name and is no longer "AI".) It's my impression, contrary to Wikipedia, that Pattern Recognition and Machine Learning are the same field, but the former is practiced by computer-science folks while the latter is practiced by statisticians and engineers. (Many technical fields are discovered over and over by different subgroups, who often bring their own lingo and mindset to the table.) Data Mining, in my mind anyhow, takes Machine Learning/Pattern Recognition (the techniques that work with the data) and wrap them in database, infrastructure, and data validation/cleaning techniques.
null
CC BY-SA 2.5
null
2010-12-01T21:17:22.760
2010-12-01T21:17:22.760
null
null
1764
null
5062
1
5063
null
1
3114
I have data points from a half circle and I already know the radius. I want to find the circle which best fits the points using a fixed radius. How can I do this? If I solve the problem using a typical circle fit algorithm the radius is too unstable due to "noise".
Geometric circle fitting with known radius
CC BY-SA 2.5
null
2010-12-01T21:43:00.990
2010-12-01T22:06:55.510
null
null
2223
[ "nonlinear-regression" ]
5063
2
null
5062
3
null
It depends on how the points might depart from the circle. If they do so through measurement error, a natural model is that their locations are binormally distributed with the coordinates $x$ and $y$ uncorrelated, of equal variances. This leads to a difficult model to fit, but if the errors are not too great compared to the radius, we can approximate it closely as a normal distribution in the radial direction only (because tangential displacements do not move a point off the circle, to first order). This suggests a model of the form Distance($(x_i, y_i)$, $(\alpha, \beta)$) = $r + \epsilon_i$ where $r$ is given, the $(x_i, y_i)$ are the data, $(\alpha, \beta)$ are coordinates of the center (to be estimated), and the $\epsilon_i$ are identically and independently distributed with zero mean. To estimate the parameters you could use least squares or maximum likelihood (among other approaches). For example, least squares would seek values of $\alpha$ and $\beta$ minimizing $\sum_i \left( \sqrt{(x_i - \alpha)^2 + (y_i - \beta)^2} - r \right)^2 \text{.}$ This problem has no closed-form solution but can readily be solved numerically. There is a clever approach in the image processing world: create an image of the boundaries of the circles of radius $r$ centered at the data points. In so doing, add the intensities of the boundaries wherever they overlap. Any grid point of greatest intensity is a reasonable solution. (In effect this is a kernel density estimate. You are using a kernel of the form $K(x,y)$ that is zero except for values of $(x,y)$ for which $x^2+y^2$ is close to $r^2$ and you are convolving that kernel with the data.)
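A minimal sketch of the least-squares approach in R, using optim() and simulated half-circle data (the true centre, radius, and noise level below are made up purely for illustration):

```
set.seed(42)
r <- 2                                            # known radius
theta <- runif(200, 0, pi)                        # points along a half circle
x <- 1 + r * cos(theta) + rnorm(200, sd = 0.05)   # true centre (1, 3) plus noise
y <- 3 + r * sin(theta) + rnorm(200, sd = 0.05)

# sum of squared radial deviations for a candidate centre (alpha, beta)
sse <- function(ab) sum((sqrt((x - ab[1])^2 + (y - ab[2])^2) - r)^2)

fit <- optim(c(mean(x), mean(y)), sse)            # centroid as the starting value
fit$par                                           # estimated centre
```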
null
CC BY-SA 2.5
null
2010-12-01T22:06:55.510
2010-12-01T22:06:55.510
null
null
919
null
5064
2
null
5026
16
null
I'm most familiar with the machine-learning - data mining axis - so I'll concentrate on that: Machine learning tends to be interested in inference in non-standard situations, for instance non-i.i.d. data, active learning, semi-supervised learning, learning with structured data (for instance strings or graphs). ML also tends to be interested in theoretical bounds on what is learnable, which often forms the basis for the algorithms used (e.g. the support vector machine). ML tends to be of a Bayesian nature. Data mining is interested in finding patterns in data that you don't already know about. I'm not sure that is significantly different from exploratory data analysis in statistics, whereas in machine learning there is generally a more well-defined problem to solve. ML tends to be more interested in small datasets where over-fitting is the problem and data mining tends to be interested in large-scale datasets where the problem is dealing with the quantities of data. Statistics and machine learning provides many of the basic tools used by data miners.
null
CC BY-SA 2.5
null
2010-12-01T23:57:48.737
2010-12-01T23:57:48.737
null
null
887
null
5065
1
5067
null
2
2287
I want to find $\theta$ such that $ \theta = argmin_{\theta} \left( \left|\left| Y - \sum_{i=1}^k \theta_i X_i \right|\right| \right) $ where $X_i$ and $Y$ are N x N matrices and $\theta$ is a weight vector that specifies how to linearly combine the $k$ $X$'s to approximate Y. This smells like a linear optimization problem though I'm unskilled in this kind of math and can't seem to formulate it as a linear program. I'd also be curious if anyone has any advice on how to learn this kind of math and/or problem formulation. Thanks!
How can I find the best linear combination of a set of matrices to approximate a target matrix?
CC BY-SA 2.5
null
2010-12-02T01:24:18.253
2010-12-17T07:57:59.127
2010-12-17T07:57:59.127
223
2224
[ "regression", "optimization", "matrix" ]
5066
2
null
5041
3
null
I don't know much about these kinds of proofs, but I think John Langford's thesis might be a good reference. Here's a relevant page: [http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.html](http://hunch.net/~jl/projects/prediction_bounds/prediction_bounds.html) and the probably relevant section of the thesis: [http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesisse15.xml#x22-300004.1](http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesisse15.xml#x22-300004.1)
null
CC BY-SA 2.5
null
2010-12-02T02:45:59.310
2010-12-02T02:45:59.310
null
null
2077
null
5067
2
null
5065
5
null
$\theta = argmin_{\theta} (Y - \sum_{i=1}^k \theta_i X_i)$ would be an affine function in $\theta$ and hence an unconstrained linear program. But $\theta = argmin_{\theta} || Y - \sum_{i=1}^k \theta_i X_i ||$ has an arbitrary norm. Fortunately, norms are convex and convexity is preserved under compositions with affine functions, so the problem is simply an unconstrained convex optimization program. They can be solved easily.

Here is an example from MATLAB using [cvx](http://cvxr.com/). Initialize 5 random normal 2x2 matrices and try to regress $Y$ on the 4 $X_i$'s in the Euclidean norm in this example (to change this, just use a different norm() function, of which cvx has many). If $Y$ and the $X_i$'s are linearly independent, since there are only 4 degrees of freedom, we should be able to achieve 0 error, which we do.

```
N=2;
Y=randn(N,N);
X1=randn(N,N); X2=randn(N,N); X3=randn(N,N); X4=randn(N,N);

cvx_begin
    variable theta(4);
    minimize(norm(Y-theta(1)*X1-theta(2)*X2-theta(3)*X3-theta(4)*X4))
cvx_end
```

```
Status: Solved
Optimal value (cvx_optval): +1.50457e-11

> norm(Y)
ans =
    1.3731
```

The norm of $Y$ is about 1.4, and the norm of the difference given our thetas is effectively zero.
null
CC BY-SA 2.5
null
2010-12-02T03:39:19.017
2010-12-02T03:47:30.290
2010-12-02T03:47:30.290
1815
1815
null
5069
2
null
5058
2
null
There are many power-law distributions, so you have a lot of possible models. You might start by trying to fit a [log-series distribution](http://en.wikipedia.org/wiki/Logarithmic_distribution), which is a limiting case of the negative binomial. Don't think a priori that you have a mixture distribution as suggested by whuber until you've estimated model parameters and done at least a goodness of fit test. Long-tail distributions, like power-law, log-series, Zipf, etc., typically have what look like outliers in the right-hand tail; their separation from the bulk of the observations is just an artifact of (relatively) small sample size. Mixtures are a pain in the butt to estimate, since some regions overlap. You can often avoid that sort of problem by stepping up your modeling one level with something like Poisson regression, assuming you have some covariate (predictor) data about each user -- this basically does the mixing for you. The Johnson, Kemp, and Kotz reference given at the end of the referenced Wikipedia article has everything you'd ever want to know about all these distributions, including many methods of parameter estimation.
null
CC BY-SA 2.5
null
2010-12-02T04:00:56.167
2010-12-02T04:00:56.167
null
null
5792
null
5070
1
5072
null
4
1134
If we have a loaded coin that plays out 75% heads, 25% tails, what would be the best way to bet on the outcome of each of $n$ trials? How could we maximize our probability of winning? Is it possible to generalize for a coin that's loaded $n:(100-n)$?
Maximizing probability of winning on loaded coin
CC BY-SA 2.5
null
2010-12-02T04:20:55.457
2013-01-02T18:51:39.693
2010-12-02T09:29:43.640
930
2226
[ "probability", "dice" ]
5071
2
null
5070
6
null
1/ Do you already know the bias $n$? 2/ If yes (actually you only need to know which side of the coin is heavier), then you cannot do better than always betting on this side. In the long term, you'll have an $n\%$ success ratio. You cannot do better because the trials are independent.
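A quick simulation supporting point 2 (a hedged sketch: it compares always betting on the heavier side with randomly "matching" the bias on a 75/25 coin):

```
set.seed(1)
flips <- rbinom(1e5, 1, 0.75)              # 1 = heads on the loaded coin
mean(flips == 1)                           # always bet heads: ~0.75 correct
mean(flips == rbinom(1e5, 1, 0.75))        # bet heads 75% of the time at random: ~0.625 correct
```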
null
CC BY-SA 2.5
null
2010-12-02T04:54:20.090
2010-12-02T04:54:20.090
null
null
1709
null
5072
2
null
5070
9
null
In what follows, I will assume you mean someone gives you 1:1 odds on the loaded coin. You are looking for the [Kelly criterion](http://en.wikipedia.org/wiki/Kelly_criterion), which states: $$ f^* = \frac{ b p - q }{ b } $$ where (just copying from wikipedia) $f^*$ is the fraction of your bankroll, $b$ is the fractional payout (in your case presumably $b=1$, i.e. a dollar investment gives you a dollar return if heads is thrown) and $q = 1-p$.

For your example, the fraction the Kelly criterion says to invest is $f^* = .75 - .25 = 0.5$, i.e. the Kelly criterion tells you to invest half of your savings each time you play. As intuition, you want to make sure you don't invest all your money on a loaded coin, as just one bad throw would deplete your savings.

Where the Kelly criterion is instructive is in understanding which function to maximize. The Kelly criterion requires that you maximize your winnings based on: $$ p\ \text{ln}(1 + b x) + (1-p)\ \text{ln}(1 - x) $$ Maximizing over $x$ yields the equation for $f^*$ above. Unfortunately I am a little unclear as to why this particular equation is being maximized as opposed to some other equation. I have heard an information-theoretic argument for why this is so (notice that the equation above looks very much like an entropy equation) but, sadly, still remain puzzled.

EDIT: Well, I feel pretty dumb. As whuber points out in the comments, the Kelly criterion is quite easy to derive. If we assume you want to invest a constant proportion of your savings at each trial, call it $r$, then your winnings after $n$ trials, $W_n$, for an initial savings of $W_0$, will be: $$ W_n = (1 + b r)^{p n} (1 - r)^{(1-p) n} W_0 $$ Taking logarithms, noticing that this equation, as a function of $r$, has one maximum, then setting the derivative equal to 0 and solving gives the Kelly criterion formula for $r = f^*$ as above.
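As a quick numerical sanity check of the formula above (a sketch, using $p = 0.75$ and $b = 1$ from the question):

```
p <- 0.75; b <- 1
g <- function(x) p * log(1 + b * x) + (1 - p) * log(1 - x)   # expected log-growth per bet
optimize(g, c(0, 0.99), maximum = TRUE)$maximum              # ~0.5, matching (b*p - q)/b
```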
null
CC BY-SA 2.5
null
2010-12-02T05:34:57.873
2010-12-02T15:02:08.493
2010-12-02T15:02:08.493
2168
2168
null
5073
1
5075
null
7
7179
I have a data-frame whose first column is the name of an item and the second column is the frequency of that item in the dataset.

```
        names freq
1      tomato    7
2      potato    4
3     cabbage    5
4 sukuma-wiki    8
5      terere   20
```

I would like to have a stacked bar column that depicts the proportion of each entry on the chart. How do you handle coloring of the stacked bar when presented with over sixty entries? What is the easiest way to do this?
Stacked bar plot
CC BY-SA 3.0
null
2010-12-02T06:09:21.087
2015-02-09T13:34:20.480
2011-11-21T22:34:57.970
930
18462
[ "r", "data-visualization", "barplot" ]
5074
2
null
5073
6
null
For the coloring, either you specify a list of colors or you generate them. In the latter case, I suggest you execute this code

```
n = 32; main.name = paste("color palettes; n=", n)
ch.col = c("rainbow(n, start=.7, end=.1)", "heat.colors(n)",
           "terrain.colors(n)", "topo.colors(n)", "cm.colors(n)")
nt <- length(ch.col)
i <- 1:n; j <- n/nt; d <- j/6; dy <- 2*d
# yaxt="n" and xaxt="n" suppress the axis ticks and labels
plot(i, i+d, type="n", yaxt="n", xaxt="n", ylab="", xlab="", main=main.name)
for (k in 1:nt) {
    rect(i-.5, (k-1)*j + dy, i+.4, k*j,
         col = eval(parse(text=ch.col[k])), border = "grey")
    text(2.5*j, k*j + dy/2, ch.col[k])
}
```

taken from the blog [http://statisticsr.blogspot.com/2008/07/color-scale-in-r.html](http://statisticsr.blogspot.com/2008/07/color-scale-in-r.html)

Barplotting should be done with ?barplot

```
DF = data.frame(names=c("tomato", "potato", "cabbage", "sukuma-wiki", "terere"),
                freq=c(7, 4, 5, 8, 20))
barplot(as.matrix(DF[,2]), col=heat.colors(length(DF[,2])),
        legend=DF[,1], xlim=c(0,9), width=2)
```
null
CC BY-SA 2.5
null
2010-12-02T06:27:26.500
2010-12-02T07:33:43.230
2010-12-02T07:33:43.230
1709
1709
null
5075
2
null
5073
13
null
With 60 distinct categories, I feel you may have a hard time making that an effective graphic. You may want to consider a regular bar-chart that is sorted in ascending or descending order. Whether or not these are counts or percentages is up to you. Maybe something like this: ``` library(ggplot2) df$names <- reorder(df$names, -df$freq) #Reorders into ascending order qplot(x = names, y = freq, data = df, geom = "bar") + coord_flip() ``` EDIT: To make a stacked bar chart with `ggplot`, we set the `x = 1` since we will have only one column. We will use the `fill` argument to add color: ``` qplot(x = factor(1), y = freq, data = df, geom = "bar", fill = names) ``` Also of interest: a stacked bar chart is pretty darn close to being a pie chart. You can transform the coordinate system of ggplot charts with `+ coord_polar(theta = "y")` to make a pie chart from the stacked bar chart above.
null
CC BY-SA 2.5
null
2010-12-02T07:26:52.260
2010-12-02T07:56:40.507
2010-12-02T07:56:40.507
696
696
null
5076
2
null
5073
7
null
I doubt you will find a suitable range of distinct colours with so many categories. Anyway, here are some ideas:

- For a stacked barchart, you need barplot() with beside=FALSE (which is the default) -- this is in base R (@Chase's solution with ggplot2 is good too)
- For generating a color ramp, you can use the RColorBrewer package; the example shown by @fRed can be reproduced with brewer.pal and any one of the diverging or sequential palettes. However, the number of colours is limited, so you will need to recycle them (e.g., every 6 items)

Here is an illustration:

```
library(RColorBrewer)
x <- sample(LETTERS[1:20], 100, replace=TRUE)
tab <- as.matrix(table(x))
my.col <- brewer.pal(6, "BrBG") # or brewer.pal(6, "Blues")
barplot(tab, col=my.col)
```

There is also the [colorspace](http://cran.r-project.org/web/packages/colorspace/index.html) package, which has a nice accompanying vignette about the design of good color schemes. Check also Ross Ihaka's course on [Topic in Computational Data Analysis and Graphics](http://www.stat.auckland.ac.nz/~ihaka/courses/787/).

Now, a better way to display such data is probably to use a so-called Cleveland dot plot, i.e.

```
dotchart(tab)
```
null
CC BY-SA 2.5
null
2010-12-02T07:35:19.660
2010-12-02T07:35:19.660
null
null
930
null
5077
1
null
null
17
2386
I'm trying to implement the hit and run MCMC algorithm, but I'm having a bit of trouble understanding how to go about it. The general idea is as follows:

> To generate a proposal jump in MH, we:
> 1. Generate a direction $d$ from a distribution on the surface of the unit sphere $\mathcal{O}$
> 2. Generate a signed distance $\lambda$ along the constrained space.

However, I have no idea of how I should go about implementing this in R (or any other language). Does anyone have a snippet of code that would point me in the right direction? BTW, I'm not that interested in a library that does this method, I want to try and code it up myself. Many thanks.
Hit and run MCMC
CC BY-SA 2.5
null
2010-12-02T10:31:27.627
2017-04-24T07:45:30.210
2010-12-02T11:21:44.393
8
null
[ "r", "bayesian", "markov-chain-montecarlo" ]
5078
1
5085
null
3
5622
Sorry for a possibly ignorant question. I have fit a mixed-effects model using the lmer function from the lme4 package, and the main fixed effect (a factor with three levels) in the model was significant according to a run with pvals.fnc (from the languageR package). To illustrate the effect in an appealing way, I would like to plot three bars with the "baseline condition" (the intercept including the effect level 1), the "experimental condition 1" (the intercept + the effect level 2), the "experimental condition 2" (intercept + effect level 3), and the corresponding confidence intervals for these conditions. However, how do I do that? The function plotLMER.fnc used to do that, but has stopped working for me (lme4_0.999375-32; languageR_1.0).

Model: mylmer <- lmer(outcome ~ (1|subject) + (1|item) + Factor, data)

So I have MCMC output in the form of:

```
$fixed
            Estimate MCMCmean HPD95lower HPD95upper  pMCMC Pr(>|t|)
(Intercept)   0.4728   0.4718     0.2250     0.7368 0.0010   0.0008
Factor2      -0.0420  -0.0420    -0.1732     0.0931 0.5414   0.5710
Factor3      -0.1643  -0.1631    -0.3153    -0.0112 0.0328   0.0508
```

My idea was to construct the conditions from this data by simply using the intercept as the baseline condition, and to construct the experimental conditions by adding the effect of each factor level to the intercept. The new (intercept + effect combined) CI would be constructed by turning the HPD interval into a standard deviation and then using the square root of the sum of squared standard deviations ([like here](http://www.cartage.org.lb/en/themes/sciences/chemistry/miscellenous/helpfile/Erroranalysis/AdditionSubtraction/AdditionSubtraction.htm)).

Is this approach appropriate? If I do it, the CIs become fairly large and the effect no longer looks significant (conflicting with the pvals.fnc output). Is there a better way (a code example would be great)? If not, is there a "least worst" solution that could satisfy readers who strongly request the standard bar plot with CIs?

Problems in the back of my head, which I am too ignorant to properly formulate: CI not equal to HPD. Transforming CI to SD is inappropriate, as CI may be asymmetric. Problems with determining CI at all in mixed models ([like here](https://stats.stackexchange.com/questions/1797/what-would-a-confidence-interval-around-a-predicted-value-from-a-mixed-effects-mo)).
Confidence-intervals for conditions tested with a mixed-effects model
CC BY-SA 2.5
null
2010-12-02T10:32:07.580
2013-06-27T08:51:47.620
2017-04-13T12:44:24.947
-1
2228
[ "r", "confidence-interval", "mixed-model" ]
5079
1
5084
null
6
2162
I have some data which describe residential units for people with learning disability, variables like how nice the furnishings are, the level of psychiatric symptomology on the unit, how happy the staff are, stuff like that. I want to check to see if we are measuring the right things, e.g. do units with happier staff have nicer atmospheres, do units with nice furnishings have happy staff, that kind of thing.

The problem is that I only have data for 8 units (averages within each, e.g. 10 staff give their responses on how happy they are, this is the average for the unit), so I can't really use linear regression to see whether the things that we have measured affect each other. I've drawn some scatterplots for all of the data and on the whole I would say there does appear to be a linear relationship in the way I would expect. But as I say, with 8 units it's not really going to generate any statistics.

I had the bright idea of ordering each unit across all variables, and then comparing the rank orders somehow. If we are measuring the right things then the ranks should be similar, like this:

```
Unit 1 (ranks across all variables): 1,1,1,1,1,1,1,1,1
Unit 2: 2,2,2,2,2,2,2,2,2
```

etc. whereas if I'm wrong, and the variables aren't important to each other, I will get this:

```
Unit 1: 1,2,3,4,5,6,7,8
Unit 2: 8,7,6,5,4,3,2,1
Unit 3: 4,5,6,7,8,1,2,3
```

etc. Here's what I get:

```
Unit1 7 5 5.0 5 3 4 5 3
Unit2 6 2 4.0 6 5 3 2 5
Unit3 3 7 7.5 1 4 1 1 1
Unit4 4 4 3.0 7 6 7 7 8
Unit5 5 3 1.0 4 2 5 6 7
Unit6 2 6 6.0 8 8 8 8 6
Unit7 1 8 7.5 3 7 6 4 4
Unit8 8 1 2.0 2 1 2 3 2
```

It looks pretty good to me, except for the first column. Any thoughts on this? Is this a proper statistic that I haven't heard of? Or is there something reasonably robust I can do with these results? Sorry for the lengthy question, many thanks in advance!
Compare rank orders of population members across different variables
CC BY-SA 2.5
null
2010-12-02T10:33:12.123
2010-12-02T12:44:04.133
2010-12-02T11:07:53.903
930
199
[ "ranks" ]
5080
2
null
5056
30
null
The [Elements of Statistical Learning](https://web.stanford.edu/~hastie/ElemStatLearn/), from Hastie et al., has a complete chapter on support vector classifiers and SVMs (in your case, start page 418 on the 2nd edition). Another good tutorial is [Support Vector Machines in R](http://www.ci.tuwien.ac.at/~meyer/svm/final.pdf), by David Meyer. Unless I misunderstood your question, the decision boundary (or hyperplane) is defined by $x^T\beta + \beta_0=0$ (with $\|\beta\|=1$, and $\beta_0$ the intercept term), or as @ebony said a linear combination of the support vectors. The margin is then $2/\|\beta\|$, following Hastie et al. notations. From the on-line help of `ksvm()` in the [kernlab](http://cran.r-project.org/web/packages/kernlab/index.html) R package, but see also [kernlab – An S4 Package for Kernel Methods in R](http://www.jstatsoft.org/v11/i09/paper), here is a toy example: ``` set.seed(101) x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2)) y <- matrix(c(rep(1,60),rep(-1,60))) svp <- ksvm(x,y,type="C-svc") plot(svp,data=x) ``` Note that for the sake of clarity, we don't consider train and test samples. Results are shown below, where color shading helps visualizing the fitted decision values; values around 0 are on the decision boundary. ![alt text](https://i.stack.imgur.com/oNWf7.png) Calling `attributes(svp)` gives you attributes that you can access, e.g. ``` alpha(svp) # support vectors whose indices may be # found with alphaindex(svp) b(svp) # (negative) intercept ``` So, to display the decision boundary, with its corresponding margin, let's try the following (in the rescaled space), which is largely inspired from a tutorial on SVM made some time ago by [Jean-Philippe Vert](http://cbio.ensmp.fr/~jvert/): ``` plot(scale(x), col=y+2, pch=y+2, xlab="", ylab="") w <- colSums(coef(svp)[[1]] * x[unlist(alphaindex(svp)),]) b <- b(svp) abline(b/w[1],-w[2]/w[1]) abline((b+1)/w[1],-w[2]/w[1],lty=2) abline((b-1)/w[1],-w[2]/w[1],lty=2) ``` And here it is: ![alt text](https://i.stack.imgur.com/ZvQpJ.png)
null
CC BY-SA 3.0
null
2010-12-02T10:37:22.867
2017-11-08T12:10:41.377
2017-11-08T12:10:41.377
132810
930
null
5081
1
null
null
1
3609
As the headline suggests, I am looking for a Java library for learning and inference of Bayesian networks. I have already found some, but I am hoping for a recommendation. Requirements in a quick overview:

- written in Java (my overlord tells me that this is not up for discussion)
- configuration is possible via code (and not solely via a GUI)
- source code available
- project is still maintained
- the more powerful, the better

Which one do you recommend?
Java library for Bayesian Networks
CC BY-SA 3.0
null
2010-12-02T11:17:13.693
2017-11-22T14:24:28.683
2017-11-22T14:24:28.683
11887
264
[ "bayesian", "bayesian-network", "software", "java" ]
5082
2
null
5081
2
null
See the [packages](http://weka.sourceforge.net/doc.dev/overview-summary.html) of the open source project Weka, which is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from your own Java code. This is the class for [Bayes Network learning](http://weka.sourceforge.net/doc.dev/weka/classifiers/bayes/BayesNet.html)
null
CC BY-SA 2.5
null
2010-12-02T11:30:28.597
2010-12-02T11:30:28.597
null
null
339
null
5083
1
null
null
6
743
The signal-to-noise ratio is simple, and is usually defined in the context of simple Gaussian local-level models. In the case of non-Gaussian signal or noise models, do people do things more complicated than the ratio of the variances of the two distributions?
Generalization of the Signal-Noise ratio for non-Gaussian processes
CC BY-SA 2.5
null
2010-12-02T11:33:03.493
2012-05-03T16:17:11.510
null
null
996
[ "mixed-model", "signal-processing" ]
5084
2
null
5079
7
null
I don't know how useful the following approach is, but one might conceptualize the situation slightly differently: imagine the different variables are raters who simply order the units from "best" to "worst". You expect that the rank order will be similar among "raters". This seems to be an application for Kendall's concordance coefficient $W$ of inter-rater agreement. In R ``` > rtr1 <- c(1, 6, 3, 2, 5, 4) # rank order from "rater" 1 > rtr2 <- c(1, 5, 6, 2, 4, 3) # "rater" 2 > rtr3 <- c(2, 3, 6, 5, 4, 1) # "rater" 3 > ratings <- cbind(rtr1, rtr2, rtr3) > library(irr) # for kendall() > kendall(ratings) Kendall's coefficient of concordance W Subjects = 6 Raters = 3 W = 0.568 Chisq(5) = 8.52 p-value = 0.130 ``` Edit: This is equivalent to the Friedman-Test for dependent samples: ``` > rtrAll <- c(rtr1, rtr2, rtr3) > nBl <- 3 # number of blocks / raters > P <- 6 # number of dependent samples / units > IV <- factor(rep(1:P, nBl)) # factor sample / unit > blocks <- factor(rep(1:nBl, each=P)) # factor blocks / raters > friedman.test(rtrAll, IV, blocks) Friedman rank sum test data: rtrAll, IV and blocks Friedman chi-squared = 8.5238, df = 5, p-value = 0.1296 ```
null
CC BY-SA 2.5
null
2010-12-02T11:48:46.770
2010-12-02T12:44:04.133
2010-12-02T12:44:04.133
1909
1909
null
5085
2
null
5078
3
null
The [DRAFT r-sig-mixed-models FAQ](http://glmm.wikidot.com/faq) details (in the "Predictions and/or confidence (or prediction) intervals on predictions" section) how to obtain predictions and confidence intervals for cells in the design of a mixed effects model. The ezPredict() function in the ez package wraps the code for the lme4 case (well, obtaining predictions and variances, leaving the user to decide their own CI).
null
CC BY-SA 2.5
null
2010-12-02T12:27:31.840
2010-12-02T12:27:31.840
null
null
364
null
5086
2
null
5081
3
null
You can easily accomplish this with R through rJava ([JRI](http://rosuda.org/JRI/), to be precise). You should tell your overlord that you want to use the best tool for the job.
null
CC BY-SA 2.5
null
2010-12-02T14:46:14.977
2010-12-02T14:46:14.977
null
null
5
null
5087
1
5088
null
7
1642
There are numerous procedures for functional data clustering based on orthonormal basis functions. I have a series of models built with the GAMM models, using the `gamm()` from the mgcv package in R. For fitting a long-term trend, I use a thin plate regression spline. Next to that, I introduce a CAR1 model in the random component to correct for autocorrelation. For more info, see eg the paper of Simon Wood on [thin plate regression splines](http://r.789695.n4.nabble.com/attachment/2063352/0/tprs.pdf) or his [book on GAM models](http://rads.stackoverflow.com/amzn/click/1584884746). Now I'm a bit puzzled in how I get the correct coefficients out of the models. And I'm even less confident that the coefficients I can extract, are the ones I should use to cluster different models. A simple example, using: ``` #runnable code require(mgcv) require(nlme) library(RLRsim) library(RColorBrewer) x1 <- 1:1000 x2 <- runif(1000,10,500) fx1 <- -4*sin(x1/50) fx2 <- -10*(x2)^(1/4) y <- 60+ fx1 + fx2 + rnorm(1000,0,5) test <- gamm(y~s(x1)+s(x2)) # end runnable code ``` Then I can construct the original basis using smoothCon : ``` #runnable code um <- smoothCon(s(x1),data=data.frame(x1=x1), knots=NULL,absorb.cons=FALSE) #end runnable code ``` Now,when I look at the basis functions I can extract using ``` # runnable code X <- extract.lmeDesign(test$lme)$X Z <- extract.lmeDesign(test$lme)$Z op <- par(mfrow=c(2,5),mar=c(4,4,1,1)) plot(x1,X[,1],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,X[,2],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,8],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,7],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,6],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,5],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,4],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,3],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,2],ylab="Basis function",xlab="X",type="l",lwd=2) plot(x1,Z[,1],ylab="Basis function",xlab="X",type="l",lwd=2) par(op) # end runnable code ``` they look already quite different. I can get the final coefficients used to build the smoother by ``` #runnable code Fcoef <- test$lme$coef$fixed Rcoef <- unlist(test$lme$coef$random) #end runnable code ``` but I'm far from sure these are the coefficients I look for. I fear I can't just use those coefficients as data in a clustering procedure. I would really like to know which coefficients are used to transform the basis functions from the ones I get with `smoothCon()` to the ones I extract from the lme-part of the gamm-object. And if possible, where I can find them. I've read the related articles, but somehow I fail to figure it out myself. All help is appreciated.
Use coefficients of thin plate regression splines in a clustering method
CC BY-SA 2.5
null
2010-12-02T15:18:04.343
2010-12-03T09:12:20.757
null
null
1124
[ "r", "clustering", "mgcv" ]
5088
2
null
5087
4
null
If I understand correctly, I think you want the coefficients from the `$gam` component: ``` > coef(test$gam) (Intercept) s(x1).1 s(x1).2 s(x1).3 s(x1).4 s(x1).5 21.8323526 9.2169405 15.7504889 -3.4709907 16.9314057 -19.4909343 s(x1).6 s(x1).7 s(x1).8 s(x1).9 s(x2).1 s(x2).2 1.1124505 -3.3807996 21.7637766 -23.5791595 3.2303904 -3.0366406 s(x2).3 s(x2).4 s(x2).5 s(x2).6 s(x2).7 s(x2).8 -2.0725621 -0.6642467 0.7347857 1.7232242 -0.5078875 -7.7776700 s(x2).9 -12.0056347 ``` Update 1: To get at the basis functions we can use `predict(...., type = "lpmatrix")` to get $Xp$ the smoothing matrix: ``` Xp <- predict(test$gam, type = "lpmatrix") ``` The fitted spline (e.g. for `s(x1)`) can be recovered then using: ``` plot(Xp[,2:10] %*% coef(test$gam)[2:10], type = "l") ``` You can plot this ($Xp$) and see that it is similar to `um[[1]]$X` ``` layout(matrix(1:2, ncol = 2)) matplot(um[[1]]$X, type = "l") matplot(Xp[,1:10], type = "l") layout(1) ``` I pondered why these are not exactly the same. Is it because the original basis functions have been subject to the penalised regression during fitting??? Update 2: You can make them the same by including the identifiability constraints into your basis functions in `um`: ``` um2 <- smoothCon(s(x1), data=data.frame(x1=x1), knots=NULL, absorb.cons=TRUE) layout(matrix(1:2, ncol = 2)) matplot(um2[[1]]$X, type = "l", main = "smoothCon()") matplot(Xp[,2:10], type = "l", main = "Xp matrix") ##!## layout(1) ``` Note I have not got the intercept in the line marked `##!##`. You ought to be able to get $Xp$ directly from function `PredictMat()`, which is documented on same page as `smoothCon()`.
null
CC BY-SA 2.5
null
2010-12-02T15:49:45.877
2010-12-03T09:12:20.757
2010-12-03T09:12:20.757
1390
1390
null
5089
2
null
5065
3
null
If your norm is a Hilbert space norm (for example root mean square error, also called the Hilbert-Schmidt norm in the case of matrices, or the $\ell^2$ norm if you treat them as vectors), then obtaining the solution is a first-year calculus exercise if you rephrase things using: $c=(\langle Y,X_i \rangle)_{i=1,\dots,k}$ and $A=(\langle X_j,X_i \rangle)_{j,i=1,\dots,k}$ ($c$ is a vector and $A$ a $k\times k$ matrix). The solution to your problem is given by solving $A\theta^*=c$. No need for optimization!
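A minimal sketch of this closed-form solution in R, assuming the Frobenius (Hilbert-Schmidt) inner product $\langle A,B\rangle=\sum_{ij}A_{ij}B_{ij}$; the dimensions and random matrices below are purely illustrative:

```
set.seed(1)
k <- 4; N <- 3
X <- lapply(1:k, function(i) matrix(rnorm(N * N), N))   # the X_i
Y <- matrix(rnorm(N * N), N)

cvec <- sapply(X, function(Xi) sum(Y * Xi))                           # c_i  = <Y, X_i>
A    <- sapply(X, function(Xj) sapply(X, function(Xi) sum(Xi * Xj)))  # A_ij = <X_j, X_i>
theta <- solve(A, cvec)                                               # solves A theta = c
theta
```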
null
CC BY-SA 2.5
null
2010-12-02T16:19:17.053
2010-12-02T16:19:17.053
null
null
223
null
5090
1
5641
null
8
1933
The R package [dlm](http://cran.r-project.org/web/packages/dlm/index.html) implements filtering and smoothing (`dlmFilter` and `dlmSmooth`) for models with regression effects, but forecasting is not (yet) available for these models: ``` mod <- dlmModSeas(4)+dlmModReg(cbind(rnorm(100),rnorm(100))) fi <- dlmFilter(rnorm(100),mod) f <- dlmForecast(fi,nAhead=12) Error in dlmForecast(fi, nAhead = 12): dlmForecast only works with constant models ``` How can I do this in R? Thanks for your help!
Gaussian state space forecasting with regression effects
CC BY-SA 2.5
null
2010-12-02T21:44:09.290
2016-05-23T08:16:00.953
2010-12-02T22:55:07.853
159
null
[ "r", "forecasting", "dlm" ]
5091
2
null
5048
1
null
The correlation coefficient of two sets of values is one number. The auto-correlation of one set of values is a function (see e.g. [http://en.wikipedia.org/wiki/Autocorrelation](http://en.wikipedia.org/wiki/Autocorrelation) ). Let's call the argument of the function `t` (looks like it's the $\tau$ in your question), then the value of the auto-correlation function at `t` is the correlation coefficient of the set of values and the set of values shifted by `t` (I might be ignoring normalization factors here).
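A small R illustration of that description (my own toy example; note that base R's acf() uses a slightly different normalisation, so the two numbers will be close but not identical):

```
set.seed(1)
x <- arima.sim(model = list(ar = 0.7), n = 500)   # an AR(1) series
t <- 3                                            # the lag

cor(x[1:(500 - t)], x[(1 + t):500])               # correlation of the series with its shifted copy
acf(x, plot = FALSE)$acf[t + 1]                   # the autocorrelation function at lag t
```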
null
CC BY-SA 2.5
null
2010-12-02T22:25:41.843
2010-12-02T22:25:41.843
null
null
961
null
5092
1
null
null
9
2119
I am a stats newbie, so apologies in advance if I'm asking a braindead question. I have searched for answers to my question, but I find that many of the topics are either too specific, or quickly go beyond what I currently comprehend.

I have some simulation work that includes large datasets which become infeasible to simulate exhaustively. For the smallest of my datasets, an exhaustive run presents the following distribution of results from a total of 9180900 tests.

Result/Frequency:

- 0: 7183804
- 1: 1887089
- 2: 105296
- 3: 4571
- 4: 140

What the numbers mean does not matter; what matters is that the larger datasets I have can stretch into billions of tests, and become far too time consuming to run. I need to constrain the workload. I feel I ought to be able to sample from the full set of tests to derive a distribution for the sample, and infer (within some bounds) that the results of an exhaustive simulation would exhibit roughly the same distribution. There is no bias inherent in the tests which are run, so uniformly randomly choosing inputs ought to provide a valid sample.

What I do not yet understand is how I should go about selecting my sample size. In particular, the distribution exhibits a strange tail, and I fear that sampling too small will lose the lower frequencies. (The 140 occurrences of '4' account for only 0.0015% of the population!) So, my question is, what is the best way of calculating a sample size with which I can assert some level of goodness in my results? Or, am I asking the wrong question?
How to calculate sample size for simulation in order to assert some level of goodness in my results?
CC BY-SA 3.0
null
2010-12-02T23:24:13.180
2011-10-04T23:33:55.633
2011-10-04T23:33:55.633
183
2246
[ "distributions", "sample-size", "experiment-design", "sampling" ]
5093
1
5095
null
6
1120
I am currently working on a RandomForest-based prediction method using protein sequence data. I have generated two models: the first model (NF) using a standard set of features and the second model (HF) using hybrid features. I have calculated the Matthews Correlation Coefficient (MCC) and accuracy, with the following results:

Model 1 (NF):

- Training accuracy: 62.85%
- Testing accuracy: 56.38%
- MCC: 0.1673

Model 2 (HF):

- Training accuracy: 60.34%
- Testing accuracy: 61.78%
- MCC: 0.1856

The testing data is an independent dataset (i.e. not included in the training data). Since there is a trade-off in accuracy and MCC between the models, I am confused about the predictive power of the models. Could you please share your thoughts on which model I should consider for further analysis? Apart from accuracy and MCC, is there any other measure that I should consider for validation? Thanks in advance.
Statistical validation of RandomForest models
CC BY-SA 2.5
null
2010-12-03T00:12:49.363
2010-12-03T00:53:44.013
2010-12-03T00:35:31.087
null
529
[ "machine-learning", "cross-validation", "random-forest" ]
5094
2
null
5093
4
null
It seems those two variants are equivalent; still, a better test should be made to confirm this, at least cross-validation. Also, if the NF and HF sets have some attributes in common, it may suggest that only this common part is useful -- I would invest some time in feature selection.
null
CC BY-SA 2.5
null
2010-12-03T00:34:34.147
2010-12-03T00:34:34.147
null
null
null
null
5095
2
null
5093
8
null
I like the idea of parsimony -- the smaller the number of variables in the model, the better, unless you are driven theoretically of course. Feature selection refers to the process of choosing which variables to use in the model (getting the best combination of variables); there are lots of different options for feature selection (worth a read). With that said, there is a variable importance measure built into the rf algorithm that you can generate as a starting point, but be very careful with it because there are noted biases in it -- see Strobl et al. in the R Journal. I trust you have varied the number of variables randomly sampled at each node (this is mtry in R), the depth of the trees, the splitting criteria, etc.

In terms of appearance, the second model looks slightly better to me, simply because of the reproduced accuracy in the test and train results. It always concerns me that if my test set accuracy is notably lower, there may be something wrong with the model. I trust you have made sure that your test and train sets are balanced, at least on the dependent variable you are looking to classify. If this is binary (0, 1), your models are not really doing much better than chance (50/50).

A very important thing to look at is the sensitivity (the number of true positives in a binary 0/1 task that are correctly classified) and the specificity (the number of true negatives that are correctly classified). If possible, I would compare this model against other machine learning algorithms such as boosted trees, support vector machines (which do OK on gene data), etc.

I am not sure what package you are using -- hope that helps. If you are using R, look up caret on CRAN (a really good intro to some of the ideas here, and great for getting out some alternative measures of performance).

Paul D
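P.S. A minimal R sketch of the rf importance and sensitivity/specificity ideas mentioned above; `train_x`, `train_y`, `test_x` and `test_y` are placeholders for your own predictor matrices and class labels:

```
library(randomForest)
library(caret)

set.seed(42)
rf_fit <- randomForest(x = train_x, y = train_y,
                       mtry = floor(sqrt(ncol(train_x))),  # vary this and compare
                       ntree = 1000, importance = TRUE)

varImpPlot(rf_fit)  # variable importance -- interpret cautiously (Strobl et al.)

pred <- predict(rf_fit, newdata = test_x)
confusionMatrix(pred, test_y)  # accuracy, sensitivity, specificity, etc.
```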
null
CC BY-SA 2.5
null
2010-12-03T00:53:44.013
2010-12-03T00:53:44.013
null
null
2238
null
5096
2
null
4904
0
null
I will repost the answer I gave on math.stackexchange:

Your question needs some more information:

- How is their score generated (what kind of game is it)?
- What should your non-cheating data look like?
- How do people cheat?
- How will their score be different (in a statistical sense) when they are cheating compared to when they are not?
- Do you know roughly the proportion that are cheating? Or is that something you also want to find out?

I would also look at outlier detection algorithms: wikipedia looks useful on this topic (link). Using a Q-Q plot on your data may also be useful if your non-cheating data should be approximately Normally distributed; points that are significantly above the line might be cheaters.
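To make the Q-Q plot idea concrete, here is a sketch with purely illustrative, simulated scores (the honest/cheater split and the distributions are made up):

```
set.seed(7)
scores <- c(rnorm(500, mean = 1000, sd = 100),  # simulated honest players
            rnorm(10,  mean = 1600, sd = 50))   # a few simulated extreme scores

qqnorm(scores)  # empirical quantiles vs. Normal quantiles
qqline(scores)  # reference line through the quartiles
# points lying well above the line in the upper tail are candidate outliers
```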
null
CC BY-SA 2.5
null
2010-12-03T04:58:22.370
2010-12-03T04:58:22.370
null
null
1146
null
5097
1
5099
null
4
3063
This is a follow-up to the question I posted [earlier](https://stats.stackexchange.com/questions/5093/statistical-validation-of-randomforest-models). I am assessing two RF models which are generated using two different sets of features:

> NF - Test_Accuracy > Training accuracy (500 features)
> HF - Test_Accuracy < Training accuracy (125 features)

Testing and training are done using independent datasets, and the accuracy is derived from the average of a 5-fold cross-validation. The difference between the models is in the number of features. I am afraid there could be possible [overfitting](http://en.wikipedia.org/wiki/Overfitting) in one of the models (it is not clear to me which model could be overfitting, because I have used an independent dataset and k-fold cross-validation on the datasets). I would like to know what the standard methods (tools/libraries) are that can be used to assess overfitting.
How to assess overfitting?
CC BY-SA 2.5
null
2010-12-03T05:23:38.833
2016-04-25T20:50:50.197
2020-06-11T14:32:37.003
-1
529
[ "machine-learning", "random-forest" ]
5098
2
null
5092
6
null
I think the answer to your question is a couple of other questions: how rare does a given test outcome need to be before you don't care about it? How certain do you want to be that you'll actually find at least one test that comes out that way if it occurs right at the threshold where you've stopped caring about it? Given those values you can do a power analysis. I'm not 100% confident whether you need to do a multinomial (involving more than one outcome) power analysis or not; I'm guessing a binomial one (either the rare test outcome or not) will work just fine, e.g. [http://statpages.org/proppowr.html](http://statpages.org/proppowr.html). With alpha = .05, power = 80%, group 0 proportion 0, group 1 proportion .0015 and relative sample size 1, the total comes out just south of 13,000 tests, at which the expected number of test 4s is ~20. That will help you find the number of tests you need in order to detect one of those rarely occurring results.

However, if what you really care about is relative frequency, the problem is harder. I'd conjecture that if you simply multiplied the resulting N from the power analysis by 20 or 30 you'd find a reasonable guess. In practice, if you don't really need to decide the number of tests ahead of time, you might consider running tests until you get 20 or 30 result 4s. By the time you've gotten that many 4s you should start to have a reasonable, though not absolute, estimate of their relative frequency, IMO.

Ultimately there are trade-offs between the number of tests run and accuracy. You need to know how precise you want your estimates to be before you can really determine how many is "enough".
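If you prefer to do this kind of calculation in R rather than on the linked web page, something along these lines should be close (a sketch; the exact total may differ somewhat from the calculator, e.g. because of continuity corrections or other differences in the formula used):

```
# sample size per group to detect a proportion of 0.0015 vs 0 at alpha = .05, power = 80%
power.prop.test(p1 = 0, p2 = 0.0015, sig.level = 0.05, power = 0.80)
```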
null
CC BY-SA 2.5
null
2010-12-03T05:41:03.173
2010-12-03T05:41:03.173
null
null
196
null
5099
2
null
5097
9
null
This result does not mean that you have overfitting. First of all, CV is more reliable than a test set -- you can have (bad) luck in selecting the test set, which results in an optimistic (or pessimistic) bias with respect to the true accuracy. CV effectively smooths out this problem by repeating the procedure of selecting a test set. What makes a single test set even less reliable is that RF is a stochastic algorithm, so two runs with different seeds will give you different test accuracies, and the difference may be even bigger than that between CV and the test set.

Second, you may use the standard deviation of accuracy from all CV runs to test whether:

- your CV accuracy is really different from that on the test set;
- one of the feature sets you used is really better than the other.
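For example, a sketch with made-up per-fold numbers (a proper comparison would use repeated CV and a variance-corrected test, but this shows the idea of using the fold-to-fold spread):

```
# hypothetical per-fold accuracies from 5-fold CV for the two feature sets
acc_nf <- c(0.61, 0.58, 0.63, 0.60, 0.59)
acc_hf <- c(0.62, 0.64, 0.60, 0.63, 0.61)

mean(acc_nf); sd(acc_nf)
mean(acc_hf); sd(acc_hf)

# crude paired comparison across folds
t.test(acc_nf, acc_hf, paired = TRUE)
```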
null
CC BY-SA 2.5
null
2010-12-03T09:36:39.130
2010-12-03T09:36:39.130
null
null
null
null
5100
1
5103
null
2
444
This is my first question on this site, so please be patient with me. I am doing a random walk with which I build a time-series curve. I do that a preset number of times (let's say 100 times). Now I am wondering what I should do with all the generated curves. Eventually I want to have one curve that is the best representation. I tried taking the mean and median of the values for each point in time, but that gives me a rather tame and flat curve. What other options do I have? Your input is appreciated!
Random walk data post processing
CC BY-SA 2.5
null
2010-12-03T12:32:23.160
2010-12-03T14:00:26.023
null
null
2218
[ "random-variable" ]
5102
2
null
5100
2
null
Plotting the mean or median for each timepoint sounds a sensible start. You could also plot a [reference range](http://en.wikipedia.org/wiki/Reference_range) for each timepoint to show the variability across curves at each timepoint. You could also add a few (perhaps 5 or 10) randomly-chosen curves to illustrate the variability across timepoints within each curve. It should be perfectly possible to show all of those things on the same plot with a suitable choice of colours and line weights.

That should give a graphical depiction of the process's behaviour but doesn't really answer your requirement for 'one curve that is the best representation'. To answer that we need to know what you mean by 'best' -- how will you use this 'best curve'? The mean or median may look too flat and boring to use as the sole graphical display but may be the 'best' summary curve for quite a few purposes.
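A rough R sketch of the plot described in the first paragraph, with simulated walks standing in for yours (one curve per column of `walks`):

```
set.seed(1)
nTimes <- 100; nCurves <- 100
walks <- apply(matrix(rnorm(nTimes * nCurves), nrow = nTimes), 2, cumsum)

med <- apply(walks, 1, median)
lo  <- apply(walks, 1, quantile, probs = 0.05)  # 90% reference range
hi  <- apply(walks, 1, quantile, probs = 0.95)

matplot(walks[, sample(nCurves, 5)], type = "l", lty = 1, col = "grey",
        ylim = range(walks), xlab = "time", ylab = "value")  # a few raw curves
lines(med, lwd = 2)                      # median curve
lines(lo, lty = 2); lines(hi, lty = 2)   # reference range
```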
null
CC BY-SA 2.5
null
2010-12-03T13:22:49.450
2010-12-03T13:22:49.450
null
null
449
null
5103
2
null
5100
5
null
I'm somewhat confused - if these are random walks, isn't the expectation a flat, uninteresting line?

```
set.seed(123)
nWalks <- 1000
nTimes <- 100
mat <- matrix(c(rep(0, nWalks), rnorm(nWalks*(nTimes-1))), ncol = nTimes)
rwalks <- apply(mat, 1, cumsum)
matplot(rwalks, type = "l")

## Stack the data for fitting a smoother
df <- stack(data.frame(rwalks))[, 2:1]
names(df) <- c("Walk","Xt")
df <- within(df, Time <- rep(seq_len(nTimes), nWalks))

## fit smoother and predict
mod <- with(df, smooth.spline(Time, Xt))
pred <- predict(mod, x = 1:100)
lines(pred, col = "red", lwd = 4)
```

This gives:

![1000 simulated random walks from origin 0, with summary smoothing spline (thick, red line)](https://i.stack.imgur.com/Oi2Sk.png)

Whilst the expectation doesn't do an awful lot, the range and the variance do slightly more interesting things:

```
layout(1:3)
plot(apply(rwalks, 1, mean), type = "l", main = "Mean", ylab = "Mean")
plot(apply(rwalks, 1, var), type = "l", main = "Variance", ylab = "Variance")
plot(apply(rwalks, 1, max) - apply(rwalks, 1, min), type = "l",
     main = "Range", ylab = "Range")
layout(1)
```

![Expectation, variance and range of simulated random walks](https://i.stack.imgur.com/K5jfR.png)

The tiny fluctuations in the mean trace are due to sampling variation for the set of random walks I generated. It should settle down to 0 if you up the number of random walks generated. (In any case, the value of the mean is effectively zero given the magnitude of the values the walks take.)

Perhaps you could explain a bit more what you mean by "best representation"?
null
CC BY-SA 2.5
null
2010-12-03T13:55:13.343
2010-12-03T13:55:13.343
null
null
1390
null
5104
2
null
5100
1
null
If you're simply looking for a way of representing a process for readers, look up "diffusion modelling" in the psychological literature and you'll find how those folks typically attempt to represent a stochastic process and a typical exemplar.
null
CC BY-SA 2.5
null
2010-12-03T14:00:26.023
2010-12-03T14:00:26.023
null
null
364
null
5105
2
null
4259
1
null
The best introductory Bayesian book I have found is [Data Analysis - A Bayesian Tutorial](http://rads.stackoverflow.com/amzn/click/0198518897). It is quite practical.
null
CC BY-SA 2.5
null
2010-12-03T16:23:28.423
2010-12-03T16:23:28.423
null
null
1146
null
5107
1
5183
null
4
1436
Say I have a set of sample points generated by a multivariate normal distribution D whose parameters I don't know. I want to be able to measure the distance from an arbitrary point to the distribution D. One way of doing this would be to get an estimate of the parameters of D and use it to compute the Mahalanobis distance to the center of D. However, this becomes unreliable as the size of the sample gets small, or as the number of parameters goes up. An additional piece of information I can use is a bunch of other similar sets of sample points belonging to similar distributions -- so I could use that to get a "prior" over the parameters of D, and then update it according to the sample points I know D generated. My intuition is that I could have a Mahalanobis distance DC using the covariance matrix of my sample points, and a Mahalanobis distance DP using the covariance matrix of all the data points I have (not only those belonging to my distribution), and use a weighted sum of them as my distance metric, as a function of the size of my sample from D (if the size is small, I'd better use DP; if it's large, I can use DC). But I'm not sure how to formalize that exactly, and feel I'm missing some conceptual tools. (This is similar to a standard classification problem, but here I'm not interested in class membership, only in distance to a given class - the other classes are only useful to provide priors.) So, how would you formally describe this problem? Is there a standard solution?
(Mahalanobis) distance to a multivariate distribution of which I have few sample points
CC BY-SA 2.5
null
2010-12-03T17:40:07.863
2011-08-30T12:25:24.120
2010-12-03T20:47:09.717
930
1737
[ "classification", "multivariate-analysis", "distance-functions" ]
5109
1
5113
null
6
4099
When minimizing a function by general Metropolis-Hastings algorithms, the function is viewed as an unnormalized density of some distribution. (1) As density functions are required to be nonnegative, I was wondering if there is some restriction on the functions that can be minimized by Metropolis-Hastings algorithms? (2) To minimize the following function: > $f(x) = 10 \sin(0.3x) \sin(1.3x^2) + 0.001 x^4 + 0.2 x+ 80.$ How do I transform it into some other function before I can apply Metropolis-Hastings algorithms? Thanks and regards! --- EDIT: Thanks! I agree Simulated Annealing might handle such a situation better, but this is an assignment that requires the use of Metropolis-Hastings algorithms, so I am afraid I have no other choice.
Minimization of a function by Metropolis-Hastings algorithms
CC BY-SA 2.5
null
2010-12-03T20:34:20.547
2020-05-02T08:38:51.247
2010-12-05T00:35:31.170
1005
1005
[ "self-study", "optimization", "monte-carlo", "metropolis-hastings" ]
5110
2
null
726
6
null
> "Statistics is much like a streetlight. Not very enlightening, but nice for supporting you" [Storm P](https://secure.wikimedia.org/wikipedia/en/wiki/Robert_Storm_Petersen#Quotes)
null
CC BY-SA 2.5
null
2010-12-03T21:05:59.607
2010-12-03T21:05:59.607
null
null
2247
null
5111
1
null
null
6
431
I have a question regarding the sign test when the individual measurements may be correlated. Let me start off with some background. Suppose we have 4 organisms (a, b, c, d), and we make measurements in two separate ways, say A and B. Our data may look as follows:

- a: 3 for measurement A and 1 for measurement B
- b: 4 for measurement A and 3 for measurement B
- c: 0 for A and 4 for B
- d: 2 for A and 0 for B

We now take the difference between A and B: $2,1,-4,2$. Looking at the signs we get the pattern $++-+$. We want to test if there is any difference between method A and method B. Take: $H_0$ (null hypothesis) = the distribution for A is equal to the distribution for B. Under $H_0$ we would expect $\textrm{Pr}(A>B)=\textrm{Pr}(B>A)=.5$, therefore any pattern of $+$'s and $-$'s would be equally likely, i.e. $--+-$ is as likely to occur as $+-++$, etc. Let $U =$ number of $+$'s (in our case $U=3$). Assuming $H_0$ one can show that $\textrm{Pr}(U\ge3) = (1+4)/2^4 = 5/16=0.3125$.

Now, suppose that a and b are strongly positively correlated. Then not all combinations of $+$'s and $-$'s would be equally likely. For example, one would not expect to have a > b for method A and a < b for method B, so we would not expect sequences like $+-..$ or $-+..$ to occur. Taking this into account, and assuming $H_0$, it turns out that $\textrm{Pr}(U\ge3) = 3/8=0.375$, i.e. our p-value increases. Now I come to my question:

> If instead of 4 organisms I have, say, 100 organisms, and suppose I also have an upper bound on the number of correlations and on the size of each correlation, is there any way to construct an upper bound on the p-value?
Nonparametric sign test for correlated variables
CC BY-SA 2.5
null
2010-12-03T22:19:21.333
2010-12-21T18:22:10.200
2010-12-20T19:28:26.343
919
null
[ "hypothesis-testing", "nonparametric" ]
5112
2
null
5109
3
null
If you want to find the global minimum of a function, [simulated annealing](http://en.wikipedia.org/wiki/Simulated_annealing) would be the algorithm to look at, in which case there is no need to view the function as an unnormalised density of any kind and no need to transform the function.
null
CC BY-SA 2.5
null
2010-12-03T23:45:11.203
2010-12-03T23:45:11.203
null
null
887
null
5113
2
null
5109
7
null
You are rather looking for simulated annealing, which is easier to understand when formulated in the original, physics way. Having:

- $x$, a state of the system;
- $f(x)$, the energy of the system; energy is defined up to the addition of a constant, so there is no problem with it being negative or positive -- the only constraint is the-lower-the-better;
- $T$, the temperature of the system, with $kT$ being this temperature expressed in energy units;

the MH algorithm, formulated as

- $x'=x+\text{random change}$
- if $\big\{\exp{\frac{f(x)-f(x')}{kT}}>R(0;1)\big\}$ then $x=x'$
- goto 1

recreates the same occupancy of particular states (values of $x$) as for a system placed at temperature $T$, as shown by Metropolis in his pioneering work. Physical systems tend to go spontaneously into the lowest-energy state and stay there, so after an initial relaxation $x$ should generally oscillate around the minimum. The amplitude of those oscillations depends on the temperature, so simulated annealing reduces $T$ during the run to shrink them and localise the minimum more accurately. An initially high temperature is required to prevent the system from landing, and staying stuck for a very long time, in some local minimum.

EDIT: If you were told that you should use just Metropolis-Hastings, it most probably means you should do it this way, only with a constant temperature (a constantly decreasing temperature is the only thing that simulated annealing adds to MH). Then you will need to run a few experiments to figure out a good temperature, so that you get quite good accuracy without getting stuck in local minima.
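A minimal R sketch of the constant-temperature version described in the EDIT, applied to the $f(x)$ from the question; the temperature, proposal width and number of iterations are arbitrary choices here and would need tuning:

```
f <- function(x) 10 * sin(0.3 * x) * sin(1.3 * x^2) + 0.001 * x^4 + 0.2 * x + 80

set.seed(1)
kT <- 1        # temperature in energy units (tuning parameter)
x <- 0         # initial state
xbest <- x     # best state seen so far

for (i in 1:50000) {
  xprop <- x + rnorm(1, sd = 0.5)                         # step 1: random change
  if (exp((f(x) - f(xprop)) / kT) > runif(1)) x <- xprop  # step 2: accept/reject
  if (f(x) < f(xbest)) xbest <- x                         # track the best visited state
}

c(xbest = xbest, fmin = f(xbest))
```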
null
CC BY-SA 2.5
null
2010-12-03T23:50:56.123
2010-12-04T12:44:09.460
2010-12-04T12:44:09.460
null
null
null
5114
1
5411
null
8
320
I am trying to state a prior distribution for a Bayesian meta-analysis. I have the following information about a random variable:

- Two observations: 3.0, 3.6
- a scientist who studies the variable has told me that $P(X<2)=P(X>8)=0$, and that values as high as 6 have nonzero probability.

I have used the following approach to optimization (the mode of the lognormal is $e^{\mu-\sigma^2}$):

```
prior <- function(parms, x, alpha) {
  a <- abs(plnorm(x[1], parms[1], parms[2]) - (alpha/2))
  b <- abs(plnorm(x[2], parms[1], parms[2]) - (1-alpha/2))
  mode <- exp(parms[1] - parms[2]^2)
  c <- abs(mode-3.3)
  return(a + b + c)
}

v = nlm(prior, c(log(3.3), 0.14), alpha=0.05, x=c(2.5, 7.5))

x <- seq(1, 10, 0.1)
plot(x, dlnorm(x, v$estimate[1], v$estimate[2]))
abline(v=c(2.5, 7.5), lty=2) #95%CI
```

![alt text](https://i.stack.imgur.com/hvKna.png)

In the figure, you can see the distribution that this returns, but I would like to find something more like the red lines I have drawn in. This provides the same shaped distribution using the lognormal, gamma, or the normal, and it results in a distribution with $P(X=5)<0.05$ and $P(X=6)< 0.01$, i.e.:

```
plnorm(c(5,6), v$estimate[1], v$estimate[2])
```

Can anyone suggest alternatives? I would prefer to stick with a single distribution rather than a mixture. Thanks!
Seeking a distribution, perhaps uncommon, consistent with two data points and expert constraints?
CC BY-SA 2.5
null
2010-12-03T23:57:59.227
2010-12-15T17:34:52.937
2010-12-15T17:34:52.937
1381
1381
[ "r", "distributions", "probability", "bayesian", "optimization" ]
5115
1
5124
null
64
10822
What are the most important statisticians, and what is it that made them famous? (Reply just one scientist per answer please.)
Most famous statisticians
CC BY-SA 3.0
null
2010-12-04T00:08:23.027
2022-10-08T18:09:33.393
2014-04-06T02:40:34.817
32036
1808
[ "methodology", "history" ]
5116
2
null
5115
38
null
[Pierre-Simon Laplace](http://en.wikipedia.org/wiki/Laplace) for work on fundamentals of (Bayesian) probability.
null
CC BY-SA 2.5
null
2010-12-04T00:18:06.090
2010-12-04T00:18:06.090
null
null
887
null
5117
2
null
5115
94
null
[Ronald Fisher](http://en.wikipedia.org/wiki/Ronald_Fisher) for his fundamental contributions to the way we analyze data, whether it be the analysis of variance framework, maximum likelihood, permutation tests, or any number of other ground-breaking discoveries.
null
CC BY-SA 2.5
null
2010-12-04T00:36:08.460
2010-12-04T00:36:08.460
null
null
1118
null
5118
2
null
5115
52
null
[Karl Pearson](http://en.wikipedia.org/wiki/Karl_Pearson) for his work on mathematical statistics. Pearson correlation, Chi-square test, and principal components analysis are just a few of the incredibly important ideas that stem from his works.
null
CC BY-SA 2.5
null
2010-12-04T01:32:33.420
2010-12-04T01:32:33.420
null
null
1028
null
5119
1
5120
null
4
2485
I asked this question on mathoverflow yesterday and got a suggestion to try it here. I apologize if this is an easy question, but I can't seem to find an answer anywhere. I'm trying to duplicate a macroeconomic paper that uses MCMC analysis to derive time series and parameter values. For the variance of a white noise term the authors of the original paper choose an inverse gamma prior of mean 0.5 and having a 90% confidence interval between 0.21 and 0.79. I'm using Dynare to run my estimation and Dynare only accepts these kinds of parameters in terms of standard error. Therefore, my question basically amounts to "how do I find the square root of an inverse gamma distribution"? In case it's not clear what I'm asking, here's how the model equation appears in the paper: $a_t=\rho_a\times{a_{t-1}}+\epsilon_a;\hspace{20 pt}\epsilon_a\sim{N}(0,\sigma_a)$ $a_t$ follows a simple AR(1) process and $\epsilon_a$ is meant to be white noise with mean 0 and variance $\sigma_a$. The prior for $\sigma_a$ is supposed to follow an inverse gamma distribution with the mean and confidence interval above. My question then is how can I find the "square root of the distribution" so I can write an expression for the standard error of $\epsilon_a$, instead of its variance? If it's not possible to come up with a simple expression for this, would you have any suggestions for roughly equivalent specifications of the prior? Thanks in advance
Square root of inverse gamma distribution?
CC BY-SA 2.5
null
2010-12-04T02:13:27.347
2014-04-05T09:29:54.803
null
null
2251
[ "distributions", "probability", "bayesian" ]
5120
2
null
5119
5
null
You need not worry about the square root of the error variance. Instead, sample the error variance directly conditional on everything else. The posterior of the error variance will also be an inverse gamma given your model assumptions. So, I am not sure why you want to know the square root of the inverse-gamma. Edit: A more careful reading suggests that the issue is with the specific software you are using. The following link seems to be useful: [http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=1372](http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=1372) In particular, the pdf file on page 5 seems to provide some guidance as to how to specify an inverse-gamma prior for the variance in Dynare.
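For what it's worth, a small simulation sketch of the square-root point: if $\sigma^2$ has an inverse-gamma prior, you can always look at the implied prior on $\sigma$ by drawing from it. The shape and rate below are placeholders chosen only so the prior mean is roughly 0.5, not the values used in the paper:

```
a <- 4.4; b <- 1.7                              # hypothetical shape and rate
sigma2 <- 1 / rgamma(1e5, shape = a, rate = b)  # inverse-gamma draws of the variance
sigma  <- sqrt(sigma2)                          # implied draws of the standard error

mean(sigma2); quantile(sigma2, c(0.05, 0.95))
mean(sigma);  quantile(sigma,  c(0.05, 0.95))
```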
null
CC BY-SA 2.5
null
2010-12-04T02:51:18.860
2010-12-04T03:10:39.210
2010-12-04T03:10:39.210
null
null
null
5121
2
null
22
51
null
Just a little bit of fun...

# A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule.

From this site: [http://www2.isye.gatech.edu/~brani/isyebayes/jokes.html](http://www2.isye.gatech.edu/~brani/isyebayes/jokes.html)

and from the same site, a nice essay... "An Intuitive Explanation of Bayes' Theorem" [http://yudkowsky.net/rational/bayes](http://yudkowsky.net/rational/bayes)
null
CC BY-SA 2.5
null
2010-12-04T03:06:00.983
2010-12-04T03:12:36.393
2010-12-04T03:12:36.393
485
485
null
5122
2
null
5115
32
null
[Francis Galton](http://en.wikipedia.org/wiki/Francis_Galton) for discovering statistical correlation and promoting regression.
null
CC BY-SA 3.0
null
2010-12-04T03:37:32.367
2011-08-09T17:17:42.100
2011-08-09T17:17:42.100
74
74
null
5123
2
null
5115
30
null
[Andrey Markov](http://en.wikipedia.org/wiki/Andrey_Markov) for stochastic processes and markov chains.
null
CC BY-SA 2.5
null
2010-12-04T03:38:44.393
2010-12-04T03:38:44.393
null
null
74
null
5124
2
null
5115
56
null
Reverend [Thomas Bayes](http://en.wikipedia.org/wiki/Thomas_Bayes) for discovering Bayes' theorem
null
CC BY-SA 3.0
null
2010-12-04T03:46:20.583
2012-08-02T03:09:39.773
2012-08-02T03:09:39.773
74
74
null
5125
2
null
5115
50
null
[Carl Gauss](http://en.wikipedia.org/wiki/Carl_Friedrich_Gauss) for least squares estimation.
null
CC BY-SA 3.0
null
2010-12-04T03:48:36.697
2012-08-02T03:11:50.300
2012-08-02T03:11:50.300
74
74
null
5126
2
null
22
31
null
The Bayesian is asked to make bets, which may include anything from which fly will crawl up a wall faster to which medicine will save most lives, or which prisoners should go to jail. He has a big box with a handle. He knows that if he puts absolutely everything he knows into the box, including his personal opinion, and turns the handle, it will make the best possible decision for him.

The frequentist is asked to write reports. He has a big black book of rules. If the situation he is asked to make a report on is covered by his rulebook, he can follow the rules and write a report so carefully worded that it is wrong, at worst, one time in 100 (or one time in 20, or one time in whatever the specification for his report says).

The frequentist knows (because he has written reports on it) that the Bayesian sometimes makes bets that, in the worst case, when his personal opinion is wrong, could turn out badly. The frequentist also knows (for the same reason) that if he bets against the Bayesian every time he differs from him, then, over the long run, he will lose.
null
CC BY-SA 2.5
null
2010-12-04T06:54:30.117
2010-12-04T06:54:30.117
null
null
1789
null
5128
1
5130
null
8
411
We are preparing a manuscript and the editor asked us to convert a figure with a boxplot into a table "because of the more exact data content". While I think boxplots are pretty decent in revealing something about the data, I was wondering what you guys think about this? Do you (often) opt for tables over boxplots for presenting your data?
Boxplots as tables
CC BY-SA 2.5
null
2010-12-04T09:05:32.740
2010-12-04T15:54:10.600
2010-12-04T15:54:10.600
null
144
[ "data-visualization", "boxplot", "tables" ]
5129
2
null
5128
3
null
It depends on your objective: if it is a quick visualization, I would stick with the boxplot, but if it is a more detailed examination, I would stay with the data.
null
CC BY-SA 2.5
null
2010-12-04T09:22:22.180
2010-12-04T09:22:22.180
null
null
1808
null
5130
2
null
5128
13
null
I tend to think that boxplots will convey more effective information if there are numerous empirical distributions that you want to summarize in a single figure. If you only have two or three groups, editors may ask you to provide numerical summaries instead, either because it is more suitable for the journal policy or because readers won't gain much insight into the data from a figure. If you provide the three quartiles, the range, and optionally the mean $\pm$ SD, then an informed reader should have a clear idea of the shape of the distribution (symmetry, presence of outlying values, etc.).

I would suggest two critical reviews by Andrew Gelman (the first goes the other way around, but it still provides insightful ideas):

- Gelman, A, Pasarica, C, and Dodhia, R. Let's practice what we preach. The American Statistician (2002) 56(2): 121-130.
- Gelman, A. Why Tables are Really Much Better than Graphs. (also discussed on his blog)
null
CC BY-SA 2.5
null
2010-12-04T09:23:44.360
2010-12-04T09:50:30.397
2010-12-04T09:50:30.397
930
930
null
5131
2
null
5115
24
null
[Edwin Thompson Jaynes](http://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes) for work on objective Bayesian methods, particularly MaxEnt and transformation groups.
null
CC BY-SA 2.5
null
2010-12-04T10:52:07.720
2010-12-04T10:52:07.720
null
null
887
null
5132
2
null
5115
25
null
[Harold Jeffreys](http://en.wikipedia.org/wiki/Harold_Jeffreys) for revival of Bayesian interpretation of probability.
null
CC BY-SA 2.5
null
2010-12-04T10:54:08.763
2010-12-04T10:54:08.763
null
null
887
null
5133
2
null
5115
46
null
[Bradley Efron](http://en.wikipedia.org/wiki/Bradley_Efron) for the [Bootstrap](http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29) - one of the most useful techniques in computational statistics.
null
CC BY-SA 2.5
null
2010-12-04T11:07:26.660
2010-12-04T11:07:26.660
null
null
887
null
5134
2
null
4451
2
null
Just found this: [476 million Twitter tweets](http://snap.stanford.edu/data/twitter7.html) (via [@yarapavan](https://twitter.com/#!/yarapavan)).
null
CC BY-SA 2.5
null
2010-12-04T11:27:56.983
2010-12-04T11:27:56.983
null
null
930
null
5135
1
5138
null
278
227335
The help pages in R assume I know what those numbers mean, but I don't. I'm trying to really intuitively understand every number here. I will just post the output and comment on what I found out. There might (will) be mistakes, as I'll just write what I assume. Mainly I'd like to know what the t-value in the coefficients means, and why they print the residual standard error.

```
Call:
lm(formula = iris$Sepal.Width ~ iris$Petal.Width)

Residuals:
     Min       1Q   Median       3Q      Max
-1.09907 -0.23626 -0.01064  0.23345  1.17532
```

This is a 5-point summary of the residuals (their mean is always 0, right?). The numbers can be used (I'm guessing here) to quickly see if there are any big outliers. Also you can already see it here if the residuals are far from normally distributed (they should be normally distributed).

```
Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
(Intercept)       3.30843    0.06210  53.278  < 2e-16 ***
iris$Petal.Width -0.20936    0.04374  -4.786 4.07e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

Estimates $\hat{\beta_i}$, computed by least squares regression. Also, the standard error is $\sigma_{\beta_i}$. I'd like to know how this is calculated. I have no idea where the t-value and the corresponding p-value come from. I know $\hat{\beta}$ should be normally distributed, but how is the t-value calculated?

```
Residual standard error: 0.407 on 148 degrees of freedom
```

$\sqrt{ \frac{1}{n-p} \epsilon^T\epsilon }$, I guess. But why do we calculate that, and what does it tell us?

```
Multiple R-squared: 0.134,  Adjusted R-squared: 0.1282
```

$ R^2 = \frac{s_\hat{y}^2}{s_y^2} $, which is $ \frac{\sum_{i=1}^n (\hat{y_i}-\bar{y})^2}{\sum_{i=1}^n (y_i-\bar{y})^2} $. The ratio is close to 1 if the points lie on a straight line, and 0 if they are random. What is the adjusted R-squared?

```
F-statistic: 22.91 on 1 and 148 DF,  p-value: 4.073e-06
```

F and p for the whole model, not only for single $\beta_i$s as previously. The F value is $ \frac{s^2_{\hat{y}}}{\sum\epsilon_i} $. The bigger it grows, the more unlikely it is that the $\beta$'s do not have any effect at all.
Interpretation of R's lm() output
CC BY-SA 3.0
null
2010-12-04T11:28:14.300
2021-09-16T13:03:26.477
2016-11-10T13:37:43.810
7290
2091
[ "r", "regression", "interpretation" ]
5136
1
5163
null
8
3102
The problem is to define when a variable might be considered a latent variable. I am interested in how to describe a latent variable, and what the properties of latent variables are. My twofold question is:

- When you try to explain what a latent variable is, what do you consider as the main differences between a manifest and a latent variable?
- When does factor analysis or item response modeling seem more appropriate?

Example. On the one hand, if you want to measure fish weight without any instrument, you can devise items to measure it. In this case, do you rely on a latent variable model? On the other hand, social level is sometimes measured directly through answers to a survey with a linear model (or other models) applied to items such as highest diploma, number of books at home, or number of electronic devices, but not by considering a latent variable. But why can't we use a latent variable model in this case? Thanks in advance.
When do you consider a variable is a latent variable?
CC BY-SA 2.5
null
2010-12-04T11:41:22.610
2010-12-05T12:21:37.620
2010-12-04T20:14:58.293
930
1154
[ "psychometrics", "latent-variable" ]
5137
2
null
5011
5
null
A quick google brings up this, which indicates that when working with circular data you'll need a different definition of 'bias' for a start:

> However, when using data on the circle, we cannot use distance in Euclidean space, so all differences $\theta - \theta_i$ should be replaced by considering the angle between two vectors: $d_i(\theta) = \| \theta - \theta_i \| = \min(|\theta-\theta_i|,\ 2\pi - |\theta-\theta_i|).$

-- Charles C Taylor. Automatic bandwidth selection for circular density estimation. Computational Statistics & Data Analysis, Volume 52, Issue 7, 15 March 2008, Pages 3493-3500. doi: [10.1016/j.csda.2007.11.003](http://dx.doi.org/10.1016/j.csda.2007.11.003)

He references these books:

- S. Rao Jammalamadaka and A. SenGupta, Topics in Circular Statistics, World Scientific, Singapore (2001).
- K.V. Mardia and P.E. Jupp, Directional Statistics, John Wiley, Chichester (1999).
null
CC BY-SA 2.5
null
2010-12-04T12:13:36.183
2010-12-04T12:13:36.183
2020-06-11T14:32:37.003
-1
449
null
5138
2
null
5135
240
null
## Five point summary

Yes, the idea is to give a quick summary of the distribution. It should be roughly symmetrical about the mean, the median should be close to 0, and the 1Q and 3Q values should ideally be roughly similar values.

## Coefficients and $\hat{\beta_i}s$

Each coefficient in the model is a Gaussian (Normal) random variable. The $\hat{\beta_i}$ is the estimate of the mean of the distribution of that random variable, and the standard error is the square root of the variance of that distribution. It is a measure of the uncertainty in the estimate of the $\hat{\beta_i}$.

You can look at how these are computed (well, the mathematical formulae used) on [Wikipedia](http://en.wikipedia.org/wiki/Ordinary_least_squares). Note that any self-respecting stats programme will not use the standard mathematical equations to compute the $\hat{\beta_i}$ because doing them on a computer can lead to a large loss of precision in the computations.

## $t$-statistics

The $t$ statistics are the estimates ($\hat{\beta_i}$) divided by their standard errors ($\hat{\sigma_i}$), e.g. $t_i = \frac{\hat{\beta_i}}{\hat{\sigma_i}}$. Assuming you have the same model in object `mod` as in your Q:

```
> mod <- lm(Sepal.Width ~ Petal.Width, data = iris)
```

then the $t$ values R reports are computed as:

```
> tstats <- coef(mod) / sqrt(diag(vcov(mod)))
(Intercept) Petal.Width
  53.277950   -4.786461
```

where `coef(mod)` are the $\hat{\beta_i}$, and `sqrt(diag(vcov(mod)))` gives the square roots of the diagonal elements of the covariance matrix of the model parameters, which are the standard errors of the parameters ($\hat{\sigma_i}$).

The p-value is the probability of achieving a $|t|$ as large as or larger than the observed absolute t value if the null hypothesis ($H_0$) were true, where $H_0$ is $\beta_i = 0$. They are computed as (using `tstats` from above):

```
> 2 * pt(abs(tstats), df = df.residual(mod), lower.tail = FALSE)
 (Intercept) Petal.Width
1.835999e-98 4.073229e-06
```

So we compute the upper tail probability of achieving the $t$ values we did from a $t$ distribution with degrees of freedom equal to the residual degrees of freedom of the model. This represents the probability of achieving a $t$ value greater than the absolute values of the observed $t$s. It is multiplied by 2, because of course $t$ can be large in the negative direction too.

## Residual standard error

The residual standard error is an estimate of the parameter $\sigma$. The assumption in ordinary least squares is that the residuals are individually described by a Gaussian (normal) distribution with mean 0 and standard deviation $\sigma$. The $\sigma$ relates to the constant variance assumption; each residual has the same variance and that variance is equal to $\sigma^2$.

## Adjusted $R^2$

Adjusted $R^2$ is computed as:

$$1 - (1 - R^2) \frac{n - 1}{n - p - 1}$$

The adjusted $R^2$ is the same thing as $R^2$, but adjusted for the complexity (i.e. the number of parameters) of the model. Given a model with a single parameter, with a certain $R^2$, if we add another parameter to this model, the $R^2$ of the new model has to increase, even if the added parameter has no statistical power. The adjusted $R^2$ accounts for this by including the number of parameters in the model.

## $F$-statistic

The $F$ is the ratio of two variances ($SSR/SSE$), the variance explained by the parameters in the model (sum of squares of regression, SSR) and the residual or unexplained variance (sum of squares of error, SSE). You can see this better if we get the ANOVA table for the model via `anova()`:

```
> anova(mod)
Analysis of Variance Table

Response: Sepal.Width
             Df  Sum Sq Mean Sq F value    Pr(>F)
Petal.Width   1  3.7945  3.7945   22.91 4.073e-06 ***
Residuals   148 24.5124  0.1656
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

The $F$s are the same in the ANOVA output and the `summary(mod)` output. The `Mean Sq` column contains the two variances and $3.7945 / 0.1656 = 22.91$. We can compute the probability of achieving an $F$ that large under the null hypothesis of no effect, from an $F$-distribution with 1 and 148 degrees of freedom. This is what is reported in the final column of the ANOVA table. In the simple case of a single, continuous predictor (as per your example), $F = t_{\mathrm{Petal.Width}}^2$, which is why the p-values are the same. This equivalence only holds in this simple case.
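A quick numerical check of the last two points, assuming the `mod` object fitted above is available:

```
r2 <- summary(mod)$r.squared
n <- nrow(iris); p <- 1
1 - (1 - r2) * (n - 1) / (n - p - 1)            # matches summary(mod)$adj.r.squared

coef(summary(mod))["Petal.Width", "t value"]^2  # equals the F statistic (22.91)
```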
null
CC BY-SA 4.0
null
2010-12-04T12:59:27.017
2019-04-03T15:11:20.927
2019-04-03T15:11:20.927
28740
1390
null
5139
2
null
5136
3
null
That is a modeling decision. One way to look at it can be illustrated by the following example. A couple of hundred electrodes are attached to the head to measure brain activity: electricity, blood flow, whatever -- and you get lots of signals. These measurements are the observables. They are mixed in a probably very non-linear way and are not useful on their own. Latent (also called hidden) variables model the individual sources that are responsible for generating them. They are supposed to be purer and more interpretable: how to extract the signal that causes the eye to blink, or the mouth to open, or emotions, and many more complicated signals. I hope this helps to understand the intuition.
null
CC BY-SA 2.5
null
2010-12-04T15:12:21.650
2010-12-04T15:12:21.650
null
null
2084
null