Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4272 | 1 | 4274 | null | 97 | 43469 | In what circumstances should one consider using regularization methods (ridge, lasso or least angle regression) instead of OLS?
In case this helps steer the discussion, my main interest is improving predictive accuracy.
| When to use regularization methods for regression? | CC BY-SA 2.5 | null | 2010-11-06T17:53:05.250 | 2018-01-20T17:03:16.403 | 2017-07-24T12:24:36.363 | 11887 | 439 | [
"regression",
"least-squares",
"lasso",
"ridge-regression",
"fused-lasso"
]
|
4273 | 2 | null | 4272 | 23 | null | A theoretical justification for the use of ridge regression is that its solution is the posterior mean given a normal prior on the coefficients. That is, if you care about squared error and you believe in a normal prior, the ridge estimates are optimal.
Similarly, the lasso estimate is the posterior mode under a double-exponential prior on your coefficients. This is optimal under a zero-one loss function.
In practice, these techniques typically improve predictive accuracy in situations where you have many correlated variables and not a lot of data. While the OLS estimator is best linear unbiased, it has high variance in these situations. If you look at the bias-variance trade off, prediction accuracy improves because the small increase in bias is more than offset by the large reduction in variance.
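As a small numerical check of this equivalence (my own illustration, not part of the original answer; the simulated data and the prior variance $\sigma^2/\lambda$ are assumptions), the ridge solution and the posterior mean under the conjugate normal prior coincide:
```
# Ridge estimate vs. posterior mean under beta ~ N(0, (sigma^2/lambda) I)
set.seed(42)
n <- 50; p <- 5; sigma2 <- 1; lambda <- 2
X <- matrix(rnorm(n * p), n, p)
beta <- rnorm(p)
y <- X %*% beta + rnorm(n, sd = sqrt(sigma2))

ridge <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
post_mean <- solve(t(X) %*% X / sigma2 + (lambda / sigma2) * diag(p),
                   t(X) %*% y / sigma2)
all.equal(c(ridge), c(post_mean))   # TRUE: the two coincide
```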
| null | CC BY-SA 2.5 | null | 2010-11-06T18:22:43.530 | 2010-11-06T18:22:43.530 | null | null | 1815 | null |
4274 | 2 | null | 4272 | 87 | null | Short answer: Whenever you are facing one of these situations:
- large number of variables or low ratio of no. observations to no. variables (including the $n\ll p$ case),
- high collinearity,
- seeking a sparse solution (i.e., embedding feature selection in the estimation of model parameters), or
- accounting for the grouping of variables in a high-dimensional data set.
Ridge regression generally yields better predictions than the OLS solution, through a better compromise between bias and variance. Its main drawback is that all predictors are kept in the model, so it is not very interesting if you seek a parsimonious model or want to apply some kind of feature selection.
To achieve sparsity, the lasso is more appropriate, but it will not necessarily yield good results in the presence of high collinearity (it has been observed that if predictors are highly correlated, the prediction performance of the lasso is dominated by ridge regression). The second problem with the L1 penalty is that the lasso solution is not uniquely determined when the number of variables is greater than the number of subjects (this is not the case for ridge regression). The last drawback of the lasso is that it tends to select only one variable among a group of predictors with high pairwise correlations. In this case, there are alternative solutions like the [group](http://www-stat.stanford.edu/~tibs/ftp/sparse-grlasso.pdf) lasso (i.e., shrinkage is applied to blocks of covariates, so that some blocks of regression coefficients are exactly zero) or the [fused](http://www.stanford.edu/group/SOL/papers/fused-lasso-JRSSB.pdf) lasso. The [Graphical Lasso](http://www-stat.stanford.edu/~tibs/glasso/index.html) also offers promising features for GGMs (see the R [glasso](http://cran.r-project.org/web/packages/glasso/index.html) package).
Finally, the elastic net criterion, which combines the L1 and L2 penalties, achieves both shrinkage and automatic variable selection, and it allows one to keep more variables than observations ($m>n$) in the case where $n\ll p$, whereas the lasso saturates at $n$ selected variables. Following Zou and Hastie (2005), it is defined as the argument that minimizes (over $\beta$)
$$
L(\lambda_1,\lambda_2,\mathbf{\beta}) = \|Y-X\beta\|^2 + \lambda_2\|\beta\|^2 + \lambda_1\|\beta\|_1
$$
where $\|\beta\|^2=\sum_{j=1}^p\beta_j^2$ and $\|\beta\|_1=\sum_{j=1}^p|\beta_j|$.
The lasso can be computed with an algorithm based on coordinate descent, as described in the recent paper by Friedman and colleagues, [Regularization Paths for Generalized Linear Models via Coordinate Descent](http://www.jstatsoft.org/v33/i01/paper) (JSS, 2010), or with the LARS algorithm. In R, the [penalized](http://cran.r-project.org/web/packages/penalized/index.html), [lars](http://cran.r-project.org/web/packages/lars/index.html) or [biglars](http://cran.r-project.org/web/packages/biglars/index.html), and [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) packages are useful; in Python, there's the [scikit.learn](http://scikit-learn.sourceforge.net/) toolkit, with extensive [documentation](http://scikit-learn.sourceforge.net/modules/glm.html#lars-algorithm) on the algorithms used to apply all three kinds of regularization scheme.
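As a minimal sketch (my own illustration, not part of the original answer) using the glmnet package mentioned above — note that glmnet parameterizes the elastic net with a mixing weight `alpha` (1 = lasso, 0 = ridge) and a single `lambda`, rather than the $(\lambda_1,\lambda_2)$ pair written above:
```
library(glmnet)
set.seed(1)
n <- 50; p <- 200                          # an n << p situation
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1:5] %*% rep(2, 5) + rnorm(n)     # only 5 truly active predictors
cv.ridge <- cv.glmnet(x, y, alpha = 0)     # ridge: keeps all predictors
cv.lasso <- cv.glmnet(x, y, alpha = 1)     # lasso: sparse solution
cv.enet  <- cv.glmnet(x, y, alpha = 0.5)   # elastic net: compromise
# number of non-zero coefficients retained by each method
sapply(list(ridge = cv.ridge, lasso = cv.lasso, enet = cv.enet),
       function(m) sum(coef(m, s = "lambda.min")[-1, ] != 0))
```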
As for general references, the [Lasso page](http://www-stat.stanford.edu/~tibs/lasso.html) contains most of what is needed to get started with lasso regression and technical details about L1-penalty, and this related question features essential references, [When should I use lasso vs ridge?](https://stats.stackexchange.com/questions/866/when-should-i-use-lasso-vs-ridge)
| null | CC BY-SA 3.0 | null | 2010-11-06T19:09:19.687 | 2018-01-20T17:03:16.403 | 2018-01-20T17:03:16.403 | 128677 | 930 | null |
4275 | 2 | null | 4267 | 4 | null | For NNMF, Lee and Seung describe an iterative algorithm which is very simple to implement. Actually they give two similar algorithms, one for minimizing Frobenius norm of residual, the other for minimizing Kullback-Leibler Divergence of the approximation and original matrix.
- Daniel Lee, H. Sebastian Seung, Algorithms for Non-negative Matrix Factorization, Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference. MIT Press. pp. 556–562.
| null | CC BY-SA 2.5 | null | 2010-11-06T19:47:38.620 | 2010-11-06T19:47:38.620 | null | null | 795 | null |
4276 | 1 | null | null | 9 | 2329 | I have a sensor which is capturing accelerometer data as a person walks. What I'm interested in extracting is each signal fragment when a step is taken. The Z-axis is what is used since only one axis is required to detect changes in steps. The image below illustrates a sample Z-axis gait signal (for 400 iterations).
The image below illustrates the first-half of the above signal (for 200 iterations). 
The subject is initially standing still and then begins walking at ~X=30. Notice how there is an apparent pattern as the user walks. What I'm interested in is using autocorrelation in Matlab to smooth the Z-axis signal (based on the image below). Unfortunately, I don't have a strong signal processing background, though I have a decent grasp of Matlab. How can I go about smoothing the gait signal so I can extract steps? The literature that I'm using suggests that steps may be extracted by looking at the peaks of the smoothed signal.

Other sources have suggested the use of Hidden Markov Models to extract each of the gait cycles, but I thought about a simpler signal processing approach before I consider using something advanced. However, what would be the best strategy if I wanted to pursue this course of action?
| Using autocorrelation to find commonly occurring signal fragments | CC BY-SA 2.5 | null | 2010-11-06T21:12:01.923 | 2011-03-26T00:45:12.177 | 2011-03-26T00:45:12.177 | null | 1224 | [
"matlab",
"autocorrelation",
"signal-processing",
"markov-process"
]
|
4277 | 2 | null | 4252 | 5 | null | In addition to the above answer, if there are many entries (say n), then first sorting them takes time O(n log n). However, there is a linear-time solution.
- Compute the P-quantile L and (1-P)-quantile U. There is a simple (quicksort-like) algorithm for this that runs in expected linear time. There is also a more complicated algorithm that runs in worst case linear time. Both can be found, for example, in:
Cormen, Leiserson, Rivest, Stein: Introduction to Algorithms.
- Scan through all values and add those between L and U. This obviously takes linear time.
- If there are ties and the computed quantiles exist several times among the values, we might have added too many or too few values and may need to correct for this appropriately. Since we know how many numbers we added in step 2, and also how many times we have seen L and U, this can be done in constant time.
- Divide the total sum by the number of summands.
Note that the above recipe is only worthwhile if n is really large and sorting all of them would be a performance hit, perhaps a few millions.
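Below is a rough R sketch of this recipe (my own illustration; the exact rank convention used for the two quantiles, and the use of `sort(..., partial = )` as the expected-linear-time selection step, are my choices):
```
interquantile_mean <- function(x, p) {
  n  <- length(x)
  kL <- floor(p * n) + 1            # rank of the lower cut point L
  kU <- ceiling((1 - p) * n)        # rank of the upper cut point U
  L  <- sort(x, partial = kL)[kL]   # selection step (expected ~linear time)
  U  <- sort(x, partial = kU)[kU]
  if (L == U) return(L)             # all retained values are identical
  mid <- x[x > L & x < U]           # step 2: values strictly inside (L, U)
  # step 3: add back exactly the copies of L and U whose ranks fall in [kL, kU]
  nL <- min(sum(x <= L), kU) - kL + 1
  nU <- kU - max(sum(x < U) + 1, kL) + 1
  (sum(mid) + nL * L + nU * U) / (length(mid) + nL + nU)   # step 4
}
# e.g. interquantile_mean(rnorm(1e6), 0.25) averages roughly the middle half
```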
| null | CC BY-SA 2.5 | null | 2010-11-06T23:16:32.310 | 2010-11-06T23:16:32.310 | null | null | null | null |
4279 | 1 | null | null | 5 | 21992 | I would like to build a web application using R. I am using Windows Vista and have an Apache server. I have tried [Rpad](http://rpad.googlecode.com/svn-history/r76/Rpad_homepage/index.html), but I was not able to configure it correctly. How do I set up Rpad, given that I am not very familiar with PHP and the Apache server? Or are there other ways to use R on an Apache server?
| How can I integrate R with PHP? | CC BY-SA 2.5 | null | 2010-11-07T05:23:12.080 | 2014-03-23T18:42:35.027 | 2011-02-03T21:30:15.487 | 509 | 1886 | [
"r"
]
|
4280 | 2 | null | 4279 | 11 | null | Here is the easiest way to do it that I found:
This implementation of PHP and R consists of only two files: one written in PHP, and the other an R script. The PHP returns a form which uses the GET method to send a variable N to the server. When the form is submitted, the PHP will execute an R script from the shell using a combination of the PHP command exec() and the Rscript shell command. This command will pass the variable N to the R script. The R script will then run and save a histogram plot of N normally distributed values to the filesystem. Finally, when the R script is complete, the PHP will return the HTML tag containing the saved image's path. First, the PHP file:
```
<?php
// poorman.php
echo "<form action='poorman.php' method='get'>";
echo "Number of values to generate: <input type='text' name='N' />";
echo "<input type='submit' />";
echo "</form>";
if (isset($_GET['N']))
{
    $N = $_GET['N'];
    // execute R script from shell
    // this will save a plot at temp.png to the filesystem
    exec("Rscript my_rscript.R $N");
    // return image tag
    $nocache = rand();
    echo("<img src='temp.png?$nocache' />");
}
?>
```
and the R script
```
# my_rscript.R
args <- commandArgs(TRUE)
N <- as.numeric(args[1])   # command-line arguments arrive as character strings
x <- rnorm(N, 0, 1)
png(filename = "temp.png", width = 500, height = 500)
hist(x, col = "lightblue")
dev.off()
```
Here are some more you are welcome to try:
- http://danpolant.com/r-integration-with-php/
- http://steve-chen.net/document/r/r_php
| null | CC BY-SA 2.5 | null | 2010-11-07T05:42:05.297 | 2010-11-07T08:50:19.673 | 2010-11-07T08:50:19.673 | 930 | 1808 | null |
4281 | 2 | null | 3713 | 12 | null | You can't know in advance which clustering algorithm will be better, but there are some clues. For example, if you want to cluster images there are certain algorithms you should try first, like Fuzzy ART; or if you want to group faces, you should start with global geometric clustering for images (GGCI).
Anyway, this does not guarantee the best result, so what I would do is use a program that allows you to methodically run different clustering algorithms, such as Weka, RapidMiner or even R (which is non-visual). There I would set the program to launch all the different clustering algorithms I can, with all the possible different distances, and, if they need parameters, experiment with a variety of different parameter values for each one (and if I do not know the number of clusters, run each one with a variety of values for it). Once you have set up the experiment, leave it running, but remember to store the results of each clustering run somewhere.
Then compare the results in order to obtain the best resulting clustering. This is tricky because there are several metrics you can compare and not all of them are provided by every algorithm. For example, fuzzy clustering algorithms have different metrics than non-fuzzy ones, but they can still be compared by treating the resulting fuzzy groups as non-fuzzy.
For the comparison I would stick to the classic metrics such as the following (a small R sketch computing some of them is given after the list):
• SSE: sum of the square error from the items of each cluster.
• Inter cluster distance: sum of the square distance between each cluster centroid.
• Intra cluster distance for each cluster: sum of the square distance from the items of each cluster to its centroid.
• Maximum Radius: largest distance from an instance to its cluster centroid.
• Average Radius: sum of the largest distance from an instance to its cluster centroid divided by the number of clusters.
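As a small R sketch (my own illustration; the data set and settings are arbitrary), a few of these metrics can be computed for a k-means solution like so:
```
set.seed(1)
X  <- scale(iris[, 1:4])
km <- kmeans(X, centers = 3, nstart = 25)
# SSE: sum of squared errors of the items around their cluster centroids
sse <- km$tot.withinss
# intra-cluster distance for each cluster
intra <- km$withinss
# inter-cluster distance: sum of squared distances between centroids
inter <- sum(dist(km$centers)^2)
# maximum radius: largest distance from an instance to its cluster centroid
radii <- sqrt(rowSums((X - km$centers[km$cluster, ])^2))
c(SSE = sse, inter = inter, max.radius = max(radii))
```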
| null | CC BY-SA 2.5 | null | 2010-11-07T07:12:45.540 | 2010-11-07T07:12:45.540 | null | null | 1808 | null |
4282 | 2 | null | 2717 | 0 | null | Before you try running the clustering on the matrix you can try doing one of the factor analysis techniques, and keep just the most important variables to compute the distance matrix.
Another thing you can do is to try fuzzy methods, which tend to work better (at least in my experience) in this kind of case; try first C-means, fuzzy k-medoids, and especially GKCmeans.
| null | CC BY-SA 2.5 | null | 2010-11-07T07:18:14.823 | 2010-11-07T07:18:14.823 | null | null | 1808 | null |
4283 | 2 | null | 4279 | 8 | null | If you ever think to switch to Linux, the best way would be to use [RApache](http://rapache.net/), which is an Apache module that embeds an R interpreter (`mod_R`) in the webserver
| null | CC BY-SA 2.5 | null | 2010-11-07T08:21:28.343 | 2010-11-07T08:21:28.343 | null | null | 582 | null |
4284 | 1 | 4287 | null | 61 | 26565 | I am looking for an intuitive explanation of the bias-variance tradeoff, both in general and specifically in the context of linear regression.
| Intuitive explanation of the bias-variance tradeoff? | CC BY-SA 3.0 | null | 2010-11-07T10:57:29.053 | 2021-06-23T18:55:38.060 | 2021-05-31T01:29:56.287 | 11887 | 439 | [
"regression",
"variance",
"bias",
"intuition",
"bias-variance-tradeoff"
]
|
4285 | 2 | null | 4261 | 2 | null | David, before discussing the implementation, I'd discuss question #1. I think the approach "makes sense", if you mean by this that the approach is intuitive. However, from the perspective of bootstrapping, the approach probably won't work. One well-known failure of bootstrap is in fact in the case of maxima and minima. The intuition behind this is that the estimators for these statistics have very large variance: a nonparametric approach based on the empirical distribution doesn't help estimate these statistics. Most approaches are parametric. If interested I can suggest references on the latter.
A reference on the failure of the bootstrap for extremal statistics is Efron and Tibshirani's very own "An Introduction to the Bootstrap". Check page 81.
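A quick simulation (my own illustration) shows one symptom of the problem: the bootstrap distribution of the sample maximum piles up on the observed maximum itself.
```
set.seed(3)
x <- runif(100)
boot_max <- replicate(2000, max(sample(x, replace = TRUE)))
# the resampled maximum equals the observed maximum about 1 - (1 - 1/n)^n ~ 63%
# of the time, so the bootstrap distribution is badly degenerate here
mean(boot_max == max(x))
```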
| null | CC BY-SA 2.5 | null | 2010-11-07T12:25:30.237 | 2010-11-07T12:25:30.237 | null | null | 30 | null |
4286 | 1 | 4289 | null | 4 | 428 | I have the following setting. I have n Hermitian Positive Semidefinite (HPSD) matrices, and a metric induced by a matrix norm. I am primarily interested in the Frobenius norm and the operator norm. I want to extract the principal "principal component" for this set of observations, i.e., a 1-dimensional subspace in HPSD such that that the sum of squares of the minimum distances between the matrices with this subspace is minimized. I am not interested in second, third etc components, since they are not even well defined in this space, since I have not defined a scalar product for simplicity.
In the case of the Frobenius norm, the problem can be reduced to traditional PCA, by using as input vectors the stacked versions of the input matrices. But in the case of the operator norms, I can't find a strategy to attack the problem.
Questions:
- Has anyone seen this specific problem before? Recommendations and references are highly appreciated.
- Has anyone dealt with computation of PCA in the case of non-euclidean distances?
| Principal Component Analysis among matrices | CC BY-SA 2.5 | null | 2010-11-07T12:54:08.700 | 2010-11-07T14:19:31.577 | null | null | 30 | [
"pca",
"dimensionality-reduction"
]
|
4287 | 2 | null | 4284 | 27 | null | Imagine some 2D data--let's say height versus weight for students at a high school--plotted on a pair of axes.
Now suppose you fit a straight line through it. This line, which of course represents a set of predicted values, has zero statistical variance. But the bias is (probably) high--i.e., it doesn't fit the data very well.
Next, suppose you model the data with a high-degree polynomial spline. You're not satisfied with the fit, so you increase the polynomial degree until the fit improves (and it will, to arbitrary precision, in fact). Now you have a situation with bias that tends to zero, but the variance is very high.
Note that the bias-variance trade-off doesn't describe a proportional relationship--i.e., if you plot bias versus variance you won't necessarily see a straight line through the origin with slope -1. In the polynomial spline example above, reducing the degree almost certainly increases the variance much less than it decreases the bias.
The bias-variance tradeoff is also embedded in the sum-of-squares error function. Below, I have rewritten (but not altered) the usual form of this equation to emphasize this:
$$
E\left(\left(y - \hat{f}(x)\right)^2\right) = \sigma^2 + \left[f(x) - \frac{1}{\kappa}\sum_{i=1}^{\kappa}f(x_i)\right]^2+\frac{\sigma^2}{\kappa}
$$
On the right-hand side, there are three terms: the first of these is just the irreducible error (the variance in the data itself); this is beyond our control so ignore it. The second term is the square of the bias; and the third is the variance. It's easy to see that as one goes up the other goes down--they can't both vary together in the same direction. Put another way, you can think of least-squares regression as (implicitly) finding the optimal combination of bias and variance from among candidate models.
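A small simulation (my own illustration, not part of the original answer; the "true" curve and settings are arbitrary) makes the trade-off concrete: at a given point, a straight line has large bias and small variance, while a high-degree polynomial has small bias and large variance.
```
set.seed(1)
f <- function(x) sin(2 * pi * x)       # assumed "true" relationship
x0 <- 0.25                             # point at which predictions are tracked
pred <- replicate(500, {
  x <- runif(50); y <- f(x) + rnorm(50, sd = 0.3)
  c(line = unname(predict(lm(y ~ x), data.frame(x = x0))),
    poly = unname(predict(lm(y ~ poly(x, 15)), data.frame(x = x0))))
})
rowMeans(pred) - f(x0)   # bias: substantial for the line, near zero for the polynomial
apply(pred, 1, var)      # variance: small for the line, large for the polynomial
```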
| null | CC BY-SA 4.0 | null | 2010-11-07T13:21:12.523 | 2020-11-12T05:52:30.277 | 2020-11-12T05:52:30.277 | 29617 | 438 | null |
4288 | 1 | 4546 | null | 3 | 207 | When trying to estimate the number of sampling units with an attribute, is there a good algebraic way to aggregate over propensity scores for that attribute which each have their own error? For example, when the propensity scores may be calculated with varying amounts of information from each sampling unit and each has its own standard error on that propensity.
Or they could be expected values of beta distributions.
In the latter case I know I can simulate beta-bernoulli outcomes from each sampling unit and add up the results many times; but is there a consistent estimator of the result of this difficult to scale process?
In short, how do people aggregate propensity scores of varying reliabilities?
Edit:
I suppose I worded it poorly; the data I have are all either binary or categorical, and each observation is accompanied by the probability that it was observed correctly. So suppose I have 5 persons, 3 of whom had a value of 1 for an attribute and 2 of whom had a value of 0 for that attribute, with probabilities .8, .81, .82, .83, .84 respectively of being observed correctly. What is the expected value of p(having that attribute)?
| Aggregation of propensity scores with varying reliability | CC BY-SA 2.5 | null | 2010-11-07T13:51:20.053 | 2011-02-15T06:22:56.137 | 2010-11-16T17:26:48.193 | 1893 | 1893 | [
"distributions",
"probability",
"beta-binomial-distribution",
"propensity-scores"
]
|
4289 | 2 | null | 4286 | 6 | null | I don't know if this is exactly what you are looking for (esp. I don't know how large is $n$ and what you intend to do with these results), however I have successfully used [coinertia analysis](http://pbil.univ-lyon1.fr/R/articles/arti113.pdf) when I was working with two data sets (same observations in rows), and for more than two data sets there are K-table methods, as implemented in the [ade4](http://cran.r-project.org/web/packages/ade4/index.html) R package. [An introduction to K-table analyses](http://pbil.univ-lyon1.fr/R/pdf/course7.pdf) outlines the main principles. When the objective is to link two or more Tables, [Generalized Canonical Correlation Analysis](http://en.wikipedia.org/wiki/Generalized_canonical_correlation) is also an option.
It seems to me that you can choose non-euclidean metric, provided it has some meaning for the data at hand and the interpretation of the factorial space. You can see an example with the use of `kdist()` in [ade4](http://cran.r-project.org/web/packages/ade4/index.html) for applying an PCA on different distance matrices. Jollife's book on Principal component analysis should provide additional hints about this (but I didn't check). There's also all the work made in the spirit of Gifi on non-linear methods (in R, a lot of packages have been developed by Jan de Leeuw, see the [PsychoR](https://r-forge.r-project.org/projects/psychor/) project).
| null | CC BY-SA 2.5 | null | 2010-11-07T14:19:31.577 | 2010-11-07T14:19:31.577 | null | null | 930 | null |
4290 | 2 | null | 2419 | 7 | null | You can use R's decision tree libraries from Python via Rpy (http://rpy.sourceforge.net/). Also check the article "Building decision trees using Python" (http://onlamp.com/pub/a/python/2...).
There is also:
[http://opencv.willowgarage.com/documentation/index.html](http://opencv.willowgarage.com/documentation/index.html)
[http://research.engineering.wustl.edu/~amohan/](http://research.engineering.wustl.edu/~amohan/)
| null | CC BY-SA 2.5 | null | 2010-11-07T14:57:07.923 | 2010-11-07T14:57:07.923 | null | null | 1808 | null |
4291 | 2 | null | 2419 | 1 | null | [JBoost](http://jboost.sourceforge.net/) is an awesome library. It is definitely not written in Python; however, it is somewhat language-agnostic, because it can be executed from the command line, so it can be "driven" from Python. I've used it in the past and liked it a lot, particularly the visualization stuff.
| null | CC BY-SA 2.5 | null | 2010-11-07T15:04:46.763 | 2010-11-07T15:04:46.763 | null | null | 1540 | null |
4292 | 1 | 4293 | null | 5 | 2841 | Here is a look at my data. We asked the same respondents (n=~400) to provide us with their current and future consumption as a proportion of total expenditure. Plotted here are the mean proportions of total expenditure for each category for "Now" and "Later", respectively current and future.
What I'm looking for is a statistical method that I can use to test if the other categories are increasing while the largest category is decreasing. The graph shows this, to some degree, but I would like to make sure that it is statistically valid.

| How to test if change is significant across multiple categories? | CC BY-SA 2.5 | null | 2010-11-07T17:28:27.877 | 2010-11-07T18:44:42.407 | 2010-11-07T18:05:08.847 | 776 | 776 | [
"hypothesis-testing",
"statistical-significance"
]
|
4293 | 2 | null | 4292 | 6 | null | There are subtle issues involving the difference between designed comparisons and post-hoc comparisons, of which this likely is an example.
If, before collecting the data, you anticipated this kind of pattern, you could employ a simple nonparametric test. The null hypothesis would be that all changes are due to chance with the alternative being that a specified category was increasing and the other eight categories were decreasing. Under the null, positive changes have a 50% chance of occurring, implying the chance of the alternative is $(0.50)^8(1 - 0.50)^1$ = $0.002$: highly significant evidence for the alternative.
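A quick check of that arithmetic in R:
```
# nine categories, each moving in the 'predicted' direction with probability 1/2
(0.50)^8 * (1 - 0.50)^1   # = 0.5^9, about 0.002
```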
The analysis for a post-hoc observation is difficult because we can't even get started with describing the situation. Exactly what kind of pattern would you happen to notice and considered worthy of testing? So many are possible, with no accurate description available, that all we can say (from experience) is that (a) it is highly likely that any interested investigator would notice some pattern in the data and (b) a post-hoc hypothesis test could be constructed to "demonstrate" the "high significance" of that pattern, exactly as I did above. For these reasons, applying hypothesis tests after the fact to support claims of "statistical validity" for exploratory results is frowned upon. (Among statisticians, who should know better, it is called "[data snooping](http://en.wikipedia.org/wiki/Data-snooping_bias)" or worse.)
One way out is to conduct your analysis with c. half the data, randomly selected. Look for any patterns you like. Construct an appropriate suite of hypothesis tests for those patterns and then apply them to the held-out data only. This is in the spirit of the scientific requirement for [replication](http://en.wikipedia.org/wiki/Replication_%28scientific_method%29). If you don't do this, then you would be obliged to repeat your experiment to confirm whatever you're seeing in the data you currently have.
| null | CC BY-SA 2.5 | null | 2010-11-07T17:50:01.673 | 2010-11-07T17:50:01.673 | null | null | 919 | null |
4294 | 1 | 4299 | null | 4 | 3083 | I'm taking a graduate course in regression analysis and I'm stuck on a particular homework question that should be very simple for me!
I have the following model:
```
E(y) = B0 + B1x1 + B2x2 + B3x3 + B4x1x3 + B5x2x3
```
`x3` is coded as `1` if "smoker" and `0` if "non-smoker".
Therefore the regression equations are:
```
x3 = 1: E(y) = (B3 + B0) + (B1 + B4)x1 + (B2 + B5)x2
x3 = 0: E(y*) = B0 + B1x1 + B2x2
```
Now I know how to test for parallelism if `x2` is absent in the models:
```
H0: B4 = 0
H1: B4 != 0
```
But I'm lost as to what to do with the inclusion of the `x2` variable. Parallelism is obviously testing for slope, but I'm not sure where to find the "slope" coefficient.
I was thinking about using an `F-Test` but then I realized I don't actually want to test the whole model, just the parallelism.
Could someone please point me into the right direction? Even hints would be sufficient.
| How to test for parallelism for two linear models? | CC BY-SA 2.5 | null | 2010-11-07T18:32:20.763 | 2010-11-07T21:35:04.993 | 2010-11-07T20:21:28.377 | 1894 | 1894 | [
"hypothesis-testing",
"regression",
"self-study"
]
|
4295 | 2 | null | 4292 | 1 | null | Given the additional information you've subsequently posted I'm not sure any statistical test is going to be that informative. If you had a strong prediction of a pattern such as this or similar, this is such a low probability event that you're pretty much set just getting these data. With an N of 400 almost any tests will most definitely be significant. Some good descriptive stats like confidence intervals would be very useful.
I would suggest caution in describing the downward trend as remotely meaningful. It's such a tiny amount that, yes, if your N is big enough it will be significant. But is that tiny drop in percentage meaningful? I think the more meaningful statement is that it's not an increase like the others and that it is staying roughly flat. Don't try to change the story of really small effects with statistical tests.
| null | CC BY-SA 2.5 | null | 2010-11-07T18:44:42.407 | 2010-11-07T18:44:42.407 | null | null | 601 | null |
4296 | 1 | null | null | 27 | 37169 | Does anybody have a good example for Time Series Forecasting/smoothing using Kalman Filter in R?
| R code for time series forecasting using Kalman filter | CC BY-SA 3.0 | null | 2010-11-07T20:08:22.533 | 2019-11-02T15:03:53.750 | 2011-09-27T08:34:49.010 | 2116 | 1896 | [
"r",
"time-series",
"kalman-filter"
]
|
4297 | 2 | null | 4296 | 29 | null | Have you looked at [Time Series](http://cran.r-project.org/web/views/TimeSeries.html) Task View at CRAN?
It lists several entries for packages covering Kalman filtering:
- dlm
- FKF
- KFAS
and more, as this is a pretty common technique for time series estimation.
| null | CC BY-SA 4.0 | null | 2010-11-07T20:22:24.477 | 2019-11-02T15:03:53.750 | 2019-11-02T15:03:53.750 | 1381 | 334 | null |
4298 | 1 | 4314 | null | 10 | 8382 | This question is a follow up to my earlier question [here](https://stats.stackexchange.com/questions/4220/probability-distribution-over-1-is-ok)
and is also related, in intent, to [this question](https://stats.stackexchange.com/questions/3316/statistical-similarity-of-time-series).
On [this wiki page](http://en.wikipedia.org/wiki/Naive_Bayes_classifier) probability density values from an assumed normal distribution for the training set are used to calculate a Bayesian posterior rather than actual probability values. However, if a training set is not normally distributed would it be equally as valid to use a density value taken from the kernel density estimate of the training set to calculate a Bayesian posterior?
In its intended application this kernel density estimate would be taken from a theoretically ideal empirical data set generated by MC techniques.
| Use of kernel density estimate in Naive Bayes Classifier? | CC BY-SA 2.5 | null | 2010-11-07T21:23:52.657 | 2010-11-08T14:55:21.340 | 2017-04-13T12:44:41.493 | -1 | 226 | [
"bayesian",
"kde"
]
|
4299 | 2 | null | 4294 | 4 | null | I am a bit unsure what exactly you mean by 'parallelism', but perhaps you mean that you want to test whether the interaction terms are significant or not, in which case you would do a joint test that B4 = 0 and B5 = 0.
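As an illustrative sketch in R (the simulated data and variable names are my own, not from the question), such a joint test is a partial F-test comparing the model with and without the two interaction terms:
```
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rbinom(100, 1, 0.5))
dat$y <- 1 + dat$x1 + 2 * dat$x2 + dat$x3 +
         0.5 * dat$x1 * dat$x3 + rnorm(100)
fit_full    <- lm(y ~ x1 + x2 + x3 + x1:x3 + x2:x3, data = dat)
fit_reduced <- lm(y ~ x1 + x2 + x3, data = dat)
anova(fit_reduced, fit_full)   # F-test of B4 = B5 = 0 jointly
```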
| null | CC BY-SA 2.5 | null | 2010-11-07T21:35:04.993 | 2010-11-07T21:35:04.993 | null | null | null | null |
4300 | 2 | null | 4296 | 10 | null | For good examples look at the [dlm vignette](http://cran.r-project.org/web/packages/dlm/vignettes/dlm.pdf) I would avoid all the other packages if you don't have a clear idea of what you want to do and how.
| null | CC BY-SA 2.5 | null | 2010-11-07T22:25:28.340 | 2010-11-07T22:25:28.340 | null | null | 300 | null |
4301 | 2 | null | 726 | 58 | null | >
Tout le monde y croit cependant, me disait un jour M. Lippmann, car les expérimentateurs s'imaginent que c'est un théorème de mathématiques, et les mathématiciens que c'est un fait expérimental.
Henri Poincaré, Calcul des probabilités (2nd ed., 1912), p. 171.
In English:
> Everybody believes in the exponential law of errors [i.e., the Normal distribution]: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation.
Whittaker, E. T. and Robinson, G. "Normal Frequency Distribution." Ch. 8 in The Calculus of Observations: [A Treatise on Numerical Mathematics](http://books.google.com/books?id=PlEjcAAACAAJ&dq=The+Calculus+of+Observations:+An+Introduction+to+Numerical+Analysis&hl=en&ei=5CrXTLPsMoGBlAfR2fH8CA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CDEQ6AEwAA), 4th ed. New York: Dover, pp. 164-208, 1967. p. 179.
Quoted at [Mathworld.com](http://mathworld.wolfram.com/NormalDistribution.html).
| null | CC BY-SA 3.0 | null | 2010-11-07T22:41:34.370 | 2017-06-04T19:01:29.570 | 2017-06-04T19:01:29.570 | 14076 | 919 | null |
4302 | 2 | null | 726 | 86 | null | >
Statistics - A subject which most statisticians find difficult but which many physicians are experts on. "Stephen S. Senn"
| null | CC BY-SA 3.0 | null | 2010-11-07T23:16:13.807 | 2017-07-27T07:26:24.357 | 2017-07-27T07:26:24.357 | 28740 | null | null |
4303 | 1 | null | null | 6 | 20343 | I wish to analyze the following:
Predictor Variable (IV): Satisfaction of sexual needs as important (a 4-item scale, each item answered on a 4-point Likert scale; item scores are summed to get the total score).
Response Variable (DV): Condom usage (2 options: never or sometime).
Questions:
- Should I use binary logistic or multinomial logistic? (some people tell me to use multinomial logistic but a book said to only use it when the DV has more than two levels, and my DV only has 2 levels - never or sometime).
- How can I use SPSS to analyse this? I need step by step help.
The SPSS dialog box for logistic regression has three boxes:
- Dependent : I put in Condom Usage
- Factor(s): (I am not sure should whether I put Satisfaction of sexual needs as important here?)
- Covariate : (I am not sure whether I should put Satisfaction of sexual needs as important here?)
I am very sorry. Maybe my question sounds silly, but I really need help, as I am a beginner. I couldn't find any tutor in my town.
| Binary or Multinomial Logistic Regression? | CC BY-SA 2.5 | null | 2010-11-08T09:03:46.017 | 2012-09-01T13:12:07.450 | 2010-11-08T12:14:02.673 | 183 | null | [
"logistic",
"spss"
]
|
4304 | 2 | null | 4296 | 16 | null | In addition to the packages mentioned in other answers, you may want to look at
package [forecast](http://cran.r-project.org/package=forecast), which deals with a particular class of models cast in state-space form, and package [MARSS](http://cran.r-project.org/package=MARSS), with examples and applications in biology (see in particular the well-written manual, Chap. 5).
For general applications I agree, though, with the previous answers, with
[dlm](http://cran.r-project.org/package=dlm) being in my view a versatile and powerful package (well described in the book Dynamic Linear Models in R, by Petris et al.), [KFAS](http://cran.r-project.org/package=KFAS) offering routines which implement most of the algorithms described in the excellent [Time Series Analysis by State Space Methods](http://books.google.com/books?id=XRCu5iSz_HwC&dq=Time+Series+Analysis+by+State+Space+Methods+Durbin&source=bl&ots=hPqyNTb2x1&sig=VM5XYGxIp7xxYvsLiAb_reJLtjM&hl=es&ei=lL3XTLvTMc6D5Abm1ejyBw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBwQ6AEwAA) and [FKF](http://cran.r-project.org/package=FKF) with limited facilities and no examples, but being the fastest.
| null | CC BY-SA 2.5 | null | 2010-11-08T09:13:11.370 | 2010-11-08T09:23:42.357 | 2010-11-08T09:23:42.357 | 159 | 892 | null |
4305 | 1 | 4309 | null | 9 | 21603 | Is there a recommendation on the number of times that an experiment should be replicated? As many of you know, it is not always possible to make many replicates. What would be the recommended minimum? Are there some references to support it?
In my particular case (animal reproduction), for reasons of seasonality, I can only replicate experiments 3 times and I have sometimes been criticized for the low number of replicates performed. Could it be considered appropriate to assess the effect that a parameter measured 3 times in the same individuals has on the performance of these individuals?
| At least, how many times an experiment should be replicated? | CC BY-SA 2.5 | null | 2010-11-08T09:40:03.757 | 2017-03-15T08:13:43.787 | 2010-11-08T12:56:21.207 | 449 | 221 | [
"estimation",
"sample-size",
"experiment-design"
]
|
4306 | 2 | null | 2419 | 0 | null | I have experienced a similar situation to yours: I find Orange hard to tune (though maybe that is my problem). In the end, I used Peter Norvig's code for his famous book, in which he provides a well-written code framework for trees; all you need to do is add boosting to it. This way, you can code anything you like.
| null | CC BY-SA 2.5 | null | 2010-11-08T10:30:23.147 | 2010-11-08T10:30:23.147 | null | null | 806 | null |
4308 | 1 | 4311 | null | 6 | 350 | I was just wondering-- is it possible/practical to apply bayes' theorem without an analytical expression for the prior, only samples?
For example, say you have sufficient draws from a posterior distribution from a previous experiment via MCMC methods, and you'd like to use that posterior as the prior for a new one. You have an analytical expression for the likelihood as before, but now only samples from the (new) prior. How would you proceed?
| Is it possible to apply Bayes Theorem with only samples from the prior? | CC BY-SA 2.5 | null | 2010-11-08T12:01:22.877 | 2010-11-09T09:34:18.210 | null | null | 1795 | [
"bayesian",
"markov-chain-montecarlo"
]
|
4309 | 2 | null | 4305 | 7 | null | There is no such thing as a minimum (or maximum) sample size rule. It depends on the size of the effect you are trying to measure. Your description of the experiment is slightly unclear, but consider this example, if you measured blood pressure in three different people, what could you conclude about blood pressure in the population?
Likewise, if you are conducting a clinical trial and it's clear (using statistical arguments) that one of the treatments is harmful, should you continue?
Another comment. In experiments concerning animals/people I would consider it unethical to conduct an experiment that has no chance of success due to low sample sizes. If in doubt, find a local friendly statistician. Most institutions have them somewhere.
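To make the dependence on effect size concrete, a quick R illustration (the numbers are arbitrary):
```
# required sample size per group for 80% power at alpha = 0.05
power.t.test(delta = 1.0, sd = 1, power = 0.8)  # ~17 per group for a large effect
power.t.test(delta = 0.2, sd = 1, power = 0.8)  # ~394 per group for a small effect
```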
| null | CC BY-SA 2.5 | null | 2010-11-08T12:07:50.973 | 2010-11-08T12:07:50.973 | null | null | 8 | null |
4310 | 2 | null | 4303 | 11 | null | Binary or Multinomial:
Perhaps the following rules will simplify the choice:
- If you have only two levels to your dependent variable then you use binary logistic regression.
- If you have three or more unordered levels to your dependent variable, then you'd look at multinomial logistic regression.
A few points:
- Satisfaction with sexual needs ranges from 4 to 16 (i.e., 13 distinct values). Such a variable is typically treated as a metric predictor (i.e., in the covariate box in SPSS).
- Possibly your dependent variable is causing some confusion because as you phrase it, it is not a standard dichotomy. It sounds like a frequency item that could range from never, to occasionally, to sometimes, to often, to always, etc. However, I'm guessing that either you have explicitly collapsed categories or you have required the respondent to implicitly collapse the categories down to a binary choice. As a side note, if you did have an ordered set of frequency categories, then you might want to use a model that incorporated that order.
SPSS:
I posted some [links to tutorials in SPSS and R](http://jeromyanglim.blogspot.com/2009/09/logistic-regression-resources-in-spss.html) for conducting binary logistic regression.
| null | CC BY-SA 2.5 | null | 2010-11-08T12:09:59.670 | 2010-11-08T12:09:59.670 | null | null | 183 | null |
4311 | 2 | null | 4308 | 8 | null | The short answer is yes. Have a look at [sequential MCMC/ particle filters](http://en.wikipedia.org/wiki/Particle_filter).
Essentially, your prior consists of a bunch of particles ($M$). So to sample from your prior, just select a particle with probability $1/M$. Since each particle has equal probability of being chosen, this term disappears in the M-H ratio.
A big problem with particle filters is particle degeneracy. This happens because you are trying to represent a continuous distribution with discrete particles - there's no such thing as a free lunch!
Clarification for Srikant Vadali
The question as I read it is: I have output, i.e. a posterior from an MCMC scheme. I want to use this posterior as a prior for a new data set.
This (probably) means that you have a discrete representation of a continuous distribution, i.e. a particle representation. So rather than doing a random walk on a continuous distribution (say), you need to pick values from your prior, i.e. you pick a particle.
[Toni et al.](http://rsif.royalsocietypublishing.org/content/6/31/187.abstract), use this idea with ABC.
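A minimal sketch of the idea (my own illustration, not the full SMC machinery; the normal likelihood and all numbers are assumptions): reweight the stored particles by the likelihood of the new data and resample.
```
set.seed(1)
prior_particles <- rnorm(5000, mean = 2, sd = 1)   # stand-in for old MCMC draws
new_data <- rnorm(20, mean = 2.5, sd = 1)          # the new experiment

# log-likelihood of the new data at each particle (normal model assumed)
logw <- sapply(prior_particles,
               function(theta) sum(dnorm(new_data, theta, 1, log = TRUE)))
w <- exp(logw - max(logw)); w <- w / sum(w)        # normalized importance weights

# resample particles in proportion to their weights -> approximate new posterior
posterior_particles <- sample(prior_particles, 5000, replace = TRUE, prob = w)
mean(posterior_particles)
# note: few distinct particles may dominate -- the degeneracy issue mentioned above
```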
| null | CC BY-SA 2.5 | null | 2010-11-08T12:14:57.690 | 2010-11-09T09:34:18.210 | 2010-11-09T09:34:18.210 | 8 | 8 | null |
4312 | 1 | 4321 | null | 1 | 2845 | This is related to another [question](https://stats.stackexchange.com/questions/4175/resampling-binomial-z-and-t-test-help-with-real-data) I asked recently. To recap:
[I had 30 people call a number and then roll a 5-sided die. If the call matches the subsequent face then the trial is a hit, else it is a miss. Each subject completes 25 trials (rolls) and thus, each participant has a score out of 25. Since the die is a virtual one, it cannot be biased. Before the experiment was conducted I was going to compare the subjects' scores with a one-sample t-test (compared to mu of 5). However I was pointed towards the more powerful z-test, which is appropriate because we know the population parameters for the null hypothesis: that everyone should score at chance. Since npq is large, the binomial approximates the normal (Gaussian) distribution, so we can use a parametric test. So I could just forget about it all and go back to the t-test I planned to use, but it seems to me that although the z-test is not often used in real research it is appropriate here. That was the conclusion from my previous question. Now I am trying to understand how to use resampling methods (either permutation or bootstrap) to complement my parametric analysis.]
Okay. I am trying to program a one-sample permutation [z-test](http://statistic-on-air.blogspot.com/2009/07/one-sample-z-test.html), using the [DAAG](http://pbil.univ-lyon1.fr/library/DAAG/DESCRIPTION) package [onet.permutation](http://pbil.univ-lyon1.fr/library/DAAG/html/onet.permutation.html) as inspiration. This is as far as I've got:
```
perm.z.test = function(x, mu, var, n, prob, nsim){
nx <- length(x)
mx <- mean(x)
z <- array(, nsim)
for (i in 1:nsim) {
mn <- rbinom(nx*1, size=n, prob=prob)
zeta = (mean(mn) - mu) / (sqrt(var/nx))
z[i] <- zeta
}
pval <- (sum(z >= abs(mx)) + sum(z <= -abs(mx)))/nsim
print(signif(pval, 3))
}
```
Where: `x` is the variable to test, `n` is the number of trials (=25) and `prob` is the probability of getting it correct (=.2). The population value (`mu`) of the mean number correct is np. The population standard deviation, `var`, is $\sqrt{np(1-p)}$.
Now I guess this compares x to an array composed of randomly generated binomial sample. If I centre x at 0 (variable-mu) I get a p-value. Can somebody confirm that it is doing what I think it is doing?
My testing gives this:
```
> binom.samp1 <- as.data.frame(matrix(rbinom(30*1, size=25, prob=0.2),
ncol=1))
> z.test(binom.samp1$V1, mu=5, sigma.x=2)
data: binom.samp1$V1
z = 0.7303, p-value = 0.4652
> perm.z.test(binom.samp1$V1-5, 5, 2, 25, .2, 2000)
[1] 0.892
> binom.samp1 <- as.data.frame(matrix(rbinom(1000*1, size=25, prob=0.2),
ncol=1))
> perm.z.test(binom.samp1$V1-5, 5, 2, 25, .2, 2000)
[1] 0.937
```
Does this look right?
UPDATE:
Since this obviously doesn't do what I want, I do have another angle. This [website](http://www.stat.umn.edu/geyer/old/5601/examp/perm.html#one) offers this advice:
> There is no reason whatsoever why a permutation test has to use any particular test statistic. Any test statistic will do! ... For one-sample or paired two-sample tests, in particular, for Wilcoxon signed rank tests, the permutations are really subsets. The permutation distribution choses an arbitrary subset to mark + and the complementary subset is marked -. Either subset can be empty.
What about an arbitrary subset with a one-sample z-test?
| Help with a one-sample permutation z-test | CC BY-SA 2.5 | null | 2010-11-08T13:03:59.743 | 2010-11-09T01:25:20.130 | 2017-04-13T12:44:55.360 | -1 | 1614 | [
"r",
"permutation-test"
]
|
4313 | 2 | null | 726 | 12 | null | "After 17 years of interacting with physicians, I have come to realize that many of them are adherents of a religion they call Statistics... Like any good religion, it involves vague mysteries capable of contradictory and irrational interpretation. It has a priesthood and a class of mendicant friars. And it provides Salvation: Proper invocation of the religious dogmas of Statistics will result in publication in prestigious journals."
[David S. Salsburg](http://en.wikipedia.org/wiki/David_Salsburg) (author of The Lady Tasting Tea), quoted at "[Pithypedia](http://www.pithypedia.com/?author=David+S.+Salsburg)".
| null | CC BY-SA 2.5 | null | 2010-11-08T14:12:20.500 | 2010-11-08T14:12:20.500 | null | null | 919 | null |
4314 | 2 | null | 4298 | 6 | null | I have read both the first linked earlier question, especially the answer of whuber and the comments on this.
The answer is yes, you can do that, i.e. use the density from a KDE of a numeric variable as the conditional probability $P(X=x|C=c)$ in Bayes' theorem.
$P(C=c|X=x)=P(C=c)*P(X=x|C=c)/P(X=x)$
By assuming that d(height) is equal across all classes, d(height) is normalized out when the theorem is applied, i.e. when $P(X=x|C=c)$ is divided by $P(X=x)$.
This paper could be interesting for you: [estimating continuous distributions in bayesian classifiers](http://www.cs.iastate.edu/~honavar/bayes-continuous.pdf)
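A small sketch of the idea (my own illustration; the simulated data and class labels are assumptions): a one-feature classifier where $P(X=x|C=c)$ is read off a kernel density estimate per class instead of a fitted normal.
```
set.seed(1)
train <- data.frame(
  x = c(rgamma(200, shape = 2, rate = 1), rgamma(200, shape = 6, rate = 1)),
  c = rep(c("A", "B"), each = 200)
)
priors <- prop.table(table(train$c))
# one kde per class; approxfun() lets us evaluate the density at new points
dens <- lapply(split(train$x, train$c),
               function(v) approxfun(density(v), rule = 2))
classify <- function(xnew) {
  unnorm <- sapply(names(priors),
                   function(k) unname(priors[k]) * dens[[k]](xnew))
  unnorm / sum(unnorm)   # posterior P(C = c | X = x); P(X = x) cancels out
}
classify(2)   # mostly class "A"
classify(7)   # mostly class "B"
```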
| null | CC BY-SA 2.5 | null | 2010-11-08T14:55:21.340 | 2010-11-08T14:55:21.340 | null | null | 264 | null |
4315 | 2 | null | 726 | 15 | null | >
It would be illogical to assume that all conditions remain stable
~ Spock, "The Enterprise Incident", stardate 5027.3
| null | CC BY-SA 2.5 | null | 2010-11-08T15:05:19.573 | 2010-11-08T15:05:19.573 | null | null | 264 | null |
4316 | 1 | null | null | 14 | 83428 | The context of this question is within a health framework i.e. looking at one or more therapies in the treatment of a condition.
It appears that even well respected researchers confuse the terms efficacy and effectiveness, using the terms interchangeably.
- How can one think of efficacy versus effectiveness in a way that can help remove confusion?
- What type of study designs would be most appropriate in determining both types of results?
- Are there any authoritative journal publications, books, or web dictionaries that may help me?
| What is the difference between effectiveness and efficacy in determining the benefit of therapy 'A' on condition 'B'? | CC BY-SA 3.0 | null | 2010-11-08T15:58:52.233 | 2015-10-24T18:51:40.493 | 2011-10-08T00:37:34.597 | 183 | 431 | [
"epidemiology",
"causality",
"clinical-trials",
"definition",
"instrumental-variables"
]
|
4317 | 2 | null | 4258 | 7 | null | Here is one approach at the automation. Feedback much appreciated. This is an attempt to replace initial visual inspection with computation, followed by subsequent visual inspection, in keeping with standard practice.
This solution actually incorporates two potential solutions, first, calculate burn-in to remove the length of chain before some threshold is reached, and then using the autocorrelation matrix to calculate the thinning interval.
- calculate a vector of the maximum median Gelman-Rubin convergence diagnostic shrink factor (grsf) across all variables in the mcmc object
- find the minimum number of samples at which the grsf across all variables goes below some threshold, e.g. 1.1 in the example, perhaps lower in practice
- sub sample the chains from this point to the end of the chain
- thin the chain using the autocorrelation of the most autocorrelated chain
- visually confirm convergence with trace, autocorrelation, and density plots
---
The mcmc object can be downloaded here: [jags.out.Rdata](https://netfiles.uiuc.edu/dlebauer/www/jags.out.Rdata)
```
# jags.out is the mcmc.object with m variables
library(coda)
load('jags.out.Rdata')
# 1. calculate max.gd.vec,
# max.gd.vec is a vector of the maximum shrink factor
max.gd.vec <- apply(gelman.plot(jags.out)$shrink[, ,'median'], 1, max)
# 2. will use window() to subsample the jags.out mcmc.object
# 3. start window at min(where max.gd.vec < 1.1, 100)
window.start <- max(100, min(as.numeric(names(which(max.gd.vec - 1.1 < 0)))))
jags.out.trunc <- window(jags.out, start = window.start)
# 4. calculate thinning interval
# thin.int is the chain thin interval
# step is very slow
# 4.1 find the n most autocorrelated variables
# (acm must be computed before n, and order() picks the top-n columns)
acm <- autocorr.diag(jags.out.trunc)
n <- min(3, ncol(acm))
acm.subset <- colnames(acm)[order(colSums(acm), decreasing = TRUE)][1:n]
jags.out.subset <- jags.out.trunc[, acm.subset]
# 4.2 calculate the thinning interval
# ac.int is the lag step used for the autocorrelation matrix
ac.int <- 500 # set high to reduce computation time
# acm2: autocorrelations of the subset chains at multiples of ac.int
# (assumed definition -- acm2 was not defined in the original code)
acm2 <- autocorr.diag(jags.out.subset, lags = seq(ac.int, 50 * ac.int, by = ac.int))
thin.int <- max(apply(acm2 < 0, 2, function(x) match(TRUE, x)) * ac.int, 50)
# 4.3 thin the chain
jags.out.thin <- window(jags.out.trunc, thin = thin.int)
# 5. plots for visual diagnostics
plot(jags.out.thin)
autocorr.plot(jags.out.thin)
```
--update--
As implemented in R, the computation of the autocorrelation matrix is slower than would be desirable (>15 min in some cases); to a lesser extent, so is computation of the GR shrink factor. There is a question about how to speed up step 4 on stackoverflow [here](https://stackoverflow.com/q/4110937/199217)
--update part 2--
additional answers:
- It is not possible to diagnose convergence, only to diagnose lack of convergence (Brooks, Giudici, and Philippe, 2003)
- The function autorun.jags from the package runjags automates calculation of run length and convergence diagnostics. It does not start monitoring the chain until the Gelman rubin diagnostic is below 1.05; it calculates the chain length using the Raftery and Lewis diagnostic.
Gelman et al. (Gelman 2004, Bayesian Data Analysis, p. 295; Gelman and Shirley, 2010) state that they use a conservative approach of discarding the first half of the chain. Although a relatively simple solution, in practice this is sufficient to solve the issue for my particular set of models and data.
---
```
#code for answer 3
chain.length <- summary(jags.out)$end
jags.out.trunc <- window(jags.out, start = chain.length / 2)
# thin based on autocorrelation if < 50, otherwise ignore
lags <- c(1, 5, 10, 15, 25)
acm <- autocorr.diag(jags.out.trunc, lags = lags)
# first lag at which autocorrelation drops below zero, capped at 50
thin.int <- min(lags[apply(acm < 0, 2, function(x) match(TRUE, x))], 50, na.rm = TRUE)
# if capped, require visual inspection; check acceptance rate
if (thin.int == 50) stop('check acceptance rate, inspect diagnostic figures')
jags.out.thin <- window(jags.out.trunc, thin = thin.int)
```
| null | CC BY-SA 2.5 | null | 2010-11-08T16:03:39.950 | 2010-11-10T00:17:12.523 | 2017-05-23T12:39:26.203 | -1 | 1381 | null |
4318 | 2 | null | 726 | 49 | null | >
On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Charles Babbage
| null | CC BY-SA 2.5 | null | 2010-11-08T16:06:40.543 | 2010-11-08T16:06:40.543 | null | null | 1614 | null |
4319 | 2 | null | 4312 | 3 | null | Caveat: I'm not sure I fully understand your question. With this in mind, your solution does, IMHO, not provide a one-sample permutation z-Test, as it does not use the original data while performing some re-labeling of the experimental units consistent with the Null hypothesis in the given experimental design. Actually I do not see how any re-labeling can be performed at all in your situation that appears to be one-sample-from-one-population with a test for the distribution's location parameter.
The onet.permutation() function you cite is, IMHO, misleadingly named as it refers to the test of two dependent samples. For each unit it randomizes which of its two values belongs to sample 1, and which one to sample 2. This is equivalent to randomizing the sign of the unit-wise difference between the two samples, as done by the lines
```
mn <- sample(c(-1, 1), n, replace = TRUE)
xbardash <- mean(mn * abs(x))
```
(Here x is the difference between the two dependent samples.) Your function does something else: it simulates new data and creates a simulated distribution that does not stem from the empirical data.
| null | CC BY-SA 2.5 | null | 2010-11-08T16:13:13.950 | 2010-11-08T16:13:13.950 | null | null | 1909 | null |
4320 | 1 | 4323 | null | 2 | 51428 | (1) I am looking for a package for computing the power of a matrix. If you have some good recommendation please let me know.
(2) I searched on the internet and followed what some said to install a package called "Malmig" in R but after selecting the mirror site, it failed:
> In install.packages("Malmig") : package ‘Malmig’ is not available
Some idea why?
Thanks!
| Compute the power of a matrix in R | CC BY-SA 2.5 | null | 2010-11-08T16:18:53.253 | 2016-08-21T23:14:54.437 | 2010-11-08T17:12:47.230 | null | 1005 | [
"r"
]
|
4321 | 2 | null | 4312 | 2 | null | I don't think it is doing a one-sample Z; to me it looks like a test against a certain set of priors.
I'm confused: why are you doing a one-sample Z using binomial data as your source data? You could simply create a distribution of N successes and see what quantile your actual data fall in. However, the above method doesn't look like a permutation test per se to me, as your code doesn't actually permute the observed values between 1 and 0.
That being said, let me comment on your code - to me it looks as though z is defined as
```
zeta = (mean(mn) - mu) / (sqrt(var/nx))
z[i] <- zeta
```
Thus, each score in Z is like a Z score of the randomly created binomial vector using the priors you've selected as the null hypothesis. But then you compare that Z to abs(mx); where mx is defined as the mean of your observed binomial vector. At the very least this looks like a problem to me. Z scores should be either compared to some other Z score or means should be compared to means.
As I alluded to above, it is odd that you'd put all of this under a structure of a Z-test. The Z score is nominally a linear transform of the differences between means, as such the result of a test like this should be the same whether you use a Z score or simply look at the differences between means.
Moreover, what you are doing seems like an attempt to test the observed value against some priors rather than an actual one-sample permutation test. What you want to test against is something like permbinom (code provided below), where for each observed value it could have been either a success or not a success. This is in line with Fisher's classic example of the lady who claimed she could tell whether tea or milk was added first. Critically different from your test, the assumption of this permutation test is that the null hypothesis is fixed at p = .5.
```
permbinom <- function(x)
{
newx <- x
nx <- length(x)
change <- rbinom(n=nx,size=1,prob=.5)
#This code is readable but inefficient
#Swap the values between 1 and 0 if change == 1
for (i in 1:nx)
{
if ((change[i] == 1) & (x[i] == 1)) {newx[i] <- 0}
if ((change[i] == 1) & (x[i] == 0)) {newx[i] <- 1}
}
return(newx)
}
permtest <- function(x,nsim=2000)
{
permref <- rep(NA,nsim)
obsn <- sum(x)
for (i in 1:nsim)
{
permref[i] <- sum(permbinom(x))
}
pval <- min(c(
(sum(permref > obsn)*2/nsim),
(sum(permref < obsn)*2/nsim)
))
return(pval)
}
```
I'm not 100% confident regarding how I'm calculating the p-value here; so if someone would kindly correct me if I'm doing it wrong I'll incorporate that as an edit.
For reference, here is a faster permutation function for one-sample tests of binomial data.
```
permbinomf <- function(x)
{
newx <- x
nx <- length(x)
change <- rbinom(n=nx,size=1,prob=.5)
# Vectorized version: flip the observed value wherever change == 1
newx <- x + change
newx <- ifelse(newx==2,0,newx)
return(newx)
}
```
Edit: The question is also put forth, "What about an arbitrary subset with a one-sample z-test?". That would also work, assuming you had a large enough sample to subset. However, it would not be a permutation test, it would be more akin to a bootstrap.
Edit 2: Perhaps the most important answer to your question, is this: You are doing something acceptable (if you fix the Z vs mean computational error noted above), but you aren't doing what you think you are doing. You are comparing your results to results where the null hypothesis is true. This is essentially a Monte-Carlo simulation and if you correct the math (and I suggest you also simplify it) it is an acceptable technique for testing your hypothesis. Also note, my answer above is for a two-tailed test. As noted in the other question, you are ignoring the nesting of binomial observations under participants but independence isn't an assumption in a permutation or monte-carlo test so you should be more or less fine. Though, as also noted there you ignore the possibility that some people are doing better than chance and others are performing at chance.
| null | CC BY-SA 2.5 | null | 2010-11-08T16:27:25.407 | 2010-11-08T17:18:47.030 | 2010-11-08T17:18:47.030 | 196 | 196 | null |
4322 | 2 | null | 4316 | 11 | null | I'm not a specialist of this domain in epidemiological studies, but it seems to me that efficacy has to do with the observed effect in a controlled setting, like a randomized controlled trial, whereas effectiveness has more to do with a larger range of outcomes or environmental factors (potentially unobserved or non manipulated in the RCT), hence it has a wider scope. At least, I've often heard of [cost-effectiveness](http://en.wikipedia.org/wiki/Cost-effectiveness_analysis) studies in pharmacoeconomics, and treatment efficacy (e.g., when comparing two treatment arms).
Quoting this article [Efficacy, effectiveness, efficiency](http://www.australianprescriber.com/magazine/23/6/114/5/),
- efficacy is "the extent to which a drug has the ability to bring about its intended effect under ideal circumstances, such as in a randomised clinical trial"
- effectiveness is "the extent to which a drug achieves its intended effect in the usual clinical setting"
As for other references, I would suggest starting with [Pitfalls of Multisite Randomized Clinical Trials of Efficacy and Effectiveness](http://schizophreniabulletin.oxfordjournals.org/content/26/3/533.full.pdf) by Helena C. Kraemer (Schizophrenia Bulletin 26(3), 2000), and references therein. For example, it states that "efficacy and effectiveness are opposite extremes on a complex multidimensional continuum of decision making in research design".
Note
Coming back from the [ISPOR](http://www.ispor.org/) 13th European conference, I've heard that the European Federation of Pharmaceutical Industries and Associations ([EFPIA](http://www.efpia.org)) considers there's now agreement on the following definitions:
- relative efficacy can be defined as the extent to which an intervention does more good than harm, under ideal circumstances, compared to one or more alternative interventions;
- relative effectiveness can be defined as the extent to which an intervention does more good than harm compared to one or more alternatives for achieving the desired results when provided under the usual circumstances of health care practice.
| null | CC BY-SA 3.0 | null | 2010-11-08T16:46:18.607 | 2012-10-24T14:19:15.780 | 2012-10-24T14:19:15.780 | 930 | 930 | null |
4323 | 2 | null | 4320 | 9 | null | Package expm provides the matrix %^% number operator notation for its function matpow():
```
> library(expm)
> mat <- matrix(1:9, nrow=3)
> mat %^% 2
[,1] [,2] [,3]
[1,] 30 66 102
[2,] 36 81 126
[3,] 42 96 150
# check
> mat %*% mat
[,1] [,2] [,3]
[1,] 30 66 102
[2,] 36 81 126
[3,] 42 96 150
```
There's also sqrtm() for taking roots and expm() for matrix exponential.
| null | CC BY-SA 2.5 | null | 2010-11-08T17:03:27.557 | 2010-11-08T17:03:27.557 | null | null | 1909 | null |
4324 | 2 | null | 4175 | 2 | null | Answer #1: binom.test is in some ways a "more correct" test because it doesn't assume normality; yes - you'll get more power out of the normality assumption, and it might be reasonable - but to any extent you violate the assumptions of the test you may increase your type-I error rate.
Explanation #1: Though with a high number of trials the results from a binomial data source approach normality, they aren't perfectly normal. To convince yourself of this you can use a Shapiro-Wilk test for normality, e.g. shapiro.test(rbinom(30,25,.2)) [where 30 is your number of participants, 25 is your number of trials, and .2 is the underlying probability of success]. You'll note that with random data normality is sometimes significantly violated and sometimes it isn't. Your own data will tell the story you need to know. But, in general, because it is possible to violate normality under these circumstances, I prefer to avoid making the assumption.
Answer #2: See my answer [elsewhere](https://stats.stackexchange.com/questions/4312/help-with-a-one-sample-permutation-z-test-r/4321#4321). What you are proposing sounds like a bootstrap of permutation test results. Don't do that; it is odd and you won't be able to publish it. The binom.test is sufficient for your data and hypothesis. I'd suggest that you don't confuse matters by doing a permutation test or parametric test where the binomial distribution is clearly the best fit for the process generating your data. Also, it is confusing that in one case you'd be willing to make assumptions (e.g. normality) but elsewhere insist on a permutation test. The strength of permutation tests is that they don't tend to make as many assumptions.
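For example, a minimal binom.test call with made-up numbers (say 18 successes in 25 trials, tested against chance = .5) looks like this:
```
binom.test(x = 18, n = 25, p = 0.5)   # exact two-tailed test against chance performance
```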
Answer #3: It isn't gibberish. You might want to consider breaking your questions down in the future. It is a bit much for a single question here. In short, standard statistical approaches can lead to a failure to replicate in the way you describe because either 1) the results from experiment 1 were due to a Type I error or 2) the results from experiment 2 were due to a Type II error. Does N = 50 provide enough power that you can be confident in the results?
| null | CC BY-SA 2.5 | null | 2010-11-08T17:07:23.140 | 2010-11-08T17:07:23.140 | 2017-04-13T12:44:29.013 | -1 | 196 | null |
4325 | 1 | null | null | 3 | 335 | There are some cases where I would like to correct for multiple comparisons even though the null hypotheses vary.
For example, if I am performing a chi-squared test or a Fisher's exact test on multiple contingency tables of different sizes, the null hypotheses vary from table to table. Can I still apply the Benjamini-Hochberg procedure for multiple comparisons using the p-values from these tests? If not, what should be done in this situation?
| Do the null hypotheses used for Benjamini-Hochberg have to be identical? | CC BY-SA 2.5 | null | 2010-11-08T17:46:28.193 | 2010-11-08T18:11:16.983 | null | null | null | [
"multiple-comparisons",
"statistical-significance"
]
|
4326 | 2 | null | 4325 | 6 | null | No, there is no such restriction. The only restriction is whether the tests themselves are independent or not, and there is even some wiggle room there: if the tests are positively correlated, you can pretend they are independent. See e.g. [False Discovery Rate](http://en.wikipedia.org/wiki/False_discovery_rate) or the references from [Qvalue](http://genomics.princeton.edu/storeylab/qvalue/).
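In R, this amounts to collecting the p-values, whatever test produced each one, and adjusting them together; a sketch with two made-up tables of different sizes:
```
p1 <- chisq.test(matrix(c(12, 5, 9, 14), nrow = 2))$p.value       # 2 x 2 table
p2 <- fisher.test(matrix(c(3, 1, 2, 6, 4, 5), nrow = 2))$p.value  # 2 x 3 table
p.adjust(c(p1, p2), method = "BH")                                # Benjamini-Hochberg adjustment
```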
| null | CC BY-SA 2.5 | null | 2010-11-08T18:11:16.983 | 2010-11-08T18:11:16.983 | null | null | 795 | null |
4327 | 2 | null | 4200 | 0 | null | [http://en.wikipedia.org/wiki/Granger_causality](http://en.wikipedia.org/wiki/Granger_causality)
Barrett, Barnett & Seth have a paper which extends the idea of Granger causality to the multivariate case.
| null | CC BY-SA 2.5 | null | 2010-11-08T18:17:46.173 | 2010-11-08T18:17:46.173 | null | null | 1349 | null |
4328 | 1 | 4330 | null | 14 | 7739 | Does anyone know of some well-written code (in Matlab or R) for reversible jump MCMC? Preferably a simple demo application to complement papers on the subject, which would be useful in understanding the process.
| Reversible jump MCMC code (Matlab or R) | CC BY-SA 3.0 | null | 2010-11-08T18:58:46.730 | 2017-05-16T21:57:13.097 | 2013-09-09T14:37:17.570 | 27581 | 1913 | [
"r",
"matlab",
"references",
"markov-chain-montecarlo"
]
|
4329 | 2 | null | 4316 | 6 | null | A standard, but not fully online, dictionary is the Dictionary of Epidemiology sponsored by the [International Epidemiological Association](http://ieaweb.org). The latest edition is the fifth, but the fourth edition appears to be the latest that is [partly available online via Amazon](http://rads.stackoverflow.com/amzn/click/0195141695). You want to look at pp. 57-8. It says this distinction between effectiveness, efficacy and efficiency is due to [Archie Cochrane](http://www.cochrane.org/about-us/history/archie-cochrane) in his 1972 book Effectiveness and efficiency: random reflections on health services, [much of which is available on Google Books](http://books.google.co.uk/books?id=jRsEIDJwSy8C).
I won't quote too much from the above to avoid breaching copyright. I'll only note that although the Dictionary says, "ideally the determination of efficacy is based on the results of a randomized controlled trial" (my italics), you need [instrumental variables estimation methods to determine efficacy](http://dx.doi.org/10.1002/sim.4780100110) if there is substantial non-compliance.
| null | CC BY-SA 2.5 | null | 2010-11-08T19:10:33.560 | 2010-11-08T19:10:33.560 | null | null | 449 | null |
4330 | 2 | null | 4328 | 12 | null | RJMCMC was introduced by [Peter Green](http://www.stats.bris.ac.uk/~peter/Welcome.html) in a [1995 paper](http://dx.doi.org/10.1093/biomet/82.4.711) that is a citation classic. He wrote a Fortran program called [AutoRJ](http://www.stats.bris.ac.uk/~peter/AutoRJ/) for automatic RJMCMC; his page on this links to David Hastie's C program [AutoMix](http://www.davidhastie.me.uk/automix/). There's a list of freely available software for various RJMCMC algorithms in Table 1 of a [2005 paper by Scott Sisson](http://web.maths.unsw.edu.au/~scott/papers/paper_tenyears.pdf). A Google search also finds [some pseudocode from a group at the University of Glasgow](http://www.dcs.gla.ac.uk/inference/SERRS/Code.html) that may be useful in understanding the principles if you want to program it yourself.
| null | CC BY-SA 2.5 | null | 2010-11-08T19:30:49.540 | 2010-11-08T19:30:49.540 | null | null | 449 | null |
4331 | 1 | null | null | 3 | 5567 | I've got a data-set which I assume is uniformly distributed. Say I've got `N=20000` samples and a suspected `p=0.25`. This means that I would expect each option to show up roughly `5000` times.
How do I calculate the following interval `[5000 - x, 5000 + x]` such that I can say with a certain confidence that the data set is probably NOT uniformly distributed when the number of times an option shows up falls outside of the interval?
EDIT
ABCDBCDADBCDA, BDCAADBCDADBA, ADCDBDACDBDAD, CDBDACDBDACDA. That's some sample data. A sample is one cookie string! Now I want to determine, for each position in the cookie string, whether the character there is too rare or too common. So I count, over all samples, the number of A's at position 0, and likewise the number of B's, C's and D's. Suppose I get a count of 5 A's at position 0 when I would expect a count of roughly 50 A's; then the character A is too rare at position 0. That's what I want to do for each character position.
| Uniform Distribution Test | CC BY-SA 2.5 | null | 2010-11-08T20:11:55.963 | 2010-11-09T02:56:51.557 | 2010-11-08T22:38:18.940 | null | null | [
"distributions",
"hypothesis-testing",
"uniform-distribution"
]
|
4332 | 2 | null | 4331 | 4 | null | You might try assuming--as your null hypothesis--that the distribution is discrete uniform independent of string position. Then tabulate the frequencies of each letter by position in a 4 x 13 contingency table. You can then test for non-independence with a simple chi-square test; with n=20,000 observations in your one sample, you shouldn't have any sparse table problems. You can also eyeball this with a stacked bar chart, one 4-color ABCD bar for each string position. This is useful if you reject the null with the chi-square test.
Just to be sure, you might also want to check your data overall to see if it actually fits a discrete uniform distribution using a chi-square goodness of fit test. After all, the distribution of characters could be independent of position without being uniformly distributed.
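Both checks are short in R once the letters are tabulated; here is a sketch using only the four example strings from the question (expect warnings about small expected counts with so little data; the real 20,000 samples won't have that problem):
```
strings <- c("ABCDBCDADBCDA", "BDCAADBCDADBA", "ADCDBDACDBDAD", "CDBDACDBDACDA")
m <- do.call(rbind, strsplit(strings, ""))            # one row per sample, one column per position
letters4 <- factor(m, levels = c("A", "B", "C", "D"))
tab <- table(position = col(m), letter = letters4)    # position x letter counts
chisq.test(tab)                                       # independence of letter and position
chisq.test(table(letters4))                           # goodness of fit to a discrete uniform
```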
If you want to estimate confidence intervals, treat the ABCD distribution as a multinomial distribution. You can estimate standard errors from the variance-covariance matrix, which has diagonal (variance) entries `np[i](1-p[i])` and off-diagonal (covariance) entries `-np[i]p[j]`.
A good reference for all this is Agresti's Categorical Data Analysis.
If the distribution of letters seems independent of position, you can do further exploration using the runs test. Treat the letters as ordinal data, `A < B < C < D`; the runs test will check for runs that are too long (too few) or too short (too many). This type of test, and many others, are described in Knuth's Seminumerical Algorithms, where he discusses tests for random number generators.
Looks like you have a lot of tabulating to do--enjoy!
| null | CC BY-SA 2.5 | null | 2010-11-09T02:56:51.557 | 2010-11-09T02:56:51.557 | null | null | 5792 | null |
4333 | 2 | null | 4320 | 1 | null | The Biodem package also has a function for this, mtx.exp(), which calculates the n-th power of a matrix:
```
library(Biodem)
test <- matrix(1:16, 4, 4)
pow.test <- mtx.exp(test, 10)
pow.test
```
For more details: http://rgm2.lab.nig.ac.jp/RGM2/R_man-2.9.0/library/Biodem/man/mtx.exp.html
| null | CC BY-SA 2.5 | null | 2010-11-09T05:55:40.453 | 2010-11-09T05:55:40.453 | null | null | 1808 | null |
4334 | 1 | 4456 | null | 4 | 2309 | I would like to implement the model proposed in [Dynamic modeling of mean-reverting spreads](http://ideas.repec.org/p/arx/papers/0808.1710.html) (Kostas Triantafyllopoulos, Giovanni Montana).
They propose to model a time series Y_t with the following equations:
```
(1) Y_t = A_t + B_t * Y_(t-1) + e_t
(2) A_t = Phi1 * A_(t-1) + nu1_t
(3) B_t = Phi2 * B_(t-1) + nu2_t
```
That can be expressed in a state space form:
```
(1') Y_t = F_t * theta_t + e_t
(2') theta_t = Phi * theta_(t-1) + nu_t
```
with
```
F_t = (1, Y_(t-1))
Phi = diag(Phi1, Phi2)
```
I would like to use R to perform a bayesian update of this model. I have studied the package 'dlm' but in the book [Dynamic linear models with R](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-77237-0), written by Giovanni Petris (author of the 'dlm' package), it is written (page 113)
>
"The matrix F_t of a DLM cannot depend on past values of the observations".
However, it seems that this is the case in the model proposed above.
Can someone explain this last sentence and perhaps help me implement this model in R?
Thank you
Fred
| State space form of time varying AR(1) | CC BY-SA 2.5 | null | 2010-11-09T07:30:15.237 | 2010-12-24T13:12:37.723 | 2010-11-10T01:59:02.990 | 1709 | 1709 | [
"r",
"bayesian",
"dynamic-regression"
]
|
4335 | 1 | 4342 | null | 37 | 3750 | I just discovered the `comment` function in R. Example:
```
x <- matrix(1:12, 3,4)
comment(x) <- c("This is my very important data from experiment #0234",
"Jun 5, 1998")
x
comment(x)
```
This is the first time I have come across this function, and I was wondering what common/useful uses it has.
Since it is quite difficult to search for "R comment" on Google and find relevant results, I was hoping someone here might share their experience.
| What is a good use of the 'comment' function in R? | CC BY-SA 3.0 | null | 2010-11-09T08:55:58.867 | 2022-03-09T15:11:17.130 | 2011-12-08T07:39:24.997 | 930 | 253 | [
"r"
]
|
4336 | 2 | null | 4335 | 14 | null | One thing I often find myself doing in my R scripts for a particular data analysis task is to include comments in the script about the units of variables in my data frames. I work with environmental data and chemists and ecologists seem to enjoy using a wide range of different units for the same things (mg L$^{-1}$ vs mu eq L$^{-1}$, etc). My colleagues usually store this information in the row immediately below the column names in Excel sheets.
I'd see `comment()` as a nice way of attaching this information to a data frame for future reference.
| null | CC BY-SA 2.5 | null | 2010-11-09T09:09:25.487 | 2010-11-09T09:09:25.487 | null | null | 1390 | null |
4337 | 1 | 4340 | null | 10 | 867 | I need to do an experiment. First let me describe the present situation. The company that I work for is a cinema. It has a gaming section where people who are waiting for movies can pass time by playing games. People can pay only by using a prepaid membership card. Unfortunately this gaming section is not generating enough sales. We are trying to find the cause(s).
My hypothesis is that if we accept cash as payment, sales will increase.
My plan is to have an experimental group and a control group. The experimental group will accept cash payment; the control group will not. The sales of both groups are tallied before and after the experiment.
The difficult thing about this is that I can't find a way to isolate the 'cash payment' factor from other factors:
- When the movie playing in the cinema is good, more people will come and sales will also increase
- Each cinema only has one gaming section, I can't split it into two sections (one accepts cash, the other doesn't)
- If several sites accept cash and several others don't, I don't think I can compare the results directly because the visitors are different and the number of gaming units is different
I'm looking for suggestions to isolate this 'cash payment' variable, or maybe another approach altogether.
| What to do with confounding variables? | CC BY-SA 2.5 | null | 2010-11-09T09:23:00.533 | 2010-11-11T13:12:53.023 | null | null | 1922 | [
"experiment-design"
]
|
4338 | 2 | null | 4328 | 8 | null | The book [Bayesian Analysis for Population Ecology](http://rads.stackoverflow.com/amzn/click/1439811873) by King et al. describes RJMCMC in the context of population ecology. I found their description very clear, and they provide the R code in the appendix.
The book also has an associated [webpage](http://lemur.mcs.st-and.ac.uk/Book-website/), but some of the code found in the book isn't on the website.
| null | CC BY-SA 3.0 | null | 2010-11-09T09:27:58.247 | 2013-01-22T21:31:24.653 | 2013-01-22T21:31:24.653 | 8 | 8 | null |
4339 | 2 | null | 4335 | 7 | null | Similar facilities exist in other packages, such as the [-notes- command in Stata](http://www.stata.com/help.cgi?notes). We use this to document full details of a variable, e.g. details of assay for a biochemical measurement, or exact wording of the question asked for questionnaire data. This is often too much info for the variable name or label, one or both of which are displayed in the output of every analysis involving the variable and are therefore best kept reasonably short.
| null | CC BY-SA 2.5 | null | 2010-11-09T09:33:50.550 | 2010-11-09T09:33:50.550 | null | null | 449 | null |
4340 | 2 | null | 4337 | 6 | null | Here are some suggestions relating to your bullet points above:
- What about using the daily takings as an explanatory variable?
What you need to do is form an equation where you predict gaming sales given a number of other factors. These factors will include things you are interested in, such as whether they used a prepaid card. However, you need to also include factors that you aren't interested in but have to adjust for, such as daily takings. Obviously, if the film is a blockbuster then gaming sales will increase.
- Suppose you have N cinemas. Select N/2 cinemas and put them in Group A, and the rest go in Group B. Now let Group A be the control group and B the experimental group. If possible, alternate this set-up, i.e. make Group A the experimental group for a few weeks.
- If you can mix over groups (point above) then this isn't a problem. Even if you can't, you can include a variable representing the number of gaming units.
The statistical technique you will probably need is [multiple linear regression](http://en.wikipedia.org/wiki/Linear_regression) (MLR). Essentially, you build an equation of the form:
```
Gaming sales = a0 + a1*Prepaid + a2*Takings + a3*<other things>
```
where
- a0, a1, a2 are just numbers
- Prepaid is either 0 or 1
- Takings are the daily takings.
MLR will allow you to calculate the values of a0-a2. So if a1 is large this indicates that Prepaid is important.
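In R this boils down to a single lm() call; a minimal sketch with simulated stand-in data (real column names and values would of course come from your own records, one row per site per day):
```
set.seed(1)
dat <- data.frame(prepaid = rep(0:1, each = 50),                 # 0/1 indicator from the equation above
                  takings = runif(100, 1000, 5000),              # daily takings
                  units   = sample(4:10, 100, replace = TRUE))   # number of gaming units
dat$sales <- 50 + 0.02 * dat$takings + 30 * dat$prepaid + rnorm(100, sd = 20)
fit <- lm(sales ~ prepaid + takings + units, data = dat)
summary(fit)   # the 'prepaid' coefficient plays the role of a1
```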
| null | CC BY-SA 2.5 | null | 2010-11-09T10:58:27.400 | 2010-11-09T11:51:48.430 | 2010-11-09T11:51:48.430 | 8 | 8 | null |
4341 | 1 | null | null | 8 | 214249 | I have a big list of numeric values (including duplicates) and I want to group them into ranges in order to see how they are distributed.
Let's say there are 1000 values ranging from 0 to 2,000,000 and I want to group them.
How can I achieve this, preferably in Excel or SQL?
| How do I group a list of numeric values into ranges? | CC BY-SA 2.5 | null | 2010-11-09T11:36:58.940 | 2016-08-20T14:31:05.390 | 2010-11-09T13:00:22.120 | null | 1901 | [
"excel",
"sql"
]
|
4342 | 2 | null | 4335 | 15 | null | To second @Gavin, Frank Harrell has developed efficient ways to handle annotated data.frames in R in his [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) package. For example, the `label()` and `units()` functions allow you to add dedicated attributes to R objects. I find them very handy when producing a summary of a data.frame (e.g., with `describe()`).
Another useful way of using such an extra attribute is to apply a timestamp to a data set. I also add attributes for things like the random seed and fold number (when I use k-fold or LOO cross-validation).
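A minimal sketch of both ideas, with a made-up data frame and column:
```
library(Hmisc)
dat <- data.frame(no3 = c(0.42, 0.51, 0.38))    # hypothetical measurements
label(dat$no3) <- "Stream-water nitrate"
units(dat$no3) <- "mg/L"
describe(dat$no3)                                # the label (and units) show up in the summary
comment(dat) <- paste("created", Sys.time())     # timestamp the whole data frame
comment(dat)
```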
| null | CC BY-SA 3.0 | null | 2010-11-09T11:54:24.230 | 2011-12-08T07:39:06.230 | 2011-12-08T07:39:06.230 | 930 | 930 | null |
4343 | 2 | null | 4341 | 8 | null | I'll assume that you've already determined the number of categories you'll use. Let's say you want to use 20 categories. Then they will be:
- Category 1: [0 - 100,000)
- Category 2: [100,000 - 200,000)
- Category 3: [200,000 - 300,000)
- ...
- Category 19: [1,800,000 - 1,900,000)
- Category 20: [1,900,000 - 2,000,000]
Note that the label of each category can be defined as
```
FLOOR (x / category_size) + 1
```
This is trivial to define as a computed column in SQL or as a formula in Excel.
Note that the last category is infinitesimally larger than the others, since it is closed on both sides. If you happen to get a value of exactly 2,000,000 you might erroneously classify it as falling into category 21, so you have to treat this exception with an ugly "IF" (in Excel) or "CASE" (in SQL).
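For reference, the same rule written in R, with pmin() folding the 2,000,000 boundary case into the last category instead of an IF:
```
x <- c(0, 99999, 100000, 1999999, 2000000)   # example values
pmin(floor(x / 100000) + 1, 20)              # -> 1 1 2 20 20
```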
| null | CC BY-SA 2.5 | null | 2010-11-09T12:25:21.007 | 2010-11-09T12:56:17.023 | 2010-11-09T12:56:17.023 | 666 | 666 | null |
4345 | 2 | null | 4341 | 9 | null | Why group them? Instead, how about estimating the probability density function (PDF) of the distribution from which the data arise? Here's an R-based example:
```
set.seed(123)
dat <- c(sample(2000000, 500), rnorm(100, 1000000, 1000),
rnorm(150, 1500000, 100),rnorm(150, 500000, 10),
rnorm(180, 10000, 10), rnorm(10, 1000, 5), 1:10)
dens <- density(dat)
plot(dens)
```
If the data are strictly bounded (0, 2,000,000) then the kernel density estimate is perhaps not best suited. You could fudge things by asking it to only evaluate the density between the bounds:
```
dens2 <- density(dat, from = 0, to = 2000000)
plot(dens2)
```
Alternatively there is the histogram - a coarse version of the kernel density. What you specifically talk about is binning your data. There are lots of rules/approaches to selecting equal-width bins (i.e. the number of bins) from the data. In R the default is Sturges rule, but it also includes the Freedman-Diaconis rule and Scott's rule. There are others as well - see the Wikipedia page on [histograms](http://en.wikipedia.org/wiki/Histogram).
```
hist(dat)
```
If you are not interested in the kernel density plot or the histogram per se, rather just the binned data, then you can compute the number of bins using the `nclass.X` family of functions where `X` is one of `Sturges`, `scott` or `FD`. And then use `cut()` to bin your data:
```
cut.dat <- cut(dat, breaks = nclass.FD(dat), include.lowest = TRUE)
table(cut.dat)
```
which gives:
```
> cut.dat
[-2e+03,2.21e+05] (2.21e+05,4.43e+05] (4.43e+05,6.65e+05] (6.65e+05,8.88e+05]
247 60 215 61
(8.88e+05,1.11e+06] (1.11e+06,1.33e+06] (1.33e+06,1.56e+06] (1.56e+06,1.78e+06]
153 51 205 50
(1.78e+06,2e+06]
58
```
in R.
However, binning is fraught with problems, most notably: how do you know that your choice of bins hasn't influenced the resulting impression you get of the way the data are distributed?
| null | CC BY-SA 2.5 | null | 2010-11-09T13:00:47.157 | 2010-11-09T13:57:35.120 | 2010-11-09T13:57:35.120 | 1390 | 1390 | null |
4347 | 1 | 4423 | null | 7 | 1820 | From what you have read or heard about, which is a good book on fuzzy logic/sets/systems? I'm interested in basic of fuzzy systems, fuzzification/defuzzification, etc.
| Fuzzy textbooks | CC BY-SA 3.0 | null | 2010-11-09T14:58:56.320 | 2017-01-30T11:56:40.843 | 2017-01-30T11:56:40.843 | 28666 | 976 | [
"references",
"fuzzy"
]
|
4348 | 2 | null | 4341 | 3 | null | You have requested an Excel or SQL solution. The easiest way in Excel is to use its "Analysis" add-in to create a histogram. It will automatically create the bins (ranges of values) but, optionally, accepts a list of bin cutpoints as input and uses them. The output includes a parallel list of bin counts. This is especially handy for irregular-width bins.
This is a one-off calculation: if the data change or the cutpoints change, you have to go through the entire dialog again. A more flexible option is to use COUNTIF to count all values less than or equal to any given bin cutpoint. The first differences of such an array give the bin counts.
Here is a working example. The data are in a column named "Simulation_Z" (which in this particular case is defined to be an entire column, such as `$C:$C`). The formulae shown below are copied from cells L2:N10 of a sheet in the same workbook. They were created by copying the first one downward (but notice the special formula for the first count in N3).
```
Cut Count up Count
-3.0 =COUNTIF(Simulation_Z, "<=" & L3) =M3
-2.0 =COUNTIF(Simulation_Z, "<=" & L4) =M4-M3
-1.0 =COUNTIF(Simulation_Z, "<=" & L5) =M5-M4
0.0 =COUNTIF(Simulation_Z, "<=" & L6) =M6-M5
1.0 =COUNTIF(Simulation_Z, "<=" & L7) =M7-M6
2.0 =COUNTIF(Simulation_Z, "<=" & L8) =M8-M7
3.0 =COUNTIF(Simulation_Z, "<=" & L9) =M9-M8
=MAX(Simulation_Z) =COUNTIF(Simulation_Z, "<=" & L10) =M10-M9
```
Column L ("Cut") stipulates the upper limits of each bin.
This procedure simultaneously defines the bins and computes their counts, which are then available for further testing (e.g., $\chi\text{-squared}$) or plotting.
| null | CC BY-SA 2.5 | null | 2010-11-09T15:08:16.407 | 2010-11-13T01:32:33.880 | 2010-11-13T01:32:33.880 | 919 | 919 | null |
4350 | 1 | null | null | 3 | 414 | I am trying to use R to conduct community division within my weighted network (based on an association matrix). I tried igraph but encountered some problems. I usually use the program Socprog (Whitehead 2009) for this analysis, but I would like to conduct community division with Newman modularity (2006) on 1000 bootstraps of my data in order to calculate the co-membership matrix resulting from all the community divisions of the bootstraps. So I am looking for a script that conducts a community division (Newman 2006) from an association matrix and gives a list of individuals with their cluster assignments.
Does anyone have an R script that works for this analysis? Any advice on implementing this analysis would be much appreciated.
Best regards
| How to conduct community division of a social network with R? | CC BY-SA 2.5 | null | 2010-11-09T15:34:36.077 | 2016-05-23T17:35:54.197 | 2016-05-23T17:35:54.197 | 114327 | null | [
"r",
"clustering",
"networks",
"igraph",
"modularity"
]
|
4353 | 1 | null | null | 2 | 1213 | When fitting a GAMM with R, I would like to know why, when the smooth function is linear, the confidence interval is zero around the middle (the dotted lines cross each other at the middle).
| Fitting GAMM model in R | CC BY-SA 2.5 | null | 2010-11-09T17:43:49.160 | 2010-11-09T19:04:51.730 | 2010-11-09T17:54:31.160 | 5 | null | [
"r",
"mixed-model",
"fitting"
]
|
4354 | 1 | 4355 | null | 36 | 6813 | I was wondering if there are any distributions besides the normal where the mean and variance are independent of each other (or in other words, where the variance is not a function of the mean).
| Distributions other than the normal where mean and variance are independent | CC BY-SA 2.5 | null | 2010-11-09T18:27:48.047 | 2022-12-28T16:15:40.340 | 2022-12-28T16:15:40.340 | 11887 | 1934 | [
"distributions",
"mathematical-statistics",
"normal-distribution",
"variance",
"mean"
]
|
4355 | 2 | null | 4354 | 13 | null | Note: Please read the answer by @G. Jay Kerns, and see [Carlin and Louis 1996](http://rads.stackoverflow.com/amzn/click/1584881704) or your favorite probability reference for background on the calculation of the mean and variance as the expected value and second central moment of a random variable.
A quick scan of Appendix A in Carlin and Louis (1996) provides the following distributions, which are similar in this regard to the normal, in that the same distribution parameters are not used in the calculations of the mean and variance. As pointed out by @robin, when calculating parameter estimates from a sample, the sample mean is required to calculate sigma.
Multivariate Normal
$$E(X) = \mu$$
$$Var(X) = \Sigma$$
t and multivariate t:
$$E(X) = \mu$$
$$Var(X) = \nu\sigma^2/(\nu - 2)$$
Double exponential:
$$E(X) = \mu$$
$$Var(X) = 2\sigma^2$$
Cauchy:
With some qualification it could be argued that the mean and variance of the Cauchy are not dependent.
$E(X)$ and $Var(X)$ do not exist
Reference
[Carlin, Bradley P., and Thomas A. Louis. 1996. Bayes and Empirical bayes Methods for Data Analysis, 2nd ed. Chapman and Hall/CRC, New York](http://rads.stackoverflow.com/amzn/click/1584881704)
| null | CC BY-SA 2.5 | null | 2010-11-09T18:52:16.460 | 2010-11-12T17:08:50.283 | 2010-11-12T17:08:50.283 | 1381 | 1381 | null |
4356 | 1 | 5802 | null | 8 | 8066 | I know that R's `rpart` function keeps the data it would need to implement multivariate splits, but I don't know if it actually performs them. I've tried researching it online, looking at the `rpart` docs, but I don't see any indication that it can or does. Does anyone know for sure?
| Does rpart use multivariate splits by default? | CC BY-SA 2.5 | null | 2010-11-09T18:55:17.830 | 2010-12-30T06:31:03.853 | 2010-11-09T19:22:47.847 | null | 1929 | [
"r",
"multivariate-analysis",
"cart"
]
|
4357 | 2 | null | 7 | 1 | null | Usage Over Time
A very large Excel spreadsheet available for download containing data points for all online activities, with user demographics, over time. Please read Tip Sheet (below) before downloading or using this spreadsheet.
[http://pewinternet.org/Trend-Data/Usage-Over-Time.aspx](http://pewinternet.org/Trend-Data/Usage-Over-Time.aspx)
| null | CC BY-SA 2.5 | null | 2010-11-09T19:00:39.677 | 2010-11-09T19:00:39.677 | null | null | 253 | null |
4358 | 2 | null | 4353 | 5 | null | It is due to the default for argument `'seWithMean'` in `plot.gam()`, which is `FALSE`. This plots confidence intervals purely for the centred smooth function only, and there is no uncertainty at 0. If we add in the uncertainty in the mean then you get the more familiar confidence interval.
Here's an example, but using `gam()` rather than `gamm()` as it shows the same issue:
```
## dummy data:
set.seed(123)
dat <- data.frame(x = 1:10, y = 1:10 + rnorm(10))
plot(y ~ x, data = dat)
## load mgcv and fit an AM to the dummy data
require(mgcv)
mod <- gam(y ~ s(x), data = dat, method = "ML")
```
The default plot shows the credible intervals as you describe:
```
plot(mod)
```
whilst more natural credible intervals are given by
```
plot(mod, seWithMean = TRUE)
```
| null | CC BY-SA 2.5 | null | 2010-11-09T19:04:51.730 | 2010-11-09T19:04:51.730 | null | null | 1390 | null |
4359 | 2 | null | 4354 | 30 | null | In fact, the answer is "no". Independence of the sample mean and variance characterizes the normal distribution. This was shown by Eugene Lukacs in ["A Characterization of the Normal Distribution", The Annals of Mathematical Statistics, Vol. 13, No. 1 (Mar., 1942), pp. 91-93.](http://www.jstor.org/stable/2236166)
I didn't know this, but Feller, "Introduction to Probability Theory and Its Applications, Volume II" (1966, pg 86) says that R.C. Geary proved this, too.
| null | CC BY-SA 2.5 | null | 2010-11-09T19:09:47.093 | 2010-11-09T19:32:28.120 | 2010-11-09T19:32:28.120 | null | null | null |
4360 | 1 | 4362 | null | 12 | 1142 | Let's say we are repeatedly tossing a fair coin, and we know number of heads and tails should be roughly equal. When we see a result like 10 heads and 10 tails for a total of 20 tosses, we believe the results and are inclined to believe the coin is fair.
Well, when you see a result like 10000 heads and 10000 tails for a total of 20000 tosses, I actually would question the validity of the result (did the experimenter fake the data?), as I know this is more unlikely than, say, a result of 10093 heads and 9907 tails.
What is the statistical argument behind my intuition?
| Statistical argument for why 10,000 heads from 20,000 tosses suggests invalid data | CC BY-SA 2.5 | null | 2010-11-09T19:23:19.233 | 2010-12-12T17:50:29.487 | 2010-11-23T17:05:19.157 | 8 | 578 | [
"confidence-interval",
"binomial-distribution"
]
|
4361 | 2 | null | 4356 | 1 | null | As far as I know, it doesn't, but I have not used it for a while. If I understand you correctly, you might want to look at the package [mvpart](http://cran.at.r-project.org/package=mvpart) instead.
| null | CC BY-SA 2.5 | null | 2010-11-09T19:31:10.937 | 2010-11-09T19:31:10.937 | null | null | 892 | null |
4362 | 2 | null | 4360 | 21 | null | Assuming a fair coin, the outcome of 10000 heads and 10000 tails is actually more likely than an outcome of 10093 heads and 9907 tails.
However, when you say that a real experimenter is unlikely to obtain an equal number of heads and tails, you are implicitly invoking Bayes' theorem. Your prior belief about a real experiment is that Prob(No. of heads = 10000 in 20000 tosses | experimenter is not faking) is close to 0. Thus, when you see the actual outcome 'No. of heads = 10000', your posterior for Prob(Experimenter is not faking | observed outcome of 10000 heads) is also close to 0. Thus, you conclude that the experimenter is faking the data.
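A back-of-the-envelope version of that argument, with a deliberately crude (made-up) model in which a faker always reports exactly 10,000 heads and the prior odds of faking are 50:50:
```
like_fair  <- dbinom(10000, size = 20000, prob = 0.5)    # about 0.0056
like_faked <- 1                                          # crude assumption about a faker
like_faked * 0.5 / (like_faked * 0.5 + like_fair * 0.5)  # posterior P(faked) ~ 0.994
```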
| null | CC BY-SA 2.5 | null | 2010-11-09T19:37:13.950 | 2010-11-09T19:37:13.950 | null | null | null | null |
4363 | 2 | null | 4356 | 1 | null | Your terminology is confusing. Do you mean splits using more than one variable, or a tree that allows for a multivariate (as opposed to a univariate) response? I presume the latter.
F. Tusell has pointed you to the mvpart package, which adds a multivariate criterion for node impurity that is evaluated for all possible splits at each stage of tree building.
An alternative is the [party](http://cran.r-project.org/web/packages/party/index.html) package, whose function `ctree()` can handle multivariate responses.
| null | CC BY-SA 2.5 | null | 2010-11-09T20:03:11.627 | 2010-11-09T20:03:11.627 | null | null | 1390 | null |
4364 | 1 | 4373 | null | 67 | 11408 | A standardized Gaussian distribution on $\mathbb{R}$ can be defined by explicitly giving its density:
$$ \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$
or its characteristic function.
As recalled in [this](https://stats.stackexchange.com/questions/4354/distributions-other-than-the-normal-where-mean-and-variance-are-independent) question it is also the only distribution for which the sample mean and variance are independent.
What other surprising alternative characterizations of Gaussian measures do you know? I will accept the most surprising answer.
| What is the most surprising characterization of the Gaussian (normal) distribution? | CC BY-SA 3.0 | null | 2010-11-09T20:19:21.667 | 2017-11-27T22:36:43.827 | 2017-11-27T22:36:43.827 | 128677 | 223 | [
"probability",
"normal-distribution",
"mathematical-statistics",
"characteristic-function"
]
|
4365 | 2 | null | 4364 | 19 | null | Gaussian distributions are the only [sum-stable](http://en.wikipedia.org/wiki/Stable_distribution) distributions with finite variance.
| null | CC BY-SA 2.5 | null | 2010-11-09T20:30:11.373 | 2010-11-09T20:30:11.373 | null | null | 795 | null |
4366 | 2 | null | 4364 | 30 | null | The continuous distribution with fixed variance which maximizes [differential entropy](http://en.wikipedia.org/wiki/Differential_entropy) is the Gaussian distribution.
| null | CC BY-SA 2.5 | null | 2010-11-09T20:36:44.480 | 2010-11-09T20:36:44.480 | null | null | 795 | null |
4367 | 1 | 4369 | null | 10 | 6117 | I believe $p[x]$ is a probability distribution, where
\begin{equation}
p[x] = \frac{1}{\pi (1+x^2)}
\end{equation}
since it's positive everywhere and integrates to 1 on $-\infty, \infty$.
The mean is 0 by symmetry, even though integrating $xp[x]$ on
$-\infty, \infty$ does not converge. This is "suspicious" since
$p[x]$ is supposed to be a probability distribution, but reasonable
because $xp[x]$ is $O(1/x)$ which is known to diverge.
The bigger problem is in computing the standard deviation: the integral of $x^2 p[x]$
also diverges, since $x^2 p[x]$ is $O(1)$.
If this isn't a probability distribution, why not? If it is, is its
standard deviation infinite?
The cumulative distribution function is $1/2 + \arctan[x]/\pi$, if that helps.
Someone mentioned this might be a gamma distribution, but that isn't
clear to me.
| Can a probability distribution have infinite standard deviation? | CC BY-SA 2.5 | null | 2010-11-09T20:59:38.060 | 2017-01-21T00:51:18.200 | 2010-11-10T09:57:21.213 | 8 | null | [
"distributions",
"standard-deviation"
]
|
4368 | 1 | 4370 | null | 2 | 5572 | Why, in the White test, do we estimate an auxiliary regression model of the squared residuals (from the original model) and not just the plain residuals?
| Auxiliary Model in the White Test | CC BY-SA 2.5 | null | 2010-11-09T21:05:20.573 | 2010-11-17T04:00:28.513 | null | null | 333 | [
"heteroscedasticity"
]
|
4369 | 2 | null | 4367 | 12 | null | To answer your question title: Yes, a probability distribution can have infinite standard deviation (see below).
Your example is a special case of the [Cauchy distribution](http://en.wikipedia.org/wiki/Cauchy_distribution) whose mean or variance does not exist. Set the location parameter to 0 and the scale to 1 for the Cauchy to get to your pdf.
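You can see the consequence in a quick simulation: the sample standard deviation of Cauchy draws does not settle down as the sample size grows (the exact numbers depend on the seed):
```
set.seed(1)
sapply(c(1e2, 1e4, 1e6), function(n) sd(rcauchy(n)))  # keeps jumping around instead of converging
```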
| null | CC BY-SA 2.5 | null | 2010-11-09T21:07:46.350 | 2010-11-09T21:07:46.350 | null | null | null | null |
4370 | 2 | null | 4368 | 1 | null | Because White's test checks for heteroscedasticity by assuming that:
$\sigma^2 = \beta_0 + x_1 \beta_1 + x_2 \beta_2 + ...$
(include squares and cross-products of covariates on the right-hand side)
In other words, White's test investigates whether the error variance is heteroscedastic by regressing an estimate of $\sigma^2$ against the available regressors. An estimate of $\sigma^2$ is given by the squared residuals, as the mean of the residuals is zero.
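A hand-rolled sketch of that auxiliary regression with simulated heteroscedastic data (one regressor, so only its square is added on the right-hand side):
```
set.seed(1)
x <- runif(200)
y <- 1 + 2 * x + rnorm(200, sd = 0.5 + x)        # error variance grows with x
fit <- lm(y ~ x)
aux <- lm(residuals(fit)^2 ~ x + I(x^2))          # squared residuals on x and x^2
stat <- length(x) * summary(aux)$r.squared        # n * R^2, asymptotically chi-squared under homoscedasticity
pchisq(stat, df = 2, lower.tail = FALSE)          # df = number of auxiliary regressors
```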
| null | CC BY-SA 2.5 | null | 2010-11-09T21:41:21.690 | 2010-11-09T21:46:44.320 | 2010-11-09T21:46:44.320 | null | null | null |
4371 | 1 | null | null | 16 | 3227 | In "The Elements of Statistical Learning" (2nd ed), p63, the authors give the following two formulations of the ridge regression problem:
$$ \hat{\beta}^{ridge} = \underset{\beta}{\operatorname{argmin}} \left\{ \sum_{i=1}^N(y_i-\beta_0-\sum_{j=1}^p x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^p \beta_j^2 \right\} $$
and
$$ \hat{\beta}^{ridge} = \underset{\beta}{\operatorname{argmin}} \sum_{i=1}^N(y_i-\beta_0-\sum_{j=1}^p x_{ij} \beta_j)^2 \text{, subject to } \sum_{j=1}^p \beta_j^2 \leq t.$$
It is claimed that the two are equivalent, and that there is a one-to-one correspondence between the parameters $\lambda$ and $t$.
It would appear that the first formulation is a Lagrangian relaxation of the second. However, I never had an intuitive understanding of how or why Lagrangian relaxations work.
Is there a simple way to demonstrate that the two formulations are indeed equivalent? If I have to choose, I'd prefer intuition over rigour.
Thanks.
| Lagrangian relaxation in the context of ridge regression | CC BY-SA 2.5 | null | 2010-11-09T22:45:45.627 | 2010-11-10T08:58:59.530 | null | null | 439 | [
"ridge-regression"
]
|
4372 | 2 | null | 4334 | 5 | null | This is a very unusual state space model because the dynamics are included in both the observation equation (1') and the state equation (2'). Usually, the dynamics are only in the state equation and the observation equation is a linear function of the state vector. I don't think any of the state space implementations in R will allow dynamics in the observation equation.
It is possible to re-write the model so that the dynamics are all in the state equation, but then it becomes non-linear.
I suspect you will have to write your own code, or ask the authors if they can give you theirs.
| null | CC BY-SA 2.5 | null | 2010-11-09T22:54:54.083 | 2010-11-09T22:54:54.083 | null | null | 159 | null |
4373 | 2 | null | 4364 | 45 | null | My personal most surprising is the one about the sample mean and variance, but here is another (maybe) surprising characterization: if $X$ and $Y$ are IID with finite variance and $X+Y$ and $X-Y$ are independent, then $X$ and $Y$ are normal.
Intuitively, we can usually identify when variables are not independent with a scatterplot. So imagine a scatterplot of $(X,Y)$ pairs that looks independent. Now rotate by 45 degrees and look again: if it still looks independent, then the $X$ and $Y$ coordinates individually must be normal (this is all speaking loosely, of course).
To see why the intuitive bit works, take a look at
$$
\left[
\begin{array}{cc}
\cos45^{\circ} & -\sin45^{\circ} \newline
\sin45^{\circ} & \cos45^{\circ}
\end{array}
\right]
\left[
\begin{array}{c}
x \newline
y
\end{array}
\right]= \frac{1}{\sqrt{2}}
\left[
\begin{array}{c}
x-y \newline
x+y
\end{array}
\right]
$$
| null | CC BY-SA 2.5 | null | 2010-11-09T23:09:28.487 | 2010-11-09T23:09:28.487 | null | null | null | null |
4375 | 2 | null | 4360 | 12 | null | I like Srikant's explanation, and I think the Bayesian idea is probably the best way to approach a problem like this. But here is another way to see it without Bayes: (in R)
```
dbinom(10, size = 20, prob = 0.5)/dbinom(10000, 20000, 0.5)
```
which is about 31.2 on my system. In other words, it is over 30 times more likely to see 10 out of 20 than it is to see 10,000 out of 20,000, even with a fair coin in both cases. This ratio increases without bound as the sample size increases.
This is a sort of likelihood ratio approach, but again, in my gut this feels like a Bayesian judgement call more than anything else.
| null | CC BY-SA 2.5 | null | 2010-11-09T23:44:11.400 | 2010-11-09T23:44:11.400 | null | null | null | null |
4377 | 1 | 4378 | null | 1 | 848 | I want to model the probability of a binary variable x given some predictor, d. It needs two parameters:
- One parameter that sets the "break point", at which p(x=1 | d) = 0.5.
- One parameter that sets the "softness", i.e. how abruptly the probability changes around the break point.
(Sorry I'm sure there's standard terminology for this, but I'm a bit of a novice.)
A logistic regression model feels like the right approach, but the domain of d is [0, inf] (d is a distance metric). Is there a standard way to transform d to [-inf, inf] so a logistic regression can be used (e.g. f(x) = x-1/x)? Or is there another common model for this kind of scenario?
| Logistic regression with non-negative parameter | CC BY-SA 2.5 | null | 2010-11-10T02:00:03.560 | 2010-11-10T02:36:38.237 | 2010-11-10T02:29:39.833 | 1938 | 1938 | [
"logistic",
"classification"
]
|
4378 | 2 | null | 4377 | 5 | null | Logistic regression satisfies your requirements -- two parameters controlling the mid-point and the rate of change. But there is no restriction on the domain of predictors. You can just fit a logistic regression using d as it is.
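A minimal sketch with simulated data, showing how the two quantities described in the question fall out of the fitted coefficients:
```
set.seed(1)
d <- rexp(300, rate = 1/50)                  # non-negative distances
x <- rbinom(300, 1, plogis(2 - 0.05 * d))    # true break point at d = 40
fit <- glm(x ~ d, family = binomial)
-coef(fit)[1] / coef(fit)[2]                 # estimated break point: d where p(x = 1 | d) = 0.5
coef(fit)[2]                                 # slope: how abruptly p changes around the break point
```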
However, if d is highly skewed (as it might be with that domain), then taking a log of d may be useful to prevent the fit being dominated by a few observations.
If d includes exact zeros, then you obviously can't take logs. In that case, if you still think you need a transformation, then I'd use an [inverse hyperbolic sine transformation](http://robjhyndman.com/researchtips/transformations/).
| null | CC BY-SA 2.5 | null | 2010-11-10T02:36:38.237 | 2010-11-10T02:36:38.237 | null | null | 159 | null |
4379 | 2 | null | 4371 | 3 | null | The correspondence can most easily be shown using the [Envelope Theorem](http://books.google.com/books?id=NiQw5ZEw2IIC&lpg=PA177&dq=envelope%20theorem%20shadow%20price&pg=PA177#v=onepage&q=envelope%20theorem%20shadow%20price&f=false).
First, the standard Lagrangian will have an additional $\lambda \cdot t$ term. This will not affect the maximization problem if we are just treating $\lambda$ as given, so Hastie et al drop it.
Now, if you differentiate the full Lagrangian with respect to $t$, the Envelope Theorem says you can ignore the indirect effects of $t$ through $\beta$, because you're at a maximum. What you'll be left with is the Lagrange multiplier from $\lambda \cdot t$.
But what does this mean intuitively? Since the constraint binds at the maximum, the derivative of the Lagrangian, evaluated at the maximum, is the same as the derivative of the original objective. Therefore the Lagrange multiplier gives the shadow price -- the value in terms of the objective -- of relaxing the constraint by increasing $t$.
I assume this is the correspondence Hastie et al. are referring to.
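Spelled out in the notation of the question (just a sketch): the full Lagrangian of the constrained problem is
$$ L(\beta,\lambda) = \sum_{i=1}^N\Big(y_i-\beta_0-\sum_{j=1}^p x_{ij} \beta_j\Big)^2 + \lambda \Big(\sum_{j=1}^p \beta_j^2 - t\Big). $$
For fixed $\lambda$, the $-\lambda t$ term does not affect the minimizing $\beta$, which is why it can be dropped to give the penalized form; and by the Envelope Theorem the derivative of the optimal value with respect to $t$ is $-\lambda$, so (when the constraint binds) each $t$ picks out a particular $\lambda \geq 0$, and vice versa.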
| null | CC BY-SA 2.5 | null | 2010-11-10T03:54:06.940 | 2010-11-10T03:54:06.940 | null | null | 493 | null |
4382 | 2 | null | 4364 | -1 | null | The expectation plus or minus the standard deviation gives the two inflection points of the density function.
| null | CC BY-SA 2.5 | null | 2010-11-10T11:57:37.740 | 2010-11-10T11:57:37.740 | null | null | 253 | null |