Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2153 | 2 | null | 2149 | 52 | null | From Dan Simon's "Optimal State Estimation":
>
In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter.
>
The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm*. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.
>
*The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in UKF.
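To make the footnote a bit more concrete, here is a rough R sketch of how the $2n+1$ sigma points are chosen and recombined in the unscented transformation; the two-dimensional mean, covariance, nonlinear function and scaling constant are all illustrative assumptions, not anything from the book:
```
# Minimal sketch of the unscented transformation (illustrative numbers only)
m <- c(0, 0)                                   # state mean
P <- matrix(c(1, 0.3, 0.3, 0.5), 2, 2)         # state covariance
n <- length(m); kappa <- 3 - n                 # a common scaling heuristic
S <- t(chol((n + kappa) * P))                  # matrix square root of (n + kappa) * P
sigma <- cbind(m, m + S, m - S)                # 2n + 1 deterministically chosen points
W <- c(kappa, rep(0.5, 2 * n)) / (n + kappa)   # weights, summing to 1
f <- function(z) c(z[1]^2, sin(z[2]))          # some known nonlinear transformation
Y <- apply(sigma, 2, f)                        # propagate each sigma point through f
y_mean <- drop(Y %*% W)                        # recombine: estimated mean of f(x)
y_cov  <- (Y - y_mean) %*% diag(W) %*% t(Y - y_mean)   # and its covariance
```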
| null | CC BY-SA 4.0 | null | 2010-08-27T06:23:29.647 | 2020-09-06T16:19:25.457 | 2020-09-06T16:19:25.457 | 7290 | 339 | null |
2154 | 1 | 2157 | null | 2 | 852 | (I know it is not quite appropriate to frame it as "part 2", but since the question has been dormant for quite a while, I hope doing this will raise people's attention again; you may have a look at part I [here](https://stats.stackexchange.com/questions/1228/how-to-interpret-a-control-chart-containing-a-majority-of-zero-values).)
I have come across an article online talking about a case similar to mine: most of the time the case count is zero, so that sometimes, when the case number increases to one, it already shoots above the control limit and the case is considered "out of control".
Since a c-chart will be easier for my bosses to read and interpret, I wonder if the method is sound, or does anyone have a more official reference on this method? (I have googled for quite a while but can find nothing.)
The article can be found [here](http://www.spcforexcel.com/small-sample-case-for-c-and-u-control-charts).
And to further my question, I want to ask one more thing: for the assumption of the c-chart that the case count needs to follow a Poisson distribution, is it applicable to all lambda (i.e. mean case counts)?
Thanks again.
| How to interpret a control chart containing a majority of zero values? (Part 2) | CC BY-SA 2.5 | null | 2010-08-27T07:32:57.340 | 2010-08-27T14:12:33.627 | 2017-04-13T12:44:24.667 | -1 | 588 | [
"control-chart"
] |
2155 | 2 | null | 2151 | 66 | null | It seems you are looking for multi-class ROC analysis, which is a kind of multi-objective optimization covered in a [tutorial](http://www.cs.bris.ac.uk/~flach/ICML04tutorial/) at ICML'04. As with many multi-class problems, the idea is generally to carry out pairwise comparisons (one class vs. all other classes, one class vs. another class; see (1) or the Elements of Statistical Learning), and there is a recent paper by Landgrebe and Duin on that topic, [Approximating the multiclass ROC by pairwise analysis](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108.3250&rep=rep1&type=pdf), Pattern Recognition Letters 2007 28: 1747-1758. Now, for visualization purposes, I saw some papers a while ago, most of them revolving around [volume under the ROC surface](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.14.2427&rep=rep1&type=pdf) (VUS) or [cobweb diagrams](http://cskku.kkh.go.th/homework/ke/paper/visualization%20and%20analysis%20of%20classifiers%20performance%20in%20multi-class%20medical%20data.pdf).
I don't know, however, if there exists an R implementation of these methods, although I think the `stars()` function might be used for cobweb plot. I just ran across a Matlab toolbox that seems to offer multi-class ROC analysis, [PRSD Studio](http://doc.prsdstudio.com/latest/guide/ROC_analysis.html).
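As a very small base-R starting point (my own sketch, not an implementation of the methods above), you can at least compute one-vs-rest AUCs from the rank (Mann-Whitney) formulation and inspect them class by class; the class scores below are random placeholders:
```
auc_ovr <- function(scores, is_pos) {          # AUC of one class against the rest
  r <- rank(scores)
  n1 <- sum(is_pos); n0 <- sum(!is_pos)
  (sum(r[is_pos]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
set.seed(1)
labels <- iris$Species
scores <- matrix(runif(nrow(iris) * nlevels(labels)), ncol = nlevels(labels),
                 dimnames = list(NULL, levels(labels)))   # stand-in classifier scores
sapply(levels(labels), function(k) auc_ovr(scores[, k], labels == k))
```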
Other papers that may also be useful as a first start for visualization/computation:
- Visualisation of multi-class ROC surfaces
- A simplified extension of the Area under the ROC to the multiclass domain
References:
1. Allwein, E.L., Schapire, R.E. and Singer, Y. (2000). Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141.
| null | CC BY-SA 2.5 | null | 2010-08-27T08:12:12.627 | 2010-08-27T08:12:12.627 | null | null | 930 | null |
2156 | 2 | null | 1241 | 2 | null | In case you're interested in further references, an extensive list of papers is available on K.H. Zou's website, [Receiver Operating Characteristic (ROC) Literature Research](http://www.spl.harvard.edu/archive/spl-pre2007/pages/ppl/zou/roc.html).
ROC curves are also used when one is interested in comparing different classifiers' performance, with wide applications in biomedical research and bioinformatics.
| null | CC BY-SA 2.5 | null | 2010-08-27T08:22:12.153 | 2010-08-27T08:22:12.153 | null | null | 930 | null |
2157 | 2 | null | 2154 | 3 | null | C-charts basically work like this: you collect all your observations of a case count (this is an approximation of getting only observations from "normal" periods), then fit a Poisson distribution to it (so here you apply all the Poisson assumptions, mainly that each case occurrence is independent of the others), and finally test all case counts against the H0 that they come from this fitted distribution.
So, if you have a majority of zeros, it will just drive the fitted $\lambda$ towards zero; for $\lambda<0.01$ even a single case will look like something strange (at a $p$-value of 1%).
EDIT:
In the article you've linked, the whole fitting is done just by taking the mean as $\lambda$, while the testing uses just the $3\sigma$ criterion, based on the fact that a Poisson's variance is also $\lambda$. Still, it is sufficient for this case.
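For concreteness, a minimal R sketch of that procedure (the counts are made up; only the mean-as-$\lambda$ and $3\sigma$ logic matters):
```
counts <- c(0, 0, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0)    # hypothetical case counts per period
lambda_hat <- mean(counts)                          # fitted Poisson mean
ucl <- lambda_hat + 3 * sqrt(lambda_hat)            # 3-sigma upper control limit
lcl <- max(0, lambda_hat - 3 * sqrt(lambda_hat))    # lower limit, truncated at zero
ucl_exact <- qpois(0.999, lambda_hat)               # an exact Poisson alternative
c(lambda = lambda_hat, UCL = ucl, UCL_exact = ucl_exact)
```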
| null | CC BY-SA 2.5 | null | 2010-08-27T09:27:37.370 | 2010-08-27T13:37:10.033 | 2010-08-27T13:37:10.033 | null | null | null |
2158 | 2 | null | 1815 | 4 | null | [Experiments: Planning, Analysis and Optimization](http://rads.stackoverflow.com/amzn/click/0471699462) by Wu & Hamada.
I'm only a couple of chapters in, so not yet in a position to recommend confidently, but so far it looks like a good graduate text, reasonably detailed, comprehensive and up-to-date. Has more of a "no nonsense" feel than the Montgomery.
| null | CC BY-SA 2.5 | null | 2010-08-27T09:36:24.300 | 2010-08-27T09:36:24.300 | null | null | 174 | null |
2159 | 2 | null | 1595 | 19 | null | I use Python for statistical analysis and forecasting. As mentioned by others above, Numpy and Matplotlib are good workhorses. I also use ReportLab for producing PDF output.
I'm currently looking at both Resolver and Pyspread, which are Excel-like spreadsheet applications based on Python. Resolver is a commercial product but [Pyspread](http://pyspread.sourceforge.net/) is still open-source. (Apologies, I'm limited to only one link.)
| null | CC BY-SA 2.5 | null | 2010-08-27T10:10:35.860 | 2010-08-27T10:10:35.860 | null | null | 1105 | null |
2160 | 2 | null | 1815 | 3 | null | Not really a book but a gentle introduction on DoE in R: [An R companion to Experimental Design](http://cran.r-project.org/doc/contrib/Vikneswaran-ED_companion.pdf).
| null | CC BY-SA 2.5 | null | 2010-08-27T11:15:23.930 | 2010-08-27T11:15:23.930 | null | null | 930 | null |
2163 | 2 | null | 1815 | 13 | null | Ronald Fisher's [The Design of Experiments](http://en.wikipedia.org/wiki/The_Design_of_Experiments) (link is Wikipedia rather than Amazon since it is long out of print) is interesting for historical context. The book is often credited as founding the whole field, and certainly did a lot to promote things like blocking, randomisation and factorial design, though things have moved on a bit since.
As a period document it's quite fascinating, but it's also maddening. In the absence of a common terminology and notation, a lot of time is spent painstakingly explaining things in what now seems comically-stilted English. If you had to use it as a reference to look up how to calculate something you'd probably gnaw your own leg off. But the terribly polite hatchet job on some of Galton's analysis is entertaining.
(I know, I know -- how the readers of tomorrow will laugh at the archaisms of today's scientific literature...)
| null | CC BY-SA 2.5 | null | 2010-08-27T14:38:12.360 | 2010-08-27T14:38:12.360 | null | null | 174 | null |
2166 | 2 | null | 213 | 5 | null | I'm not aware that anyone is doing this, but I generally like to try [dimensionality reduction](http://en.wikipedia.org/wiki/Dimensionality_reduction) when I have a problem like this. You might look into a method from manifold learning or [non-linear dimensionality reduction](http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction).
An example would be a [Kohonen map](http://en.wikipedia.org/wiki/Kohonen_maps). A good reference for R is ["Self- and Super-organizing Maps in R: The kohonen Package"](http://www.jstatsoft.org/v21/i05/paper).
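A minimal example along the lines of that reference (the built-in `iris` data, the grid size and the topology are arbitrary choices; in older versions of the package `somgrid()` comes from `class`):
```
library(kohonen)
X <- scale(as.matrix(iris[, 1:4]))                  # numeric attributes, standardized
fit <- som(X, grid = somgrid(5, 5, "hexagonal"))    # train a 5 x 5 self-organizing map
plot(fit, type = "mapping")                         # where each observation lands on the map
```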
| null | CC BY-SA 2.5 | null | 2010-08-27T16:44:31.610 | 2010-08-27T16:44:31.610 | null | null | 5 | null |
2167 | 1 | 2168 | null | 20 | 4439 | The [kernel trick](http://en.wikipedia.org/wiki/Kernel_trick) is used in several machine learning models (e.g. [SVM](http://en.wikipedia.org/wiki/Support_vector_machine)). It was first introduced in the "Theoretical foundations of the potential function method in pattern recognition learning" paper in 1964.
The wikipedia definition says that it is
>
a method for using a linear classifier algorithm to solve a non-linear problem by mapping the original non-linear observations into a higher-dimensional space, where the linear classifier is subsequently used; this makes a linear classification in the new space equivalent to non-linear classification in the original space.
One example of a linear model that has been extended to non-linear problems is the [kernel PCA](http://en.wikipedia.org/wiki/Kernel_principal_component_analysis). Can the kernel trick be applied to any linear model, or does it have certain restrictions?
| Applying the "kernel trick" to linear methods? | CC BY-SA 2.5 | null | 2010-08-27T17:00:11.247 | 2020-10-14T20:03:08.280 | 2015-04-14T19:14:27.327 | 9964 | 5 | [
"machine-learning",
"kernel-trick"
] |
2168 | 2 | null | 2167 | 17 | null | The kernel trick can only be applied to linear models where the examples in the problem formulation appear as dot products (Support Vector Machines, PCA, etc).
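To make the dot-product point concrete, here is a bare-bones kernel PCA sketch (my own toy code; the RBF kernel and the `iris` data are arbitrary choices): the data enter only through the matrix of pairwise feature-space dot products.
```
rbf_kernel <- function(X, sigma = 1) {
  D2 <- as.matrix(dist(X))^2              # squared Euclidean distances
  exp(-D2 / (2 * sigma^2))                # K[i, j] = <phi(x_i), phi(x_j)>
}
X <- scale(as.matrix(iris[, 1:4]))
K <- rbf_kernel(X)
n <- nrow(K)
H <- diag(n) - matrix(1 / n, n, n)        # centering matrix
Kc <- H %*% K %*% H                       # centered kernel (centering in feature space)
eig <- eigen(Kc, symmetric = TRUE)
# projections onto the first two kernel principal components
scores <- eig$vectors[, 1:2] %*% diag(sqrt(pmax(eig$values[1:2], 0)))
```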
| null | CC BY-SA 2.5 | null | 2010-08-27T17:16:04.163 | 2010-08-27T17:16:04.163 | null | null | 881 | null |
2169 | 1 | 2288 | null | 6 | 403 | I am trying to compute the standard error of the sample [spectral risk measure](http://en.wikipedia.org/wiki/Spectral_risk_measure), which is used as a metric for portfolio risk. Briefly, a sample spectral risk measure is defined as
$q = \sum_i w_i x_{(i)}$, where $x_{(i)}$ are the sample order statistics, and $w_i$ is a sequence of monotonically non-increasing non-negative weights that sum to $1$. I would like to compute the standard error of $q$ (preferably not via bootstrap). I don't know much about L-estimators, but it looks to me like $q$ is a kind of L-estimator (but with extra restrictions imposed on the weights $w_i$), so this should probably be an easily solved problem.
edit: per @srikant's question, I should note that the weights $w_i$ are chosen a priori by the user, and should be considered independent from the samples $x$.
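For reference, computing the sample statistic itself is easy; below is a small sketch with an arbitrary exponential weighting scheme, plus a leave-one-out jackknife standard error as a rough non-bootstrap stopgap (I am not claiming the jackknife is the asymptotically correct answer here; that is exactly what I am asking about).
```
spectral_risk <- function(x, k = 10) {
  n <- length(x)
  u <- (seq_len(n) - 0.5) / n
  w <- exp(-k * u); w <- w / sum(w)       # non-negative, non-increasing, sums to 1
  sum(w * sort(x))                        # q = sum_i w_i x_(i)
}
set.seed(1)
x <- rnorm(250)                           # stand-in sample of portfolio returns
q <- spectral_risk(x)
jack <- sapply(seq_along(x), function(i) spectral_risk(x[-i]))
se_jack <- sqrt((length(x) - 1) * mean((jack - mean(jack))^2))
```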
| How to compute the standard error of an L-estimator? | CC BY-SA 2.5 | null | 2010-08-27T18:23:40.470 | 2010-11-04T16:51:47.097 | 2010-11-04T15:56:08.473 | 930 | 795 | [
"estimation",
"finance",
"standard-error"
] |
2170 | 1 | 2177 | null | 7 | 718 | Say some previous findings identified a curvilinear effect of X on Y (specifically, that X had a positive effect on Y, and that X^2 had a negative effect). You want to see if the same holds for your entirely different sample (although everything else between the studies, such as constructs/measures, is exactly the same). Neither the previous study nor my study is experimental (so I am not manipulating X, merely observing it). There is no explicit theoretical reason why a curvilinear effect would occur.
I will give some examples of how I would currently go about seeing if this is true, but I would like some input and people's opinions on what they believe would be a preferable method, whether any of my suggestions are inappropriate, and of course whether there are other alternatives.
Examples:
1) Simply examine the bivariate scatterplot and fit some type of smoothing line to the mean of Y over bins of X (e.g. LOESS). Although confounding could be an issue, if a curvilinear effect exists there will likely be some evidence of it in the distribution.
2) Examine the [partial regression plot](http://en.wikipedia.org/wiki/Partial_regression_plot) or other visualization techniques to identify the effect of X on Y independent of other confounding variables.
3) Use some type of model selection criteria (e.g. BIC), and determine whether a model including X^2 is preferred over a model without X^2
4) Include a model with X^2 and see if X^2 has a statistically significant regression coefficient.
Like I said any other suggestions are welcome as well.
Edit: In this context, my main concern is to identify whether the effect of X on Y is best represented in a similar manner to the previous study. While there could be other substantively interesting points to compare between studies (like the magnitude of the effect of X on Y), this is not my main concern.
| Preferred method for identifying curvilinear effect in multi-variable regression framework | CC BY-SA 2.5 | null | 2010-08-27T20:17:34.687 | 2010-09-03T17:57:15.193 | 2010-08-30T12:27:26.667 | 1036 | 1036 | [
"modeling",
"regression",
"methodology"
] |
2171 | 1 | 2172 | null | 19 | 2535 | I'm interested in learning how to create the type of visualizations you see at [http://flowingdata.com](http://flowingdata.com) and informationisbeautiful. EDIT: Meaning, visualizations that are interesting in and of themselves -- kinda like the NY Times graphics, as opposed to a quick something for a report.
What kinds of tools are used to create these -- is it mostly a lot of Adobe Illustrator/Photoshop? What are good resources (books, websites, etc.) to learn how to use these tools for data visualization in particular?
I know what I want visualizations to look like (and I'm familiar with design principles, e.g., from Tufte's books), but I have no idea how to create them.
| Resources for learning to create data visualizations? | CC BY-SA 2.5 | null | 2010-08-27T22:00:32.020 | 2019-02-26T18:41:29.610 | 2010-08-28T04:31:24.107 | 1106 | 1106 | [
"data-visualization"
] |
2172 | 2 | null | 2171 | 20 | null | Flowing data regularly discusses the tools that he uses. See, for instance:
- 40 Essential Tools and Resources to Visualize Data
- What Visualization Tool/Software Should You Use? – Getting Started
He also shows in great detail how he makes graphics on occasion, such as:
- How to Make a US County Thematic Map Using Free Tools
- How to Make a Graph in Adobe Illustrator
- How to Make a Heatmap – a Quick and Easy Solution
There are also other questions on this site:
- Recommended visualization libraries for standalone applications
- Web visualization libraries
IMO, try:
- R and ggplot2: this is a good introductory video, but the ggplot2 website has lots of resources.
- Processing: plenty of good tutorials on the homepage.
- Protovis: also a plethora of great examples on the homepage.
You can use Adobe afterwards to clean these up.
You can also look at the R `webvis` package, although it isn't as complete as `ggplot2`. From R, you can run this command to see the Playfair's Wheat example:
```
install.packages("webvis")
library(webvis)
demo("playfairs.wheat")
```
Lastly, my favorite commercial applications for interactive visualization are:
- Tableau
- Spotfire
- Qlikview
| null | CC BY-SA 2.5 | null | 2010-08-27T22:50:35.313 | 2010-08-28T00:06:51.733 | 2017-04-13T12:44:27.570 | -1 | 5 | null |
2173 | 2 | null | 2171 | 2 | null | You'll spend a lot of time getting up to speed with R.
RapidMiner is free and open source and graphical, and has plenty of good visualizations, and you can export them.
If you have money to spare, or are a university staff/student then JMP is also very freaking nice. It can make some very pretty graphs, very very easily. Can export to flash or PNG or PDF or what have you.
| null | CC BY-SA 2.5 | null | 2010-08-27T23:47:09.177 | 2010-08-27T23:47:09.177 | null | null | 74 | null |
2174 | 2 | null | 2171 | 5 | null | Already mentioned processing has a nice set of books available. See: [1](http://rads.stackoverflow.com/amzn/click/0262182629), [2](http://rads.stackoverflow.com/amzn/click/144937980X), [3](http://rads.stackoverflow.com/amzn/click/0123736021), [4](http://rads.stackoverflow.com/amzn/click/159059617X), [5](http://rads.stackoverflow.com/amzn/click/0470375485), [6](http://rads.stackoverflow.com/amzn/click/0596514557), [7](http://rads.stackoverflow.com/amzn/click/1568817169)
You will find lots of stuff on the web to help you get started with R. As a next step, ggplot2 has excellent web [documentation](http://had.co.nz/ggplot2/). I also found Hadley's [book](http://rads.stackoverflow.com/amzn/click/0387981403) very helpful.
Python might be another way to go. Especially with tools like:
- matplotlib
- NetworkX
- igraph
- Chaco
- Mayavi
All projects are well documented on the web. You might also consider peeking into [some](http://rads.stackoverflow.com/amzn/click/1430218436) [books](http://rads.stackoverflow.com/amzn/click/1847197906).
Lastly, the [Graphics of Large Datasets](http://rads.stackoverflow.com/amzn/click/0387329064) book could also be of some help.
| null | CC BY-SA 2.5 | null | 2010-08-28T00:26:17.800 | 2010-08-28T00:26:17.800 | null | null | 22 | null |
2175 | 7 | null | null | 0 | null | CrossValidated is for statisticians, data miners, and anyone else doing data analysis or interested in it as a discipline. If you have a question about
- statistical analysis, applied or theoretical
- designing experiments
- collecting data
- data mining
- machine learning
- visualizing data
- probability theory
- mathematical statistics
- statistical and data-driven computing
then you're in the right place. Anybody can ask a question, regardless of skills and experience, but some questions are still better than others. If you came here with a question to ask and are new to the site, please consult our thread on [how to ask a good question](http://meta.stats.stackexchange.com/questions/1479/how-to-ask-a-good-question-on-crossvalidated).
Our community aims to create a lasting record of great solutions to questions. For more about this and guidance about how to provide your own great answers, please read [How should questions be answered on Cross Validated?](http://meta.stats.stackexchange.com/questions/1390/how-should-questions-be-answered-on-cross-validated). Providing references to peer-reviewed literature or links to on-line resources is warmly welcomed. You can also incorporate the work of others under [fair use doctrine](http://en.wikipedia.org/wiki/Fair_use), which particularly means that you must attribute any text, images, or other material that is not originally yours.
Homework questions are welcome. Please mark them with the [homework](http://stats.stackexchange.com/questions/tagged/homework) tag. They get [somewhat special treatment](http://meta.stackexchange.com/questions/10811/how-to-ask-and-answer-homework-questions/10812#10812), because ultimately you benefit most by finding the solution yourself. The community will try to provide [guidance, hints, and useful links](http://meta.stats.stackexchange.com/q/12/919).
There are certain subjects that will probably get better responses on our sister sites. If your question is about
- Programming, ask on Stack Overflow. If the language is statistically oriented (such as R, SAS, Stata, SPSS, etc.), then decide based on the nature of your question: if it needs statistical expertise to understand or answer, ask it here; if it's about an algorithm, routine data processing, or details of the language, then please refer to the collection of links to resources we maintain.
- Mathematics, ask on math.stackexchange.com.
- Bugs in software, ask the people who produced the software.
Questions about obtaining particular datasets are off-topic (they are too specialized). The [GIS site](http://gis.stackexchange.com) welcomes inquiries about obtaining geographically related datasets.
Please note, however, that cross-posting is not encouraged on SE sites. Choose one best location to post your question. Later, if it proves better suited on another site, it can be migrated.
| null | CC BY-SA 3.0 | null | 2010-08-28T01:20:33.947 | 2013-01-10T19:43:24.013 | 2014-04-23T13:43:43.010 | -1 | -1 | null |
2176 | 2 | null | 2104 | 5 | null | I generally recommend avoiding these types of sphericity tests altogether by using modern mixed modeling methods. If you are not working with few subjects, this will give you a great deal of flexibility in modeling an appropriate covariance structure, freeing you from the strict assumption of sphericity when necessary. I infer from the `str` output that you have 16 subjects with 12 observations each (I assume balance b/c you are using classical method-of-moments tools), which should be enough data to fit a mixed model with structured covariance matrices via (restricted) maximum likelihood.
Without being close to your data I can't offer specific model recommendations, but a place to start in R would be to replace `aov` in your model specifications with `lme` (after `library(nlme)`). The reason this will work is that you have mistakenly provided an `nlme`-style random argument to `aov` (when, as @Matt Albrecht pointed out, an `Error` term would have been appropriate). In nlme, with the random argument set to `~ 1|<your grouping structure>`
and no `correlation` or `weight` arguments, you are specifying a random intercept for each group, implying the response covariance within groups is `ZGZ' + R = 1G1'+ \sigma^2 I` ==> compound symmetry with between-group variance off-diagonal and between-group variance + within-group variance on the diagonal ==> a spherical structure. From there you can begin to explore (e.g. using the built-in graphical methods), model, and test (e.g. comparing information criteria or using LRTs for nested models) the various forms of non-sphericity. Some of the tools for the modeling component are:
- Using the weights argument to model non-constant variance (diagonals) within or between groups (e.g. error variance changes between sucrose levels).
- Using the correlation argument to model non-constant covariance (off-diagonals) within groups (e.g. a structure within a group where residual errors that are closer together in time (e.g. AR1 structure) or space (e.g. Spherical structure) are more similar).
- Modeling random slopes by adding terms to the LHS of the | in the random formula.
Though the process can be complex with many potential pitfalls, I believe it will lead you to think more about the data generating mechanism, and when combined with careful graphical checks (I recommend `lattice` b/c `nlme` has excellent lattice-based plotting methods -- but `ggplot` works well too) you are likely to have not only a better scientific understanding of the process, but also less biased and more efficient estimators with which to draw inferences.
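A rough sketch of what this looks like in code; the data frame `dat` and the columns `response`, `sucrose`, `time` (integer-valued), and `subject` are placeholders for your own variables, and the particular structures shown are only examples:
```
library(nlme)
# compound-symmetry baseline: a random intercept for each subject
m0 <- lme(response ~ sucrose, random = ~ 1 | subject, data = dat, method = "REML")
# non-constant residual variance across sucrose levels
m1 <- update(m0, weights = varIdent(form = ~ 1 | sucrose))
# AR(1) correlation among residuals within a subject over time
m2 <- update(m0, correlation = corAR1(form = ~ time | subject))
AIC(m0, m1, m2)     # compare covariance structures (same fixed effects, REML)
anova(m0, m2)       # likelihood-ratio test of the nested correlation structure
```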
| null | CC BY-SA 2.5 | null | 2010-08-28T02:32:13.413 | 2010-08-28T02:32:13.413 | null | null | 1107 | null |
2177 | 2 | null | 2170 | 6 | null | It sounds as though you are interested in formal inference, and for that method 4 is best. Add X^2 to a model containing the terms you wish to control for and conduct a test to assess the strength of evidence for the quadratic term given the terms in the model. Note however that "absence of evidence is not evidence of absence" and statistical power will come into play (this is of interest if you fail to reject or the CI contains zero). You will of course also want to perform diagnostics of model assumptions prior to drawing conclusions.
Methods 1 and 2 are excellent exploratory tools and I would encourage exploring the relationship in as many meaningful ways as you wish (since you know a priori what formal test you will conduct -- a test of the quadratic term -- this will not lead to data-driven hypothesis testing). Other methods of exploration include plotting a fitted LOESS smoother or spline to the (possibly partial) relationships, fitting smoothers or parametric fits within subsets of the data (e.g. using conditioning plots), a 3d scatterplot with a fitted surface (particularly if you include continuous interactions), etc. These plots will not only help you understand the data better but can also be used as part of a less formal case for/against the quadratic (keeping in mind that humans are excellent at spotting trends in noise).
I'm not sure what model selection methods you refer to in 3, but generally automated model selection and testing do not mix. If you are referring to using information criteria (AICc, BIC, ...) note that the theory behind these is based on prediction rather than testing. So, number 4 is the most rigorous way to test the quadratic.
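In R, method 4 amounts to something like the following (hypothetical data frame `d` with outcome `y`, focal predictor `x`, and controls `z1`, `z2`):
```
fit_lin  <- lm(y ~ x + z1 + z2, data = d)
fit_quad <- lm(y ~ x + I(x^2) + z1 + z2, data = d)
summary(fit_quad)                      # t-test of the I(x^2) coefficient
anova(fit_lin, fit_quad)               # equivalent F-test for adding the quadratic term
confint(fit_quad)["I(x^2)", ]          # sign/magnitude to compare with the earlier study
par(mfrow = c(2, 2)); plot(fit_quad)   # the usual residual diagnostics
```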
Finally, 2 comments on terminology:
- 'multivariate' models are those whose response is a matrix, and 'multivariable' models are those with a vector response and multiple terms on the RHS.
- Partial residual plots differ from partial regression plots.
| null | CC BY-SA 2.5 | null | 2010-08-28T04:34:44.940 | 2010-08-28T04:34:44.940 | null | null | 1107 | null |
2178 | 2 | null | 2167 | 7 | null | Two further references from [B. Schölkopf](http://www.kyb.mpg.de/~bs):
- Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
- Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational biology. The MIT Press.
and a website dedicated to [kernel machines](http://www.kernel-machines.org/).
| null | CC BY-SA 2.5 | null | 2010-08-28T07:52:54.063 | 2010-08-28T07:52:54.063 | null | null | 930 | null |
2179 | 1 | 2180 | null | 37 | 33076 | How to obtain a variable (attribute) importance using SVM?
| Variable importance from SVM | CC BY-SA 2.5 | null | 2010-08-28T13:34:42.963 | 2017-09-14T15:17:42.047 | null | null | null | [
"machine-learning",
"feature-selection",
"svm"
] |
2180 | 2 | null | 2179 | 23 | null | If you use an L1 penalty on the weight vector, it does automatic feature selection, as the weights corresponding to irrelevant attributes are automatically set to zero. See [this paper](http://books.nips.cc/papers/files/nips16/NIPS2003_AA07.pdf). The (absolute) magnitude of each non-zero weight can give an idea of the importance of the corresponding attribute.
Also look at [this paper](http://jmlr.csail.mit.edu/papers/volume3/rakotomamonjy03a/rakotomamonjy03a.pdf) which uses criteria derived from SVMs to guide the attribute selection.
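If you instead fit an ordinary linear SVM (not the L1-penalized formulation of the first paper), a common quick way to rank attributes is by the magnitude of the primal weight vector recovered from the dual solution; a sketch with `e1071`, using a two-class subset of `iris` purely as a stand-in:
```
library(e1071)
d <- droplevels(subset(iris, Species != "setosa"))   # a binary toy problem
X <- scale(as.matrix(d[, 1:4]))
fit <- svm(X, d$Species, kernel = "linear", scale = FALSE)
w <- drop(t(fit$coefs) %*% fit$SV)        # primal weights for a linear kernel
sort(abs(w), decreasing = TRUE)           # larger |weight| = more influential attribute
```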
| null | CC BY-SA 2.5 | null | 2010-08-28T14:36:05.907 | 2010-08-28T14:36:05.907 | null | null | 881 | null |
2181 | 1 | null | null | 29 | 26770 | I'm interested in getting some books about multivariate analysis, and need your recommendations. Free books are always welcome, but if you know about some great non-free MVA book, please, state it.
| Book recommendations for multivariate analysis | CC BY-SA 2.5 | null | 2010-08-28T17:07:59.760 | 2016-06-23T19:39:00.613 | null | null | 1356 | [
"references",
"multivariate-analysis"
] |
2182 | 1 | null | null | 14 | 15016 | I need help explaining, and citing basic statistics texts, papers or other references, why it is generally incorrect to use the margin of error (MOE) statistic reported in polling to naively declare a statistical tie.
An example:
Candidate A leads Candidate B in a poll, $39 - 31$ percent, $4.5 \%$ margin-of-error for $500$ surveyed voters.
My friend reasons like so:
>
Because of the intricacies of statistical modeling, the margin of error means that A's true support could be as low as 34.5 percent and B's could be as high as 35.5 percent. Therefore, A and B are actually in a statistical dead heat.
All help is appreciated in clearly articulating the flaw in my friend's reasoning. I've tried to explain that it is incorrect to naively reject the hypothesis "A leads B" if $p_A-p_B < 2MOE$.
| Can you explain why statistical tie is not naively rejected when $p_1-p_2 < 2 \,\text {MOE}$? | CC BY-SA 3.0 | null | 2010-08-28T22:34:01.353 | 2020-02-19T01:06:57.893 | 2015-12-02T14:30:40.800 | 67822 | null | [
"polling"
] |
2183 | 2 | null | 2182 | 7 | null | My first attempt at an answer was flawed (see below for the flawed answer). The reason it is flawed is that the margin of error (MOE) that is reported applies to a candidate's polling percentage but not to the difference of the percentages. My second attempt explicitly addresses the question posed by the OP a bit better.
Second Attempt
The OP's friend reasons as follows:
- Construct the confidence interval for Candidate A and Candidate B separately using the given MOE.
- If they overlap, we have a statistical dead heat, and if they do not, then A is currently leading B.
The main issue here is that the first step is invalid. Constructing confidence intervals independently for the two candidates is not a valid step because the polling percentages for the two candidates are dependent random variables. In other words, a voter who decides not to vote for A may potentially decide to vote for B instead. Thus, the correct way to assess if the lead is significant or not is to construct a confidence interval for the difference. See the wiki as to how to compute the standard error for the [difference of polling percentages](http://en.wikipedia.org/wiki/Margin_of_error#Comparing_percentages) under some assumptions.
Flawed answer below
In my opinion the 'correct' way to think of the polling result is as follows:
>
In a survey of 500 voters, the chance that we will see a difference in lead as high as 8% is greater than 5%.
Whether you believe that 'A leads B' or 'A ties B' is then dependent on the extent to which you are willing to accept 5% as your cut-off criteria.
| null | CC BY-SA 2.5 | null | 2010-08-28T23:20:51.237 | 2010-08-29T02:31:19.680 | 2010-08-29T02:31:19.680 | null | null | null |
2184 | 2 | null | 2182 | 4 | null | Not only is that a bad way to term things but that's not even a statistical dead heat.
You don't use overlapping confidence intervals that way. If you only wanted to say whether Candidate A was going to win, then Candidate A is definitely in the lead. The lead is 8% with an MOE of about 6.4%. The confidence interval of that difference score is not double the confidence interval of the individual scores, which is what is implied by treating overlapping CIs (±MOE) around each estimate as a dead heat. Assuming equal N and variance, the MOE of the difference is sqrt(2) times 4.5, because taking the difference between the values only doubles the variance (SD squared); the confidence interval is based on the square root of the variance, so combining them gives 4.5 * sqrt(2). Since the MOE of your 8% lead is approximately 6.4%, Candidate A is in the lead.
As an aside, MOEs are very conservative and based on the 50% choice value; the formula is sqrt(0.25/n) * 2. There is a formula for calculating standard errors of difference scores that we could use as well. Applying it with the observed values rather than the 50% cutoff still gives a significant lead for Candidate A (about 7.5% MOE). I believe, given the questioner's comment and the proximity of that cutoff to the hypothetical one selected, that this was probably what they were looking for.
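A few lines of R along these lines (1.96 is the usual 95% multiplier; small differences from the numbers above come from rounding and from including the multinomial covariance term):
```
p_a <- 0.39; p_b <- 0.31; n <- 500
2 * sqrt(0.25 / n)                                    # ~0.045: conservative reported MOE
var_diff <- (p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n   # same-poll difference
moe_diff <- 1.96 * sqrt(var_diff)                     # ~0.073
c(lead = p_a - p_b, lower = (p_a - p_b) - moe_diff)   # lower CI bound stays above zero
```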
Any introduction to both confidence intervals and to power would be helpful here. Even the wikipedia article on MOE looks pretty good.
| null | CC BY-SA 4.0 | null | 2010-08-29T00:12:35.500 | 2020-02-19T01:06:57.893 | 2020-02-19T01:06:57.893 | 601 | 601 | null |
2185 | 2 | null | 665 | 7 | null | Similar to what Mark said, Statistics was historically called Inverse Probability, since statistics tries to infer the causes of an event given the observations, while probability tends to be the other way around.
| null | CC BY-SA 2.5 | null | 2010-08-29T01:35:44.567 | 2010-08-29T01:35:44.567 | null | null | 1106 | null |
2186 | 2 | null | 2181 | 17 | null | Off the top of my head, I would say that the following general purpose books are rather interesting as a first start:
- Izenman, J. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. Springer. companion website
- Tinsley, H. and Brown, S. (2000). Handbook of Applied Multivariate Statistics and Mathematical Modeling. Academic Press.
There are also many applied textbooks, like
- Everitt, B.S. (2005). An R and S-Plus® Companion to Multivariate Analysis. Springer. companion website
It is difficult to suggest specific books as there are many that are domain-specific (e.g. social sciences, machine learning, categorical data, biomedical data).
| null | CC BY-SA 2.5 | null | 2010-08-29T08:44:54.437 | 2010-08-29T08:44:54.437 | null | null | 930 | null |
2187 | 2 | null | 2181 | 9 | null | Here are some of my books on that field (in alphabetical order).
- AFIFI, A., CLARK, V. Computer-Aided Multivariate Analysis. CHAPMAN & HALL, 2000
- AGRESTI, A. Categorical Data Analysis. WILEY, 2002
- HAIR, Multivariate Data Analysis. 6th Ed.
- HÄRDLE, W., SIMAR, L. Applied Multivariate Statistical Analysis. SPRINGER, 2007.
- HARLOW, L. The Essence of Multivariate Thinking. LAWRENCE ERLBAUM ASSOCIATES, INC., 2005
- GELMAN, A., HILL, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. CAMBRIDGE UNIVERSITY PRESS, 2007.
- IZENMAN, A. J. Modern Multivariate Statistical Techniques. SPRINGER, 2008
- RENCHER, A. Methods of Multivariate analysis. SECOND ED., WILEY-INTERSCIENCE, 2007
- TABACHNICK B., FIDELL, L. Using Multivariate Statistics. 5th Ed. Pearson Education. Inc, 2007.
- TIMM, N. Applied Multivariate Analysis. SPRINGER, 2002
- YANG, K., TREWN, J. Multivariate Statistical Methods in Quality Management. MCGRAW-HILL, 2004
| null | CC BY-SA 2.5 | null | 2010-08-29T08:58:43.240 | 2010-08-29T08:58:43.240 | null | null | 339 | null |
2188 | 2 | null | 490 | 12 | null | I have a slight preference for [Random Forests](http://www.stat.berkeley.edu/~breiman/RandomForests/) by Leo Breiman & Adele Cutler for several reasons:
- it allows to cope with categorical and continuous predictors, as well as unbalanced class sample size;
- as an ensemble/embedded method, cross-validation is embedded and allows to estimate a generalization error;
- it is relatively insensitive to its tuning parameters (% of variables selected for growing a tree, # of trees built);
- it provides an original measure of variable importance and is able to uncover complex interactions between variables (although this may lead to hard to read results).
Some authors argued that it performed as well as penalized SVM or Gradient Boosting Machines (see, e.g. Cutler et al., 2009, for the latter point).
A complete coverage of its applications or advantages may be off topic, so I suggest the [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) from Hastie et al. (chap. 15) and Saeys et al. (2007) for further reading.
Last but not least, it has a nice implementation in R, with the [randomForest](http://cran.r-project.org/web/packages/randomForest/index.html) package. Other R packages also extend or use it, e.g. [party](http://cran.r-project.org/web/packages/party/index.html) and [caret](http://cran.r-project.org/web/packages/caret/index.html).
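A minimal illustration with the built-in `iris` data (a real analysis would of course look at the OOB error and tuning more carefully):
```
library(randomForest)
set.seed(1)
fit <- randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)
print(fit)         # includes the OOB error estimate mentioned above
importance(fit)    # permutation- and Gini-based variable importance
varImpPlot(fit)
```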
References:
Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer.
Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517.
| null | CC BY-SA 2.5 | null | 2010-08-29T12:04:51.210 | 2010-08-29T12:04:51.210 | null | null | 930 | null |
2189 | 2 | null | 2140 | 1 | null | I'm not sure if such a (non-parametric) permutation procedure could be applied here. Anyways, here is my idea:
```
a <- c(1.18, -0.41, -0.66, 0.98, 0.1)
b <- c(-0.36, -0.73, -1.47, 0.15, -0.31)
total <- c(a,b)
first <- combn(total, length(a))   # every possible "group a" of size length(a)
second <- apply(first, 2, function(z) total[is.na(pmatch(total, z))])   # the complementary group
var.ratio <- apply(first, 2, var) / apply(second, 2, var)   # variance ratio for each regrouping
# the first element of var.ratio is the observed ratio, i.e. the original grouping
(p.value <- length(var.ratio[var.ratio >= var.ratio[1]]) / length(var.ratio))
[1] 0.3055556
```
| null | CC BY-SA 2.5 | null | 2010-08-29T14:07:55.913 | 2010-08-29T14:41:44.697 | 2010-08-29T14:41:44.697 | 339 | 339 | null |
2190 | 1 | null | null | 2 | 304 | I have a great prediction yet I am unsure how to uncover how the results were generated?
| How to reconstruct ensemble of trees from random forest? | CC BY-SA 2.5 | 0 | 2010-08-29T16:26:44.937 | 2010-09-28T19:53:38.997 | 2010-08-29T18:45:28.027 | 71 | null | [
"classification",
"random-forest"
] |
2191 | 2 | null | 2181 | 4 | null | [Analyzing Multivariate Data](http://rads.stackoverflow.com/amzn/click/0534349749) by James Lattin, J Douglas Carroll and Paul E Green.
| null | CC BY-SA 2.5 | null | 2010-08-29T17:15:22.733 | 2010-08-29T17:15:22.733 | null | null | 174 | null |
2192 | 2 | null | 224 | 1 | null | I've used [ZedGraph](http://zedgraph.org/) for .NET. It's open source, and supports all common 2D chart types.
| null | CC BY-SA 2.5 | null | 2010-08-29T17:49:09.767 | 2010-08-29T17:49:09.767 | null | null | 956 | null |
2193 | 2 | null | 2190 | 1 | null | From the trees attributed to each class's output, you can do a tree search on the similarities. You could do it manually, but that would be as tedious as examining the weights of a neural network. So you want to find the overlaps in the decision tree structures. What to look for can vary depending on the problem.
E.g., you can check whether certain nodes are statistically more likely to be parents of other nodes, giving a hierarchical structure. You search all nodes and find whether some have certain children more often than average. A distribution test showing that those children occur together beyond random chance reveals structure in the problem you analyzed.
Best.
| null | CC BY-SA 2.5 | null | 2010-08-29T19:47:13.183 | 2010-08-29T19:47:13.183 | null | null | 1098 | null |
2194 | 2 | null | 2171 | 0 | null | R is great, but the issue is not that R is difficult to learn; it's that the documentation is impossible to search because of the one-letter name (any other name, like "Rq", would be far easier to search for). So when you have a problem, searching for a solution is a nightmare, and the documentation is not great either. Matlab or Octave would also be fine. But getting those plots purely in R or Matlab would be very, very tedious.
IMHO post-processing the visuals is the best route. A lot of the graphics on FlowingData are put through Adobe Illustrator or Gimp. It is faster. Once you get the structure of the plot, you change the details in an editor. Using R as an editor does not give you the flexibility you want; you will find yourself searching for new packages all the time.
| null | CC BY-SA 3.0 | null | 2010-08-29T19:54:09.083 | 2014-11-15T13:46:30.810 | 2014-11-15T13:46:30.810 | 22047 | 1098 | null |
2195 | 2 | null | 21 | 2 | null | I don't know about the first point. But for the second one, autoregressive (AR) functions could be simple. I would really choose a parametric method over a non-parametric one. Forecasting with an AR model is straightforward, and consensus data has lots of samples for each period, so you can get robust parameter estimates at each time point. As for the association from x_{n-1} to x_n, the function is simply a smoothed interpolation of your choice.
Changes in zones, well based on empirical data or prior belief?
State of the art in consensus? Those methods are arcane; they iterate over generations, aeons. You could use Gaussian processes, which would be a quite advanced methodology for these problems, but most in the field stick to older methods, given more 'tuning'.
Best.
| null | CC BY-SA 2.5 | null | 2010-08-29T20:01:34.300 | 2010-08-29T20:01:34.300 | null | null | 1098 | null |
2196 | 2 | null | 369 | 5 | null | You can try Latent Semantic Analysis, which basically provides a way to represent your news feeds and any term (in your case, keywords appearing in the title) in a reduced space. As it relies on Singular Value Decomposition, I suppose you may then be able to check whether there exists a particular association between those two attributes. I know this is used to find documents matching a specific set of criteria, as in information retrieval, or to construct a tree reflecting term similarity (like a dictionary) based on a large corpus (which here plays the role of the concept space).
See for a gentle introduction [An Introduction to Latent Semantic Analysis](http://lsa.colorado.edu/papers/dp1.LSAintro.pdf), by Landauer et al.
Moreover, there is an R package that implements this technique, namely [lsa](http://cran.r-project.org/web/packages/lsa/index.html).
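Since LSA is essentially a truncated SVD of a term-document matrix, here is a toy base-R sketch of the mechanics (the tiny matrix is invented; the `lsa` package wraps this up properly):
```
tdm <- matrix(c(2, 0, 1,
                0, 3, 0,
                1, 1, 0,
                0, 0, 2),
              nrow = 4, byrow = TRUE,
              dimnames = list(paste0("kw", 1:4), paste0("doc", 1:3)))
s <- svd(tdm)                                 # term-document matrix decomposition
k <- 2                                        # dimension of the reduced concept space
terms_k <- s$u[, 1:k] %*% diag(s$d[1:k])      # keyword coordinates in concept space
cosine  <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
cosine(terms_k[1, ], terms_k[3, ])            # association between keyword 1 and keyword 3
```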
| null | CC BY-SA 2.5 | null | 2010-08-29T20:42:33.390 | 2010-08-29T20:42:33.390 | null | null | 930 | null |
2197 | 1 | 2206 | null | 7 | 384 | Let's say I have a dataset with 1000 observations in 10 variables, "A" through "J." I have 1000 responses/measures for each of the first 8 variables, through "H," but only the first 500 observations for "I" are not missing, and only the last 500 observations for "J" are not missing -- there are no observations for which I have measures of both of the last two variables, I and J.
Thus, if I calculate (pairwise) correlations, I have a full correlation matrix, with only the correlation between I and J missing. Let's say I want to run Principal Component Analysis, or some other such scaling procedure on this correlation matrix.
What I think I would like to do is:
- Randomly generate (perhaps from some distribution on [-1, 1], or perhaps via sampling from existing values in the rest of the correlation matrix) an "invented" correlation between I and J.
- Put that in the correlation matrix.
- Run PCA on the correlation matrix with this invented value.
- Repeat steps 1 - 3 some large number of times.
- Assess the collective results of this large number of PCAs, looking at the mean and variance of the loadings, scores, eigenvalues, etc., based on the "pseudo-bootstrapped" iterations.
Questions:
- Is there a better way to handle (a) missing value(s) in the correlation matrix?
- Is there any precedent for replacing such (a) missing value(s) with random invented values? If so, what is it called?
- Is this related to the bootstrap?
Thanks a lot, in advance.
Edit:
Question 4. Is this a defensible approach to imputation?
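For concreteness, a minimal sketch of steps 1 to 5 above, tracking only the eigenvalues (`R` stands for the 10 x 10 pairwise correlation matrix with the I-J entry missing; draws that break positive semi-definiteness are skipped):
```
# R: observed correlation matrix with rows/columns named "A" ... "J"
n_rep <- 2000
eigs <- matrix(NA, n_rep, ncol(R))
for (b in seq_len(n_rep)) {
  Rb <- R
  Rb["I", "J"] <- Rb["J", "I"] <- runif(1, -1, 1)    # step 1: invented correlation
  ev <- eigen(Rb, symmetric = TRUE, only.values = TRUE)$values
  if (any(ev < -1e-8)) next                          # not a valid correlation matrix
  eigs[b, ] <- ev                                    # steps 2-3: eigen-analysis of the filled matrix
}
colMeans(eigs, na.rm = TRUE)                         # step 5: mean eigenvalues over admissible draws
apply(eigs, 2, sd, na.rm = TRUE)                     #         and their spread
```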
| Precedent for Bootstrap-like procedure with "invented" data? | CC BY-SA 2.5 | null | 2010-08-29T22:50:57.957 | 2010-10-10T19:36:49.337 | 2010-08-29T23:18:38.910 | 1117 | 1117 | [
"correlation",
"pca",
"bootstrap"
] |
2198 | 1 | 2202 | null | 5 | 7467 | If the probability of $A$ is $a$ and the probability of $B$ is $b$, how do I find the min/max probability of their intersection? The max value of the intersection would be $\min(a,b)$; how do I find the min?
| Find range of possible values for probability of intersection given individual probabilities | CC BY-SA 2.5 | null | 2010-08-30T00:54:36.273 | 2010-08-30T06:25:16.763 | 2010-08-30T06:25:16.763 | null | 862 | [
"probability"
] |
2199 | 2 | null | 2181 | 4 | null | Tabachnick is the most cited on Google Scholar
Hair (6th ed) has the most ratings (with a score above 4.5) on Amazon
I recommend Hair, as I've read it, and it is written in plain language.
If you are a student or staff at a university, then I would see if your school has an account with SpringerLink, as the Hardle book is on there for free.
| null | CC BY-SA 2.5 | null | 2010-08-30T01:22:20.390 | 2010-08-30T01:22:20.390 | null | null | 74 | null |
2200 | 2 | null | 2198 | 1 | null | The min is the smaller of two values: $\min(a,b) = a$ if $a < b$ and $b$ otherwise. Though I do not think this is what you are asking for...
| null | CC BY-SA 2.5 | null | 2010-08-30T02:51:15.950 | 2010-08-30T02:51:15.950 | null | null | 795 | null |
2201 | 2 | null | 2197 | 5 | null | An alternative approach would be to impute the missing raw data using a missing data replacement procedure. You could then run the PCA on the correlation matrix that resulted from the imputed dataset (see also [multiple imputation](http://www.stat.psu.edu/~jls/mifaq.html)).
Here are a few links on missing data imputation in R:
- Gelman on missing data imputation
- Quick-R has links to R packages such as Amelia II, Mice and mitools
| null | CC BY-SA 2.5 | null | 2010-08-30T03:31:36.610 | 2010-08-30T03:31:36.610 | null | null | 183 | null |
2202 | 2 | null | 2198 | 3 | null | if $a+b \le 1$, then presumably one can find disjoint sets $A$ and $B$ with ${\rm P}A = a$ and
${\rm P}B = b$. so in this case, the min is 0.
if $a+b > 1$, we get a smallest intersection by choosing $B$ to contain all of $A^C$, which has probability $1-a$ and then adding to that a piece of $A$ to bring ${\rm P}B$ up to $b$. so the piece of $A$ added in has to have probability $b - (1-a) = a+b - 1$. this last quantity is then the min probability for the intersection when $a+b > 1$.
so in any case. the min is $(a+b-1)^+$.
| null | CC BY-SA 2.5 | null | 2010-08-30T03:59:15.940 | 2010-08-30T03:59:15.940 | null | null | 1112 | null |
2203 | 2 | null | 2181 | 7 | null | JOHNSON R., WICHERN D., [Applied Multivariate Statistical Analysis](http://www.pearsonhighered.com/educator/academic/product/0,3110,0131877151,00.html), is what we used in our undergraduate Multivariate class at UC Davis, and it does a pretty good job (though it's a bit pricey).
| null | CC BY-SA 2.5 | null | 2010-08-30T04:55:48.270 | 2010-08-30T04:55:48.270 | null | null | 1118 | null |
2204 | 2 | null | 2072 | 8 | null | You could start with the following references:
- Comte (1999) "Discrete and continuous time cointegration", Journal of Econometrics.
- Ferstl (2009) "Cointegration in discrete and continuous time". Thesis.
[Citations of Comte](http://scholar.google.com/scholar?cites=9115376900789007179) may also be useful.
| null | CC BY-SA 3.0 | null | 2010-08-30T09:45:59.520 | 2014-01-13T01:53:34.157 | 2014-01-13T01:53:34.157 | 159 | 159 | null |
2205 | 2 | null | 2181 | 4 | null | Hastie, T., Tibshirani, R. and Friedman, J.: "The Elements of Statistical Learning: Data Mining, Inference, and Prediction.", Springer ([book home page](http://www-stat.stanford.edu/~tibs/ElemStatLearn/))
| null | CC BY-SA 2.5 | null | 2010-08-30T11:48:13.960 | 2010-08-30T11:48:13.960 | null | null | 961 | null |
2206 | 2 | null | 2197 | 5 | null |
- I don't know.
- What you've shown is a legitimate Monte Carlo simulation
- Bootstrap is also a Monte Carlo method, but it is more about estimating distributions.
- In general yes, especially if imputation is giving poor results. In special cases where imputation works great, no. In simple words, it will be as good as how strongly you are convinced that you cannot say more about the I&J correlation than that it lies in [-1, 1].
| null | CC BY-SA 2.5 | null | 2010-08-30T11:49:00.233 | 2010-10-10T19:36:49.337 | 2010-10-10T19:36:49.337 | 930 | null | null |
2207 | 2 | null | 354 | 9 | null | In ordinary least squares, the solution x = (A'A)^(-1) A'b to the normal equations A'A x = A'b minimizes squared error loss, and is the maximum likelihood solution under Gaussian errors.
So, largely because the math was easy in this historic case.
But generally people minimize many different [loss functions](http://en.wikipedia.org/wiki/Loss_function), such as exponential, logistic, Cauchy, Laplace, Huber, etc. These more exotic loss functions generally require a lot of computational resources and don't have closed-form solutions (in general), so they're only starting to become more popular now.
| null | CC BY-SA 2.5 | null | 2010-08-30T13:44:51.723 | 2010-11-28T11:58:46.793 | 2010-11-28T11:58:46.793 | 930 | 1119 | null |
2208 | 2 | null | 409 | 0 | null | The reason the above works for uncertainty of the mean is because of the central limit theorem.
As long as the central limit theorem holds for your application, so will the above.
| null | CC BY-SA 2.5 | null | 2010-08-30T13:55:03.287 | 2010-08-30T13:55:03.287 | null | null | 1119 | null |
2209 | 2 | null | 125 | 19 | null | Its focus isn't strictly on Bayesian statistics, so it lacks some methodology, but David MacKay's Information Theory, Inference, and Learning Algorithms made me intuitively grasp Bayesian statistics better than others - most do the how quite nicely, but I felt MacKay explained why better.
| null | CC BY-SA 2.5 | null | 2010-08-30T14:00:17.647 | 2010-09-09T06:16:29.723 | 2010-09-09T06:16:29.723 | 1119 | 1119 | null |
2210 | 2 | null | 1164 | 7 | null | While they're not mutually exclusive, I think the growing popularity of Bayesian statistics is part of it. Bayesian statistics can achieve a lot of the same goals through priors and model averaging, and tend to be a bit more robust in practice.
| null | CC BY-SA 2.5 | null | 2010-08-30T14:11:06.037 | 2010-08-30T14:11:06.037 | null | null | 1119 | null |
2212 | 2 | null | 224 | 4 | null | For javascript protovis (http://vis.stanford.edu/protovis/) is very nice.
| null | CC BY-SA 2.5 | null | 2010-08-30T14:19:24.787 | 2010-08-30T14:19:24.787 | null | null | 1119 | null |
2213 | 1 | 2218 | null | 74 | 102624 | What is the difference between a [feed-forward](http://en.wikipedia.org/wiki/Feedforward_neural_network) and [recurrent](http://en.wikipedia.org/wiki/Recurrent_neural_networks) neural network?
Why would you use one over the other?
Do other network topologies exist?
| What's the difference between feed-forward and recurrent neural networks? | CC BY-SA 3.0 | null | 2010-08-30T15:33:28.180 | 2020-01-07T17:45:25.723 | 2017-10-17T23:25:36.790 | null | 5 | [
"machine-learning",
"neural-networks",
"terminology",
"recurrent-neural-network",
"topologies"
] |
2214 | 1 | null | null | 8 | 217 | One of the purported uses of L-estimators is the ability to 'robustly' estimate the parameters of a random variable drawn from a given class. One of the downsides of using [Levy $\alpha$-stable distributions](http://en.wikipedia.org/wiki/Stable_distribution) is that it is difficult to estimate the parameters given a sample of observations drawn from the class. Has there been any work in estimating parameters of a Levy RV using L-estimators? There is an obvious difficulty in the fact that the PDF and CDF of the Levy distribution do not have a closed form, but perhaps this could be overcome by some trickery. Any hints?
| Estimating parameters of sum-stable RV via L-estimators | CC BY-SA 2.5 | null | 2010-08-30T16:36:55.427 | 2020-09-27T06:26:57.433 | 2020-09-27T06:26:57.433 | 7290 | 795 | [
"distributions",
"estimation",
"robust",
"stable-distribution"
] |
2215 | 2 | null | 2197 | 5 | null |
- I think we need to know more about the nature of the data to make recommendations on how to deal with the missing values. An exploratory task that jumps out to me is to look at the behavior of variables A through H when I is present, versus A through H when J is present. Is there anything interesting to take into account for subsequent modeling? Instead of resampling a descriptive statistic, like correlation, I would consider resampling the data itself. For example, you could use the bootstrap to create 500 new (I,J) pairs based upon the 500 values you actually have for these variables. But, again, the exploratory work may inform a resampling scheme beyond a "naive", IID approach.
- In general, as others have noted, filling in missing data goes by "imputation" and there are different techniques depending on the context. For example, in one setting I might simply use a median value, or a spline fit, but for a missing data point in a time series I might impute with a value generated from an ARMA time series model.
- Your outlined solution would be "bootstrapping" if you resample from the observed data. I think of Monte Carlo as any method that uses probabilistic sampling of data as input into a computation. When the sampling is from a non-parametric or parametric distribution that you use to model how the data was generated, I still call it Monte Carlo. But, when the sampling is done from an empirical distribution (i.e., the observed data itself, not a model of the data generating process) I call it bootstrapping.
| null | CC BY-SA 2.5 | null | 2010-08-30T16:54:55.467 | 2010-08-30T16:54:55.467 | null | null | 1080 | null |
2216 | 2 | null | 2072 | 4 | null | Although it may only be of little help, the problem you present seems to me synonymous with the "[Change of Support](http://dx.doi.org/10.1198/016214502760047140)" problem encountered when using areal units, although that work just presents a framework for what you describe as "regularize and interpolate", using a method referred to as "kriging". I don't think any of this work will help answer your question of whether estimating your missing values in the series in such a manner will bias error-correction estimates, although if some of your samples are in clustered time intervals for both series you may be able to check for yourself. You may also be interested in the technique of "co-kriging" from this field, which uses information from one source to estimate the value for another (if you're interested, I would suggest you check out the work being done by [Pierre Goovaerts](http://sites.google.com/site/goovaertspierre/pierregoovaertswebsite)).
Again I'm not sure how helpful this will be though. It may be substantially simpler to just use current time-series forecasting techniques to estimate your missing data. It won't help you decide what to estimate either.
Good luck, and keep the thread updated if you find any pertinent material. I would be interested, and you would think with the proliferation of data sources online this would become an pertinent issue for at least some research projects.
| null | CC BY-SA 2.5 | null | 2010-08-30T17:42:31.317 | 2010-08-30T17:42:31.317 | null | null | 1036 | null |
2217 | 2 | null | 423 | 226 | null | Another from [XKCD](http://xkcd.com/539/):

Mentioned [here](http://www.stat.columbia.edu/~cook/movabletype/archives/2009/02/cartoon.html) and [here](http://www.cerebralmastication.com/2009/02/box-plot-vs-violin-plot-in-r/).
| null | CC BY-SA 3.0 | null | 2010-08-30T18:02:22.737 | 2014-08-16T17:48:56.870 | 2014-08-16T17:48:56.870 | 3807 | 5 | null |
2218 | 2 | null | 2213 | 67 | null | [Feed-forward](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.multil.jpg) ANNs allow signals to travel one way only: from input to output. There are no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

[Feedback](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.neural2.jpg) (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.

Feedforward neural networks are ideally suited for modeling relationships between a set of predictor or input variables and one or more response or output variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. The multilayer feedforward neural networks, also called [multi-layer perceptrons](http://en.wikipedia.org/wiki/Multilayer_perceptron) (MLP), are the most widely studied and used neural network model in practice.
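To make the strictly one-way flow of signals concrete, here is a toy R sketch of a single forward pass through a one-hidden-layer network; the weights are arbitrary illustrative numbers, not a trained model:
```
sigmoid <- function(z) 1 / (1 + exp(-z))

x  <- c(0.2, -1.5, 0.7)                  # input vector (3 features)
W1 <- matrix(runif(3 * 4, -1, 1), 3, 4)  # input-to-hidden weights
b1 <- rep(0.1, 4)
W2 <- matrix(runif(4, -1, 1), 4, 1)      # hidden-to-output weights
b2 <- 0.1

h <- sigmoid(drop(x %*% W1) + b1)        # hidden-layer activations
y <- sigmoid(drop(h %*% W2) + b2)        # output depends only on earlier layers; no loops
y
```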
As an example of a feedback network, I can recall [Hopfield’s network](http://en.wikipedia.org/wiki/Hopfield_net). The main use of Hopfield’s network is as associative memory. An associative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the associative memory is to recall the corresponding stored pattern, and then produce a clear version of the pattern at the output. Hopfield networks are typically used for those problems with binary pattern vectors and the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded as the weights of the network.
Kohonen’s self-organizing maps (SOM) represent another neural network type that is markedly different from the feedforward multilayer networks. Unlike training in the feedforward MLP, SOM training or learning is often called unsupervised because there are no known target outputs associated with each input pattern; during the training process, the SOM processes the input patterns and learns to cluster or segment the data through adjustment of its weights (which makes it an important neural network model for dimension reduction and data clustering). A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be visually determined based on the output distribution generated by the training process. With only input variables in the training sample, SOM aims to learn or discover the underlying structure of the data.
(The diagrams are from Dana Vrajitoru's [C463 / B551 Artificial Intelligence web site](http://www.cs.iusb.edu/~danav/teach/c463/12_nn.html).)
| null | CC BY-SA 4.0 | null | 2010-08-30T18:23:24.283 | 2019-01-08T20:14:05.063 | 2019-01-08T20:14:05.063 | 79696 | 339 | null |
2219 | 1 | 2221 | null | 5 | 4414 | General question: Given a dartboard of unit radius, what's the probability that a dart randomly lands within a circle of radius 1/3 centered inside the dartboard?
Standard answer: The dart is thrown such that it hits each point with equal likelihood. The probability that it lands within the inner circle is the ratio of the areas of the two circles, which is 1/9.
Other formulation: Suppose that the dart is thrown directly at the center of the dartboard, but wind comes from a random direction. For any wind direction, the velocity of the wind pushes the dart some distance from the center of the dartboard to its perimeter, with each distance being equally likely. Each wind vector is equally likely and, for each vector, each distance is equally likely. The probability of being less than 1/3 units from the center of the dartboard is 1/3.
The definition of randomness is different for each formulation. In the standard answer, a random vector is chosen from the set $\{(x,y)\colon \; x^2+y^2 \leq 1\}$ and we ask the probability that $x^2 + y^2 \leq \frac{1}{9}$. In the other formulation, a random vector is chosen from $\{(x,y)\colon \; x^2+y^2 = 1\}$ and, on this vector, a distance $d$ is chosen uniformly on $[0,1]$. We ask the probability that $d\leq\frac{1}{3}$.
I understand the math of solving this problem, but I don't understand intuitively why these different conceptions of randomness give two different answers. It seems that both are valid means for answering the general question. Intuitively, why do they give different answers?
| Two answers to the dartboard problem | CC BY-SA 2.5 | null | 2010-08-30T18:42:30.753 | 2010-08-31T13:43:39.357 | 2010-08-31T13:43:39.357 | 8 | 401 | [
"probability",
"games"
] |
2220 | 1 | 2222 | null | 9 | 9248 | Permutation tests are significance tests based on permutation resamples drawn at random from the original data. Permutation resamples are drawn without replacement, in contrast to bootstrap samples, which are drawn with replacement. Here is [an example I did in R](https://stackoverflow.com/questions/2449226/randomized-experiments-in-r) of a simple permutation test. (Your comments are welcome)
Permutation tests have great advantages. They do not require specific population shapes such as normality. They apply to a variety of statistics, not just to statistics that have a simple distribution under the null hypothesis. They can give very accurate p-values, regardless of the shape and size of the population (if enough permutations are used).
I have also read that it is often useful to give a confidence interval along with a test, which is created using bootstrap resampling rather than permutation resampling.
Could you explain (or just give the R code) how a confidence interval is constructed (i.e. for the difference between the means of the two samples in the above example) ?
EDIT
After some googling I found [this interesting reading](https://pdfs.semanticscholar.org/a964/bc18aabbbd1c507168f3835a54a2f2cab3cd.pdf).
| How do we create a confidence interval for the parameter of a permutation test? | CC BY-SA 4.0 | null | 2010-08-30T18:47:48.040 | 2018-12-13T15:08:07.167 | 2018-12-13T15:08:07.167 | 339 | 339 | [
"confidence-interval",
"bootstrap",
"permutation-test"
] |
2221 | 2 | null | 2219 | 3 | null | Intuitively, imagine modeling the second formulation as follows: randomly select an angle to the $x$-axis, calling it $\theta$, then model the location of the dart as falling uniformly in a very thin rectangle along the line $y = (\tan\theta) x$. Approximately, the dart is in the inner circle with probability $1/3$. However, when you consider the collection of all such thin rectangles (draw them, say), you will see that they have more overlapping area near the center of the dartboard, and less overlap towards the perimeter of the dartboard. This will be more obvious as you draw the rectangles larger and larger (though the approximation will be worse). As you make the rectangles thinner, the approximation gets better, but the same principle applies: you are putting more area around the center of the circle, which increases the probability of hitting the inner circle.
| null | CC BY-SA 2.5 | null | 2010-08-30T18:53:38.447 | 2010-08-30T18:53:38.447 | null | null | 795 | null |
2222 | 2 | null | 2220 | 7 | null | It's OK to use permutation resampling, but it really depends on a number of factors. If the number of permutations is relatively low, then your estimate of the confidence interval won't be very good; the number of permutations you have is in somewhat of a gray area and is probably fine.
The only difference from your prior code is that you'd generate your samples randomly instead of with permutations. And you'd generate more of them, say 1000 for example. Get the difference scores for your 1000 replications of your experiment. Take the cutoffs that bound the middle 950 of them (95%). That's your confidence interval; it falls directly out of the bootstrap.
You've already done most of this in your example. dif.treat is 462 items long. Therefore, you need the lower 2.5% and upper 2.5% cut offs (about 11 items in on each end).
Using your code from before...
```
y <- sort(dif.treat)
ci.lo <- y[11]        # approximately the 2.5th percentile of the sorted differences
ci.hi <- y[462-11]    # approximately the 97.5th percentile
```
Off hand I'd say that 462 is a little low but you'll find a bootstrap to 10,000 comes out with scores that are little different (likely closer to the mean).
Thought I'd also add in some simple code requiring the boot library (based on your prior code).
```
library(boot)
# statistic passed to boot(): i holds the 11 resampled indices; the first 5 are
# treated as one group and the remaining 6 as the other
diff <- function(x,i) mean(x[i[6:11]]) - mean(x[i[1:5]])
b <- boot(total, diff, R = 1000)
boot.ci(b)
```
| null | CC BY-SA 2.5 | null | 2010-08-30T19:49:10.880 | 2010-08-30T20:24:38.670 | 2010-08-30T20:24:38.670 | 601 | 601 | null |
2223 | 1 | null | null | 1 | 208 | I would like to know if anyone could recommend a book that deals more with the practical issues around conducting a meta-analysis?
Thanking you in advance
Andrew Vitiello
| Books covering how to conduct a meta-anlysis | CC BY-SA 2.5 | null | 2010-08-30T19:54:28.290 | 2010-08-30T20:33:07.587 | null | null | 431 | [
"meta-analysis"
] |
2224 | 2 | null | 2223 | 4 | null | I asked this question last week and obtained two excellent answers. The question is readily accessible through links on your "meta-analysis" tag. Here's the URL:
[Looking for good introductory treatment of meta-analysis](https://stats.stackexchange.com/questions/1963/looking-for-good-introductory-treatment-of-meta-analysis)
| null | CC BY-SA 2.5 | null | 2010-08-30T20:33:07.587 | 2010-08-30T20:33:07.587 | 2017-04-13T12:44:52.277 | -1 | 919 | null |
2225 | 2 | null | 2219 | 1 | null | Think of the board as a filter -- it just converts the position of the dart into the id of the field that the dart hit. The output is therefore only a deterministic transformation of the input, and so it is no surprise that different ways of realizing the "random throw" result in different distributions of results.
The paradox itself is purely linguistic -- "random throwing" sounds fine, but the truth is that it omits crucial information about how the throwing is realized.
| null | CC BY-SA 2.5 | null | 2010-08-30T21:07:57.217 | 2010-08-30T21:07:57.217 | null | null | null | null |
2226 | 2 | null | 2219 | 3 | null | It seems to me that the fundamental issue is that the two scenarios assume different data generating processes for the position of the dart, which results in different probabilities.
The first situation's data generating process makes the dart's position uniform over the area of the disk: for instance, (a) pick $(x, y)$ uniformly from the square $[-1,1]^2$ and (b) keep the point only if $x^2+y^2 \le 1$. Then the required probability is $P(x^2 + y^2 \le \frac{1}{9})= \frac{1}{9}$.
The second situation's data generating process is as described in the question: (a) pick an angle $\theta \in [0,2\pi]$ and (b) pick a point uniformly along the radius at angle $\theta$ to the x-axis. Under this data generating process the required probability is $\frac{1}{3}$, as mentioned in the question.
As articulated by mbq, the issue is that the phrase 'randomly lands on the dartboard' is not precise enough, as it leaves the meaning of 'random' ambiguous. This is similar to asking for the probability of a coin landing heads on a random toss. The answer can be 0.5 if we assume that the coin is a fair coin, but it can be anything else (say, 0.8) if the coin is biased towards heads.
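A quick simulation of the two data generating processes makes the difference tangible (just an illustrative sketch; the 1/9 and 1/3 answers emerge directly):
```
set.seed(42)
n <- 1e5

# DGP 1: uniform over the area of the disk (rejection sampling from the square)
x <- runif(3 * n, -1, 1); y <- runif(3 * n, -1, 1)
inside <- x^2 + y^2 <= 1
mean(x[inside]^2 + y[inside]^2 <= 1/9)   # close to 1/9

# DGP 2: random direction, then a uniform distance along that direction
theta <- runif(n, 0, 2 * pi)             # the direction plays no further role
d <- runif(n, 0, 1)
mean(d <= 1/3)                           # close to 1/3
```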
| null | CC BY-SA 2.5 | null | 2010-08-30T22:37:36.583 | 2010-08-30T22:37:36.583 | null | null | null | null |
2227 | 2 | null | 423 | 97 | null | There is [this one](http://www.isds.duke.edu/~mw/ABS04/Lecture_Slides/4.Stats_Regression.pdf) on Bayesian learning:

| null | CC BY-SA 3.0 | null | 2010-08-30T23:04:11.447 | 2012-05-04T22:21:31.813 | 2012-05-04T22:21:31.813 | 919 | 881 | null |
2228 | 2 | null | 2104 | 2 | null | ez has now been updated to version 2.0. Among other improvements, the bug that caused it to fail to work for this example has been fixed.
| null | CC BY-SA 2.5 | null | 2010-08-31T00:03:12.787 | 2010-08-31T00:03:12.787 | null | null | 364 | null |
2229 | 2 | null | 1531 | 7 | null | There is no single exact confidence interval for the ratio of two proportions. Generally speaking, an exact 95% confidence interval is any interval-generating procedure that guarantees at least 95% coverage of the true ratio, irrespective of the values of the underlying proportions.
An interval formed by the Fisher Exact Test is probably overly conservative -- in that it has MORE than 95% coverage for most values of the parameters. It's not wrong but it's also wider than it has to be.
The interval used by the StatXact software with the default settings would be a better choice here -- I believe it uses some variety of Chan interval (i.e. an extremum-searching interval using the Berger-Boos procedure and a standardized statistic), but would need to check the manual to be sure.
When you ask for the "how and why" -- does this answer your question? I think we could certainly expound further about the definition of confidence intervals and how to construct one from scratch if that's what you were looking for. Or does it do the trick just to say that this is a Fisher Exact Test-based interval, one (but not the only and not the most powerful) of the confidence intervals that guarantees its coverage unconditionally?
(Footnote: Some authors reserve the word "exact" to apply only to intervals and tests where false-positives are controlled at exactly alpha, instead of merely bounded by alpha. Taken in this sense, there simply isn't a deterministic exact confidence interval for the ratio of two proportions, period. All of the deterministic intervals are necessarily approximate. Of course, even so some intervals and tests do unconditionally control Type I error and some don't.)
| null | CC BY-SA 2.5 | null | 2010-08-31T00:32:34.883 | 2010-08-31T02:24:01.933 | 2010-08-31T02:24:01.933 | 1122 | 1122 | null |
2230 | 1 | 2232 | null | 104 | 49356 | I've never really grokked the difference between these two measures of convergence. (Or, in fact, any of the different types of convergence, but I mention these two in particular because of the Weak and Strong Laws of Large Numbers.)
Sure, I can quote the definition of each and give an example where they differ, but I still don't quite get it.
What's a good way to understand the difference? Why is the difference important? Is there a particularly memorable example where they differ?
| Convergence in probability vs. almost sure convergence | CC BY-SA 2.5 | null | 2010-08-31T03:57:21.193 | 2022-11-05T12:15:45.517 | 2010-08-31T08:21:26.447 | null | 1106 | [
"probability",
"random-variable"
] |
2231 | 2 | null | 2230 | 7 | null | I understand it as follows,
Convergence in probability
The probability that the sequence of random variables differs from the target value by more than any fixed tolerance decreases asymptotically and approaches 0, but it may never actually attain 0.
Almost Sure Convergence
With probability 1, the sequence of random variables will eventually get, and stay, arbitrarily close to the target value, but you cannot predict at what point this will happen.
Almost sure convergence is a stronger condition on the behavior of a sequence of random variables because it states that "something will definitely happen" (we just don't know when). In contrast, convergence in probability states that "while something is likely to happen" the likelihood of "something not happening" decreases asymptotically but never actually reaches 0. (something $\equiv$ a sequence of random variables converging to a particular value).
The [wiki](http://en.wikipedia.org/wiki/Convergence_of_random_variables) has some examples of both which should help clarify the above (in particular see the example of the archer in the context of convergence in prob and the example of the charity in the context of almost sure convergence).
From a practical standpoint, convergence in probability is enough as we do not particularly care about very unlikely events. As an example, consistency of an estimator is essentially convergence in probability. Thus, when using a consistent estimate, we implicitly acknowledge the fact that in large samples there is a very small probability that our estimate is far from the true value. We live with this 'defect' of convergence in probability as we know that asymptotically the probability of the estimator being far from the truth is vanishingly small.
| null | CC BY-SA 2.5 | null | 2010-08-31T04:39:45.463 | 2018-10-23T18:33:37.603 | 2018-10-23T18:33:37.603 | 7290 | null | null |
2232 | 2 | null | 2230 | 110 | null | From my point of view the difference is important, but largely for philosophical reasons. Assume you have some device, that improves with time. So, every time you use the device the probability of it failing is less than before.
Convergence in probability says that the chance of failure goes to zero as the number of usages goes to infinity. So, after using the device a large number of times, you can be very confident of it working correctly, it still might fail, it's just very unlikely.
Almost sure convergence is a bit stronger. It says that the total number of failures is finite. That is, if you count the number of failures as the number of usages goes to infinity, you will get a finite number. The impact of this is as follows: As you use the device more and more, you will, after some finite number of usages, exhaust all failures. From then on the device will work perfectly.
As Srikant points out, you don't actually know when you have exhausted all failures, so from a purely practical point of view, there is not much difference between the two modes of convergence.
However, personally I am very glad that, for example, the strong law of large numbers exists, as opposed to just the weak law. Because now, a scientific experiment to obtain, say, the speed of light, is justified in taking averages. At least in theory, after obtaining enough data, you can get arbitrarily close to the true speed of light. There won't be any failures (however improbable) in the averaging process.
Let me clarify what I mean by "failures (however improbable) in the averaging process". Choose some $\delta > 0$ arbitrarily small. You obtain $n$ estimates $X_1,X_2,\dots,X_n$ of the speed of light (or some other quantity) that has some 'true' value, say $\mu$. You compute the average
$$S_n = \frac{1}{n}\sum_{k=1}^n X_k.$$
As we obtain more data ($n$ increases) we can compute $S_n$ for each $n = 1,2,\dots$. The weak law says (under some assumptions about the $X_n$) that the probability
$$P(|S_n - \mu| > \delta) \rightarrow 0$$
as $n$ goes to $\infty$. The strong law says that the number of times that $|S_n - \mu|$ is larger than $\delta$ is finite (with probability 1). That is, if we define the indicator function $I(|S_n - \mu| > \delta)$ that returns one when $|S_n - \mu| > \delta$ and zero otherwise, then
$$\sum_{n=1}^{\infty}I(|S_n - \mu| > \delta)$$
converges. This gives you considerable confidence in the value of $S_n$, because it guarantees (i.e. with probability 1) the existence of some finite $n_0$ such that $|S_n - \mu| < \delta$ for all $n > n_0$ (i.e. the average never fails for $n > n_0$). Note that the weak law gives no such guarantee.
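To see these quantities numerically, here is a small R sketch that simulates one sample path of the running averages $S_n$ and counts how often $|S_n - \mu| > \delta$ along that path (an illustration, not a proof):
```
set.seed(123)
mu <- 5; delta <- 0.05
N <- 1e5
X <- rnorm(N, mean = mu, sd = 1)     # the individual measurements
S <- cumsum(X) / seq_len(N)          # running averages S_1, ..., S_N
sum(abs(S - mu) > delta)             # number of "failures" along this sample path
tail(S, 1)                           # the final average is very close to mu
```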
| null | CC BY-SA 2.5 | null | 2010-08-31T06:53:32.093 | 2010-10-02T04:41:28.330 | 2010-10-02T04:41:28.330 | 352 | 352 | null |
2233 | 2 | null | 1531 | 13 | null | Check out the R [Epi](http://cran.r-project.org/web/packages/Epi/index.html) and [epitools](http://cran.r-project.org/web/packages/epitools/index.html) packages, which include many functions for computing exact and approximate CIs/p-values for various measures of association found in epidemiological studies, including relative risk (RR). I know there is also [PropCIs](http://cran.r-project.org/web/packages/PropCIs), but I never tried it. Bootstrapping is also an option, but it is generally these exact or approximate CIs that are provided in epidemiological papers, although most explanatory studies rely on GLMs and thus make use of the odds ratio (OR) instead of the RR (the OR is then often, and wrongly, interpreted as if it were an RR because the latter is easier to understand, but this is another story).
You can also check your results with online calculators, like the one on [statpages.org](http://statpages.org/ctab2x2.html), or [Relative Risk and Risk Difference Confidence Intervals](http://www.phsim.man.ac.uk/risk/). The latter explains how the computations are done.
By "exact" tests, we generally mean tests/CIs not relying on an asymptotic distribution, like the chi-square or standard normal; e.g. in the case of an RR, a 95% CI may be approximated as
$\exp\left[ \log(\text{rr}) - 1.96\sqrt{\text{Var}\big(\log(\text{rr})\big)} \right], \exp\left[ \log(\text{rr}) + 1.96\sqrt{\text{Var}\big(\log(\text{rr})\big)} \right]$,
where $\text{Var}\big(\log(\text{rr})\big)=1/a - 1/(a+b) + 1/c - 1/(c+d)$ (assuming a 2-way cross-classification table, with $a$, $b$, $c$, and $d$ denoting cell frequencies). The explanations given by @Keith are, however, very insightful.
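To make the formula concrete, here is a small hand-rolled R sketch with hypothetical cell counts (I use `cc` rather than `c` only to avoid masking R's `c()` function); in practice the Epi or epitools functions mentioned above will do this for you:
```
a <- 15; b <- 85; cc <- 5; d <- 95               # hypothetical 2x2 cell counts
rr      <- (a / (a + b)) / (cc / (cc + d))       # relative risk
var.log <- 1/a - 1/(a + b) + 1/cc - 1/(cc + d)   # Var(log(rr)) as above
exp(log(rr) + c(-1, 1) * 1.96 * sqrt(var.log))   # approximate 95% CI for the RR
```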
For more details on the calculation of CIs in epidemiology, I would suggest looking at Rothman and Greenland's textbook, [Modern Epidemiology](http://www.lww.com/product/?978-0-7817-5564-1) (now in its 3rd edition), [Statistical Methods for Rates and Proportions](http://www.wiley.com/remtitle.cgi?0471526290), from Fleiss et al., or [Statistical analyses of the relative risk](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1637917/), from J.J. Gart (1979).
You will generally get similar results with `fisher.test()`, as pointed out by @gd047, although in this case this function will provide you with a 95% CI for the odds ratio (which, in the case of a disease with low prevalence, will be very close to the RR).
Notes:
- I didn't check your Excel file, for the reason advocated by @csgillespie.
- Michael E Dewey provides an interesting summary of confidence intervals for risk ratios, from a digest of posts on the R mailing-list.
| null | CC BY-SA 2.5 | null | 2010-08-31T07:47:14.557 | 2010-08-31T11:14:57.893 | 2010-08-31T11:14:57.893 | 930 | 930 | null |
2234 | 1 | 2235 | null | 59 | 18867 | I would like as many algorithms that perform the same task as logistic regression. That is algorithms/models that can give a prediction to a binary response (Y) with some explanatory variable (X).
I would be glad if after you name the algorithm, if you would also show how to implement it in R. Here is a code that can be updated with other models:
```
set.seed(55)
n <- 100
x <- c(rnorm(n), 1+rnorm(n))
y <- c(rep(0,n), rep(1,n))
r <- glm(y~x, family=binomial)
plot(y~x)
abline(lm(y~x), col='red', lty=2)
xx <- seq(min(x), max(x), length=100)
yy <- predict(r, data.frame(x=xx), type='response')
lines(xx, yy, col='blue', lwd=5, lty=2)
title(main='Logistic regression with the "glm" function')
```
| Alternatives to logistic regression in R | CC BY-SA 3.0 | null | 2010-08-31T10:02:07.947 | 2016-09-26T17:18:55.707 | 2016-09-26T17:18:55.707 | 7290 | 253 | [
"r",
"regression",
"logistic",
"classification",
"predictive-models"
] |
2235 | 2 | null | 2234 | 30 | null | Popular right now are randomForest and gbm (called MART or Gradient Boosting in the machine learning literature), and rpart for simple trees. Also popular is bayesglm, which uses MAP estimation with priors for regularization.
```
install.packages(c("randomForest", "gbm", "rpart", "arm"))
library(randomForest)
library(gbm)
library(rpart)
library(arm)
r1 <- randomForest(y~x)   # numeric 0/1 y gives a regression forest; use factor(y) for classification
r2 <- gbm(y~x)            # for a 0/1 response gbm uses (or you can set) distribution="bernoulli"
r3 <- rpart(y~x)
r4 <- bayesglm(y ~ x, family=binomial)
yy1 <- predict(r1, data.frame(x=xx))
yy2 <- predict(r2, data.frame(x=xx))
yy3 <- predict(r3, data.frame(x=xx))
yy4 <- predict(r4, data.frame(x=xx), type="response")
```
| null | CC BY-SA 2.5 | null | 2010-08-31T10:13:21.270 | 2010-08-31T10:13:21.270 | null | null | 1119 | null |
2236 | 2 | null | 2234 | 15 | null | I agree with Joe, and would add:
Any classification method could in principle be used, although it will depend on the data/situation. For instance, you could also use an SVM, possibly with the popular C-SVM model. Here's an example from kernlab using a radial basis kernel function:
```
library(kernlab)
x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2))
y <- matrix(c(rep(1,60),rep(-1,60)))
svp <- ksvm(x,y,type="C-svc")
plot(svp,data=x)
```
| null | CC BY-SA 2.5 | null | 2010-08-31T13:02:43.247 | 2010-08-31T13:02:43.247 | null | null | 5 | null |
2237 | 1 | null | null | 4 | 531 | My question is based on the "forecast" package for R used in [Forecasting with Exponential Smoothing. The State Space Approach](http://rads.stackoverflow.com/amzn/click/3540719164) - Hyndman et al. 2008. I am using the `ets` function to estimate the parameters of a model.
Is there a way to obtain standard errors for the estimates of the smoothing parameters?
| Standard errors for estimates of smoothing parameters | CC BY-SA 3.0 | null | 2010-08-31T14:26:41.777 | 2012-09-02T02:55:29.943 | 2012-09-02T02:55:29.943 | 3826 | 443 | [
"time-series",
"forecasting"
] |
2238 | 2 | null | 2237 | 4 | null | Not all methods lead to analytic expressions (preferably based on proper asymptotic results) that provide this.
But the bootstrap allows you to approximate this via simulation. In essence, you generate (lots of) surrogate 'fake' data sets, employ your estimator on each of these and then use the population of your estimates to make inferences. However, doing bootstrapping in a time series context has its own challenges...
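For instance, here is a rough, hand-rolled parametric-bootstrap sketch for a simple exponential smoothing (ANN) model fitted with `ets()`. The series is just an example, and the component names (`$par`, `$sigma2`) are as in current versions of the forecast package (check `str(fit)` if yours differ); recent versions also provide a `simulate()` method for `ets` objects that could replace the hand-rolled simulator:
```
library(forecast)

y   <- as.numeric(WWWusage)           # example series; substitute your own
fit <- ets(y, model = "ANN")          # simple exponential smoothing (level only)
alpha.hat <- fit$par["alpha"]         # point estimate of the smoothing parameter
l0.hat    <- fit$par["l"]             # estimated initial level
sigma.hat <- sqrt(fit$sigma2)         # residual standard deviation

# Hand-rolled parametric bootstrap for the ANN recursion:
# simulate series from the fitted model, refit, and look at the spread of alpha.
sim_ann <- function(n, alpha, l0, sigma) {
  e <- rnorm(n, 0, sigma)
  ystar <- numeric(n)
  level <- l0
  for (t in 1:n) {
    ystar[t] <- level + e[t]          # observation equation
    level    <- level + alpha * e[t]  # state (level) update
  }
  ystar
}

B <- 200
alphas <- replicate(B,
  ets(sim_ann(length(y), alpha.hat, l0.hat, sigma.hat), model = "ANN")$par["alpha"])
sd(alphas)                            # bootstrap standard error for alpha
```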
| null | CC BY-SA 2.5 | null | 2010-08-31T14:30:58.947 | 2010-08-31T14:30:58.947 | null | null | 334 | null |
2239 | 2 | null | 2234 | 25 | null | Actually, that depends on what you want to obtain. If you perform logistic regression only for the predictions, you can use any supervised classification method suited for your data. Another possibility: discriminant analysis (lda() and qda() from package MASS).
```
library(MASS)                         # lda() and qda() live in MASS
r <- lda(y~x)                         # use qda() for quadratic discriminant analysis
xx <- seq(min(x), max(x), length=100)
pred <- predict(r, data.frame(x=xx))  # posterior probabilities over a grid of x values
yy <- pred$posterior[,2]
class.obs <- predict(r, data.frame(x=x))$class  # predicted class of each observation
color <- c("red","blue")
plot(y~x, pch=19, col=color[class.obs])
abline(lm(y~x), col='red', lty=2)
lines(xx, yy, col='blue', lwd=5, lty=2)
title(main='lda implementation')
```
On the other hand, if you need confidence intervals around your predictions or standard errors on your estimates, most classification algorithms aren't going to help you. You could use generalized additive (mixed) models, for which a number of packages are available. I often use the mgcv package of Simon Wood. Generalized additive models allow more flexibility than logistic regression, as you can use splines for modelling your predictors.
```
set.seed(55)
require(mgcv)
n <- 100
x1 <- c(rnorm(n), 1+rnorm(n))
x2 <- sqrt(c(rnorm(n,4),rnorm(n,6)))
y <- c(rep(0,n), rep(1,n))
r <- gam(y~s(x1)+s(x2),family=binomial)
xx <- seq(min(x1), max(x1), length=100)
xxx <- seq(min(x2), max(x2), length=100)
yy <- predict(r, data.frame(x1=xx,x2=xxx), type='response')
color=c("red","blue")
clustering <- ifelse(r$fitted.values < 0.5,1,2)
plot(y~x1,pch=19,col=color[clustering])
abline(lm(y~x1),col='red',lty=2)
lines(xx,yy, col='blue', lwd=5, lty=2)
title(main='gam implementation')
```
There's a whole lot more to do :
```
op <- par(mfrow=c(2,1))
plot(r,all.terms=T)
par(op)
summary(r)
anova(r)
r2 <- gam(y~s(x1),family=binomial)
anova(r,r2,test="Chisq")
```
...
I'd recommend the [book of Simon Wood about Generalized Additive Models](http://rads.stackoverflow.com/amzn/click/1584884746)
| null | CC BY-SA 2.5 | null | 2010-08-31T15:30:46.797 | 2010-08-31T15:30:46.797 | null | null | 1124 | null |
2240 | 2 | null | 2220 | 5 | null | A permutation test is an exact test, giving you an exact p-value, so bootstrapping a permutation test doesn't make sense.
Besides that, determining a confidence interval around a test statistic doesn't make sense either, as the statistic is calculated from your sample and is not an estimate of a population quantity. You determine confidence intervals around estimates such as means, but not around test statistics.
Strictly speaking, a permutation test is exact only when you can enumerate all possible permutations; for datasets so big that this is infeasible, you can draw a large random sample of permutations to approximate the cut-off for the test statistic you use. But again, this has little to do with a 95% confidence interval.
An example: I use the classic t-statistic here, but draw random permutations of the pooled data (sampling without replacement) to build an empirical null distribution of the statistic. Based on that, I calculate an empirical p-value:
```
x <- c(11.4,25.3,29.9,16.5,21.1)
y <- c(23.7,26.6,28.5,14.2,17.9,24.3)
t.sample <- t.test(x,y)$statistic

# null distribution of the t-statistic over random permutations of the pooled data
t.dist <- apply(
  replicate(1000, sample(c(x,y), 11, replace=FALSE)), 2,
  function(i) t.test(i[1:5], i[6:11])$statistic)

# two-sided testing
center   <- mean(t.dist)
t.sample <- abs(t.sample - center)
t.dist   <- abs(t.dist - center)
p.value  <- sum(t.sample < t.dist) / length(t.dist)
p.value
```
Take into account that this 2-sided testing only works for symmetrical distributions. Non-symmetrical distributions are typically only tested one-sided.
EDIT :
OK, I misunderstood the question. If you want to calculate a confidence interval on the estimate of the difference, you can use the code mentioned [here](https://stackoverflow.com/questions/3615718/bootstrapping-to-compare-two-groups/3615880#3615880) for bootstrapping within each sample. Mind you, this is a biased estimate: generally this gives a CI that is too small. Also see the example given there as a reason why you have to use a different approach for the confidence interval and the p-value.
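For completeness, a minimal sketch of that idea, reusing the x and y vectors from the code above: resample within each sample and take percentile cut-offs of the resampled differences (a crude percentile bootstrap, subject to the bias caveat just mentioned):
```
set.seed(1)
boot.diff <- replicate(5000,
  mean(sample(x, replace = TRUE)) - mean(sample(y, replace = TRUE)))
quantile(boot.diff, c(0.025, 0.975))   # percentile bootstrap CI for mean(x) - mean(y)
```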
| null | CC BY-SA 3.0 | null | 2010-08-31T15:55:59.283 | 2012-06-13T12:06:40.830 | 2017-05-23T12:39:26.523 | -1 | 1124 | null |
2241 | 2 | null | 423 | 53 | null | I found this [from a NoSQL presentation](http://www.erlang-factory.com/upload/presentations/282/neo4j-is-not-erlang-but-i-still-heart-you-2010-06-10.pdf), but the cartoon can be found directly at
[http://browsertoolkit.com/fault-tolerance.png](http://browsertoolkit.com/fault-tolerance.png)

| null | CC BY-SA 2.5 | null | 2010-08-31T15:58:42.227 | 2010-08-31T15:58:42.227 | null | null | 1080 | null |
2243 | 2 | null | 423 | 8 | null |  My favorite was created by Emanuel Parzen, appearing in [IMA preprint 663](http://www.ima.umn.edu/preprints/July90Series/663.pdf), but this illustrates my degenerate sense of humor.
Gorbachev says to Bush: "that's a very nice golfcart, Mr. President. Can it change how statistics is practiced?" etc. hahahah.
| null | CC BY-SA 2.5 | null | 2010-08-31T17:41:55.473 | 2010-08-31T17:41:55.473 | null | null | 795 | null |
2244 | 1 | 2246 | null | 9 | 1987 | What is the best package to do some survival analysis and plots in R? I have tried some tutorials but I couldn't find a definite answer.
TIA
| Kaplan-Meier, survival analysis and plots in R | CC BY-SA 2.5 | null | 2010-08-31T18:56:09.437 | 2016-06-26T20:59:39.213 | 2010-09-16T12:33:07.553 | null | 1088 | [
"r",
"data-visualization",
"survival"
] |
2245 | 1 | 2251 | null | 62 | 6671 | In his 1984 paper ["Statistics and Causal Inference"](http://www-unix.oit.umass.edu/~stanek/pdffiles/causal-holland.pdf), Paul Holland raised one of the most fundamental questions in statistics:
>
What can a statistical model say about
causation?
This led to his motto:
>
NO CAUSATION WITHOUT MANIPULATION
which emphasized the importance of restrictions around experiments that consider causation.
Andrew Gelman makes [a similar point](http://www.stat.columbia.edu/~cook/movabletype/archives/2010/08/no_understandin.html):
>
"To find out what happens when you change something, it is necessary to change it."...There are things you learn from perturbing a system that you'll never find out from any amount of passive observation.
His ideas are summarized in [this article](http://www.stat.columbia.edu/~gelman/research/published/causalreview4.pdf).
What considerations should be made when making a causal inference from a statistical model?
| Statistics and causal inference? | CC BY-SA 2.5 | null | 2010-08-31T19:13:04.883 | 2018-12-25T22:19:26.480 | 2010-09-16T06:32:59.970 | null | 5 | [
"causality"
] |
2246 | 2 | null | 2244 | 6 | null | Try CRAN Task View: [http://cran.at.r-project.org/web/views/Survival.html](http://cran.at.r-project.org/web/views/Survival.html)
| null | CC BY-SA 2.5 | null | 2010-08-31T19:41:55.037 | 2010-08-31T19:41:55.037 | null | null | null | null |
2247 | 2 | null | 2244 | 11 | null | I think that it's fair to say that the [survival](http://cran.r-project.org/web/packages/survival/) package is the "recommended" package in general, as it's included in base R (i.e. does not need to be installed separately). There are many good tutorials online for this. But you need to be more specific to get a more specific answer.
| null | CC BY-SA 2.5 | null | 2010-08-31T19:48:40.107 | 2010-08-31T19:48:40.107 | null | null | 5 | null |
2248 | 1 | null | null | 7 | 7949 | I have a series of observations that fall into bins (or "scores"); that is, the data can be 0, 1, 2, 3 or 4. There are two groups of such data, control and treated. I know the number of individuals with each score for each group.
What is the best way to determine whether these groups are different or not?
A colleague suggested just arranging the data as individual data points with the given score, and doing the analysis on those two columns of data. Since there are ten individuals per group, this is not difficult, but I do not believe that I am getting a valid answer.
| How to test group differences on a five point variable? | CC BY-SA 3.0 | null | 2010-08-31T20:57:50.950 | 2013-04-26T12:52:46.923 | 2013-04-26T12:52:46.923 | 183 | null | [
"nonparametric",
"statistical-significance",
"discrete-data",
"scales"
] |
2249 | 2 | null | 2248 | 5 | null | Three things come to mind:
- Contingency table analysis using Fisher's exact test or a chi-square test (but this will only tell you that somewhere in the table there is a significant difference; you'd have to visualize your data or do post-hoc tests to know where this difference is). Not my preferred solution.
- A non-parametric method such as the Mann-Whitney test. This will rank all of your scores within each group. A good method, but it may be underpowered.
- A parametric method (such as a t test). The disadvantage is that the assumptions of this method may be violated, especially with such a small sample. Also, the difference between 0 and 1 is not likely to be the same (depending on what you're measuring) as the difference between 3 and 4. The good news is that the t test is relatively robust to violations of its assumptions. However, as I said, the sample size is fairly small.
The best bet may be the Mann Whitney test.
| null | CC BY-SA 2.5 | null | 2010-08-31T21:33:24.810 | 2010-09-01T01:57:27.007 | 2010-09-01T01:57:27.007 | 561 | 561 | null |
2250 | 2 | null | 2248 | 6 | null | What you are looking for seems to be a test for comparing two groups where the observations are ordinal. In this case, I would suggest applying a trend test to see if there are any differences between the CTL and TRT groups.
Using a t-test would not acknowledge the fact that your data are discrete, and the Gaussian assumption may be seriously violated if the score distribution isn't symmetric, as is often the case with Likert scores (such as the ones you seem to report). I don't know if these data come from a case-control study or not, but you might also apply a rank-based method as suggested by @propfol: if it is not a matched design, the Wilcoxon-Mann-Whitney test (`wilcox.test()` in R) is fine; ask for an exact p-value, although you may encounter problems with tied observations. The asymptotic relative efficiency of the WMW test with respect to the t-test is $3/\pi$ if normality holds, and, if I remember correctly, it may be even better otherwise.
Given your sample size, you may also consider applying a permutation test (see the [perm](http://cran.r-project.org/web/packages/perm/index.html) or [coin](http://cran.r-project.org/web/packages/coin/index.html) R packages).
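For illustration, a small sketch with made-up scores for two groups of ten; the function and argument names are those of the coin package, and `distribution = "exact"` requests the exact permutation null distribution (which accounts for ties):
```
ctl <- c(0, 0, 1, 1, 1, 2, 2, 3, 3, 4)   # made-up scores, 10 individuals per group
trt <- c(1, 2, 2, 2, 3, 3, 3, 4, 4, 4)

wilcox.test(ctl, trt)                     # WMW test; note the warning about ties

library(coin)
d <- data.frame(score = c(ctl, trt),
                group = factor(rep(c("CTL", "TRT"), each = 10)))
wilcox_test(score ~ group, data = d, distribution = "exact")
```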
Check also those related questions:
- Group differences on a five point Likert item
- Under what conditions should Likert scales be used as ordinal or interval data?
| null | CC BY-SA 2.5 | null | 2010-08-31T21:49:30.223 | 2010-09-01T14:32:30.520 | 2017-04-13T12:44:48.803 | -1 | 930 | null |
2251 | 2 | null | 2245 | 33 | null | This is a broad question, but given that the Box, Hunter and Hunter quote is true, I think what it comes down to is
- The quality of the experimental design:
randomization, sample sizes, control of confounders,...
- The quality of the implementation of the design:
adherance to protocol, measurement error, data handling, ...
- The quality of the model to accurately reflect the design:
blocking structures are accurately represented, proper degrees of freedom are associated with effects, estimators are unbiased, ...
At the risk of stating the obvious I'll try to hit on the key points of each:
- is a large sub-field of statistics, but in its most basic form I think it comes down to the fact that when making causal inference we ideally start with identical units that are monitored in identical environments other than being assigned to a treatment. Any systematic differences between groups after assignment are then logically attributable to the treatment (we can infer cause). But, the world isn't that nice and units differ prior to treatment and environments during experiments are not perfectly controlled. So we "control what we can and randomize what we can't", which helps to ensure that there won't be systematic bias due to the confounders that we controlled or randomized. One problem is that experiments tend to be difficult (to impossible) and expensive and a large variety of designs have been developed to efficiently extract as much information as possible in as carefully controlled a setting as possible, given the costs. Some of these are quite rigorous (e.g. in medicine the double-blind, randomized, placebo-controlled trial) and others less so (e.g. various forms of 'quasi-experiments').
- is also a big issue and one that statisticians generally don't think about...though we should. In applied statistical work I can recall instances where 'effects' found in the data were spurious results of inconsistent data collection or handling. I also wonder how often information on true causal effects of interest is lost due to these issues (I believe students in the applied sciences generally have little-to-no training about ways that data can become corrupted - but I'm getting off topic here...)
- is another large technical subject, and another necessary step in objective causal inference. To a certain degree this is taken care of because the design crowd develop designs and models together (since inference from a model is the goal, the attributes of the estimators drive design). But this only gets us so far because in the 'real world' we end up analysing experimental data from non-textbook designs and then we have to think hard about things like the appropriate controls and how they should enter the model and what associated degrees of freedom should be and whether assumptions are met and, if not, how to adjust for violations, and how robust the estimators are to any remaining violations and...
Anyway, hopefully some of the above helps in thinking about considerations in making causal inference from a model. Did I forget anything big?
| null | CC BY-SA 3.0 | null | 2010-08-31T23:26:46.003 | 2016-09-06T21:28:32.410 | 2016-09-06T21:28:32.410 | 49647 | 1107 | null |
2252 | 2 | null | 2230 | 6 | null | If you enjoy visual explanations, there was a nice ['Teacher's Corner' article](http://dx.doi.org/doi:10.1198/tas.2009.0032) on this subject in the American Statistician (cite below). As a bonus, the authors included an [R package](http://www.biostatisticien.eu/ConvergenceConcepts/) to facilitate learning.
```
@article{lafaye09,
title={Understanding Convergence Concepts: A Visual-Minded and Graphical Simulation-Based Approach},
author={Lafaye de Micheaux, P. and Liquet, B.},
journal={The American Statistician},
volume={63},
number={2},
pages={173--178},
year={2009},
publisher={ASA}
}
```
| null | CC BY-SA 2.5 | null | 2010-09-01T00:00:38.753 | 2010-09-01T00:23:54.000 | 2010-09-01T00:23:54.000 | 159 | 1107 | null |
2253 | 2 | null | 2248 | 4 | null | This question is a little unusual because the nature of "different" is unspecified. This response is formulated in the spirit of trying to detect as many kinds of differences as possible, not just changes of location ("trend").
One approach that might have more power than most, while remaining agnostic about the relative magnitudes represented by the five groups (e.g., adopting a multinomial model) yet retaining the ordering of the groups, is a Kolmogorov-Smirnov test: as a test statistic use the size of the largest deviation between the two empirical cdfs. This is easy and quick to compute and it would also be easy to bootstrap a p-value by pooling the two sets of results.
Specifically, let the count in bin $j$ for group $i$ be $k_{ij}$. Then the empirical cdf for group $i$ is essentially the vector $\left( 0 = m_{i0}, m_{i1}, \ldots, m_{i5}=n_i \right) / n_i$ where $m_{i,j} = m_{i,j-1} + k_{ij}$, $1 \le j \le 5$. The test statistic is the sup norm of the difference of these two vectors.
Critical values ($\alpha = 0.05$) with two groups of ten individuals are going to be around 0.2 - 0.4, with the higher values occurring when the 20 values are spread evenly between the two extremes.
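A minimal sketch of the computation, with hypothetical bin counts k1 and k2 and a Monte Carlo approximation to the pooled (permutation) p-value mentioned above:
```
k1 <- c(4, 3, 2, 1, 0)                 # hypothetical counts in bins 0..4, control
k2 <- c(1, 1, 2, 3, 3)                 # hypothetical counts in bins 0..4, treated

ks.stat <- function(k1, k2)
  max(abs(cumsum(k1) / sum(k1) - cumsum(k2) / sum(k2)))

obs <- ks.stat(k1, k2)

# pool the 20 scores and reallocate 10 to each group at random, many times
scores <- c(rep(0:4, k1), rep(0:4, k2))
perm <- replicate(10000, {
  g1 <- sample(20, 10)
  ks.stat(tabulate(scores[g1] + 1, 5), tabulate(scores[-g1] + 1, 5))
})
mean(perm >= obs)                      # approximate permutation p-value
```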
| null | CC BY-SA 2.5 | null | 2010-09-01T00:27:21.270 | 2010-09-01T00:27:21.270 | null | null | 919 | null |
2254 | 5 | null | null | 0 | null |
# Usage on CV
`R`-based questions are frequently migrated between [Cross Validated](http://stats.stackexchange.com/) (CV) and [Stack Overflow](http://stackoverflow.com/) (SO). CV fields questions with statistical content or of statistical interest and SO fields questions of programming and implementation.
Your question belongs on CV when any of the following apply:
- You're not sure what the right procedure is to use on your data.
- You would like help interpreting and understanding the output of an R procedure.
- You would like help with producing a certain type of data visualization (or selecting the most appropriate one).
Your question belongs on SO when any of the following applies:
- You already know exactly what analysis or visualization or data processing you want to do, but only need advice about how to get it done in R.
- You want to know how to get your data into a format suitable for R.
- You need to know the R syntax for a particular model. [This is marginal, because often such questions are covering up deeper modeling issues. If you are unsure of the model, not just the syntax, post your question on CV!]
- r is the only applicable tag, i.e. when your question has no statistical content.
# R
[R](http://www.r-project.org) is an open source programming language and software environment for statistical computing and graphics. R is an implementation of the [S programming language](http://lib.stat.cmu.edu/S/Spoetry/Tutor/slanguage.html) combined with lexical scoping semantics inspired by [Scheme](http://stackoverflow.com/tags/scheme/info). R was created by [Ross Ihaka](http://www.stat.auckland.ac.nz/%7Eihaka/) and [Robert Gentleman](http://gentleman.fhcrc.org/) and is now developed by the [R Development Core Team](http://www.r-project.org/contributors.html). The R environment is easily extended through a packaging system on [CRAN](http://cran.r-project.org).
As of June 2020, formatting `R` code has been simplified. For small snippets within text, use backticks, as in `x <- c(1,2)`. For blocks of code, just paste (or type) them in (no initial spaces needed) but precede the block with ``` lang-R and follow it with ```, as in
``` lang-R
for (i in 1:3) {
  hist(rnorm(10*i))
}
```
## Official CRAN Documentation
The official CRAN manuals include:
- An Introduction to R (PDF/HTML): a basic introduction for beginners.
- The R Language Definition (PDF/HTML): a more technical discussion of the R language itself.
- Writing R Extensions (PDF/HTML): a development guide for R.
- R Data Import/Export (PDF/HTML): a data import and export guide.
- R Installation and Administration (PDF/HTML): an installation guide (from R source code).
- R Internals (PDF/HTML): internal structures and coding guidelines.
## Free Resources
Free resource materials include:
- The R Programming wikibook, a collaborative textbook
- The R Inferno (PDF) by Patrick Burns
- Try R - A web-based R tutorial
- R by example
- CRAN maintains an extensive list of free contributed documentation in a range of languages.
- The R Journal lists research articles and summaries of major revisions.
We also maintain a list of internet based resources for `R` on meta.CV [here](http://meta.stats.stackexchange.com/questions/793/internet-support-for-statistics-software/795#795).
## Other Resources
Recommended additional R resources include:
- Packages in the standard library
- R mailing lists
- List of CRAN Task Views, summary of useful packages per subject area.
- R chat
- RSeek: a search engine for R
- Cookbook for R: Solutions to common tasks in analyzing data
- Quick-R: accessing the power of R
- What is the most useful R trick?
- ESS - Emacs package for R that makes working with R easy
- RStudio A nice IDE for working with R
- R programming language by wikipedia
- Dataquest: Interactive R courses for data science
## Frequently Asked Questions
Lists of frequently asked questions include:
- r-faq - Tag for frequently asked R questions on SO
- R FAQ - Official list of R FAQs on CRAN
- How to make a great R reproducible example
- For resources on both R and statistics, see the following questions on CV:
What book is recommendable to start learning statistics using R at the same time?, and
The reference book for statistics with R – does it exist and what should it contain?
| null | CC BY-SA 4.0 | null | 2010-08-30T10:31:13.000 | 2022-07-26T19:57:40.550 | 2022-07-26T19:57:40.550 | 919 | null | null |
2255 | 2 | null | 93 | 3 | null | This is probably a stupid answer (I am new here), but if you want to estimate the hazard function from observations of an initial population that slowly died away (i.e. had events and then were censored), isn't that what the Nelson-Aalen estimator was built to do?
We could have another conversation about the reliability of the available classical confidence intervals -- my understanding is that there basically do not exist functioning exact confidence intervals that guarantee their coverage even over small sample sizes, since such an interval would need to work over all distributions of censoring time. (Maybe the problem is simpler when individuals are always censored exactly after their first event.) And mapping out the coverage of an approximate interval precisely would take work.
But if you just need a point estimate, the Nelson-Aalen estimator seems to do the trick. (It's a lot like the Kaplan-Meier estimate for the survival function...)
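For what it's worth, a minimal sketch with the survival package and made-up event/censoring times; `plot(..., fun = "cumhaz")` shows the cumulative-hazard scale, and current versions of survival also expose the Nelson-Aalen type estimate directly in the `cumhaz` component:
```
library(survival)

# Made-up follow-up times and event indicators (1 = event, 0 = censored)
time   <- c(2, 3, 3, 5, 6, 7, 8, 9, 12, 15)
status <- c(1, 1, 0, 1, 1, 0, 1, 0, 1, 0)

fit <- survfit(Surv(time, status) ~ 1)
plot(fit, fun = "cumhaz", xlab = "time", ylab = "cumulative hazard")
cbind(time = fit$time, cumhaz = fit$cumhaz)   # Nelson-Aalen type cumulative hazard
```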
If you want to calculate an a posteriori distribution on a whole family of possible hazard functions, and your prior is that they are drawn from the Gaussian processes with certain statistics, can you explain further what the difficulty is? If there isn't agreement on the covariance matrix, then that needs to be part of the prior -- that the covariance matrix is drawn from some distribution. You're not going to get around having to state a prior if the goal is a posterior.
| null | CC BY-SA 2.5 | null | 2010-09-01T03:35:33.823 | 2010-09-01T03:35:33.823 | null | null | 1122 | null |
2256 | 1 | 2257 | null | 5 | 5395 | I have a dataset for which I have performed an MDS and visualized the results using the scatterplot3d library. However, I would like to see the names of the points on the 3D plot. How do I accomplish that? Each column belongs to a certain group, and I would like to see which points belong to which groups on the 3D plot.
```
library(scatterplot3d)
#generate a distance matrix of the data
d <- dist(data)
#perform the MDS on 3 dimensions and include a Goodness-of-fit (GOF)
fit.mds <- cmdscale(d,eig=TRUE, k=3) # k is the number of dimensions; 3 in this case
#Assign names x,y,z to the result vectors (dimension numbers)
x <- fit.mds$points[,1]
y <- fit.mds$points[,2]
z <- fit.mds$points[,3]
plot3d <- scatterplot3d(x,y,z,highlight.3d=TRUE,xlab="",ylab="",pch=16,main="Multidimensional Scaling 3-D Plot",col.axis="blue")
```
| Adding labels to points using mds and scatter3d package with R | CC BY-SA 2.5 | null | 2010-09-01T05:55:30.923 | 2010-09-18T21:56:35.090 | 2010-09-18T21:56:35.090 | 930 | 18462 | [
"r",
"multidimensional-scaling"
] |
2257 | 2 | null | 2256 | 5 | null | Basically, what you need is to store your `scatterplot3d` in a variable and reuse it like this:
```
library(scatterplot3d)
x <- replicate(10,rnorm(100))
x.mds <- cmdscale(dist(x), eig=TRUE, k=3)
s3d <- scatterplot3d(x.mds$points[,1:3])
text(s3d$xyz.convert(0,0,0), labels="Origin")
```
Replace the coordinates and text by whatever you want to draw. You can also use a color vector to highlight the groups of interest.
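For example, continuing the snippet above (the grouping factor grp is hypothetical, with one entry per point):
```
grp <- factor(rep(1:2, each = 50))            # hypothetical grouping, one entry per point
s3d <- scatterplot3d(x.mds$points[,1:3],
                     color = c("red", "blue")[grp], pch = 16)
lab <- s3d$xyz.convert(x.mds$points[,1], x.mds$points[,2], x.mds$points[,3])
text(lab$x, lab$y, labels = 1:nrow(x.mds$points), cex = 0.6, pos = 3)
```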
The `R.basic` package, from [Henrik Bengtsson](http://www.braju.com/R/), seems to provide additional facilities to customize 3D plots, but I never tried it.
| null | CC BY-SA 2.5 | null | 2010-09-01T06:11:00.007 | 2010-09-01T06:11:00.007 | null | null | 930 | null |
2258 | 1 | 2279 | null | 1 | 433 | Given a function $f$ mapping between two sample spaces $S_1$ and $S_2$, where $(S_2, F_2)$ is a measurable space, how do I show that the collection of preimages $f^{-1}(F_2) = \{ f^{-1}(A) : A \in F_2 \}$ is a $\sigma$-algebra on $S_1$, i.e. that the preimage of the measurable space is measurable?
| How to prove that preimage of measurable space is measurable? | CC BY-SA 2.5 | null | 2010-09-01T07:09:34.870 | 2010-09-02T00:32:21.437 | 2010-09-02T00:32:21.437 | 159 | 862 | [
"probability"
] |