Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
856 | 1 | 1306 | null | 13 | 6485 | Does anyone know of a variation of Fisher's Exact Test which takes weights into account? For instance [sampling weights](http://www.measuredhs.com/help/Datasets/sampling_weights.htm).
So instead of the usual 2x2 cross table, every data point has a "mass" or "size" value weighing the point.
Example data:
```
A B weight
N N 1
N N 3
Y N 1
Y N 2
N Y 6
N Y 7
Y Y 1
Y Y 2
Y Y 3
Y Y 4
```
Fisher's Exact Test then uses this 2x2 cross table:
```
A\B N Y All
N 2 2 4
Y 2 4 6
All 4 6 10
```
If we took the weight as an 'actual' number of data points, this would result in:
```
A\B N Y All
N 4 13 17
Y 3 10 13
All 7 23 30
```
But that would result in much too high a confidence. One data point changing from N/Y to N/N would make a very large difference in the statistic.
Plus, it wouldn't work if any weight contained fractions.
| Fisher's Exact Test with weights? | CC BY-SA 2.5 | null | 2010-07-27T20:24:45.767 | 2011-04-24T08:04:05.637 | 2010-07-29T15:21:16.363 | 506 | 506 | [
"hypothesis-testing"
] |
857 | 2 | null | 775 | 6 | null | Operations Research began during wartime in the 1940s with scientists and others addressing problems in Radar operations, Anti-Submarine Warfare (ASW), and air operations. It is really a methodology to help decision makers choose a course of action by using an analytic framework that includes statistics, linear and non-linear programming, game theory, decision theory, etc. Statistics is one of many tools it uses.
| null | CC BY-SA 2.5 | null | 2010-07-27T20:25:16.327 | 2010-07-27T20:25:16.327 | null | null | 482 | null |
858 | 2 | null | 837 | 3 | null | I'm not sure why this doesn't work with `lrm`. However, R does Logistic Regression just fine with its own internal functions. See GLM. Here's your model, working...
```
summary(glm(am~1, data = mtcars, family=binomial(link=logit)))
```
So, unless you need something that `lrm()` from the Design package provides, use `glm` with the binomial logit link.
| null | CC BY-SA 3.0 | null | 2010-07-27T20:33:48.787 | 2013-08-09T00:48:47.660 | 2013-08-09T00:48:47.660 | 7290 | 485 | null |
859 | 1 | 867 | null | 12 | 5991 | Sorry for the verbose background to this question:
Occasionally in investigations of animal behaviour, an experimenter is interested in the amount of time that a subject spends in different, pre-defined zones in a test apparatus. I've often seen this sort of data analyzed using ANOVA; however, I have never been entirely convinced of the validity of such analyses, given that ANOVA assumes the observations are independent, and they never actually are independent in these analyses (since more time spent in one zone means that less is spent in other zones!).
For example,
>
D. R. Smith, C. D. Striplin, A. M. Geller, R. B. Mailman, J. Drago, C. P. Lawler, M. Gallagher, Behavioural assessment of mice lacking D1A dopamine receptors, Neuroscience, Volume 86, Issue 1, 21 May 1998, Pages 135-146
In the above article, they reduce the degrees of freedom by 1 in order to compensate for the non-independence. However, I am not sure how such a manipulation can actually ameliorate this violation of ANOVA assumptions.
Perhaps a chi-squared procedure might be more appropriate? What would you do to analyze data like this (preference for zones, based on time spent in zones)?
Thanks!
| ANOVA with non-independent observations | CC BY-SA 2.5 | null | 2010-07-27T21:07:20.127 | 2014-05-21T17:36:11.060 | 2010-12-21T17:06:54.740 | 930 | 445 | [
"anova"
] |
860 | 2 | null | 834 | 7 | null | Let $f$ be your true distribution, and $g$ the family from which you are trying to fit your data. Then $\theta$, the maximum likelihood estimator of parameters of $g$, is a random variable. You could formulate model selection as finding the distribution family $g$ that minimizes the expected KL divergence between $f$ and $g(\theta)$, which can be written as
$$\text{Entropy}(f)-E_x E_y[\log(g(x|\theta(y)))]$$
Since you are minimizing over $g$, the Entropy($f$) term doesn't matter and you look for $g$ that maximizes $E_x E_y[\log(g(x|\theta(y)))]$.
Let $L(\theta(y)|y)$ be the likelihood of data $y$ according to $g(\theta)$. You could estimate $E_x E_y[\log(g(x|\theta(y)))]$ as $\log(L(\theta(y)|y))$ but that estimator is biased.
Akaike showed that when $f$ belongs to the family $g$ with dimension $k$, the following estimator is asymptotically unbiased:
$$\log(L(\theta(y)|y))-k$$
Burnham gives more details in this [paper](http://faculty.washington.edu/skalski/classes/QERM597/papers_xtra/Burnham%20and%20Anderson.pdf), and a blog [post](http://www.emakalic.org/blog/?p=26) by Enes Makalic offers further explanation and references.
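As a rough numerical illustration of that estimator (not part of the original derivation), here is a base-R sketch that fits two candidate families to the same data by maximum likelihood and compares the bias-corrected log-likelihoods $\log L - k$; the gamma "truth", the candidate families and the sample size are all arbitrary choices.
```
set.seed(1)
y <- rgamma(200, shape = 2, rate = 1)   # pretend this is data from the unknown f

## Candidate g1: normal family (k = 2); MLE is the sample mean and the /n standard deviation
mu <- mean(y); sigma <- sqrt(mean((y - mu)^2))
ll_normal <- sum(dnorm(y, mu, sigma, log = TRUE))

## Candidate g2: exponential family (k = 1); MLE of the rate is 1 / mean
ll_exp <- sum(dexp(y, rate = 1 / mean(y), log = TRUE))

## Asymptotically unbiased estimates of E_x E_y[log g(x | theta(y))]
c(normal = ll_normal - 2, exponential = ll_exp - 1)
## Multiplying by -2 turns these into the familiar AIC = -2 log L + 2k
```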
| null | CC BY-SA 2.5 | null | 2010-07-27T21:51:43.650 | 2010-11-18T00:36:44.833 | 2010-11-18T00:36:44.833 | 159 | 511 | null |
861 | 2 | null | 743 | 3 | null | There is a free book on Geostatistical Mapping with R [here](http://spatial-analyst.net/book/) it might help your problem.
| null | CC BY-SA 2.5 | null | 2010-07-27T22:33:05.287 | 2010-07-27T22:33:05.287 | null | null | 481 | null |
862 | 2 | null | 798 | 146 | null | The [Freedman-Diaconis](https://en.wikipedia.org/wiki/Freedman%E2%80%93Diaconis_rule) rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$. So the number of bins is $(\max-\min)/h$, where $n$ is the number of observations, max is the maximum value and min is the minimum value.
In base R, you can use:
```
hist(x, breaks="FD")
```
For other plotting libraries without this option (e.g., `ggplot2`), you can calculate binwidth as:
```
bw <- 2 * IQR(x) / length(x)^(1/3)
### for example #####
ggplot() + geom_histogram(aes(x), binwidth = bw)
```
| null | CC BY-SA 4.0 | null | 2010-07-28T00:23:22.107 | 2019-01-31T23:55:44.507 | 2019-01-31T23:55:44.507 | 159 | 159 | null |
863 | 2 | null | 859 | 5 | null | Mike,
I agree that an ANOVA based on total time probably isn't the correct approach here. Further, I'm not convinced that chi-square solves your problem. Chi-square will respect the idea that you can't be in two locations at the same time, but it doesn't address the problem that there are likely dependencies between time N and time N+1. In regards to this second issue, I see some analogies between your situation and what people run into with eye and mouse tracking data. A multinomial model of some sort may serve your purposes well. Unfortunately, the details of that type of model are beyond my expertise. I'm sure some statistics book somewhere has a nice little primer on that topic, but off the top of my head I'd point you towards:
- Barr D.J. (2008) Analyzing ‘visual world’ eyetracking data using multilevel logistic regression. Journal of Memory and Language, Special Issue: Emerging Data Analysis (59) pp 457-474
- https://r-forge.r-project.org/projects/gmpm/ is a non-parametric approach to the same issue being developed by Dr. Barr
If anything, both of those sources should be more than complete because they get into how to analyze the time course of the position.
| null | CC BY-SA 2.5 | null | 2010-07-28T00:29:41.487 | 2010-07-28T00:29:41.487 | null | null | 196 | null |
864 | 2 | null | 859 | 3 | null | I am going to suggest an answer that is very different from that of a traditional ANOVA. Let T be the total time that is available for an animal to spend in all the zones. You could define T as the total amount of waking time or some such. Suppose that you have J zones. Then by definition you have:
Sum T_j = T
You could normalize the above by dividing the lhs and the rhs by T and you get
Sum P_j = 1
where P_j is the proportion of time that an animal spends in zone j.
Now the question you have is if P_j is significantly different from 1 / J for all j.
You could assume that P_j follows a [dirichlet distribution](http://en.wikipedia.org/wiki/Dirichlet_distribution) and estimate two models.
Null Model
Set the parameters of the distribution such that $E[P_j] = 1/J$. (Setting all the parameters of the distribution to 1 will do.)
Alternative Model
Set the parameters of the distribution to be a function of zone-specific covariates. You could then estimate the model parameters.
You would choose the alternative model if it outperforms the null model on some criterion (e.g., a likelihood ratio).
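To make the comparison concrete, here is a hedged base-R sketch of that likelihood-ratio idea; the number of zones, the simulated proportion vectors and the use of a plain chi-squared reference for the LR statistic are illustrative assumptions rather than part of the original answer.
```
## Dirichlet log-density written out by hand (proportions must be strictly positive)
ldirich <- function(p, alpha) {
  lgamma(sum(alpha)) - sum(lgamma(alpha)) + sum((alpha - 1) * log(p))
}

## Toy data: time-proportions over J = 3 zones for 20 animals (each row sums to 1)
set.seed(42)
J <- 3; n <- 20
raw <- matrix(rgamma(n * J, shape = c(1, 2, 3)), ncol = J, byrow = TRUE)
P <- raw / rowSums(raw)

## Null model: alpha = (1, ..., 1), so that E[P_j] = 1/J
ll_null <- sum(apply(P, 1, ldirich, alpha = rep(1, J)))

## Alternative model: estimate alpha by maximum likelihood (optimized on the log scale)
negll <- function(log_alpha) -sum(apply(P, 1, ldirich, alpha = exp(log_alpha)))
fit <- optim(rep(0, J), negll)
ll_alt <- -fit$value

## Likelihood-ratio statistic, compared (approximately) to a chi-squared with J degrees of freedom
lr <- 2 * (ll_alt - ll_null)
c(LR = lr, p.value = pchisq(lr, df = J, lower.tail = FALSE))
```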
| null | CC BY-SA 2.5 | null | 2010-07-28T01:00:22.877 | 2010-07-28T01:00:22.877 | null | null | null | null |
865 | 2 | null | 225 | 1 | null | Just to expand on Rob's answer a bit, suppose that we want to know the cumulative distribution function (CDF) of the highest value of $N$ independent draws from a standard normal distribution, $X_1, ..., X_N$. Call this highest value $Y_1$, the first order statistic. Then the CDF is:
$$ \begin{align*}P(Y_1 < x) &= P(\max(X_1, ..., X_N) < x) \\
&= P(X_1 < x, ..., X_N < x)
\\
&= P(X_1 < x) \cdot \cdot \cdot P(X_N < x) \\
&= P(X < x)^{100},
\end{align*} $$
where the second line follows by independence of the draws. We can also write this as
$$F_{Y_1}(x) = F_X(x)^{100},$$
where $F$ represents the CDF and $f$ represents the PDF of the random variable given as a subscript to this function.
Rob uses the standard notation that $\Phi(x)$ is defined as $P(X < x)$ for a standard normal---i.e., $\Phi(x)$ is the standard normal CDF.
The probability density function (PDF) of the first order statistic is just the derivative of the CDF with respect to $X$:
$$f_{Y_1}(x) = 100 \cdot F_X(x)^{99} f_X(x)$$
the CDF at $x$ raised to 99 (that is, $N-1$) times the PDF at $x$ times 100 (that is, $N$).
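A quick simulation check of this density (not in the original answer) can be run in R, taking $N = 100$ as in the formulas above:
```
set.seed(1)
N <- 100
maxima <- replicate(10000, max(rnorm(N)))   # empirical distribution of the maximum

hist(maxima, breaks = 50, freq = FALSE,
     main = "Maximum of 100 standard normal draws", xlab = "x")
## Theoretical density of the first order statistic: N * F(x)^(N-1) * f(x)
curve(N * pnorm(x)^(N - 1) * dnorm(x), add = TRUE, lwd = 2)
```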
| null | CC BY-SA 3.0 | null | 2010-07-28T01:06:56.963 | 2012-04-01T04:18:55.447 | 2012-04-01T04:18:55.447 | 401 | 401 | null |
866 | 1 | 876 | null | 206 | 165951 | Say I want to estimate a large number of parameters, and I want to penalize some of them because I believe they should have little effect compared to the others. How do I decide what penalization scheme to use? When is ridge regression more appropriate? When should I use lasso?
| When should I use lasso vs ridge? | CC BY-SA 2.5 | null | 2010-07-28T01:10:18.423 | 2017-10-11T16:20:05.347 | 2010-08-09T12:25:13.010 | 159 | 455 | [
"regression",
"lasso",
"ridge-regression"
] |
867 | 2 | null | 859 | 3 | null | (Caveat Emptor: I'm not an expert in this area)
If you just want to talk about differences in time spent per location, then submitting the "time-per-location" data as counts in a multinomial mixed model (see the MCMCglmm package for R), using subject as a random effect, should do the trick.
If you want to talk about differences in location preference through time, then maybe bin time to reasonable intervals (maybe to the resolution of your timing device?), classify each interval according to the mouse's location at that time (eg. if 3 locations, each interval gets labelled either 1, 2, or 3), and again use a multinomial mixed effects model with subject as a random effect but this time add interval as a fixed effect (though possibly only after factorizing interval, which drops power but should help capture non-linearities through time).
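For what it's worth, here is a rough base-R sketch of just the binning step described above; the `track` data frame, the zone labels and the 10-second bin width are made up, and the MCMCglmm call itself is omitted.
```
## One (time, zone) record per second for each of two subjects -- all simulated
set.seed(1)
track <- data.frame(
  subject = rep(1:2, each = 100),
  time    = rep(0:99, times = 2),                      # seconds
  zone    = sample(c("A", "B", "C"), 200, replace = TRUE)
)

## Bin time into 10-second intervals and label each interval with the zone
## occupied at its first recorded time point
track$interval <- cut(track$time, breaks = seq(0, 100, by = 10),
                      right = FALSE, include.lowest = TRUE)
binned <- aggregate(zone ~ subject + interval, data = track,
                    FUN = function(z) z[1])
head(binned)   # one zone label per subject x interval, ready for a multinomial model
```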
| null | CC BY-SA 2.5 | null | 2010-07-28T01:17:57.410 | 2010-07-28T01:17:57.410 | null | null | 364 | null |
868 | 1 | 917 | null | 4 | 2824 | I created a quick fun Excel Spreadsheet tonight to try and predict which video games I'll enjoy if I buy them. I'm wondering if this quick example makes sense from a Logistic Regression perspective and if I am computing all of the values correctly.
Unfortunately, if I did everything correctly I doubt I have much to look forward to on my XBOX or PS3 ;)
I laid out a few categories and weighted them like so (Real spreadsheet lists twice as many or so):
`
4 4 3 1`
Visually Stunning Exhilarating Artistic Sporty
Then I went through some games I have and rated them in each category (ratings of 0-4). I then set a separate cell to be the value of Beta_0 and tuned that until the resulting percentages all looked about right.
Next I entered in my expected ratings for the new games I was looking forward to and got percentages for those.
Example:
Beta_0 := -35
`
4 4 3 1`
Visually Stunning Exhilarating Artistic Sporty
4 4 0 1
Would be calculated as
P = 1 / [1 + e^(-35 + (4*4 + 4*4 + 3*0 + 1*1))]
P = 88.1%
If I were to automate the regression am I correct in thinking I'd be tuning Beta_0 to make it so the positive training examples come out high and the negative training examples come out low?
I'm completely new to this (just started today thanks to this site actually!) so please have no concern about bruising my ego, I'm eager to learn more.
Thanks!
| Training a Logistic Regression Model | CC BY-SA 2.5 | null | 2010-07-28T03:15:34.473 | 2011-01-07T01:55:24.827 | null | null | 9426 | [
"logit",
"logistic"
] |
869 | 2 | null | 868 | 2 | null | Usually in logistic regression you'd want "successes" to be 1 and "failures" to be 0, but so long as you are consistent in how you enter your data and interpret it, the coefficients don't really care.
| null | CC BY-SA 2.5 | null | 2010-07-28T03:49:27.933 | 2010-07-28T03:49:27.933 | null | null | 196 | null |
870 | 1 | 956 | null | 28 | 32628 | Given a list of p-values generated from independent tests, sorted in ascending order, one can use the [Benjamini-Hochberg procedure](http://www.math.tau.ac.il/%7Eybenja/MyPapers/benjamini_hochberg1995.pdf) for [multiple testing correction](https://en.wikipedia.org/wiki/False_discovery_rate#Independent_tests). For each p-value, the Benjamini-Hochberg procedure allows you to calculate the False Discovery Rate (FDR) for each of the p-values. That is, at each "position" in the sorted list of p-values, it will tell you what proportion of those are likely to be false rejections of the null hypothesis.
My question is, are these FDR values to be referred to as "[q-values](https://projecteuclid.org/journals/annals-of-statistics/volume-31/issue-6/The-positive-false-discovery-rate--a-Bayesian-interpretation-and/10.1214/aos/1074290335.full)", or as "corrected p-values", or as something else entirely?
EDIT 2010-07-12: I would like to more fully describe the correction procedure we are using. First, we sort the test results in increasing order by their un-corrected original p-value. Then, we iterate over the list, calculating what I have been interpreting as "the FDR expected if we were to reject the null hypothesis for this and all tests prior in the list," using the B-H correction, with an alpha equal to the observed, un-corrected p-value for the respective iteration. We then take, as what we've been calling our "q-value", the maximum of the previously corrected value (FDR at iteration i - 1) or the current value (at i), to preserve monotonicity.
Below is some Python code which represents this procedure:
```
def calc_benjamini_hochberg_corrections(p_values, num_total_tests):
    """
    Calculates the Benjamini-Hochberg correction for multiple hypothesis
    testing from a list of p-values *sorted in ascending order*.

    See
    http://en.wikipedia.org/wiki/False_discovery_rate#Independent_tests
    for more detail on the theory behind the correction.

    **NOTE:** This is a generator, not a function. It will yield values
    until all calculations have completed.

    :Parameters:
    - `p_values`: a list or iterable of p-values sorted in ascending
      order
    - `num_total_tests`: the total number of tests (p-values)

    """
    prev_bh_value = 0
    for i, p_value in enumerate(p_values):
        bh_value = p_value * num_total_tests / (i + 1)
        # Sometimes this correction can give values greater than 1,
        # so we set those values at 1
        bh_value = min(bh_value, 1)
        # To preserve monotonicity in the values, we take the
        # maximum of the previous value or this one, so that we
        # don't yield a value less than the previous.
        bh_value = max(bh_value, prev_bh_value)
        prev_bh_value = bh_value
        yield bh_value
```
| Multiple hypothesis testing correction with Benjamini-Hochberg, p-values or q-values? | CC BY-SA 4.0 | null | 2010-07-28T03:54:56.447 | 2022-06-23T20:57:07.927 | 2022-06-23T20:57:07.927 | 79696 | 520 | [
"hypothesis-testing"
] |
871 | 1 | 875 | null | 54 | 41056 | I realize this is pedantic and trite, but as a researcher in a field outside of statistics, with limited formal education in statistics, I always wonder if I'm writing "p-value" correctly. Specifically:
- Is the "p" supposed to be capitalized?
- Is the "p" supposed to be italicized? (Or in mathematical font, in TeX?)
- Is there supposed to be a hyphen between "p" and "value"?
- Alternatively, is there no "proper" way of writing "p-value" at all, and any dolt will understand what I mean if I just place "p" next to "value" in some permutation of these options?
| Correct spelling (capitalization, italicization, hyphenation) of "p-value"? | CC BY-SA 3.0 | null | 2010-07-28T04:08:23.973 | 2018-11-17T09:19:45.610 | 2016-12-09T16:38:27.097 | 28666 | 520 | [
"hypothesis-testing",
"p-value",
"terminology"
] |
872 | 2 | null | 871 | 8 | null | This seems to be a style issue with different journals and publishers adopting different conventions (or allowing a mixed muddle of styles depending on authors' preferences). My own preference, for what it's worth, is p-value, hyphenated with no italics and no capitalization.
| null | CC BY-SA 2.5 | null | 2010-07-28T04:21:06.723 | 2010-07-28T04:21:06.723 | null | null | 159 | null |
873 | 2 | null | 871 | 5 | null | The [ASA House Style](http://www.amstat.org/publications/chance/assets/style.pdf) seems to recommend italicizing the p with hyphen: p-value. A google scholar search shows [varied spellings](http://scholar.google.com/scholar?q=p+value&hl=en&btnG=Search&as_sdt=80001&as_sdtp=on).
| null | CC BY-SA 2.5 | null | 2010-07-28T04:24:05.240 | 2010-07-28T04:24:05.240 | null | null | 251 | null |
874 | 2 | null | 866 | 53 | null | Ridge or lasso are forms of regularized linear regressions. The regularization can also be interpreted as prior in a maximum a posteriori estimation method. Under this interpretation, the ridge and the lasso make different assumptions on the class of linear transformation they infer to relate input and output data. In the ridge, the coefficients of the linear transformation are normal distributed and in the lasso they are Laplace distributed. In the lasso, this makes it easier for the coefficients to be zero and therefore easier to eliminate some of your input variable as not contributing to the output.
There are also some practical considerations. The ridge is a bit easier to implement and faster to compute, which may matter depending on the type of data you have.
If you have both implemented, use subsets of your data to find the ridge and the lasso and compare how well they work on the left out data. The errors should give you an idea of which to use.
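One way to run that comparison in R, assuming the `glmnet` package (not mentioned in the original answer): `alpha = 0` gives ridge, `alpha = 1` gives the lasso, and `cv.glmnet` estimates held-out error over a grid of penalty weights.
```
library(glmnet)

set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- drop(x[, 1:3] %*% c(3, -2, 1)) + rnorm(n)   # only 3 of the 20 predictors matter

ridge <- cv.glmnet(x, y, alpha = 0)   # ridge penalty
lasso <- cv.glmnet(x, y, alpha = 1)   # lasso penalty

## Cross-validated error at the best penalty for each method
c(ridge = min(ridge$cvm), lasso = min(lasso$cvm))
coef(lasso, s = "lambda.min")         # note the exact zeros produced by the lasso
```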
| null | CC BY-SA 2.5 | null | 2010-07-28T04:26:17.297 | 2010-07-28T04:26:17.297 | null | null | 260 | null |
875 | 2 | null | 871 | 37 | null | There do not appear to be "standards". For example:
- The Nature style guide refers to "P value"
- This APA style guide refers to "p value"
- The Blood style guide says:
Capitalize and italicize the P that introduces a P value
Italicize the p that represents the Spearman rank correlation test
- Wikipedia uses "p-value" (with hyphen and italicized "p")
My brief, unscientific survey suggests that the most common combination is lower-case, italicized p without a hyphen.
| null | CC BY-SA 3.0 | null | 2010-07-28T04:29:37.107 | 2016-12-09T14:43:12.957 | 2016-12-09T14:43:12.957 | 28666 | 163 | null |
876 | 2 | null | 866 | 128 | null | Keep in mind that ridge regression can't zero out coefficients; thus, you either end up including all the coefficients in the model, or none of them. In contrast, the LASSO does both parameter shrinkage and variable selection automatically. If some of your covariates are highly correlated, you may want to look at the Elastic Net [3] instead of the LASSO.
I'd personally recommend using the Non-Negative Garotte (NNG) [1] as it's consistent in terms of estimation and variable selection [2]. Unlike LASSO and ridge regression, NNG requires an initial estimate that is then shrunk towards the origin. In the original paper, Breiman recommends the least-squares solution for the initial estimate (you may however want to start the search from a ridge regression solution and use something like GCV to select the penalty parameter).
In terms of available software, I've implemented the original NNG in MATLAB (based on Breiman's original FORTRAN code). You can download it from:
[http://www.emakalic.org/blog/wp-content/uploads/2010/04/nngarotte.zip](http://www.emakalic.org/blog/wp-content/uploads/2010/04/nngarotte.zip)
BTW, if you prefer a Bayesian solution, check out [4,5].
References:
[1] Breiman, L. Better Subset Regression Using the Nonnegative Garrote Technometrics, 1995, 37, 373-384
[2] Yuan, M. & Lin, Y. On the non-negative garrotte estimator Journal of the Royal Statistical Society (Series B), 2007, 69, 143-161
[3] Zou, H. & Hastie, T. Regularization and variable selection via the elastic net Journal of the Royal Statistical Society (Series B), 2005, 67, 301-320
[4] Park, T. & Casella, G. The Bayesian Lasso Journal of the American Statistical Association, 2008, 103, 681-686
[5] Kyung, M.; Gill, J.; Ghosh, M. & Casella, G. Penalized Regression, Standard Errors, and Bayesian Lassos Bayesian Analysis, 2010, 5, 369-412
| null | CC BY-SA 3.0 | null | 2010-07-28T05:55:31.407 | 2013-07-26T13:52:42.580 | 2013-07-26T13:52:42.580 | 17230 | 530 | null |
877 | 1 | null | null | 3 | 1116 | Has anyone gone through some papers using Vector Error Correction Models in causality applications with more than one cointegration vectors, say two. I guess there will be more than one ECM terms. How to assess the endogeneity of the left hand variables if t-stats on different ECM coefficients yield different (conflicting) results. For left hand side variables to be endogenous should we have both ECM coefficients negative and significant?
Javed Iqbal
| Time Series Econometrics: VECM with multiple cointegration vectors | CC BY-SA 2.5 | null | 2010-07-28T06:31:36.857 | 2010-07-28T06:31:36.857 | null | null | 531 | [
"econometrics"
] |
878 | 2 | null | 726 | 69 | null | >
Absence of evidence is not evidence of absence.
–[Martin Rees](https://en.wikiquote.org/wiki/Martin_Rees) ([Wikipedia](https://en.wikipedia.org/wiki/Evidence_of_absence))
| null | CC BY-SA 4.0 | null | 2010-07-28T06:49:12.123 | 2019-07-16T22:06:22.707 | 2019-07-16T22:06:22.707 | 143653 | null | null |
879 | 2 | null | 798 | 12 | null | Maybe the paper "[Variations on the histogram](http://pubs.research.avayalabs.com/pdfs/ALR-2007-003-paper.pdf)" by Denby and Mallows will be of interest:
>
This new display which we term "dhist" (for diagonally-cut histogram) preserves the desirable features of both the equal-width hist and the equal-area hist. It will show tall narrow bins like the e-a hist when there are spikes in the data and will show isolated outliers just like the usual histogram.
They also mention that code in R is available on request.
| null | CC BY-SA 2.5 | null | 2010-07-28T07:23:22.887 | 2010-07-28T07:23:22.887 | null | null | 251 | null |
880 | 1 | null | null | 9 | 1154 | My question is about cross validation when there are many more variables than observations. To fix ideas, I propose to restrict to the classification framework in very high dimension (more features than observation).
Problem: Assume that for each variable $i=1,\dots,p$ you have a measure of importance $T[i]$ that exactly measures the interest of feature $i$ for the classification problem. The problem of selecting a subset of features that optimally reduces the classification error is then reduced to that of finding the number of features.
Question: What is the most efficient way to run cross-validation in this case (i.e., which cross-validation scheme)? My question is not about how to write the code but about which version of cross-validation to use when trying to find the number of selected features (to minimize the classification error), and how to deal with the high dimension when doing cross-validation (hence the problem above may be a bit like a 'toy problem' to discuss CV in high dimension).
Notation: $n$ is the size of the learning set and $p$ the number of features (i.e. the dimension of the feature space). By very high dimension I mean $p \gg n$ (for example $p=10000$ and $n=100$).
| Cross validation in very high dimension (to select the number of used variables in very high dimensional classification) | CC BY-SA 2.5 | null | 2010-07-28T08:15:40.827 | 2019-08-02T13:35:07.420 | 2010-09-02T07:13:57.967 | 223 | 223 | [
"machine-learning",
"classification",
"cross-validation"
] |
881 | 1 | 1189 | null | 6 | 1483 | Here's something I've wondered about for a while, but haven't been able to discover the correct terminology. Say you have a relatively complicated density function that you suspect might have a close approximation as a sum of (properly weighted) simpler density functions. Have such things been studied? I'm particularly interested in reading about any applications.
Here's one example I've found:
[Expansion of probability density functions as a sum of gamma densities with applications in risk theory](http://www.soa.org/library/research/transactions-of-society-of-actuaries/1966/january/tsa66v18pt1n5211.pdf)
| Series expansion of a density function | CC BY-SA 2.5 | null | 2010-07-28T08:15:51.733 | 2011-03-28T08:58:47.087 | 2011-03-28T08:58:47.087 | null | 34 | [
"probability",
"mixture-distribution",
"density-function"
] |
882 | 2 | null | 881 | 4 | null | Histogram density estimator is estimating the density with a sum of piecewise functions (density of a uniform).
KDE is using a sum of smooth function (gaussian is an example) (as long as they are positive they can be transformed into a density by normalization)
The use of "mixture" in statistic is about convex combination of densities.
| null | CC BY-SA 2.5 | null | 2010-07-28T08:21:47.367 | 2010-08-03T19:20:24.017 | 2010-08-03T19:20:24.017 | 223 | 223 | null |
883 | 2 | null | 881 | 3 | null | You can do this with mixture modeling. There are a number of R packages on CRAN for doing this. Search for "mixture" at [http://cran.r-project.org/web/packages/](http://cran.r-project.org/web/packages/)
| null | CC BY-SA 2.5 | null | 2010-07-28T09:09:51.180 | 2010-07-28T09:09:51.180 | null | null | 159 | null |
884 | 1 | null | null | 28 | 1832 | >
Possible Duplicate:
How to understand degrees of freedom?
I was at a talk a few months back where the speaker used the term 'degrees of freedom'. She briefly said something along the lines of it meaning the number of values used to form a statistic that are free to vary.
What does this mean? I'm specifically looking for an intuitive explanation.
| What are "degrees of freedom"? | CC BY-SA 2.5 | null | 2010-07-28T09:54:14.730 | 2012-08-07T09:37:08.443 | 2017-04-13T12:44:25.243 | -1 | 541 | [
"degrees-of-freedom"
] |
886 | 1 | 934 | null | 20 | 16503 | The 'fundamental' idea of statistics for estimating parameters is [maximum likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood). I am wondering what is the corresponding idea in machine learning.
Qn 1. Would it be fair to say that the 'fundamental' idea in machine learning for estimating parameters is: 'Loss Functions'
[Note: It is my impression that machine learning algorithms often optimize a loss function and hence the above question.]
Qn 2: Is there any literature that attempts to bridge the gap between statistics and machine learning?
[Note: Perhaps, by way of relating loss functions to maximum likelihood. (e.g., OLS is equivalent to maximum likelihood for normally distributed errors etc)]
| What is the 'fundamental' idea of machine learning for estimating parameters? | CC BY-SA 2.5 | null | 2010-07-28T11:31:59.857 | 2017-08-29T15:26:29.920 | 2017-04-08T15:37:35.440 | 11887 | null | [
"machine-learning",
"maximum-likelihood",
"loss-functions",
"pac-learning"
] |
887 | 1 | null | null | 5 | 930 | Suppose there is a very big (infinite?) population of normally distributed values with unknown mean and variance.
Suppose also that we have a sample, S, of n values from the entire population. We can calculate mean and standard deviation for this sample (we use n-1 for stdev calculation).
The first and most important question is how is stdev(S) related to the standard deviation of the entire population?
An illustration for this issue is the second question:
Suppose we have an additional number, x, and we would like to test whether it is an outlier vis-a-vis the general population. My intuitive approach is to calculate Z as follows:
$Z = \frac{x - mean(S)}{stdev(S)}$
and then test it against the standard normal distribution if n>30, or against the t-distribution if n<30.
However, this approach doesn't account for n, the size of the sample. What is the right way to solve this question provided there is only a single sample S?
| Basic question regarding variance and stdev of a sample | CC BY-SA 3.0 | null | 2010-07-28T12:02:21.063 | 2017-12-31T10:52:59.010 | 2017-12-31T10:52:59.010 | 128677 | 213 | [
"standard-deviation",
"variance",
"normality-assumption",
"sample",
"unbiased-estimator"
] |
889 | 2 | null | 887 | 1 | null | My first answer was full of errors. Here is a corrected version:
The correct way to test is as follows:
$$z = \frac{\text{mean}(S) - \mu}{\text{stdev}(S)/\sqrt{n}}$$
See: [Student's t-test](http://en.wikipedia.org/wiki/Student%27s_t-test#Independent_one-sample_t-test)
Note the following:
- The sample size is accounted for when you divide the standard deviation by the square root of the sample size.
- You should also note that the z-test is for testing whether the true mean of the population is some particular value. It does not make sense to substitute x instead of mu in the above statistic.
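A small numerical illustration of the statistic above (the sample `S` and the hypothesized mean are invented here):
```
set.seed(1)
S <- rnorm(25, mean = 5.3, sd = 2)   # pretend this is the observed sample
mu0 <- 5                             # hypothesized population mean

z <- (mean(S) - mu0) / (sd(S) / sqrt(length(S)))
z
t.test(S, mu = mu0)   # the built-in one-sample test; uses the t rather than the normal reference
```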
| null | CC BY-SA 2.5 | null | 2010-07-28T12:12:22.993 | 2010-07-28T12:20:09.950 | 2010-07-28T12:20:09.950 | null | null | null |
890 | 1 | null | null | 2 | 1136 | I am struggling a little bit at the moment with a question related to logistic regression. I have a model that predicts the occurrence of animal based on land cover with reference to forest. I am not grasping the concept of a reference class and struggle to extrapolate the model onto a new area. Any explanations or guidance towards papers, lecture notes etc would be highly appreciated.
| Reference category and prediction | CC BY-SA 3.0 | null | 2010-07-28T12:13:03.647 | 2012-10-27T22:09:44.950 | 2012-10-27T22:09:44.950 | null | null | [
"logistic"
] |
893 | 2 | null | 16921 | 23 | null | I really like first sentence from
[The Little Handbook of Statistical Practice. Degrees of Freedom Chapter](http://www.jerrydallal.com/LHSP/dof.htm)
>
One of the questions an instrutor
dreads most from a mathematically
unsophisticated audience is, "What
exactly is degrees of freedom?"
I think you can get really good understanding about degrees of freedom from reading this chapter.
| null | CC BY-SA 2.5 | null | 2010-07-28T12:48:14.233 | 2010-07-28T12:48:14.233 | null | null | 236 | null |
894 | 2 | null | 16921 | 90 | null | Or simply: the number of elements in a numerical array that you're allowed to change so that the value of the statistic remains unchanged.
```
# for instance if:
x + y + z = 10
```
you can change, for instance, x and y at random, but you cannot change z (you can, but not at random, therefore you're not free to change it - see Harvey's comment), 'cause you'll change the value of the statistic (Σ = 10). So, in this case df = 2.
| null | CC BY-SA 2.5 | null | 2010-07-28T12:49:31.780 | 2010-07-28T17:34:31.507 | 2010-07-28T17:34:31.507 | 1356 | 1356 | null |
895 | 2 | null | 887 | 3 | null | I'm finding it rather tricky to see what you are asking:
- If you want to know whether the Var(S) is different from the population variance, then see this previous answer.
- If you want to determine whether the mean(S) and the mean(X) are the same, then look at Independent two-sample t-tests.
- If you want to test whether mean(S) is equal to the population mean, then see @Srikant answer above, i.e. a one-sample t-test.
| null | CC BY-SA 2.5 | null | 2010-07-28T12:50:26.427 | 2010-07-28T12:50:26.427 | 2017-04-13T12:44:36.923 | -1 | 8 | null |
897 | 1 | 905 | null | 55 | 70748 | What is the difference between offline and [online learning](http://en.wikipedia.org/wiki/Online_machine_learning)? Is it just a matter of learning over the entire dataset (offline) vs. learning incrementally (one instance at a time)? What are examples of algorithms used in both?
| Online vs offline learning? | CC BY-SA 2.5 | null | 2010-07-28T13:32:32.843 | 2019-03-27T21:41:25.717 | 2018-11-05T11:38:08.473 | 11887 | 284 | [
"machine-learning",
"online-algorithms"
] |
898 | 1 | 918 | null | 7 | 2791 | I am interested in tools/techniques that can be used for analysis of [streaming data in "real-time"](http://en.wikipedia.org/wiki/Real-time_data)*, where latency is an issue. The most common example of this is probably price data from a financial market, although it also occurs in other fields (e.g. finding trends on Twitter or in Google searches).
In my experience, the most common software category for this is ["complex event processing"](http://en.wikipedia.org/wiki/Complex_event_processing). This includes commercial software such as [Streambase](http://www.streambase.com/index.htm) and [Aleri](http://www.sybase.com/products/financialservicessolutions/aleristreamingplatform) or open-source ones such as [Esper](http://esper.codehaus.org/) or [Telegraph](http://telegraph.cs.berkeley.edu/) (which was the basis for [Truviso](http://www.truviso.com/)).
Many existing models are not suited to this kind of analysis because they're too computationally expensive. Are any models** specifically designed to deal with real-time data? What tools can be used for this?
* By "real-time", I mean "analysis on data as it is created". So I do not mean "data that has a time-based relevance" (as in [this talk by Hilary Mason](http://www.hilarymason.com/blog/conference-web2-expo-sf/)).
** By "model", I mean a mathematical abstraction that describe the behavior of an object of study (e.g. in terms of random variables and their associated probability distributions), either for description or forecasting. This could be a machine learning or statistical model.
| Modeling of real-time streaming data? | CC BY-SA 2.5 | null | 2010-07-28T13:49:07.733 | 2010-11-11T05:30:32.227 | null | null | 5 | [
"modeling",
"software",
"real-time"
] |
899 | 1 | null | null | 14 | 4672 | I'm trying to separate two groups of values from a single data set. I can assume that one of the populations is normally distributed and is at least half the size of the sample. The values of the second one are both lower or higher than the values from the first one (distribution is unknown). What I'm trying to do is to find the upper and lower limits that would enclose the normally-distributed population from the other.
My assumption provide me with starting point:
- all points within the interquartile range of the sample are from the normally-distributed population.
I'm trying to test for outliers, taking them from the rest of the sample, until they no longer fit within 3 standard deviations of the normally-distributed population. This is not ideal, but it seems to produce a reasonable enough result.
Is my assumption statistically sound? What would be a better way to go about this?
p.s. please fix the tags someone.
| Separating two populations from the sample | CC BY-SA 3.0 | null | 2010-07-28T13:53:18.503 | 2013-03-17T16:07:09.307 | 2012-10-27T20:15:59.537 | 686 | 219 | [
"dataset",
"outliers",
"expectation-maximization"
] |
900 | 2 | null | 898 | 1 | null | I am not sure how far this would be relevant to what you want to do but see the paper on adaptive question design called [FASTPACE](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.8664&rep=rep1&type=pdf). The goal of the algorithm is to ask the next question from a survey respondent based on his/her previous questions and answers.
The data does not arrive as fast as stock prices but nevertheless latency is an issue as most survey respondents expect the next question to appear within a few seconds.
| null | CC BY-SA 2.5 | null | 2010-07-28T13:57:09.400 | 2010-07-28T13:57:09.400 | null | null | null | null |
901 | 2 | null | 73 | 2 | null | RODBC for accessing data from databases, sqldf for performing simple SQL queries on dataframes (although I am forcing myself to use native R commands), and ggplot2 and plyr
| null | CC BY-SA 2.5 | null | 2010-07-28T13:59:49.157 | 2010-07-28T13:59:49.157 | null | null | 11 | null |
902 | 2 | null | 898 | 2 | null | It is going to depend a lot on what exactly you are looking for, but start at [Data Streams: Algorithms and Application by Muthukrishnan ](http://www.cs.rutgers.edu/~muthu/stream-1-1.ps).
There are many others that can be found by googling "data stream algorithms", or following the references in the paper.
| null | CC BY-SA 2.5 | null | 2010-07-28T14:01:46.063 | 2010-07-28T16:49:31.297 | 2010-07-28T16:49:31.297 | 247 | 247 | null |
903 | 2 | null | 485 | 3 | null | There is a series of Google Tech Talk videos called [Stats 202 - Statistical Aspects of Data Mining](http://video.google.com/videosearch?q=mease+stats+202&sitesearch=#)
| null | CC BY-SA 2.5 | null | 2010-07-28T14:06:35.367 | 2010-07-28T14:06:35.367 | null | null | 11 | null |
904 | 2 | null | 812 | 14 | null | Functional Data often involves different question. I've been reading Functional Data Analysis, Ramsey and Silverman, and they spend a lot of times discussing curve registration, warping functions, and estimating derivatives of curves. These tend to be very different questions than those asked by people interested in studying high-dimensional data.
| null | CC BY-SA 2.5 | null | 2010-07-28T14:16:38.977 | 2010-07-28T14:16:38.977 | null | null | 549 | null |
905 | 2 | null | 897 | 51 | null | Online learning means that you are doing it as the data comes in. Offline means that you have a static dataset.
So, for online learning, you (typically) have more data, but you have time constraints. Another wrinkle that can affect online learning is that your concepts might change through time.
Let's say you want to build a classifier to recognize spam. You can acquire a large corpus of e-mail, label it, and train a classifier on it. This would be offline learning. Or, you can take all the e-mail coming into your system, and continuously update your classifier (labels may be a bit tricky). This would be online learning.
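As a toy illustration of the difference (nothing to do with spam specifically), the sketch below fits the same linear model twice in R: once offline with all the data at hand, and once online by updating the coefficients one observation at a time with a least-mean-squares step; the learning rate is an arbitrary choice.
```
set.seed(1)
n <- 5000
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)

## Offline (batch) learning: the whole data set is available at once
batch <- coef(lm(y ~ x))

## Online learning: one observation at a time, stochastic-gradient update
beta <- c(0, 0)      # running intercept and slope
rate <- 0.01         # learning rate (assumed, not tuned)
for (i in seq_len(n)) {
  xi  <- c(1, x[i])
  err <- y[i] - sum(beta * xi)
  beta <- beta + rate * err * xi   # update using only observation i
}

rbind(batch = batch, online = beta)
```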
| null | CC BY-SA 4.0 | null | 2010-07-28T14:37:17.337 | 2018-11-05T13:16:57.520 | 2018-11-05T13:16:57.520 | null | 549 | null |
908 | 2 | null | 880 | 6 | null | You miss one important issue -- there is almost never such thing as T[i]. Think of a simple problem in which the sum of two attributes (of a similar amplitude) is important; if you'd remove one of them the importance of the other will suddenly drop. Also, big amount of irrelevant attributes is the accuracy of most classifiers, so along their ability to assess importance. Last but not least, stochastic algorithms will return stochastic results, and so even the T[i] ranking can be unstable. So in principle you should at least recalculate T[i] after each (or at least after each non trivially redundant) attribute is removed.
Going back to the topic, the question which CV to choose is mostly problem dependent; with very small number of cases LOO may be the best choice because all other start to reduce to it; still small is rather n=10 not n=100. So I would just recommend random subsampling (which I use most) or K-fold (then with recreating splits on each step). Still, you should also collect not only mean but also the standard deviation of error estimates; this can be used to (approximately) judge which changes of mean are significant ans so help you decide when to cease the process.
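A bare-bones version of that scheme is sketched below (random subsampling, importance recomputed inside every split, and the mean and standard deviation of the error collected); the simulated data, the t-statistic-style ranking and the nearest-centroid classifier are stand-ins for whatever you actually use.
```
set.seed(1)
n <- 100; p <- 1000; k <- 20                 # keep the top k features
X <- matrix(rnorm(n * p), n, p)
y <- rep(0:1, each = n / 2)
X[y == 1, 1:10] <- X[y == 1, 1:10] + 1       # only 10 features are truly informative

one_split <- function() {
  test <- sample(n, 20)
  trX <- X[-test, ]; trY <- y[-test]
  ## importance computed on the training part only
  score <- abs(colMeans(trX[trY == 1, ]) - colMeans(trX[trY == 0, ])) /
    sqrt(apply(trX, 2, var))
  keep <- order(score, decreasing = TRUE)[1:k]
  ## nearest-centroid prediction on the held-out part
  c0 <- colMeans(trX[trY == 0, keep]); c1 <- colMeans(trX[trY == 1, keep])
  d0 <- rowSums(sweep(X[test, keep], 2, c0)^2)
  d1 <- rowSums(sweep(X[test, keep], 2, c1)^2)
  mean(as.numeric(d1 < d0) != y[test])       # error on this split
}

errs <- replicate(50, one_split())
c(mean = mean(errs), sd = sd(errs))
```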
| null | CC BY-SA 2.5 | null | 2010-07-28T14:58:10.173 | 2010-07-28T14:58:10.173 | null | null | null | null |
909 | 2 | null | 899 | 2 | null | This assumes that you don't even know if the second distribution is normal or not; I basically handle this uncertainty by focusing only on the normal distribution. This may or may not be the best approach.
If you can assume that the two populations are completely separated (i.e., all values from distribution A are less than all values from distribution B), then one approach is to use the optimize() function in R to search for the break-point that yields estimates of the mean and sd of the normal distribution that make the data most likely:
```
# generate completely separated data
a = rnorm(100)
b = rnorm(100, 10)
while(!all(a < b)){
    a = rnorm(100)
    b = rnorm(100, 10)
}

# create a mix
mix = c(a, b)

# "forget" the original distributions
rm(a)
rm(b)

# try to find the break point between the distributions
break_point = optimize(
    f = function(x){
        data_from_a = mix[mix < x]
        likelihood = dnorm(data_from_a, mean(data_from_a), sd(data_from_a))
        SLL = sum(log(likelihood))
        return(SLL)
    }
    , interval = c(sort(mix)[2], max(mix))
    , maximum = TRUE
)$maximum

# label the data
labelled_mix = data.frame(
    x = mix
    , source = ifelse(mix < break_point, 'A', 'B')
)
print(labelled_mix)
```
If you can't assume complete separation, then I think you'll have to assume some distribution for the second distribution and then use mixture modelling. Note that mixture modelling won't actually label the individual data points, but will give you the mixture proportion and estimates of the parameters of each distribution (eg. mean, sd, etc.).
| null | CC BY-SA 2.5 | null | 2010-07-28T15:24:38.010 | 2010-07-28T15:24:38.010 | null | null | 364 | null |
910 | 2 | null | 886 | 3 | null | There is a trivial answer -- there is no parameter estimation in machine learning! We don't assume that our models are equivalent to some hidden background models; we treat both reality and the model as black boxes and we try to shake the model box (train in official terminology) so that its output will be similar to that of the reality box.
The concept of likelihood, and indeed the whole idea of model selection based on the training data, is replaced by optimizing the accuracy (however it is defined; in principle, the goodness in the desired use) on unseen data; this allows precision and recall to be optimized in a coupled manner. This leads to the concept of the ability to generalize, which is achieved in different ways depending on the learner type.
The answer to question two depends highly on definitions; still, I think that nonparametric statistics is something that connects the two.
| null | CC BY-SA 2.5 | null | 2010-07-28T15:29:33.070 | 2010-07-28T15:29:33.070 | null | null | null | null |
911 | 2 | null | 856 | 3 | null | Interesting question. What do you mean by weight?
I would be inclined to do a bootstrap... pick your favorite statistic (e.g., Fisher's exact), and compute it on your data. Then assign new cells to each instance according to your null hypothesis, and repeat the process 999 times. This should give a pretty good empirical distribution for your test statistic under the null hypothesis, and allow easy computation of your p-value!
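One way to make that concrete, using the example data from the question above: the sketch below uses the chi-squared statistic of the weighted table as the "favorite statistic" and simulates the independence null by permuting the B labels; whether this permutation scheme respects the actual sampling-weight design is an assumption you would need to check.
```
dat <- data.frame(
  A = c("N", "N", "Y", "Y", "N", "N", "Y", "Y", "Y", "Y"),
  B = c("N", "N", "N", "N", "Y", "Y", "Y", "Y", "Y", "Y"),
  weight = c(1, 3, 1, 2, 6, 7, 1, 2, 3, 4)
)

## Statistic computed on the weight-summed 2x2 table
weighted_stat <- function(A, B, w) {
  tab <- xtabs(w ~ A + B)
  suppressWarnings(chisq.test(tab)$statistic)
}

obs  <- weighted_stat(dat$A, dat$B, dat$weight)
perm <- replicate(999, weighted_stat(dat$A, sample(dat$B), dat$weight))
mean(c(obs, perm) >= obs)   # permutation p-value
```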
| null | CC BY-SA 2.5 | null | 2010-07-28T15:32:38.310 | 2010-07-28T15:32:38.310 | null | null | 549 | null |
912 | 2 | null | 868 | 1 | null | The other issue is that you put in your data, and the algorithm learns the weights and the Beta_0 for you...I don't know if Excel can do logistic regression...if it doesn't, I'd be inclined to use R to learn your model (and predict future cases for you!).
| null | CC BY-SA 2.5 | null | 2010-07-28T15:37:01.803 | 2010-07-28T15:37:01.803 | null | null | 549 | null |
913 | 1 | 916 | null | 5 | 718 | Comparing two variables, I came up with the following chart. the x, y pairs represent independent observations of data on the field. I've doen [Pearson correlation](http://en.wikipedia.org/wiki/Correlation_and_dependence) on it and have found one of 0.6.
My end goal is to establish a relationship between y and x such that y = f(x).
What analysis would you recommend to obtain some form of a relationship between the two variables?
[Graph http://koopics.com/ask_math_chart.jpg](http://koopics.com/ask_math_chart.jpg)
| Relationships between two variables | CC BY-SA 2.5 | null | 2010-07-28T16:15:07.883 | 2010-09-19T01:47:02.270 | 2010-09-16T07:04:38.580 | null | 59 | [
"regression"
] |
914 | 2 | null | 913 | 3 | null | What you are looking for is called regression; there are a lot of methods you can do it, both statistical and machine learning ones. If you want to find f, you must use statistics; in that case you must first assume that f is of some form, like f:y=a*x+b and then use some regression method to fit the parameters.
The plot suggests there are a lot of outliers (elements that does not follow f(x)); you may need robust regression to get rid of them.
| null | CC BY-SA 2.5 | null | 2010-07-28T16:21:41.483 | 2010-07-28T16:31:04.797 | 2010-07-28T16:31:04.797 | null | null | null |
915 | 2 | null | 913 | 3 | null | And just eyeballing the data, you are probably going to want to transform the data, as (at least to me) it looks skewed. Looking at the histograms of the two variables should suggest which transforms may be beneficial.
As suggested by mbq, more text [here](http://en.wikipedia.org/wiki/Data_transformation_%28statistics%29).
| null | CC BY-SA 2.5 | null | 2010-07-28T16:24:38.283 | 2010-07-28T16:48:05.950 | 2010-07-28T16:48:05.950 | 247 | 247 | null |
916 | 2 | null | 913 | 5 | null | Normality seems to be strongly violated at least by your y variable. I would log transform y to see if that cleans things up a bit. Then, fit a regression to log(y) ~ x. The formula the regression will return will be of the form log(y) = \alpha + \beta*x which you can transform back to the original scale by y = exp(\alpha + \beta*x)
| null | CC BY-SA 2.5 | null | 2010-07-28T16:31:53.143 | 2010-07-28T16:31:53.143 | null | null | 287 | null |
917 | 2 | null | 868 | 5 | null | Like drknexus said, for a logistic regression, your outcome measure needs to be 0 and 1. I'd go back and recode your outcome as 0 (didn't like it), or 1 (did like it). Then, abandon excel and load the data into R (it's really not as intimidating as it looks). Your regression will look something like this:
```
glm(Liked ~ Visually.Stunning + Exhilarating + Artistic + Sporty, family = binomial, data = data)
```
The regression will return betas for each feature in terms of log-odds. So, for every 1 point increase in `Artistic`, for instance, you'll have a value for how much that increases or decreases the log-odds of your enjoyment. Most of the betas will be positive, unless you dislike sporty games or something.
Now, you'll have to ask yourself some interesting questions. The assumption of the model is that the values on each of these scores affect your enjoyment independently, which probably isn't true! A game that is very Visually.Stunning and Exhilarating is probably way better than you would expect given those component parts. And it's probably the case that if a game gets scores of 1 on all features except Sporty, which gets a 4, that high Sporty score is worth less than if the other scores were higher.
That is, many or all of your features probably interact. To fit an accurate model, then, you'll want to add in these interactions. That formula would look like this:
```
glm(Liked ~ Visually.Stunning * Exhilarating * Artistic * Sporty, family = binomial, data = data)
```
Now, there are two points of difficulty here. First, you need to have more data to fit a good model with this many interactions than the pure independence model. Second, you risk overfitting, which means that the model will very accurately describe the original data, but will be less good at making accurate predictions for future data.
Needless to say, some people spend all day fitting and refitting models like this one.
| null | CC BY-SA 2.5 | null | 2010-07-28T17:01:12.903 | 2010-07-28T17:01:12.903 | null | null | 287 | null |
918 | 2 | null | 898 | 3 | null | This area roughly falls into two categories. The first concerns stream processing and querying issues and associated models and algorithms. The second is efficient algorithms and models for learning from data streams (or data stream mining).
It's my impression that the CEP industry is connected to the first area. For example, StreamBase originated from the [Aurora](http://www.cs.brown.edu/research/aurora/) project at Brown/Brandeis/MIT. A similar project was Widom's [STREAM](http://infolab.stanford.edu/stream/) at Stanford. Reviewing the publications at either of those projects' sites should help exploring the area.
A nice paper summarizing the research issues (in 2002) from the first area is [Models and issues in data stream systems](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.9846) by Babcock et al. In stream mining, I'd recommend starting with [Mining Data Streams: A Review](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.80.798) by Gaber et al.
BTW, I'm not sure exactly what you're interested in as far as specific models. If it's stream mining and classification in particular, the [VFDT](http://en.wikipedia.org/wiki/Incremental_decision_tree#VFDT) is a popular choice. The two review papers (linked above) point to many other models and it's very contextual.
| null | CC BY-SA 2.5 | null | 2010-07-28T17:04:38.547 | 2010-07-28T20:07:08.813 | 2010-07-28T20:07:08.813 | 251 | 251 | null |
919 | 2 | null | 890 | 1 | null | I'm not sure I exactly understand your question, but I'm assuming your confusion involves a categorical predictor in your model. When it comes to continuous variables in a regression, the coefficients for each predictor are weights for the value of the predictor to produce the predicted y value:e.g. y = 2*x
However, with a categorical variable, weights are meaningless. What does 2*Male mean, or in your case, 2*forest.
So, the coefficients returned for levels of categorical variables represent how different they are from some reference level. In experimental settings, your reference level would be the control group, and then you would get a coefficient for every treatment group, indicating what the size of the effect of each treatment was.
In my own research, and I'm guessing in yours too, there isn't always a meaningful control category for the reference level. So, what I'd do is set the reference level to whatever category would make exposition of the comparisons easiest. Or, you could use different contrasts, like sum contrasts, but those have their own difficulties especially if you have sparse data for one or more categories.
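For what it's worth, in R the reference level of a factor can be reset with `relevel()`; the land-cover categories below are invented for illustration.
```
landcover <- factor(c("forest", "grassland", "urban", "forest", "wetland"))
levels(landcover)                      # "forest" comes first, so it is the reference level

landcover2 <- relevel(landcover, ref = "grassland")
levels(landcover2)                     # now coefficients would be contrasts against grassland
```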
| null | CC BY-SA 2.5 | null | 2010-07-28T17:14:08.763 | 2010-07-28T17:14:08.763 | null | null | 287 | null |
920 | 1 | 940 | null | 6 | 4729 | I completed a Monte Carlo simulation that consisted of one million ($10^6$) individual simulations. The simulation returns a variable, $p$, that can be either 1 or 0. I then weight the simulations based on predefined criteria and calculate the probability of $p$. I also calculate a risk ratio using $p$:
$$\text{Risk ratio} = P(p|\text{test case}) / P(p|\text{control case})$$
I had eight Monte Carlo runs, which consist of one control case and seven test cases.
I need to know if the probabilities of $p$ are statistically different compared to the other cases. I know I can use a multiple comparison test or nonparametric ANOVA to test individual variables, but how do I do this for probabilities?
---
For example are these two probabilities statistically different?:
Probabilities:
$P(p|\text{test #3}) = 4.08 \times 10^{-5}$
$P(p|\text{test #4}) = 6.10 \times 10^{-5}$
Risk Ratios:
$\text{Risk Ratio}(\text{test #3}) = 0.089$
$\text{Risk Ratio}(\text{test #4}) = 0.119$
| Test if probabilities are statistically different? | CC BY-SA 3.0 | null | 2010-07-28T17:15:09.090 | 2013-06-05T04:11:12.797 | 2013-06-05T04:11:12.797 | 805 | 559 | [
"hypothesis-testing"
] |
922 | 2 | null | 886 | 2 | null | I don't think there is a fundamental idea around parameter estimation in Machine Learning. The ML crowd will happily maximize the likelihood or the posterior, as long as the algorithms are efficient and predict "accurately". The focus is on computation, and results from statistics are widely used.
If you're looking for fundamental ideas in general, then in computational learning theory, [PAC](http://en.wikipedia.org/wiki/Probably_approximately_correct_learning) is central; in statistical learning theory, [structural risk miniminization](http://en.wikipedia.org/wiki/Structural_risk_minimization); and there are other areas (for example, see the [Prediction Science](http://hunch.net/?p=612) post by John Langford).
On bridging statistics/ML, the divide seems exaggerated. I liked gappy's [answer](https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning/607#607) to the "Two Cultures" question.
| null | CC BY-SA 2.5 | null | 2010-07-28T17:28:47.880 | 2010-07-28T17:28:47.880 | 2017-04-13T12:44:33.550 | -1 | 251 | null |
924 | 1 | 926 | null | 7 | 828 | For 1,000,000 observations, I observed a discrete event, X, 3 times for the control group and 10 times for the test group. How do I determine for a large number of observations (1,000,000), if three is statistically different than ten?
| Determine if three is statistically different than ten for a very large number of observations (1,000,000) | CC BY-SA 2.5 | null | 2010-07-28T17:36:24.920 | 2010-10-08T16:05:48.613 | 2010-10-08T16:05:48.613 | 8 | 559 | [
"hypothesis-testing",
"large-data"
] |
925 | 2 | null | 924 | 0 | null | I would be really surprised if you find the difference statistically significant. Having said that you may want to use a test for a difference of proportions (3 out of 1M vs 10 out of 1M).
| null | CC BY-SA 2.5 | null | 2010-07-28T17:40:56.223 | 2010-07-28T17:40:56.223 | null | null | null | null |
926 | 2 | null | 924 | 5 | null | I think a simple chi-squared test will do the trick. Do you have 1,000,000 observations for both control and test? If so, your table of observations will be (in R code)
Edit: Woops! Left off a zero!
```
m <- rbind(c(3, 1000000-3), c(10, 1000000-10))
# [,1] [,2]
# [1,] 3 999997
# [2,] 10 999990
```
And chi-squared test will be
```
chisq.test(m)
```
Which returns chi-squared = 2.7692, df = 1, p-value = 0.0961, which is not statistically significant at the p < 0.05 level. I'd be surprised if these could be clinically significant anyway.
| null | CC BY-SA 2.5 | null | 2010-07-28T17:43:48.153 | 2010-07-28T17:51:05.423 | 2010-07-28T17:51:05.423 | 287 | 287 | null |
927 | 1 | 937 | null | 30 | 5970 | What are some podcasts related to statistical analysis? I've found some audio recordings of college lectures on ITunes U, but I'm not aware of any statistical podcasts. The closest thing I'm aware of is an operations research podcast [The Science of Better](http://www.scienceofbetter.org/podcast/). It touches on statistical issues, but it's not specifically a statistical show.
| Statistical podcasts | CC BY-SA 2.5 | null | 2010-07-28T17:43:49.697 | 2016-12-17T07:00:37.593 | 2015-06-25T07:43:49.163 | 35989 | 319 | [
"references"
] |
928 | 1 | 932 | null | 7 | 8576 | This one is bothering me for a while, and a great dispute was held around it. In psychology (as well as in other social sciences), we deal with different ways of dealing with numbers :-) i.e. the levels of measurement. It's also common practice in psychology to standardize some questionnaire, hence transform the data into percentile scores (in order to assess a respondent's position within the representative sample).
Long story short, if you have a variable that holds the data expressed in percentile scores, how should you treat it? As an ordinal, interval, or even ratio variable?!
It's not ratio, because there's no real 0 (the 0th percentile doesn't imply absence of the measured property, only the variable's smallest value). I advocate the view that percentile scores are ordinal, since P70 - P50 is not equal to P50 - P30, while the other side says it's interval.
Please gentlemen, cut the cord. Ordinal or interval?
| Measurement level of percentile scores | CC BY-SA 2.5 | null | 2010-07-28T17:57:51.697 | 2022-03-10T12:48:17.403 | 2010-08-07T17:48:50.100 | null | 1356 | [
"measurement"
] |
929 | 1 | null | null | 0 | 233 | How comprehensive is the following book - What interpretations are missing?
Interpretations of Probability, Andrei Khrennikov, 2009, de Gruyter, ISBN 978-3-11-020748-4
[http://www.degruyter.com/cont/fb/ma/detailEn.cfm?isbn=9783110207484&sel=pi](http://www.degruyter.com/cont/fb/ma/detailEn.cfm?isbn=9783110207484&sel=pi)
Contents: http://www.degruyter.com/files/pdf/9783110207484Contents.pdf
| Probability Interpretations | CC BY-SA 2.5 | null | 2010-07-28T17:59:42.600 | 2010-07-28T19:14:49.470 | null | null | 560 | [
"probability"
] |
930 | 2 | null | 928 | 1 | null | Continuous (interval); this is a method for converting ordinal data into something that may have a distribution that makes sense.
| null | CC BY-SA 2.5 | null | 2010-07-28T18:05:11.857 | 2010-07-28T18:05:11.857 | null | null | null | null |
931 | 2 | null | 927 | 6 | null | You may be interested in the following link: [http://www.ats.ucla.edu/stat/seminars/](http://www.ats.ucla.edu/stat/seminars/) where the UCLA Statistical Computing unit has very nice screencasts available. I have found them very useful in the past. They function essentially as lectures. Top-quality teaching.
| null | CC BY-SA 2.5 | null | 2010-07-28T18:17:46.900 | 2010-07-28T18:17:46.900 | null | null | 561 | null |
932 | 2 | null | 928 | 5 | null | Background to understand my answer
The critical property that distinguishes between the ordinal and interval scales is whether we can take ratios of differences. While you cannot take ratios of direct measures on either scale, the ratio of differences is meaningful for interval but not for ordinal data (See: [http://en.wikipedia.org/wiki/Level_of_measurement#Interval_scale](http://en.wikipedia.org/wiki/Level_of_measurement#Interval_scale)).
Temperature is the classic example for an interval scale. Consider the following:
80 f = 26.67 c
40 f = 4.44 c and
20 f = -6.67 c
The difference between the first and the second is:
40 f and 22.23 c
The difference between the second and the third is:
20 f and 11.11 c
Notice that the ratio is the same irrespective of the scale on which we measure temperature.
A classic example of ordinal data is ranks. If three teams, A, B, and C are ranked 1st, 2nd, and 4th, respectively, then a statement like so does not make sense: "Team A's difference in strength vis-a-vis team B is half of team B's difference in strength relative to team C."
Answer to your question
Is the ratio of differences in percentiles meaningful? In other words, is the ratio of differences in percentiles invariant to the underlying scale? Consider, for example: (P70-P50) / (P50-P30).
Suppose that these percentiles are based on an underlying score between 0 and 100 and we compute the above ratio. Clearly, we would obtain the same ratio of percentile differences under an arbitrary linear transformation of the score (e.g., multiply all scores by 10 so that the range is 0-1000 and recompute the percentiles).
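A quick numerical check of this claim in R, with a made-up score variable (my own sketch, not part of the original argument):
```
set.seed(1)
score <- runif(1000, 0, 100)                      # hypothetical underlying scores
q1 <- quantile(score, c(0.3, 0.5, 0.7))
q2 <- quantile(10 * score + 5, c(0.3, 0.5, 0.7))  # arbitrary linear rescaling
(q1[3] - q1[2]) / (q1[2] - q1[1])                 # ratio of differences, original scale
(q2[3] - q2[2]) / (q2[2] - q2[1])                 # identical ratio after rescaling
```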
Thus, my answer: Interval
| null | CC BY-SA 3.0 | null | 2010-07-28T18:18:26.243 | 2016-04-30T22:16:44.757 | 2016-04-30T22:16:44.757 | 114097 | null | null |
933 | 2 | null | 290 | 4 | null | I have done very well with reading the official documentation. It is well-written, sometimes injected with humour (!) and, if you're willing to spend the time to learn Stata properly, is an absolute goldmine.
| null | CC BY-SA 2.5 | null | 2010-07-28T18:21:10.107 | 2010-07-28T18:21:10.107 | null | null | 561 | null |
934 | 2 | null | 886 | 18 | null | If statistics is all about maximizing likelihood, then machine learning is all about minimizing loss. Since you don't know the loss you will incur on future data, you minimize an approximation, i.e. the empirical loss.
For instance, if you have a prediction task and are evaluated by the number of misclassifications, you could train parameters so that the resulting model produces the smallest number of misclassifications on the training data. "Number of misclassifications" (i.e., 0-1 loss) is a hard loss function to work with because it's not differentiable, so you approximate it with a smooth "surrogate". For instance, log loss is an upper bound on 0-1 loss, so you could minimize that instead, and this will turn out to be the same as maximizing the conditional likelihood of the data. With a parametric model this approach becomes equivalent to logistic regression.
In a structured modeling task with the log-loss approximation of 0-1 loss, you get something different from maximum conditional likelihood: you will instead maximize the [product](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.8122) of (conditional) marginal likelihoods.
To get a better approximation of the loss, people noticed that training a model to minimize loss and then using that training loss as an estimate of future loss gives an overly optimistic estimate. So, for more accurate minimization of the true future loss, they add a bias-correction term to the empirical loss and minimize that; this is known as structural risk minimization.
In practice, figuring out the right bias-correction term may be too hard, so you add an expression "in the spirit" of the bias-correction term, for instance, the sum of squares of the parameters. In the end, almost all parametric supervised classification approaches in machine learning end up training the model to minimize the following
$\sum_{i} L(\textrm{m}(x_i,w),y_i) + P(w)$
where $\textrm{m}$ is your model parametrized by vector $w$, $i$ is taken over all datapoints $\{x_i,y_i\}$, $L$ is some computationally nice approximation of your true loss and $P(w)$ is some bias-correction/regularization term
For instance, if your $x \in \{-1,1\}^d$, $y \in \{-1,1\}$, a typical approach would be to let $\textrm{m}(x)=\textrm{sign}(w \cdot x)$, take the logistic surrogate $L(\textrm{m}(x),y)=\log(1+\exp(-y \, (x \cdot w)))$, set $P(w)=q \times (w \cdot w)$, and choose $q$ by cross validation
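As a rough illustration (my own sketch, not the original answer's code), that final recipe can be written out directly in R with the logistic surrogate and an L2 penalty; the data below are simulated and the value of q is made up, whereas in practice it would be chosen by cross validation:
```
set.seed(1)
n <- 200; d <- 5
X <- matrix(sample(c(-1, 1), n * d, replace = TRUE), n, d)
y <- ifelse(X %*% c(2, -1, 0.5, 0, 0) + rnorm(n) > 0, 1, -1)   # simulated labels in {-1, 1}
q <- 0.1                                                       # regularization strength
obj <- function(w) sum(log(1 + exp(-y * (X %*% w)))) + q * sum(w^2)
optim(rep(0, d), obj, method = "BFGS")$par                     # fitted weight vector w
```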
| null | CC BY-SA 3.0 | null | 2010-07-28T18:25:00.973 | 2017-08-29T15:26:29.920 | 2017-08-29T15:26:29.920 | 53690 | 511 | null |
935 | 2 | null | 929 | 2 | null | Though quantum probability and negative probability models are quite interesting, this is hardly exhaustive of nonstandard models of probability. There are for instance, [imprecise probability models](http://en.wikipedia.org/wiki/Imprecise_probability), and models that violate Kolmogorov's countable additivity axiom, and more.
As an aside, the book may be more properly called 'Models of Probability'. [Interpretations of probability](http://plato.stanford.edu/entries/probability-interpret/), generally involve characterizing the competing understandings of probability as logically prescribed values, limiting frequencies, propensities, subjective beliefs, etc. Models, or axiomatizations can certainly be motivated by these understandings, but the problem of creating a variant system is different than arguing for a particular interpretation.
| null | CC BY-SA 2.5 | null | 2010-07-28T18:36:09.920 | 2010-07-28T19:14:49.470 | 2010-07-28T19:14:49.470 | 39 | 39 | null |
936 | 2 | null | 927 | 7 | null | There is [econtalk](http://www.econtalk.org/), it is mostly about economics, but delves very often to issues of research, science, and statistics.
| null | CC BY-SA 2.5 | null | 2010-07-28T19:22:44.490 | 2010-07-28T19:22:44.490 | null | null | 253 | null |
937 | 2 | null | 927 | 14 | null | BBC's [More or Less](http://news.bbc.co.uk/2/hi/programmes/more_or_less/default.stm) is often concerned with numeracy and statistical literacy issues. But it's not specifically about statistics. Their [About](http://news.bbc.co.uk/2/hi/programmes/more_or_less/1628489.stm) page has some background.
>
More or Less is devoted to the powerful, sometimes beautiful, often abused but ever ubiquitous world of numbers.
The programme was an idea born of the sense that numbers were the principal language of public argument.
[...]
| null | CC BY-SA 2.5 | null | 2010-07-28T19:34:00.560 | 2010-07-28T19:34:00.560 | null | null | 251 | null |
939 | 1 | 941 | null | 4 | 1319 | Is the Yates' correction for continuity used only for 2X2 matrices?
| Yates' correction for continuity only for 2X2? | CC BY-SA 2.5 | null | 2010-07-28T20:43:33.327 | 2010-10-20T21:10:16.807 | 2010-10-20T21:10:16.807 | 8 | 559 | [
"contingency-tables",
"yates-correction"
] |
940 | 2 | null | 920 | 8 | null | If you have 1,000,000 independent "coin flips" that can produce 1 with probability (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a [Binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution).
Tests of statistical significance are rejection tests, i.e. reject the hypothesis that the two parameters are equal if the probability that param2 is observed in test2 when the true value is param1 is less than a certain number, like 5%, 1%, or 0.1%. These tests are typically constructed from the cumulative distribution function.
The cumulative distribution function for a binomial is ugly, but can be found in R and probably some other statistics packages as well.
But the good news is that with 1,000,000 cases you don't need to do that.... you would if you had a relatively small number of cases.
Because you have 1,000,000 independent flips, the CDF of a normal distribution is a good approximation (the Central Limit Theorem is at work here). The mean and variance you need to use are the obvious ones, and are in the [Binomial Wikipedia](http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation) article... You are then comparing two normally distributed variables and can use all the standard tests you would use with normally distributed variables.
For instance, if the true probability were 40*10^-6 then in 1,000,000 tests you would expect to see 40 +/- 6 positive cases. If the acceptance interval for a test is, for instance, 5 standard deviations wide on each side, then this would be compatible with both observations. If it were just 3 std dev wide on each side, one case would fit and the other would be statistically different.
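For concreteness, a small sketch of that normal approximation in R (the two counts below are placeholders):
```
n  <- 1e6
x1 <- 40; x2 <- 55                          # hypothetical positive counts in the two runs
p_hat <- (x1 + x2) / (2 * n)                # pooled estimate of the common probability
se    <- sqrt(2 * n * p_hat * (1 - p_hat))  # sd of the difference in counts under H0
z     <- (x1 - x2) / se                     # approximately standard normal under H0
2 * pnorm(-abs(z))                          # two-sided p-value
```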
| null | CC BY-SA 2.5 | null | 2010-07-28T20:50:25.120 | 2010-07-28T21:19:40.940 | 2010-07-28T21:19:40.940 | 87 | 87 | null |
941 | 2 | null | 939 | 6 | null | It's derived for binomial/hypergeometric distributions, so it's applicable to 2x2 or 2x1 cases.
| null | CC BY-SA 2.5 | null | 2010-07-28T21:12:03.197 | 2010-07-28T21:12:03.197 | null | null | 251 | null |
942 | 1 | 962 | null | 27 | 12509 | I've begun to work my way through [Statistical Data Mining Tutorials by Andrew Moore](http://www.autonlab.org/tutorials/) (highly recommended for anyone else first venturing into this field). I started by reading this [extremely interesting PDF entitled "Introductory overview of time-series-based anomaly detection algorithms"](http://www.autonlab.org/tutorials/biosurv01.pdf) in which Moore traces through many of the techniques used in the creation of an algorithm to detect disease outbreaks. Halfway through the slides, on page 27, he lists a number of other "state of the art methods" used to detect outbreaks. The first one listed is wavelets. Wikipedia describes a wavelet as
>
a wave-like oscillation with an
amplitude that starts out at zero,
increases, and then decreases back to
zero. It can typically be visualized
as a "brief oscillation"
but does not describe their application to statistics, and my Google searches yield either highly academic papers that assume knowledge of how wavelets relate to statistics or full books on the subject.
I would like a basic understanding of how wavelets are applied to time-series anomaly detection, much in the way Moore illustrates the other techniques in his tutorial. Can someone provide an explanation of how detection methods using wavelets work or a link to an understandable article on the matter?
| Application of wavelets to time-series-based anomaly detection algorithms | CC BY-SA 2.5 | null | 2010-07-28T21:13:23.387 | 2021-05-26T10:08:44.613 | 2011-02-08T16:48:14.277 | 223 | 75 | [
"time-series",
"outliers",
"signal-processing",
"wavelet"
] |
943 | 2 | null | 841 | 4 | null | If you can assume bivariate normality, then you can develop a likelihood-ratio test comparing the two possible covariance matrix structures. The unconstrained (H_a) maximum likelihood estimates are well known - just the sample covariance matrix; the constrained ones (H_0) can be derived by writing out the likelihood (and will probably be some sort of "pooled" estimate).
If you don't want to derive the formulas, you can use SAS or R to fit a repeated measures model with unstructured and compound symmetry covariance structures and compare the likelihoods.
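A rough sketch of the R route using nlme (the data frame `dat` and its columns y, time and id are hypothetical; time should be a factor):
```
library(nlme)
fit_cs <- gls(y ~ time, data = dat,
              correlation = corCompSymm(form = ~ 1 | id))    # compound symmetry
fit_un <- gls(y ~ time, data = dat,
              correlation = corSymm(form = ~ 1 | id),
              weights = varIdent(form = ~ 1 | time))          # unstructured
anova(fit_cs, fit_un)   # compare the likelihoods of the two covariance structures
```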
| null | CC BY-SA 2.5 | null | 2010-07-28T21:31:14.630 | 2010-07-28T21:31:14.630 | null | null | 279 | null |
944 | 1 | 947 | null | 1 | 619 | When I type a left paren or any quote in the R console, it automatically creates a matching one to the right of my cursor. I guess the idea is that I can just type the expression I want inside without having to worry about matching, but I find it annoying, and would rather just type it myself. How can I disable this feature?
I am using R 2.8.0 on OSX 10.5.8.
| How can I get R to stop autocompleting my quotes/parens? | CC BY-SA 2.5 | null | 2010-07-28T21:45:46.273 | 2010-07-29T02:11:34.563 | null | null | null | [
"r"
] |
945 | 2 | null | 944 | 3 | null | Well either use a different IDE -- this is entirely a feature of the OS X app -- or try to configure the feature in question.
As for IDEs / R environments, I'm rather happy with [ESS](http://ess.r-project.org) which works on every platform R works on.
| null | CC BY-SA 2.5 | null | 2010-07-28T21:49:13.417 | 2010-07-28T21:49:13.417 | null | null | 334 | null |
946 | 1 | 963 | null | 3 | 2741 | New to the site. I am just getting started with R, and want to replicate a feature that is available in SPSS.
Simply, I build a "Custom Table" in SPSS with a single categorical variable in the column and many continuous/scale variables in the rows (no interactions, just stacked on top of each other).
The table reports the means and valid N's for each column (summary statistics are in the rows), and I select the option to generate significance tests for column means (each column against the others) using alpha .05, adjusting for unequal variances.
Here is my question.
How can I replicate this in R? What is my best option to build this table and what tests are available that will get me to the same spot? Since I am getting used to R, I am still trying to navigate around what is available.
| Column Means Significance Tests in R | CC BY-SA 4.0 | null | 2010-07-28T21:50:29.093 | 2019-07-24T10:09:01.670 | 2019-07-24T10:09:01.670 | 11887 | 569 | [
"r",
"statistical-significance",
"spss",
"mean",
"multiple-comparisons"
] |
947 | 2 | null | 944 | 5 | null | On OSX, go to `R > Preferences > Editor` and deselect `Match braces/quotes`
| null | CC BY-SA 2.5 | null | 2010-07-28T21:51:15.083 | 2010-07-28T21:51:15.083 | null | null | 287 | null |
948 | 2 | null | 946 | 1 | null | ```
summary(df)
```
This will give you five-number summaries (plus the mean) and counts of `NA` for continuous variables, and counts for categorical variables.
As for the significance tests, you'll have to do that by hand with `t.test()` or `wilcox.test()`.
| null | CC BY-SA 2.5 | null | 2010-07-28T21:54:16.503 | 2010-07-28T21:54:16.503 | null | null | 287 | null |
949 | 1 | null | null | 37 | 30862 | Take $x \in \{0,1\}^d$ and $y \in \{0,1\}$ and suppose we model the task of predicting y given x using logistic regression. When can logistic regression coefficients be written in closed form?
One example is when we use a saturated model.
That is, define $P(y|x) \propto \exp(\sum_i w_i f_i(x_i))$, where $i$ indexes sets in the power-set of $\{x_1,\ldots,x_d\}$, and $f_i$ returns 1 if all variables in the $i$'th set are 1, and 0 otherwise. Then you can express each $w_i$ in this logistic regression model as a logarithm of a rational function of statistics of the data.
Are there other interesting examples when closed form exists?
| When is logistic regression solved in closed form? | CC BY-SA 3.0 | null | 2010-07-28T21:59:02.693 | 2022-10-06T16:40:30.970 | 2012-09-20T12:49:01.220 | 2970 | 511 | [
"logistic",
"generalized-linear-model"
] |
950 | 2 | null | 373 | 5 | null | One does not need to know about conditional probability or Bayes Theorem to figure out that it is best to switch your answer.
Suppose you initially pick Door 1. Then the probability of Door 1 being a winner is 1/3 and the probability of Doors 2 or 3 being a winner is 2/3. If Door 2 is shown to be a loser by the host's choice, then the probability that Door 2 or 3 is a winner is still 2/3. But since Door 2 is a loser, Door 3 must have a 2/3 probability of being a winner.
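A quick simulation sketch of this argument in R (my addition, not part of the original answer):
```
set.seed(1)
n <- 100000
prize <- sample(1:3, n, replace = TRUE)   # door hiding the car; you always pick Door 1
# The host then opens a losing door you didn't pick, so switching wins
# exactly when your first pick was wrong.
mean(prize != 1)                          # ~2/3: probability of winning if you switch
mean(prize == 1)                          # ~1/3: probability of winning if you stay
```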
| null | CC BY-SA 2.5 | null | 2010-07-28T23:28:44.003 | 2010-07-28T23:28:44.003 | null | null | 99 | null |
951 | 1 | 1000 | null | 7 | 1366 | What is the relationship between a Nonhomogeneous Poisson process and a process that has heavy tail distribution for its inter arrival times?
Any pointer to a resource that can shed some light on this question would be hugely appreciated
| Nonhomogeneous Poisson and Heavy tail inter arrival time distribution | CC BY-SA 2.5 | null | 2010-07-29T00:48:11.603 | 2020-04-25T21:54:28.887 | 2020-04-25T21:54:28.887 | 11887 | 172 | [
"distributions",
"poisson-distribution",
"heavy-tailed"
] |
952 | 1 | null | null | 4 | 584 |
### Context
I have a survey of 16 questions, each with four possible responses. The purpose of the survey is to measure the respondent's propensity towards four categories (which we will denote A, B, C, D). Each of the four responses per question is representative of an aspect of one of the four categories A, B, C, D.
The respondent rank orders each of the four responses (we will denote the first response by "4", the second by "3", etc).
To score the categories, we add the responses up based on the coding above. There are 16 x (4 + 3 + 2 + 1) = 160 total points. The sums for each category are computed, and the maximum score is deemed the respondent's dominant category.
Therefore each survey looks like the following (in CSV format)
```
question_num, A, B, C, D
1, 4, 3, 1, 2
2, 3, 4, 1, 2
3, 3, 4, 2, 1
4, 4, 3, 1, 2
5, 4, 3, 1, 2
6, 4, 3, 2, 1
7, 4, 3, 1, 2
.
.
.
16, 3, 4, 1, 2
sums, 64, 48, 24, 24
```
I have about 325 surveys completed.
### Aim
I want to remove possible redundant items in the survey so I can reduce the burden on future respondents.
### Questions
- My first strategy was to do a multi-logistic regression with the response as the dominant category (described above). Is this a good idea?
- Would PCA be helpful?
- Are there any other strategies for identifying redundant items?
| How to reduce number of items on a multi-item scale where each item requires ranking four response options | CC BY-SA 3.0 | null | 2010-07-29T01:45:55.080 | 2018-10-01T06:01:52.400 | 2011-06-08T15:35:27.680 | 183 | 513 | [
"logistic",
"scales",
"survey",
"ranking"
] |
953 | 2 | null | 913 | 1 | null | I agree with the suggestions about running a regression possibly with log(y) as the outcome variable or some other suitable transformation. I just wanted to add one comment: if you are reporting the bivariate association, you might prefer:
(a) to correlate log(x) and log(y),
(b) Spearman's rho, which correlates the ranks of the two variables.
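A tiny illustration of the two options in R, with simulated positive, skewed data:
```
set.seed(42)
x <- rlnorm(100)
y <- exp(0.5 * log(x) + rnorm(100, sd = 0.3))
cor(log(x), log(y))                 # (a) Pearson correlation of the logs
cor(x, y, method = "spearman")      # (b) Spearman's rho
```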
| null | CC BY-SA 2.5 | null | 2010-07-29T02:00:32.547 | 2010-07-29T02:00:32.547 | null | null | 183 | null |
954 | 2 | null | 944 | 4 | null | To follow on from Dirk's comment, if you don't like your current IDE, check out some of the existing discussion on R IDEs:
[https://stackoverflow.com/questions/1439059/best-ide-texteditor-for-r](https://stackoverflow.com/questions/1439059/best-ide-texteditor-for-r)
| null | CC BY-SA 2.5 | null | 2010-07-29T02:11:34.563 | 2010-07-29T02:11:34.563 | 2017-05-23T12:39:27.620 | -1 | 183 | null |
955 | 1 | null | null | 1 | 1847 | Just wondering, is there any data analysis / statistics / data mining work available on a freelance basis?
This could be subjective and argumentative, which is why I put it as CW.
| Data Analysis Work-- Is there Any Freelance Opportunity? | CC BY-SA 2.5 | null | 2010-07-29T03:25:03.290 | 2010-09-16T07:08:24.013 | 2010-09-16T07:08:24.013 | null | 175 | [
"careers"
] |
956 | 2 | null | 870 | 19 | null | As Robin said, you've got the Benjamini-Hochberg method backwards. With that method, you set a value for Q (upper case Q; the maximum desired FDR) and it then sorts your comparisons into two piles. The goal is that no more than Q% of the comparisons in the "discovery" pile are false, and thus at least 100%-Q% are true.
If you computed a new value for each comparison, which is the value of Q at which that comparison would just barely be considered a discovery, then those new values are q-values (lower case q; see the link to a paper by John Storey in the original question).
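In R, `p.adjust()` implements the Benjamini-Hochberg procedure, and its output can be read exactly this way: each adjusted value is the smallest Q at which that comparison would land in the "discovery" pile (the p-values below are made up):
```
p <- c(0.0001, 0.004, 0.019, 0.095, 0.201, 0.41, 0.69)
p.adjust(p, method = "BH")                 # BH-adjusted p-values
p[p.adjust(p, method = "BH") <= 0.05]      # the "discovery" pile at Q = 5%
```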
| null | CC BY-SA 3.0 | null | 2010-07-29T04:06:29.247 | 2013-03-07T01:26:38.717 | 2013-03-07T01:26:38.717 | 25 | 25 | null |
957 | 2 | null | 887 | 1 | null | I think you need to nail down the question you are asking, before you can compute an answer. I think this question is way too vague to answer: "test whether it is an vis-a-vis the general population".
The only question I think you can answer is this one: If the new value came from the same population as the others, what is the chance that it will be so far (or further) from the sample mean? That is the question that your equation will begin to answer, although it is not quite right. Here is a corrected equation that includes n.
t = (x - mean(S))/(stdev(S)/sqrt(n))
Compute the corresponding P value (with n-1 degrees of freedom) and you've answered the question.
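A small sketch of that computation in R (S and x are placeholder values):
```
S <- c(12.1, 11.8, 12.6, 12.0, 11.5, 12.3)   # existing sample
x <- 13.4                                    # new value
n <- length(S)
t <- (x - mean(S)) / (sd(S) / sqrt(n))
2 * pt(-abs(t), df = n - 1)                  # two-sided P value, n - 1 degrees of freedom
```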
| null | CC BY-SA 2.5 | null | 2010-07-29T04:15:44.310 | 2010-07-29T04:15:44.310 | null | null | 25 | null |
958 | 2 | null | 955 | 1 | null | The short answer is "yes". Such work does exist.
For example, there is sometimes consulting work in and around universities.
Also, some companies wish to outsource data analysis and statistical activities.
In general, I found that word of mouth was a powerful tool. Once you build up a good reputation in a given community, additional requests for work will follow.
see: [http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=statistics+jobs](http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=statistics+jobs)
| null | CC BY-SA 2.5 | null | 2010-07-29T05:55:55.163 | 2010-07-29T05:55:55.163 | null | null | 183 | null |
959 | 2 | null | 952 | 2 | null | I've never had to perform such analyses, but there is an academic literature on the factor analysis of ipsative tests that would be relevant:
e.g.,
- Brown, A. (2016). Item response models for forced-choice questionnaires: A common framework. Psychometrika, 81(1), 135-160.
- Jackson, D. J., & Alwin, D. F. (1980). The factor analysis of ipsative measures. Sociological Methods & Research, 9(2), 218-238.
http://deepblue.lib.umich.edu/bitstream/2027.42/68736/2/10.1177_004912418000900206.pdf
- Dunlap, W. P., & Cornwell, J. M. (1994). Factor analysis of ipsative measures. Multivariate Behavioral Research, 29(1), 115-126. http://www.informaworld.com/smpp/content~db=all~content=a785042624
| null | CC BY-SA 4.0 | null | 2010-07-29T06:01:01.627 | 2018-10-01T06:01:52.400 | 2018-10-01T06:01:52.400 | 183 | 183 | null |
960 | 2 | null | 952 | 2 | null | So restructure your data merging all user responses, so in such form:
```
Q1 Q2 Q3 ...
user1 rank for option1 for Q1, user1 rank for option1 for Q2, ...
user1 rank for option2, ...
...
user2 rank for option1, ...
...
user325 rank for option4, ...
```
And then cluster the questions. I recommend agglomerative clustering; there it is easy to see which questions can be removed.
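A rough sketch of that step in R, assuming `responses` is the reshaped matrix described above (filled here with placeholder data):
```
set.seed(1)
responses <- matrix(sample(1:4, 325 * 4 * 16, replace = TRUE), ncol = 16,
                    dimnames = list(NULL, paste0("Q", 1:16)))   # stands in for the real data
d  <- as.dist(1 - cor(responses))        # questions that correlate highly are "close"
hc <- hclust(d, method = "average")      # agglomerative clustering
plot(hc)                                 # questions that merge early are candidates for removal
```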
| null | CC BY-SA 2.5 | null | 2010-07-29T07:05:38.310 | 2010-07-29T07:05:38.310 | null | null | null | null |
961 | 1 | 971 | null | 12 | 6130 | I have a dataset made up of elements from three groups, let's call them G1, G2, and G3.
I analysed certain characteristics of these elements and divided them into 3 types of "behaviour" T1, T2, and T3 (I used cluster analysis to do that).
So, now I have a 3 x 3 contingency table like this with the counts of elements in the three groups divided by type:
```
| T1 | T2 | T3 |
------+---------+---------+---------+---
G1 | 18 | 15 | 65 |
------+---------+---------+---------+---
G2 | 20 | 10 | 70 |
------+---------+---------+---------+---
G3 | 15 | 55 | 30 |
```
Now, I can run a Fisher test on these data in R
```
data <- matrix(c(18, 20, 15, 15, 10, 55, 65, 70, 30), nrow=3)
fisher.test(data)
```
and I get
```
Fisher's Exact Test for Count Data
data: data
p-value = 9.028e-13
alternative hypothesis: two.sided
```
So my questions are:
- is it correct to use Fisher test this way?
- how do I know who is different from who? Is there a post-hoc test I can use? Looking at the data I would say the 3rd group has a different behaviour from the first two, how do I show that statistically?
- someone pointed me to logit models: are they a viable option for this type of analysis?
- any other option to analyse this type of data?
Thank you a lot
nico
| Statistical test for n x m contingency tables | CC BY-SA 2.5 | null | 2010-07-29T07:07:36.513 | 2018-03-27T11:46:40.597 | 2010-09-16T06:54:07.353 | null | 582 | [
"contingency-tables"
] |
962 | 2 | null | 942 | 19 | null | Wavelets are useful for detecting singularities in a signal (see for example the paper [here](http://www.math.u-bordeaux1.fr/~jbigot/Site/Publications_files/Spectrometry.pdf), Figure 3 of which gives an illustration, and the references mentioned in that paper). I guess singularities can sometimes be an anomaly?
The idea here is that the continuous wavelet transform (CWT) has maxima lines that propagate across frequencies, i.e. the longer the line, the stronger the singularity. See Figure 3 in the paper to see what I mean! Note that there is free Matlab code related to that paper; it should be [here](https://sites.google.com/site/webpagejbigot/software).
---
Additionally, I can give you some heuristics detailing why the DISCRETE wavelet transform (DWT) (the preceding example is about the continuous one) is interesting for a statistician (excuse the non-exhaustive list):
- There is a wide class of (realistic (Besov space)) signals that are transformed into a sparse sequence by the wavelet transform. (compression property)
- A wide class of (quasi-stationary) processes is transformed into a sequence with almost uncorrelated features. (decorrelation property)
- Wavelet coefficients contain information that is localized in time and in frequency (at different scales). (multi-scale property)
- Wavelet coefficients of a signal concentrate on its singularities (see the small numerical sketch below).
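As a tiny numerical sketch of that last property (my addition; dedicated wavelet packages exist, but a hand-rolled one-level Haar transform is enough to show the idea):
```
set.seed(7)
x <- sin(seq(0, 4 * pi, length.out = 256)) + rnorm(256, sd = 0.05)
x[130] <- x[130] + 3                            # injected anomaly
xm     <- matrix(x, nrow = 2)                   # consecutive pairs of observations
detail <- (xm[1, ] - xm[2, ]) / sqrt(2)         # level-1 Haar detail coefficients
which.max(abs(detail)) * 2                      # points at the neighbourhood of the spike
```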
| null | CC BY-SA 3.0 | null | 2010-07-29T07:10:39.967 | 2015-05-29T10:54:19.570 | 2015-05-29T10:54:19.570 | -1 | 223 | null |
963 | 2 | null | 946 | 3 | null | As I read in the help for t.test, it only compares two samples at a time. If you want to apply it to every combination of the columns of a matrix A, taken 2 at a time, you could do something like this (for the moment, I can't recall a better way)
```
apply(combn(1:dim(A)[2],2),2,function(x) t.test(A[,x[1]],A[,x[2]]))
```
or, if you just want the p.values
```
t(apply(combn(1:dim(A)[2],2),2,function(x) c(x[1],x[2],(t.test(A[,x[1]],A[,x[2]]))$p.value)))
```
| null | CC BY-SA 2.5 | null | 2010-07-29T08:15:59.907 | 2010-07-29T08:22:43.180 | 2010-07-29T08:22:43.180 | 339 | 339 | null |
964 | 1 | 1011 | null | 4 | 453 | So in R, for instance, this would be:
```
my_ts_logged_diffed = diff(log(some_ts_object))
plot(my_ts_logged_diffed)
```
This seems to be part of every experienced analyst's/forecaster's analytical workflow--in particular, a visual examination of the plotted data. What are they looking for--i.e., what useful information does this transformation help reveal?
Similarly, I have a pretty good selection of time series textbooks, tutorials, and the like; nearly all of them mention this analytical step, but none of them say why it's done (i am sure there's a good reason, and one that's apparently too obvious to even mention).
(I do indeed routinely rely on this transformation, but only for the limited purpose of testing for a normal distribution (I think the test is called Shapiro-Wilk). The application of the test just involves (assuming I am applying it correctly) comparing a couple of parameters (a 'W' statistic and the p-value) against a baseline--the test doesn't appear to require plotting the data.)
| What are analysts looking for when they plot a differenced, logged time series? | CC BY-SA 2.5 | null | 2010-07-29T08:28:36.520 | 2010-11-02T01:47:17.577 | 2010-07-31T00:48:08.403 | 159 | 438 | [
"time-series",
"data-transformation"
] |
965 | 2 | null | 964 | 4 | null | Most growth/decay processes will at most change the moving quantity at an exponential rate. The differences of the logs of the quantity relate to the local slope λ, so for an underlying exponential growth or decay process the differenced log series would be flat in t. Any deviation from flat gives you hints as to whether and where there are switchovers between different purely exponential pieces; also, for chaotic behavior you would expect more than linear growth or decay, i.e. a "local λ" curve that is not flat over small t-ranges.
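A minimal sketch of this point in R (made-up series): for a purely exponential segment the differenced logs are flat, and a change in the growth rate shows up as a step.
```
lambda <- c(rep(0.03, 50), rep(0.08, 50))   # local growth rate, switching at t = 50
y <- exp(cumsum(lambda))                    # piecewise-exponential series
plot(diff(log(y)), type = "l")              # flat at 0.03, then a step up to 0.08
```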
| null | CC BY-SA 2.5 | null | 2010-07-29T08:46:34.730 | 2010-07-29T08:46:34.730 | null | null | 56 | null |