Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7126
|
2
| null |
7103
|
1
| null |
Your distribution is parametric, and you should just store the parameters that are sufficient statistics, if you can identify them. That includes the distribution family. For a time series, you can take advantage of autocorrelation and store the parameters of the predictive distribution conditional on its previous values.
The entropy of the prior (predictive) distribution of the parameters determines the upper bound for compression strength, but you may not need to compress them further. If you do, use arithmetic compression. Decreasing entropy, say by discretizing quantiles, will give greater compression.
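For illustration, a minimal R sketch along these lines (the series `x`, the AR(1) choice and the object names are hypothetical, not from the question):
```
# Hypothetical example: store the family plus the fitted conditional (AR)
# parameters of the predictive distribution, instead of the raw series
x <- arima.sim(model = list(ar = 0.7), n = 1000)  # stand-in for the real data
fit <- ar(x, aic = FALSE, order.max = 1)
stored <- list(family = "Gaussian AR(1)",
               mean   = fit$x.mean,
               ar     = fit$ar,
               sigma2 = fit$var.pred)
```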
| null |
CC BY-SA 2.5
| null |
2011-02-12T11:19:00.807
|
2011-02-12T11:26:39.763
|
2011-02-12T11:26:39.763
|
2456
|
2456
| null |
7127
|
2
| null |
7111
|
5
| null |
To add to Gilead's answer, there are computationally less expensive algorithms for truncated PCAs. NIPALS is indeed very popular, but I have had a lot of success with approximate methods that perform a succession of fits on partial data (what is often called PCA by random projection). This was discussed in a [metaoptimize](http://metaoptimize.com/qa/questions/3258/randomized-methods-for-computing-svd#3506) thread.
As you mention Python, let me point out that the algorithm is implemented in [scikit-learn](https://scikit-learn.org/): the [PCA](https://scikit-learn.org/stable/modules/decomposition.html#pca-using-randomized-svd) class. In particular, it is used in an example demonstrating [eigenfaces](https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html).
| null |
CC BY-SA 4.0
| null |
2011-02-12T12:46:44.533
|
2020-02-05T16:42:23.473
|
2020-02-05T16:42:23.473
|
83252
|
1265
| null |
7128
|
1
| null | null |
5
|
610
|
### Study:
I simulated some surface materials at audio and haptic level, and I asked subjects to evaluate on a 9-point Likert scale the degree of coherence between the two stimuli.
For example there are stimuli with metal at auditory level and snow at haptic level, or wood both at auditory and haptic level.
So it is like a dissimilarity rating experiment.
The experiment had only 12 participants, so for each stimulus I have 12 responses (no repeated measures involved).
### Question:
How should I analyse the following experiment?
### Initial Thoughts:
So far the only way I am thinking to use is ANOVA.
### Update
Hello again,
I used the cmdscale command in R, but I did not get the perceptual map I would love to see.
I ask for your help! Maybe the problem is that I misunderstood whether my table is well built for the purpose of the analysis.
Let's summarize my experiment, my goals and the problem:
I simulated some surface materials at audio and haptic level, and I asked
subjects to evaluate on a 9-point Likert scale the degree of coherence between
the two stimuli. For example there are trials where at auditory level a metal
surface was simulated and at haptic level the snow surface, or wood both at
auditory and haptic level.
The experiment had only 12 participants, so for each trial I have 12 responses
(no repeated measures involved).
In total there were 36 trials, and each trial was evaluated only once by each
of the 12 participants. So each subject provided 36 ratings on the 9 point Likert
scale, one for each trial.
Basically, these are my data:
```
     WD   MT   SW   GR   SN   DL
WD 7.00 6.50 4.91 4.83 5.50 5.00
MT 7.33 6.91 2.08 3.16 4.25 3.25
SW 2.91 1.75 7.91 6.25 6.83 5.41
GR 2.91 2.66 6.25 6.41 7.25 6.75
SN 4.00 4.00 5.58 6.00 7.00 6.58
DL 3.91 3.08 5.16 6.25 6.50 6.83
```
On the rows the haptic stimuli and on the columns the auditory stimuli. In each
cell there is the average score for each trial (e.g. the trial GR-MT is 2.66, that
is the average score given by participants to the trial where the material "gravel"
was provided at haptic level and the material "metal" was provided at auditory level).
Now I want to analyze the data in the correct way, and, as said, MDS is a better
analysis than the ANOVA I was thinking of.
My first goal is to print a perceptual map where to place the pairs of
audio-haptic stimuli (e.g. WD-WD, MT-DL, etc.) and see how far the trials are
from each other.
I used cmdscale in R but I did not get the desired result. Any suggestions?
My second goal would be to find some p-values like I normally get with ANOVA.
For example I would like to understand if having the coherent couple of stimuli
SW-SW (which means "snow" both at audio and haptic level) produces significant
differences in the evaluations rather than the couple SW-MT (which means "snow"
at audio and "metal" at haptic level).
Again I would like to understand if there is any statistical difference between
all the couples of stimuli corresponding to solid surfaces (like the couples MT-MT,
MT-WD, WD-WD) and all the couples where a solid surface and an aggregate surface
are presented (like the couples MT-SN, or WD-GR, etc.).
...I want to get as much information as possible from that table.
I really thank anyone who can provide any suggestion or useful information.
|
Dissimilarity rating experiment: how to analyze the results
|
CC BY-SA 2.5
| null |
2011-02-12T15:22:27.697
|
2011-02-16T01:26:19.933
|
2011-02-15T01:10:26.023
|
183
|
4701
|
[
"rating"
] |
7129
|
1
|
7130
| null |
5
|
45499
|
When presenting data using a percentage, is it a good thing to have decimal places, say 2 decimal places instead of rounding off to whole numbers?
For example, instead of 23.43%, you round off to 23%.
I am looking at this from the perspective of whether the 2-decimal-place accuracy will make much difference, since we are dealing with percentages and not raw data values.
|
Should percentages be reported with decimal places?
|
CC BY-SA 3.0
| null |
2011-02-12T15:39:25.123
|
2023-05-17T01:52:02.650
|
2018-03-30T19:02:30.897
|
22228
|
546
|
[
"reporting",
"percentage"
] |
7130
|
2
| null |
7129
|
8
| null |
It depends on the size of the differences between classes. In most applications, saying that 73% prefer option A and 27% prefer option B is perfectly acceptable. But if you're dealing with an election where candidate X has 50.15% of votes and candidate Y has 49.86%, the decimal places are very much necessary.
Of course, you need to take care to make sure that all classes add up to 100%. In my electoral example above, they add up to 100.01%. In that case you might even consider adding a third decimal place.
| null |
CC BY-SA 2.5
| null |
2011-02-12T16:12:18.203
|
2011-02-12T16:12:18.203
| null | null |
666
| null |
7131
|
2
| null |
7054
|
1
| null |
In light of the clarifications in the comments to the question, I'd suggest using logistic regression with indicator variables for each region and a linear trend over year, and also consider adding an interaction between year and region to allow the linear trend to differ between regions.
If you want something simpler, you could apply Pearson's chi-squared test to the data for each year separately. It looks like there is very strong evidence that region 1 has higher death rates than either of the other two regions in all years except 2008, when region 3 had too few cases to be able to estimate the proportion of deaths with any precision.
Statistics can say nothing about the reason that region 1 differs from the others, of course.
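A minimal R sketch of that suggestion (the data frame `dat` and its columns `deaths`, `cases`, `region` and `year` are assumed, not given in the question):
```
# deaths out of cases, region as a factor, year as a linear trend
fit <- glm(cbind(deaths, cases - deaths) ~ region * year,
           family = binomial, data = dat)
summary(fit)  # the region:year terms indicate whether the trend differs by region
```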
| null |
CC BY-SA 2.5
| null |
2011-02-12T16:39:30.410
|
2011-02-12T16:39:30.410
| null | null |
449
| null |
7132
|
1
|
7135
| null |
6
|
2654
|
When conducting a t-test we can always use a Welch approximation of the df, and let go of the assumption of equal variances.
Is there something similar for (even one-way) ANOVA? (Any R implementation of that?)
|
Is there an anova procedure that doesn't assume equal variance?
|
CC BY-SA 2.5
| null |
2011-02-12T16:46:03.520
|
2011-02-13T10:51:01.757
| null | null |
253
|
[
"r",
"anova",
"variance",
"degrees-of-freedom"
] |
7133
|
2
| null |
7128
|
1
| null |
ANOVA sounds reasonable to me — two-way ANOVA, to be specific, treating subjects as the blocking factor.
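For example, a minimal R sketch (assuming a long-format data frame `dat` with columns `rating`, `pair` and `subject`, which is not given in the question):
```
# randomized-block ANOVA: stimulus pair as the treatment, subject as the block
fit <- aov(rating ~ pair + subject, data = dat)
summary(fit)
```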
| null |
CC BY-SA 2.5
| null |
2011-02-12T16:46:10.710
|
2011-02-12T16:46:10.710
| null | null |
449
| null |
7134
|
1
|
16943
| null |
33
|
1003
|
Sometimes in reports I include a disclaimer about the p-values and other inferential statistics I've provided. I say that since the sample wasn't random, such statistics would not strictly apply. My specific wording is usually given in a footnote:
>
"While, strictly speaking, inferential
statistics are only applicable in the
context of random sampling, we follow
convention in reporting significance
levels and/or confidence intervals as
convenient yardsticks even for
nonrandom samples. See Michael Oakes's
Statistical inference: A commentary for the social and behavioural
sciences (NY: Wiley, 1986).
On a couple of occasions--once for a peer-reviewed paper, once or twice in a non-academic setting--the editor or reviewer objected to this disclaimer, calling it confusing, and felt that the inferential findings should simply stand as written (and be given the mantle of authority). Has anyone else encountered this problem and found a good solution? On the one hand, people's understanding of p-values is generally dismal, even in the context of random sampling, so perhaps it doesn't matter much what we say. On the other, to contribute further to misunderstandings seems to make one part of the problem. I should add that I frequently deal with survey studies, where random assignment does not apply and where Monte Carlo simulations would often fail to address the issue of representativeness.
|
Accommodating entrenched views of p-values
|
CC BY-SA 3.0
| null |
2011-02-12T17:12:53.863
|
2014-02-28T04:51:36.680
|
2011-10-13T03:32:05.563
|
919
|
2669
|
[
"inference",
"p-value"
] |
7135
|
2
| null |
7132
|
4
| null |
The latest version of [ez](http://cran.r-project.org/web/packages/ez/index.html) lets you pass a `white.adjust` argument to `car::Anova()`, which implements a correction for heteroscedasticity. See `?car::Anova()` for details.
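A minimal sketch of the direct `car` route (the model formula and the data frame `dat` are hypothetical):
```
library(car)
mod <- lm(y ~ group, data = dat)
Anova(mod, white.adjust = TRUE)  # heteroscedasticity-corrected (White-adjusted) F test
```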
| null |
CC BY-SA 2.5
| null |
2011-02-12T17:26:21.177
|
2011-02-12T17:26:21.177
| null | null |
364
| null |
7136
|
2
| null |
7134
|
13
| null |
The use of inferential statistics can be justified not only based on a population model, but also based on a randomization model. The latter does not make any assumptions about the way the sample has been obtained. In fact, Fisher was the one who suggested that the randomization model should be the basis for statistical inference (as opposed to Neyman and Pearson). See, for example:
Ernst, M. D. (2004). Permutation methods: A basis for exact inference. Statistical Science, 19, 676-685. [[link (open access)]](http://projecteuclid.org/euclid.ss/1113832732)
Ludbrook, J. & Dudley, H. (1998). Why permutation tests are superior to t and F tests in biomedical research. American Statistician, 52, 127-132. [[link (if you have JSTOR access)]](http://www.jstor.org/stable/2685470)
I somehow doubt though that the editors or reviewers in question were using this as the reason for calling your disclaimer "confusing".
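As an aside, a minimal R sketch of a randomization test for a two-group mean difference (the vectors `y` and `group` are hypothetical):
```
obs  <- diff(tapply(y, group, mean))                           # observed difference in means
perm <- replicate(9999, diff(tapply(y, sample(group), mean)))  # shuffle the group labels
mean(abs(c(perm, obs)) >= abs(obs))                            # two-sided permutation p-value
```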
| null |
CC BY-SA 2.5
| null |
2011-02-12T17:29:41.060
|
2011-02-12T17:29:41.060
| null | null |
1934
| null |
7138
|
2
| null |
7132
|
8
| null |
There is a function named `oneway.test()` in the base `stats` package, which implements the Welch correction for a one-way ANOVA. Its use is similar to the standard `t.test()` function. Another option is the O'Brien transformation (Biometrics 40 (1984), 1079--1087), which may be applied with two or more independent samples.
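For example (assuming a data frame `dat` with a numeric response `y` and a grouping factor `g`):
```
oneway.test(y ~ g, data = dat)                    # Welch correction (the default)
oneway.test(y ~ g, data = dat, var.equal = TRUE)  # classical one-way ANOVA
```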
| null |
CC BY-SA 2.5
| null |
2011-02-12T17:54:14.780
|
2011-02-13T10:51:01.757
|
2011-02-13T10:51:01.757
|
930
|
930
| null |
7139
|
1
| null | null |
4
|
1708
|
I came across this study as part of a mock exam paper and was confused to say the least.
### Context:
The study investigates cognitive and behavioural factors related to the experience of anxiety in MRI scanners.
Participants completed the following questionnaires 5 mins after the scan:
- a measure of the frequency of anxiety experienced in the scanner
- a measure of the frequency of physical symptoms of panic
- a measure of the frequency of coping strategy employment
- a measure of the frequency of claustrophobic-related thoughts
Participants were also asked to fill in and submit general measures at home after the scan:
- level of claustrophobic fears regarding restriction and suffocation
- depression
- anxiety
- health anxiety
The return rate was 75%, i.e. only 97 of the original 130 participants completed the second batch of questionnaires.
Overall, the level of anxiety in the scanner was RELATED TO:
- level of claustrophobic fears regarding restriction and suffocation
- frequency of coping strategy to manage scanner anxiety
- no. symptoms of panic during the scan
- no. claustrophobia-related thoughts about the scan.
No stats are given.
### Questions:
- What type of design is this?
- Is it possible to understand the causes of anxiety experienced in MRI using this type of design?
### Initial Thoughts:
In my mind, causality cannot be inferred because there is no comparison with a control group, and correlation would probably have been used rather than ANOVA. Is this the answer to the questions above or have I missed the point?
|
Can causality be inferred in a study with an experience followed by two sets of measures
|
CC BY-SA 2.5
| null |
2011-02-12T18:47:04.910
|
2011-02-14T08:32:27.437
|
2011-02-13T10:46:39.673
|
930
| null |
[
"causality"
] |
7140
|
2
| null |
7139
|
1
| null |
I think you're on the right track. Before drawing any conclusions about causality, I'd want to know...
...to what degree the 97 were representative of the 130, and the 130 of the population of interest.
...the magnitude of any relationships found as well as their statistical significance. (Whether correlation or anova was used is not, I think, directly relevant to your question.)
...the psychometric properties (validity and reliability indicators) of all measures used (perhaps one would want to adjust for reliability attenuation, for instance).
...what the subjects' characteristics were at baseline, i.e., before undergoing the scanner experience.
...as you've said, how a control group would compare.
The bottom line is, this is a correlational/observational study and, certainly as reported, says little about causal relationships.
| null |
CC BY-SA 2.5
| null |
2011-02-12T23:40:45.743
|
2011-02-12T23:40:45.743
| null | null |
2669
| null |
7141
|
1
| null | null |
7
|
224
|
Suppose I draw $n$ observations $X_1,X_2,\ldots,X_n$ independently from a distribution where $X_i \sim \mathcal{N}(\mu_i,\sigma^2)$, where the mean is assumed to be Lipschitz: $\left| \mu_i - \mu_{i+1}\right| \le \gamma,$ with $\gamma$ known. I want to test the null hypothesis:
$$H_0: \mu_n = 0$$
against the local alternative $H_1: \mu_n > 0$.
A few questions:
- Is this a known problem? If so, for what terms should I google search?
- What is a reasonable way to perform this test? I can imagine there might be some way to do it by downweighting less recent observations--maybe throw the innovations into the noise term, then use a weighted t-test? The problem with that approach is that the change in mean may be deterministic, so I should pose the problem as a minimax problem maybe. Perhaps another approach would be to ignore the non-stationarity and use a regular t-test on some subset of the observations $X_j,X_{j+1},\ldots,X_n$, with $n-j$ chosen by some function of $\gamma, \sigma$ (probably their ratio?), and accept that the test will not maintain the nominal Type I rate. What are some better ideas?
Edit: per the excellent comments so far, I believe this question can be reformulated as follows: Let $V$ be the convex set in $\mathbb{R}^n$ defined by:
$$
V = \left\{\mathbf{x} | x_n = 0, x_{i-1} \le x_i + \gamma, x_{i-1} \ge x_i - \gamma, i = 2,\ldots,n \right\}
$$
Observing vector $X = f + \sigma\epsilon$, where the elements of $\epsilon$ are drawn independently from a standard Gaussian, test the null hypothesis:
$$H_0: f \in V$$
In this sense, it is similar to the paper of Baraud et al. mentioned by @Robin Girard, except that the Baraud paper provides a method for the case where $V$ is a linear subspace of $\mathbb{R}^n$, not a convex polytope.
Under this formulation, it seems that, under the null, the squared distance from $X$ to $f$ should be distributed as a Chi-squared, up to the scaling factor of $\sigma^2$. Then the distance from $X$ to $V$ should be no greater than that. The proposed test would then be to find the projection of $X$ onto $V$, and test the distance from $X$ to that projection based on the Chi-squared bound. I imagine this test will have fairly low power. Perhaps truncating the observation to a final subset of the $X_i$ would have higher power...
|
Location test under a bounded non-stationarity?
|
CC BY-SA 2.5
| null |
2011-02-13T03:57:04.277
|
2011-09-11T08:49:49.133
|
2011-02-15T03:25:37.087
|
795
|
795
|
[
"time-series",
"hypothesis-testing",
"stationarity"
] |
7142
|
2
| null |
7139
|
1
| null |
### Design:
The study is an observational design.
Some may call it a correlational design, but I'm not a fan of such terminology because it can encourage the false assumption that the ability to draw a causal inference is related to the statistical test used.
Following up on this point, ANOVA does not permit causal inference any more than correlation.
The issue is whether a variable was experimentally manipulated.
It's just that most of the time the variables manipulated are categorical and not numeric, so we end up using ANOVAs to analyse their effect.
As a general rule, validating a causal proposition is easier when you experimentally manipulate the variable of interest.
However, I would argue that correlations drawn from an observational design are still informative, particularly if they are combined with solid theoretical knowledge about the phenomena.
### Causal Inference:
Many of the things related to anxiety experienced in the scanner are themselves related to the concept of anxiety, for example panic, claustrophobia-related thoughts and so on. Thus, showing that they are correlated does not, to my mind, suggest causality. Rather, I would say that they are a syndrome of co-occurring mental states that presumably have some common cause.
I suppose this would be an example of me using prior knowledge of the phenomena and combining this with the correlational evidence.
The study is also limited to examining what causes individual differences in MRI anxiety.
It does not examine how MRI implementation could be altered to make it more or less anxiety evoking.
However, I suppose you could use the reported levels of anxiety in the scanner as evidence that the scanner causes anxiety.
Presumably we have an implicit control group in our minds that says most of the time people are not anxious about being in a scanner (particularly if they are not actually in a scanner).
I don't think the return rate is a major issue, although it might be worth considering whether this is biasing your results.
Some of the other variables are presumably getting at the question of whether anxiety experienced in a scanner is related to (or even caused by) a general disposition of the person.
The fact that the general measures of anxiety, depression, and so on are measured after the MRI study is potentially a confound, although a researcher would probably argue that such measures are relatively stable over time.
| null |
CC BY-SA 2.5
| null |
2011-02-13T05:34:38.733
|
2011-02-13T05:34:38.733
| null | null |
183
| null |
7144
|
2
| null |
7111
|
14
| null |
What you're doing right now is close, but you need to make sure you multiply the eigenvectors of `(data . data.T) / lines` on the left by `data.T`, in order to get the eigenvectors of `(data.T . data) / lines`. This is sometimes called the "transpose trick".
Here are some more details. Suppose you have a matrix $A$ that you want to perform PCA on; for simplicity, suppose that the columns of $A$ have already been normalized to have zero mean, so that we just need to compute the eigenvectors of the covariance matrix $A^T A$.
Now if $A$ is an $m \times n$ matrix, with $n \gg m$, then $A^T A$ is a very large $n \times n$ matrix. So instead of computing the eigenvectors of $A^T A$, we might like to compute the eigenvectors of the much smaller $m \times m$ matrix $A A^T$ -- assuming we can figure out a relationship between the two. So how are the eigenvectors of $A^T A$ related to the eigenvectors of $A A^T$?
Let $v$ be an eigenvector of $A A^T$ with eigenvalue $\lambda$. Then
- $AA^T v = \lambda v$
- $A^T(A A^T v) = A^T(\lambda v)$
- $(A^T A)(A^T v) = \lambda (A^T v)$
In other words, if $v$ is an eigenvector of $A A^T$, then $A^T v$ is an eigenvector of $A^T A$, with the same eigenvalue. So when performing a PCA on $A$, instead of directly finding the eigenvectors of $A^T A$ (which may be very expensive), it's easier to find the eigenvectors $v$ of $AA^T$ and then multiply these on the left by $A^T$ to get the eigenvectors $A^T v$ of $A^T A$.
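A minimal R sketch of the trick (the matrix sizes are arbitrary; `A` stands in for your data matrix with rows as observations):
```
set.seed(1)
A <- matrix(rnorm(20 * 1000), nrow = 20)      # m = 20 observations, n = 1000 variables
A <- scale(A, center = TRUE, scale = FALSE)   # remove column means
small <- eigen(A %*% t(A))                    # cheap m x m eigenproblem
V <- t(A) %*% small$vectors                   # eigenvectors of t(A) %*% A
V <- sweep(V, 2, sqrt(colSums(V^2)), "/")     # unit length (discard columns with near-zero eigenvalues)
```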
| null |
CC BY-SA 3.0
| null |
2011-02-13T07:21:25.333
|
2011-06-07T21:15:04.847
|
2011-06-07T21:15:04.847
|
1106
|
1106
| null |
7146
|
1
|
7148
| null |
13
|
2784
|
Hi, I am analysing wind data for estimating the energy from a wind turbine.
I have taken 10 years of wind data and graphed a histogram;
my second stage was to fit a Weibull distribution to the data.
I used R with the package `lmom` to compute the Weibull shape and scale.
This is the code I used:
```
library(lmom)
wind.moments<-samlmu(as.numeric(pp$WS))
moments<-pelwei(wind.moments)
x.wei<-rweibull(n=length(pp$WS), shape=moments["delta"], scale=moments["beta"])
hist(as.numeric(pp$WS), freq=FALSE)
lines(density(x.wei), col="red", lwd=4)
```
It seems like there is some lag between the data and the density function; can you help me with this?
Another question: can you help me with calculating the annual energy from the density function?
[figure: histogram of the wind speed data with the fitted density curve overlaid]
Thank you
|
Analysing wind data with R
|
CC BY-SA 2.5
| null |
2011-02-13T07:56:23.410
|
2011-04-04T15:12:40.560
|
2011-02-13T08:47:00.057
|
3178
|
3178
|
[
"r",
"distributions"
] |
7147
|
2
| null |
7110
|
11
| null |
I will add that in the time-series context it is usually assumed that the observed data are a realisation of a stochastic process. Hence in time series a lot of attention is given to properties of stochastic processes, such as stationarity, ergodicity, etc. In the longitudinal context, in my understanding, the data come from usual samples (by a sample I mean a sequence of iid variables) observed at different points in time, so classical statistical methods are applied, since they always assume that a sample is observed.
For a short answer, one might say that time series are studied in econometrics, and longitudinal designs in statistics. But that does not answer the question, it just shifts it to another question. On the other hand, a lot of short answers do exactly that.
| null |
CC BY-SA 2.5
| null |
2011-02-13T08:38:52.733
|
2011-02-13T08:38:52.733
| null | null |
2116
| null |
7148
|
2
| null |
7146
|
5
| null |
The `lmom` function `pelwei()` fits a three-parameter Weibull distribution, with location, scale and shape parameters. `rweibull()` generates random numbers for a two-parameter Weibull distribution. You need to subtract the location parameter `moments["zeta"]`. That should give a better fit, but it doesn't appear it will give a good fit to your particular data.
I notice [http://www.reuk.co.uk/Wind-Speed-Distribution-Weibull.htm](http://www.reuk.co.uk/Wind-Speed-Distribution-Weibull.htm) says "Wind speeds in most of the world can be modelled using the Weibull Distribution.". Perhaps you're just unlucky and live in a part of the world where they can't!
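In code, the fix amounts to something like this (a sketch reusing the objects from your question):
```
# equivalently, shift the two-parameter draws by the fitted location parameter zeta
x.wei <- moments["zeta"] +
  rweibull(n = length(pp$WS), shape = moments["delta"], scale = moments["beta"])
hist(as.numeric(pp$WS), freq = FALSE)
lines(density(x.wei), col = "red", lwd = 4)
```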
| null |
CC BY-SA 2.5
| null |
2011-02-13T09:51:27.640
|
2011-02-13T09:51:27.640
| null | null |
449
| null |
7149
|
2
| null |
6770
|
4
| null |
You can simply use the geometric distribution, which gives you the probability of a given number of Bernoulli trials before a success, or in this case before the series is broken. The distribution is given as follows:
$P = (1-p)^{k} \, p$
with $p = 0.5$ for a fair coin and $k = 1, \ldots, 9$.
The idea is to compute the joint probability of $k$ consecutive heads/tails followed by one tail/head:
$P\{\text{tail}, \ldots, \text{tail}, \text{head}\} = P\{\text{tail}\}^k \cdot P\{\text{head}\}.$
Then you multiply this probability by $10^7$, the number of trials in your simulation.
The results are the following:
- 2,500,000
- 1,250,000
- 625,000
- 312,500
- 156,250
- etc.
which are quite close to your simulated results...
For the case of a run of length 10 or more, you take 1 minus the sum of the other probabilities (including $k = 0$), which gives 9765.625.
You can compute these probabilities easily in R with `P <- dgeom(1:9, prob=.5)`, and for the length-10-or-more case with `P <- 1 - sum(dgeom(0:9, prob=.5))`.
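Putting that together (the `n_sims` name is just for this sketch):
```
n_sims <- 1e7
dgeom(1:9, prob = 0.5) * n_sims               # expected runs of length 1 to 9
(1 - sum(dgeom(0:9, prob = 0.5))) * n_sims    # length 10 or more: 9765.625
```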
| null |
CC BY-SA 2.5
| null |
2011-02-13T10:28:48.027
|
2011-02-13T11:02:14.957
|
2011-02-13T11:02:14.957
|
3108
|
3108
| null |
7150
|
2
| null |
7146
|
6
| null |
I recreated your plot with data from [http://hawaii.gov/dbedt/ert/winddata/krab0192.txt](http://hawaii.gov/dbedt/ert/winddata/krab0192.txt) (I took the 1200 measurements). I got a decent fit of the data, generally using your code:
```
library(lmom)
daten <- read.delim("wind.txt")
wind.avg <- na.omit(as.numeric(daten[,"X12"]))
wind.moments<-samlmu(wind.avg)
moments<-pelwei(wind.moments)
x.wei<-rweibull(n=length(wind.avg), shape=moments["delta"], scale=moments["beta"])
hist(as.numeric(wind.avg), freq=FALSE)
lines(density(x.wei), col="red", lwd=4)
```
[figure: histogram of the Hawaii wind data with the fitted density curve overlaid]
Sorry, I'm not sure where your problem could be, but I think you should be able to fit a Weibull to your data. What makes me suspicious is the bell curve of your density plot; I have no idea where that came from.
Here are the moments I generated:
wind.moments
```
l_1 l_2 t_3 t_4
15.17287544 4.80372580 0.14963501 0.06954438
```
moments
```
zeta beta delta
0.516201 16.454233 1.745413
```
With regard to the annual output: I suppose I'd generate discrete values from the probability density function, multiply these values by the output function and sum them up. Alternatively, you could just use your raw data, multiply the values by the output function, sum it up and calculate the annual average; you should control for seasonality in a suitable way (e.g. make sure to use whole years, or weight accordingly).
Here is the uncontrolled output (using the formula from [http://www.articlesbase.com/diy-articles/determining-wind-turbine-annual-power-output-a-simple-formula-based-upon-blade-diameter-and-average-wind-speed-at-your-location-513080.html](http://www.articlesbase.com/diy-articles/determining-wind-turbine-annual-power-output-a-simple-formula-based-upon-blade-diameter-and-average-wind-speed-at-your-location-513080.html))
```
years <- length(wind.avg)/365
diameter <- 150
Power = (0.01328*diameter^2)*((wind.avg)^3)
(annual.power <- sum(Power)/years)
[1] 791828306
```
| null |
CC BY-SA 2.5
| null |
2011-02-13T11:21:23.203
|
2011-02-14T12:42:02.070
|
2011-02-14T12:42:02.070
|
1766
|
1766
| null |
7151
|
2
| null |
7128
|
2
| null |
It's hard to name an appropriate method without knowing the research question you're trying to answer. With that in mind, multidimensional scaling (MDS) takes measures of global dissimilarity between pairs of stimuli as input data: observers are asked to rate the similarity of two stimuli without being given explicit criteria for that judgement.
MDS tries to place the stimuli within a (low-dimensional) space such that the distance between the stimuli represent their dissimilarity. One is typically interested in the number of dimensions of the resulting space, and tries to infer some meaning about these dimensions. There are many variants of MDS. In R, check out the functions `cmdscale()` as well as `isoMDS()` and `sammon()` from package `MASS`.
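As a rough illustration of feeding the coherence matrix from the question into `cmdscale()` (the symmetrization and the 9 - x conversion to dissimilarities are my own assumptions, not part of the method itself):
```
# 'ratings' is assumed to be the 6 x 6 matrix of mean coherence scores,
# with row and column names WD, MT, SW, GR, SN, DL
d   <- as.dist(9 - (ratings + t(ratings)) / 2)  # symmetrize, turn coherence into dissimilarity
fit <- cmdscale(d, k = 2)                       # classical (metric) MDS in two dimensions
plot(fit, type = "n")
text(fit, labels = rownames(ratings))
```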
| null |
CC BY-SA 2.5
| null |
2011-02-13T11:22:57.903
|
2011-02-13T12:34:08.807
|
2011-02-13T12:34:08.807
|
1909
|
1909
| null |
7152
|
1
|
7470
| null |
11
|
29836
|
How should you deal with a cell value in a contingency table that is equal to zero in statistical calculations? (Note that such a value can be structural, i.e., it must be zero by definition, or random, i.e., it could have been some other value, but zero was observed.)
|
How should you handle cell values equal to zero in a contingency table?
|
CC BY-SA 3.0
| null |
2011-02-13T13:14:55.147
|
2019-12-09T23:41:48.123
|
2013-12-08T03:41:59.073
|
7290
|
2956
|
[
"contingency-tables"
] |
7153
|
1
|
7163
| null |
1
|
190
|
I would like to get a good idea of how to design clinical trials in oncology. To that end, I am looking for a compact book that could give me a good overview, with an emphasis on statistical considerations.
Would you have a recommendation for me?
Thank you in advance,
Marco
|
Book on designing clinical trials in oncology
|
CC BY-SA 2.5
| null |
2011-02-13T14:05:19.347
|
2011-02-13T23:51:13.627
|
2011-02-13T23:51:13.627
|
183
|
3019
|
[
"references",
"experiment-design",
"clinical-trials"
] |
7155
|
1
|
7158
| null |
47
|
10555
|
People often talk about dealing with outliers in statistics. The thing that bothers me about this is that, as far as I can tell, the definition of an outlier is completely subjective. For example, if the true distribution of some random variable is very heavy-tailed or bimodal, any standard visualization or summary statistic for detecting outliers will incorrectly remove parts of the distribution you want to sample from. What is a rigorous definition of an outlier, if one exists, and how can outliers be dealt with without introducing unreasonable amounts of subjectivity into an analysis?
|
Rigorous definition of an outlier?
|
CC BY-SA 2.5
| null |
2011-02-13T15:07:40.937
|
2013-08-27T22:26:05.173
| null | null |
1347
|
[
"outliers",
"definition"
] |
7156
|
1
|
23742
| null |
4
|
840
|
I have a dataset with yearly levels of corruption in a number of countries, as well as whether they changed their government that year.
```
year, corruption, change of president
2001, 5, 0
2002, 7, 1
2003, 8, 0
etc.
```
I want to test whether corruption is affected by a change in power (defined as the election of a new president who isn't part of the same political party as the previous one).
One idea is to look at the slope of corruption per year for the two years leading into the election and the two years after, e.g. $t-2$, $t-1$ and $t$ for the before slope.
The other idea is to look at the average level of corruption three years before and after.
The rate might make more sense since there are more things that affect corruption and different countries may be on different trajectories. However, there could also be some benefit to just looking at average levels three years before and after.
Any thoughts on which one I should look at, and also how to go about measuring this?
|
Difference between two slopes
|
CC BY-SA 2.5
| null |
2011-02-13T16:00:25.593
|
2012-02-27T13:59:25.527
|
2011-02-13T22:06:14.257
|
2970
|
3182
|
[
"hypothesis-testing"
] |
7157
|
2
| null |
7155
|
14
| null |
You are correct that removing outliers can look like a subjective exercise but that doesn't mean that it's wrong. The compulsive need to always have a rigorous mathematical reason for every decision regarding your data analysis is often just a thin veil of artificial rigour over what turns out to be a subjective exercise anyway. This is especially true if you want to apply the same mathematical justification to every situation you come across. (If there were bulletproof clear mathematical rules for everything then you wouldn't need a statistician.)
For example, in your long tail distribution situation, there's no guaranteed method to just decide from the numbers whether you've got one underlying distribution of interest with outliers or two underlying distributions of interest with outliers being part of only one of them. Or, heaven forbid, just the actual distribution of data.
The more data you collect, the more you get into the low-probability regions of a distribution. If you collect 20 samples it's very unlikely you'll get a value with a z-score of 3.5. If you collect 10,000 samples it's very likely you'll get one, and it's a natural part of the distribution. Given the above, how do you decide to exclude something just because it is extreme?
Selecting the best methods in general for analysis is often subjective. Whether it's unreasonably subjective depends on the explanation for the decision and on the outlier.
| null |
CC BY-SA 3.0
| null |
2011-02-13T16:03:53.543
|
2012-11-18T08:13:16.550
|
2012-11-18T08:13:16.550
|
601
|
601
| null |
7158
|
2
| null |
7155
|
26
| null |
As long as your data comes from a known distribution with known properties, you can rigorously define an outlier as an event that is too unlikely to have been generated by the observed process (if you consider "too unlikely" to be non-rigorous, then all hypothesis testing is).
However, this approach is problematic on two levels: It assumes that the data comes from a known distribution with known properties, and it brings the risk that outliers are looked at as data points that were smuggled into your data set by some magical faeries.
In the absence of magical data faeries, all data comes from your experiment, and thus it is actually not possible to have outliers, just weird results. These can come from recording errors (e.g. a 400000-bedroom house for 4 dollars), systematic measurement issues (the image analysis algorithm reports huge areas if the object is too close to the border), experimental problems (sometimes crystals precipitate out of the solution, which gives a very high signal), or features of your system (a cell can sometimes divide in three instead of two), but they can also be the result of a mechanism that no one's ever considered because it's rare and you're doing research, which means that some of the stuff you do is simply not known yet.
Ideally, you take the time to investigate every outlier, and only remove it from your data set once you understand why it doesn't fit your model. This is time-consuming and subjective in that the reasons are highly dependent on the experiment, but the alternative is worse: If you don't understand where the outliers came from, you have the choice between letting outliers "mess up" your results, or defining some "mathematically rigorous" approach to hide your lack of understanding. In other words, by pursuing "mathematical rigorousness" you choose between not getting a significant effect and not getting into heaven.
EDIT
If all you have is a list of numbers without knowing where they come from, you have no way of telling whether some data point is an outlier, because you can always assume a distribution where all data are inliers.
| null |
CC BY-SA 3.0
| null |
2011-02-13T16:32:51.340
|
2013-08-27T20:01:56.297
|
2013-08-27T20:01:56.297
|
17230
|
198
| null |
7160
|
2
| null |
7152
|
18
| null |
Zeros in tables are sometimes classified as structural, i.e. zero by design or by definition, or as random, i.e. a possible value that was observed. In the case of a study where no instances were observed despite being possible, the question often comes up: What is the one-sided 95% confidence interval above zero? This can be sensibly answered. It is, for instance, addressed in ["If Nothing Goes Wrong, Is Everything All Right? Interpreting Zero Numerators" Hanley and Lippman-Hand. JAMA. 1983;249(13):1743-45.](http://www.med.mcgill.ca/epidemiology/hanley/reprints/If_Nothing_Goes_1983.pdf) Their bottom line was that the upper end of the confidence interval around the observed value of zero was 3/n, where n is the number of observations. This "rule of 3" has been further addressed in later analyses, and to my surprise I found it even has a [Wikipedia page](http://en.wikipedia.org/wiki/Rule_of_three_%28medicine%29). The best discussion I found was by [Jovanovic and Levy in the American Statistician](http://www.jstor.org/pss/2685405). That does not seem to be available in full text in the searches, but I can report, after looking through it a second time, that they modified the formula to 3/(n+1) after sensible Bayesian considerations, which tightens up the CI a bit. There is a more recent review in [International Statistical Review (2009), 77, 2, 266–275](http://dx.doi.org/10.1111/j.1751-5823.2009.00078.x).
Addenda: After looking more closely at the last citation above, I also remember finding the extensive discussion in Agresti & Coull, "The American Statistician", Vol. 52, No. 2 (May, 1998), pp. 119-126, informative. The "Agresti-Coull" intervals are incorporated into various SAS and R functions. One R function implementing them is `binom.confint` {package:binom} by Sundar Dorai-Raj.
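For example, a quick sketch with that package (the n = 100 sample size is just for illustration):
```
library(binom)
binom.confint(x = 0, n = 100, conf.level = 0.95, methods = "agresti-coull")
3 / 100   # the "rule of three" upper bound, for comparison
```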
There are several methods for dealing with situations where an accumulation of "zero" observations distort an otherwise nice, tractable distribution of say costs or health-care usage patterns. These include zero-inflated and hurdle models as described by Zeileis in ["Regression Models for Count Data in R"](http://cran.r-project.org/web/packages/pscl/vignettes/countreg.pdf). Searching Google also demonstrates that Stata and SAS have facilities to handle such models.
After seeing the citation to Browne (and correcting the Jovanovic and Levy modification), I am adding this snippet from the even more entertaining rejoinder to Browne:
>
"But as the sample size becomes smaller, prior information becomes even more important since there are so few data points to “speak for themselves.” Indeed, small sample sizes provide not only the most compelling opportunity to think hard about the prior, but an obligation to do so.
"More generally, we would like to take this opportunity to speak out against
the mindless, uncritical use of simple formulas or rules."
And I add the [citation to the Winkler, et al paper that was in dispute.](http://dx.doi.org/10.1198/000313002753631295)
| null |
CC BY-SA 4.0
| null |
2011-02-13T17:16:00.367
|
2019-12-09T23:41:48.123
|
2019-12-09T23:41:48.123
|
102647
|
2129
| null |
7162
|
2
| null |
7155
|
6
| null |
Definition 1: As already mentioned, an outlier in a group of data reflecting the same process (say process A) is an observation (or a set of observations) that is unlikely to be a result of process A.
This definition certainly involves an estimation of the likelihood function of the process A (hence a model) and setting what unlikely means (i.e. deciding where to stop...). This definition is at the root of the answer I gave [here](https://stats.stackexchange.com/questions/213/what-is-the-best-way-to-identify-outliers-in-multivariate-data/386#386). It is more related to ideas of hypothesis testing of significance or goodness of fit.
Definition 2 An outlier is an observation $x$ in a group of observations $G$ such that when modeling the group of observation with a given model the accuracy is higher if $x$ is removed and treated separately (with a mixture, in the spirit of what I mention [here](https://stats.stackexchange.com/questions/1444/how-should-i-transform-non-negative-data-including-zeros/1445#1445)).
This definition involves a "given model" and a measure of accuracy. I think this definition is more from the practical side and closer to the origin of the notion of outliers. Originally, outlier detection was a tool for robust statistics.
Obviously these definitions can be made very similar if you understand that calculating likelihood in the first definition involves modeling and calculation of a score :)
| null |
CC BY-SA 3.0
| null |
2011-02-13T19:30:44.873
|
2013-08-27T20:00:52.887
|
2017-04-13T12:44:45.640
|
-1
|
223
| null |
7163
|
2
| null |
7153
|
2
| null |
I asked a very similar question several months back in this post: [Good Text on Clinical Trials](https://stats.stackexchange.com/questions/2770/good-text-on-clinical-trials).
I decided to go with [Clinical Trials: A Methodologic Perspective](http://rads.stackoverflow.com/amzn/click/0471727814) by Steven Piantadosi, and I have been incredibly happy with the text. It's not aimed at oncology exactly, so it may not be the perfect text for you, but a lot of the fundamentals of clinical trials are addressed in the text, and from there, there are probably scores of journal articles related to oncology trials.
| null |
CC BY-SA 2.5
| null |
2011-02-13T19:34:36.337
|
2011-02-13T19:34:36.337
|
2017-04-13T12:44:48.803
|
-1
|
1118
| null |
7164
|
1
| null | null |
10
|
2335
|
I have a logistic regression model: $softmax(WX)$ where $W$ is my parameter matrix and $X$ is my input. I want a density function over the outputs of that model.
Say I know that my $X$ are distributed according to some density $p$. From the [change of variables](http://en.wikipedia.org/wiki/Probability_density_function#Dependent_variables_and_change_of_variables) I know that the density of the outputs will be something like $p'(x) = p(x)\,\lvert\det J^{-1}\rvert$ where $J = \frac{\partial{softmax(WX)}}{\partial{X}}$.
However, this holds only for a bijective function $softmax(WX)$.
What can I do if this is not the case - especially if $W$ is not square?
|
Change of variable with non bijective function
|
CC BY-SA 2.5
| null |
2011-02-13T20:15:29.043
|
2019-07-12T15:14:14.910
| null | null |
2860
|
[
"distributions",
"probability"
] |
7165
|
1
|
7168
| null |
12
|
649
|
What book is the most thorough treatment of fundamental concepts in statistics?
I am not asking for a book on details of the methods of calculation and procedures; I am mainly interested in a book that thoroughly explains the foundational concepts ... an intuitive/illustrated/visual approach to the core ideas ... rather than a load of mathematical equations, etc. The size of the book is no problem ... even a large multi-volume text would do ... heck, even a web resource would do.
|
A resource on concepts underlying statistics, not the techniques used in applied stats
|
CC BY-SA 2.5
| null |
2011-02-13T20:24:38.900
|
2011-02-14T20:08:32.530
|
2011-02-13T23:28:51.173
|
159
|
1620
|
[
"references"
] |
7166
|
1
|
7195
| null |
9
|
1045
|
A friend of mine has asked me to help him with predictive modelling of car traffic in a medium-sized parking garage. The garage has its busy and easy days, its peak hours and dead hours, and set opening hours (it is open for 12 hours on weekdays and 8 hours on weekends).
The goal is to predict how many cars will enter the garage during a given day (say, tomorrow) and how these cars are expected to be distributed over the day.
Please point me to general references (preferably, publicly available) to strategies and techniques.
Thank you
|
General approaches to model car traffic in a parking garage
|
CC BY-SA 2.5
| null |
2011-02-13T21:04:03.320
|
2011-04-13T10:34:55.570
|
2011-04-13T10:34:55.570
|
449
|
3184
|
[
"time-series",
"multivariate-analysis",
"predictive-models",
"queueing"
] |
7167
|
2
| null |
7146
|
4
| null |
Here's a recent post at SO on wind turbines. My answer on that link has three links that you might be interested in:
[https://stackoverflow.com/questions/4843194/r-language-sorting-data-into-ranges-averaging-ignore-outliers/4844783#4844783](https://stackoverflow.com/questions/4843194/r-language-sorting-data-into-ranges-averaging-ignore-outliers/4844783#4844783)
I just checked one of the Weibull links in the above SO answer. For some reason, the link is down. Here are some links that provide the same basic information:
[http://www.gso.uri.edu/ozone/](http://www.gso.uri.edu/ozone/)
[http://www.weru.ksu.edu/new_weru/publications/pdf/Comparison%20of%20the%20Weibull%20model%20with%20measured%20wind%20speed%20distributions%20for%20stochastic%20wind%20genera.pdf](http://www.weru.ksu.edu/new_weru/publications/pdf/Comparison%20of%20the%20Weibull%20model%20with%20measured%20wind%20speed%20distributions%20for%20stochastic%20wind%20genera.pdf)
[http://www.kfupm.edu.sa/ri/publication/cer/41_JP_Weibull_parameters_for_wind_speed_distribution_in_Saudi_Arabia.pdf](http://www.kfupm.edu.sa/ri/publication/cer/41_JP_Weibull_parameters_for_wind_speed_distribution_in_Saudi_Arabia.pdf)
[http://journal.dogus.edu.tr/13026739/2008/cilt9/sayi1/M00195.pdf](http://journal.dogus.edu.tr/13026739/2008/cilt9/sayi1/M00195.pdf)
[http://www.eurojournals.com/ejsr_26_1_01.pdf](http://www.eurojournals.com/ejsr_26_1_01.pdf)
Also, from the power generated from wind, the seasonality is obvious.
[figures: wind power output over time, showing the seasonal pattern]
| null |
CC BY-SA 2.5
| null |
2011-02-13T21:20:12.683
|
2011-02-14T01:24:19.100
|
2017-05-23T12:39:26.167
|
-1
|
2775
| null |
7168
|
2
| null |
7165
|
8
| null |
It's hard to know exactly what you're looking for based on your post. Maybe you can edit it to clarify a little. I will say that to really understand statistics well, you'll need to learn some math.
For fairly broad, low-level, introductory concepts, both
- Gonick and Smith, A Cartoon Guide to Statistics, and
- D. Huff, How to Lie with Statistics
are light, easy reads that present a lot of the core ideas. Another book directed at a more "popular" audience that I think every person should read is J. A. Paulos' Innumeracy. It is not about probability or statistics, per se, and has more elementary probability than statistics, but it's framed in a way that I think most people can easily relate to.
If you have some calculus background and want to understand (introductory, frequentist) theoretical statistics, find a copy of Mood, Graybill and Boes, Introduction to the Theory of Statistics, 3rd. ed. It's old, but in my opinion, still better than any of the more "modern" treatments. But, it's a book for which you'll have to be comfortable with mathematical notation.
For a "modern" view of applied statistics and the interface between it and machine learning, along with good examples, and good intuition, then Hastie et al., Elements of Statistical Learning, is the most popular choice. Many people also tend to like Harrell's Regression Modeling Strategies, which is a solid book, though I'm apparently not quite as big of a fan as others tend to be. Again, in both cases, you'll need to at least be comfortable with some calculus, linear algebra, and standard math notation.
| null |
CC BY-SA 2.5
| null |
2011-02-13T21:40:37.220
|
2011-02-13T22:37:46.510
|
2011-02-13T22:37:46.510
|
2970
|
2970
| null |
7169
|
2
| null |
7058
|
4
| null |
This sounds like a problem of feature selection. If this is the case, I think you want to compute the [mutual information](http://en.wikipedia.org/wiki/Mutual_information) between all subsets of features and the classification output. The subset with the highest mutual information will be the set of features that contains the most 'information' about the resulting classification of the record.
If you only have 3 features, you can compute all possible subsets in a reasonable amount of time, if your feature set grows larger, you'll have to approximate this (typically using a greedy approach: take feature with the highest MI at each step).
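A minimal R sketch of the empirical mutual information between one discrete feature and the class labels (the vectors `x` and `y` are hypothetical; for a subset of several features you could pass their `interaction()` as `x`):
```
mutual_info <- function(x, y) {
  p_xy <- table(x, y) / length(x)    # joint distribution
  p_x  <- rowSums(p_xy)              # marginal of the feature
  p_y  <- colSums(p_xy)              # marginal of the class
  sum(p_xy * log(p_xy / outer(p_x, p_y)), na.rm = TRUE)  # in nats; 0*log(0) terms dropped
}
```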
| null |
CC BY-SA 2.5
| null |
2011-02-13T22:04:12.390
|
2011-02-13T22:04:12.390
| null | null |
1913
| null |
7170
|
2
| null |
6753
|
1
| null |
Equi-depth histograms are a solution to the problem of [quantization](http://en.wikipedia.org/wiki/Quantization_%28signal_processing%29) (mapping continuous values to discrete values).
For finding the best number of bins, I think it really depends on what you are trying to do with the histogram. In general I think it would be best to ensure your error of choice was below some threshold (e.g. sum of squared errors < THRESH) and bin the values in that manner.
Alternatively, the number of bins can be passed in as a parameter (if you're concerned about the space consumption of the histogram).
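A minimal R sketch of equi-depth binning via quantiles (the input vector and the choice of 10 bins are arbitrary):
```
equi_depth_bins <- function(x, n_bins) {
  breaks <- unique(quantile(x, probs = seq(0, 1, length.out = n_bins + 1)))
  cut(x, breaks = breaks, include.lowest = TRUE)  # roughly equal counts per bin
}
table(equi_depth_bins(rnorm(1000), 10))
```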
| null |
CC BY-SA 2.5
| null |
2011-02-13T22:14:43.607
|
2011-02-13T22:14:43.607
| null | null |
1913
| null |
7171
|
2
| null |
7165
|
5
| null |
If you're interested in the philosophy of Statistics, you can't do much better than [Abelson's "Statistics As Principled Argument"](http://rads.stackoverflow.com/amzn/click/0805805281).
| null |
CC BY-SA 2.5
| null |
2011-02-13T22:17:14.337
|
2011-02-13T22:17:14.337
| null | null |
666
| null |
7173
|
1
|
7174
| null |
3
|
9104
|
I have a case-control study in which the cases are firms with health insurance and the controls are firms with no health insurance. I am studying the factors affecting enrolment in health insurance and was therefore using a logistic regression, which includes several covariates on firm characteristics that were measured in a survey. I have randomly sampled the firms from a database that includes two strata: insured and uninsured firms. I selected 65 from each group. However, within the group I also sampled from four strata that correspond to industry. I am therefore wondering if I need to use conditional logistic regression, as opposed to unconditional logistic regression. However, I was under the impression that conditional logistic regression was for matched case-control studies or panel studies. In other feedback I've received, I've been told that because I sampled on the outcome, I need to use the conditional model. Could someone please help me figure out which model to use? Any references would also be much appreciated. Thank you.
|
How to decide between a logistic regression or conditional logistic regression?
|
CC BY-SA 2.5
| null |
2011-02-13T23:42:23.543
|
2017-08-01T03:12:36.997
|
2013-09-03T10:49:44.973
|
21599
|
834
|
[
"logistic",
"survey",
"clogit"
] |
7174
|
2
| null |
7173
|
4
| null |
I don't agree that you sampled on the outcome, since you sampled on company and enrollment is your outcome. You may want to deal with the company as a random effect and the other features as fixed effects. So I am suggesting yet a third alternative: generalized mixed models.
After clarification:
If the outcome is company enrollment rather than employee enrollment, then it is an ordinary case-control study for which unconditional logistic regression should be the standard approach. Conditional logistic regression is not necessary unless there were further conditions on the sampling regarding other company features.
Further clarification:
If you were using R, then the package to identify and install would be, not surprisingly, "survey" by Thomas Lumley. It provides for the appropriate incorporation of the two-way sampling strategy you have outlined in the design phase prior to estimation with the svyglm() function. Stata also has a set of survey functions and I imagine they can also be used with the general linear modeling functions it provides. SAS didn't have such facilities in the past so the SUDAAN program was needed as an added (expensive) purchase, but I have a vague memory that this may have changed with its latest releases. (I don't know about SPSS with regard to sampling support for GLM models.)
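A rough sketch of that route (the data frame `firms`, its columns and the weight variable `wt` are placeholders; the weights would come from your known stratum sampling fractions):
```
library(survey)
des <- svydesign(ids = ~1, strata = ~industry, weights = ~wt, data = firms)
fit <- svyglm(insured ~ size + sector, design = des, family = quasibinomial())
summary(fit)
```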
| null |
CC BY-SA 2.5
| null |
2011-02-13T23:50:05.563
|
2011-02-16T21:06:49.663
|
2011-02-16T21:06:49.663
|
2129
|
2129
| null |
7175
|
1
|
7182
| null |
14
|
18329
|
I'm experimenting with classifying data into groups. I'm quite new to this topic, and trying to understand the output of some of the analysis.
Using examples from [Quick-R](http://www.statmethods.net/advstats/cluster.html), several `R` packages are suggested. I have tried using two of these packages (`fpc` using the `kmeans` function, and `mclust`). One aspect of this analysis that I do not understand is the comparison of the results.
```
# comparing 2 cluster solutions
library(fpc)
cluster.stats(d, fit1$cluster, fit2$cluster)
```
I've read through the relevant parts of the `fpc` [manual](http://cran.r-project.org/web/packages/fpc/fpc.pdf) and am still not clear on what I should be aiming for. For example, this is the output of comparing two different clustering approaches:
```
$n
[1] 521
$cluster.number
[1] 4
$cluster.size
[1] 250 119 78 74
$diameter
[1] 5.278162 9.773658 16.460074 7.328020
$average.distance
[1] 1.632656 2.106422 3.461598 2.622574
$median.distance
[1] 1.562625 1.788113 2.763217 2.463826
$separation
[1] 0.2797048 0.3754188 0.2797048 0.3557264
$average.toother
[1] 3.442575 3.929158 4.068230 4.425910
$separation.matrix
[,1] [,2] [,3] [,4]
[1,] 0.0000000 0.3754188 0.2797048 0.3557264
[2,] 0.3754188 0.0000000 0.6299734 2.9020383
[3,] 0.2797048 0.6299734 0.0000000 0.6803704
[4,] 0.3557264 2.9020383 0.6803704 0.0000000
$average.between
[1] 3.865142
$average.within
[1] 1.894740
$n.between
[1] 91610
$n.within
[1] 43850
$within.cluster.ss
[1] 1785.935
$clus.avg.silwidths
1 2 3 4
0.42072895 0.31672350 0.01810699 0.23728253
$avg.silwidth
[1] 0.3106403
$g2
NULL
$g3
NULL
$pearsongamma
[1] 0.4869491
$dunn
[1] 0.01699292
$entropy
[1] 1.251134
$wb.ratio
[1] 0.4902123
$ch
[1] 178.9074
$corrected.rand
[1] 0.2046704
$vi
[1] 1.56189
```
---
My primary question here is to better understand how to interpret the results of this cluster comparison.
---
Previously, I had asked more about the effect of scaling data, and calculating a distance matrix. However, that was answered clearly by mariana soffer, and I'm just reorganizing my question to emphasize that I am interested in the interpretation of my output, which is a comparison of two different clustering algorithms.
Previous part of question:
If I am doing any type of clustering, should I always scale data? For example, I am using the function `dist()` on my scaled dataset as input to the `cluster.stats()` function, however I don't fully understand what is going on. I read about `dist()` [here](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/dist.html) and it states that:
>
this function computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix.
|
Understanding comparisons of clustering results
|
CC BY-SA 2.5
| null |
2011-02-14T00:21:17.413
|
2016-06-03T09:03:44.003
|
2011-02-20T21:59:36.737
|
2635
|
2635
|
[
"r",
"clustering"
] |
7176
|
1
|
7189
| null |
6
|
1953
|
Everything I've read about combining errors in quadrature when multiplying or dividing quantities with associated errors says that this works for "small error". What about large error? Say I want to compute A/B where A is +/- 1% and B is +/- 50%, can I still reasonably add the errors in quadrature?
|
Propagation of large errors
|
CC BY-SA 2.5
| null |
2011-02-14T00:28:54.207
|
2011-02-14T07:51:51.503
| null | null |
629
|
[
"error-propagation"
] |
7177
|
2
| null |
7176
|
1
| null |
The formula for error propagation
$\sigma_f^2 = \sum_i \left(\frac{\partial f}{\partial x_i} \sigma_{x_i}\right)^2$
works exactly for normally distributed errors and linear functions $f(x_1, x_2, \ldots)$.
Since (most) functions can be linearly approximated, the above also works for small errors. For large errors, a symmetric distribution of $x$ leads to an asymmetric distribution of the error in $f$ (e.g. if $f(x)=x^{10}$, $f(0)=0$, $f(1)=1$, $f(2)=1024$, so the formula won't hold if $x=1$ and $\sigma_x=1$).
With large error, you may be able to calculate the transformation of the error distribution, otherwise you can perform a Monte Carlo simulation to estimate the distribution of $\sigma_f$.
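A minimal Monte Carlo sketch for the ratio in the question (assuming normal errors and arbitrary means of 100 and 10):
```
n <- 1e6
A <- rnorm(n, mean = 100, sd = 1)   # +/- 1% relative error
B <- rnorm(n, mean = 10,  sd = 5)   # +/- 50% relative error
R <- A / B
quantile(R, c(0.025, 0.5, 0.975))   # markedly asymmetric, unlike the quadrature estimate
```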
| null |
CC BY-SA 2.5
| null |
2011-02-14T02:10:59.663
|
2011-02-14T07:19:16.113
|
2011-02-14T07:19:16.113
|
449
|
198
| null |
7178
|
2
| null |
7176
|
1
| null |
The first problem with large errors is that the expected value of the multiplication or division of the uncertain values will not be the multiplication or the division of the expected values. So while it is true that $E[X+Y]=E[X]+E[Y]$ and $E[X-Y]=E[X]-E[Y]$, it would usually not be true to say $E[XY]=E[X]E[Y]$ or $E[X/Y]=E[X]/E[Y]$, though they will be close for small errors. But for large errors, that effect will disrupt your propagation of error calculations.
The second problem will be that the propagation of errors is asymmetric in multiplication and division, and that also becomes more important as the relative errors increase.
Suppose for example you had $A$ being 270, 540 or 810 and $B$ being 3, 6 or 9. Then $A/B$ could be 30, 45, 60, 90 (three ways), 135, 180, or 270. While 90 may be the mode and median as well as 540/6, the mean is 110, and 30 is much closer to 90 (or to 110) than 270 is.
| null |
CC BY-SA 2.5
| null |
2011-02-14T02:30:12.930
|
2011-02-14T02:30:12.930
| null | null |
2958
| null |
7179
|
1
|
7193
| null |
2
|
138
|
Sorry if this is similar to another question, but I'm still trying to learn data analysis and I don't know what to search for.
What I'd like to do is take some information that a potential customer enters into a web form and figure out the probability that they'll become a customer... giving our sales team an idea of which leads are most valuable. Specifically, being a travel company, there are criteria like origin and destination, date of travel vs. date they fill the form, and even how much/little they fill out the 3 subjective questions (about 100-150 characters seems ideal).
I'm not even sure what the terminology is, so if anyone can provide some direction about where I can go to learn about this, I'd be very appreciative.
Thanks!
|
Resources for beginners - how to determine probability of user action based on certain criteria?
|
CC BY-SA 2.5
| null |
2011-02-14T04:33:34.633
|
2011-02-14T09:37:18.190
| null | null |
3187
|
[
"dataset",
"references"
] |
7180
|
1
| null | null |
8
|
5596
|
I have a dataset of project case studies for a new type of research method for Government agencies to support decision making activities. My task is to develop an estimation method based on past experience for future projects for estimation purposes.
My dataset is limited to 50 cases. I have 30+ (potential) predictors recorded and one response variable (i.e. hours taken to complete the project).
Not all predictors are significant; using stepwise selection techniques, I expect the number of predictor variables to end up in the 5-10 range. However, I'm struggling to get a predictor set using the standard approaches in tools like PASW (SPSS).
I'm well aware of all the material talking about rules of thumb for sample sizes and predictor variable to case ratios. My dilemma is that it's taken close to 10 years to collect 50 cases as it is, so it's about as good as it will get.
My question is what should I do to get the most out of this small sample set?
That is, are there any good references for dealing with small sample sets? Changes to p-value significance thresholds? Changes to stepwise selection approaches? Use of transforms such as centering or log?
Any advice is appreciated.
|
Multiple regression with small data sets
|
CC BY-SA 2.5
| null |
2011-02-14T05:32:01.177
|
2011-02-14T06:47:37.777
| null | null |
3189
|
[
"regression",
"small-sample"
] |
7181
|
2
| null |
7180
|
3
| null |
As you want to select a few predictors from your data set, I would suggest a linear regression with an $L_1$ penalty, i.e. the [LASSO](http://en.wikipedia.org/wiki/Lasso_%28statistics%29#LASSO_method) (penalized linear regression). Your case is well suited to the LASSO penalty given your sample size $n = 50$ and number of predictors $p = 30$; varying the tuning parameter controls how many predictors are selected.
If you can give details about the distribution of your variables, I can be more specific.
I don't use SPSS, but this can be done easily in `R` using the `glmnet` function in the package of the same name. The manual contains a generic example (the very first one, for the Gaussian case) which will solve your problem. I am sure a similar solution exists in SPSS.
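To give an idea of what this looks like in `R`, here is a minimal sketch; the simulated data below are only a stand-in for your 50 cases and 30 predictors, and `cv.glmnet` picks the tuning parameter by cross-validation:
```
library(glmnet)

set.seed(42)
n <- 50; p <- 30
X <- matrix(rnorm(n * p), n, p)                       # stand-in for your 30 predictors
y <- drop(X[, 1:5] %*% runif(5, 1, 3)) + rnorm(n)     # stand-in response (hours), driven by 5 predictors

cv  <- cv.glmnet(X, y)           # cross-validated choice of the penalty lambda
fit <- glmnet(X, y)              # full LASSO path
coef(fit, s = cv$lambda.min)     # coefficients at the chosen lambda; zeros are dropped predictors
```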
| null |
CC BY-SA 2.5
| null |
2011-02-14T05:53:15.473
|
2011-02-14T06:47:37.777
|
2011-02-14T06:47:37.777
|
1307
|
1307
| null |
7182
|
2
| null |
7175
|
13
| null |
First let me tell you that I am not going to explain exactly all the measures here, but I am going to give you an idea about how to compare how good the clustering methods are (let's assume we are comparing 2 clustering methods with the same number of clusters).
- For example, the bigger the diameter of a cluster, the worse the clustering, because the points that belong to the cluster are more scattered.
- The higher the average distance within each cluster, the worse the clustering method. (Let's assume that the average distance is the average of the distances from each point in the cluster to the centre of the cluster.)
These are the two metrics that are the most used. Check these links to understand what they stand for:
- inter-cluster distance (the higher the better; it is the sum of the distances between the different cluster centroids)
- intra-cluster distance (the lower the better; it is the sum of the distances from the cluster members to the centre of the cluster)
To better understand the metrics above, check [this](http://www.stanford.edu/~maureenh/quals/html/ml/node82.html).
Then you should read the manual of the library and functions you are using to see which reported measures correspond to these or, if they are not included, to find the meaning of the ones that are. However, I would not bother, and would stick with the ones I stated here.
Let's go on with the questions you made:
- Regarding scaling data: Yes, you should always scale the data for clustering; otherwise the different scales of the different dimensions (variables) will have different influences on how the data are clustered. The larger a variable's values, the more influential that variable will be in how the clustering is done, whereas all variables should have the same influence (unless, for some particular reason, you do not want it that way).
- The distance functions compute all the distances from one point (instance) to another. The most common distance measure is the Euclidean one. For example, suppose you want to measure the distance from instance 1 to instance 2 (let's assume you only have 2 instances for the sake of simplicity), and that each instance has 3 values (x1, x2, x3), say I1 = (0.3, 0.2, 0.5) and I2 = (0.3, 0.3, 0.4). The Euclidean distance between I1 and I2 is then sqrt((0.3-0.3)^2+(0.2-0.3)^2+(0.5-0.4)^2) ≈ 0.14, hence the distance matrix is:
     i1    i2
i1   0     0.14
i2   0.14  0
Notice that the distance matrix is always symmetrical.
The Euclidean distance formula is not the only one that exists. There are many other distances that can be used to calculate this matrix. Check for example the Wikipedia page on the [Manhattan distance](http://en.wikipedia.org/wiki/Manhattan_distance) and how to calculate it. At the end of the Wikipedia page for the [Euclidean distance](http://en.wikipedia.org/wiki/Euclidean_distance) (where you can also check its formula) you can see which other distances exist.
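For completeness, a small R sketch of scaling and building a distance matrix (the two toy instances are the ones from the example above; `mydata` in the comment is a hypothetical stand-in for a real data set):
```
# Two toy instances with three variables each, as in the example above
x <- rbind(i1 = c(0.3, 0.2, 0.5),
           i2 = c(0.3, 0.3, 0.4))

dist(x)                          # Euclidean distance, about 0.14
dist(x, method = "manhattan")    # Manhattan distance, 0.2
as.matrix(dist(x))               # the full symmetric distance matrix

# On real data you would usually scale the variables first, e.g.
# d <- dist(scale(mydata))
```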
| null |
CC BY-SA 3.0
| null |
2011-02-14T06:03:07.077
|
2016-06-03T09:03:44.003
|
2016-06-03T09:03:44.003
|
99963
|
1808
| null |
7185
|
1
|
7199
| null |
15
|
12435
|
Can anyone tell me the difference between using `aov()` and `lme()` for analyzing longitudinal data and how to interpret results from these two methods?
Below, I analyze the same dataset using `aov()` and `lme()` and get two different results. With `aov()`, the time by treatment interaction is significant, but when fitting a linear mixed model the time by treatment interaction is not significant.
```
> UOP.kg.aov <- aov(UOP.kg~time*treat+Error(id), raw3.42)
> summary(UOP.kg.aov)
Error: id
Df Sum Sq Mean Sq F value Pr(>F)
treat 1 0.142 0.1421 0.0377 0.8471
Residuals 39 147.129 3.7725
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
time 1 194.087 194.087 534.3542 < 2e-16 ***
time:treat 1 2.077 2.077 5.7197 0.01792 *
Residuals 162 58.841 0.363
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> UOP.kg.lme <- lme(UOP.kg~time*treat, random=list(id=pdDiag(~time)),
na.action=na.omit, raw3.42)
> summary(UOP.kg.lme)
Linear mixed-effects model fit by REML
Data: raw3.42
AIC BIC logLik
225.7806 248.9037 -105.8903
Random effects:
Formula: ~time | id
Structure: Diagonal
(Intercept) time Residual
StdDev: 0.6817425 0.5121545 0.1780466
Fixed effects: UOP.kg ~ time + treat + time:treat
Value Std.Error DF t-value p-value
(Intercept) 0.5901420 0.1480515 162 3.986059 0.0001
time 0.8623864 0.1104533 162 7.807701 0.0000
treat -0.2144487 0.2174843 39 -0.986042 0.3302
time:treat 0.1979578 0.1622534 162 1.220053 0.2242
Correlation:
(Intr) time treat
time -0.023
treat -0.681 0.016
time:treat 0.016 -0.681 -0.023
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-3.198315285 -0.384858426 0.002705899 0.404637305 2.049705655
Number of Observations: 205
Number of Groups: 41
```
|
What is the difference between using aov() and lme() in analyzing a longitudinal dataset?
|
CC BY-SA 3.0
| null |
2011-02-14T06:46:27.827
|
2012-09-07T02:47:09.103
|
2012-09-07T02:47:09.103
|
7290
|
1663
|
[
"r",
"mixed-model",
"repeated-measures",
"panel-data"
] |
7186
|
2
| null |
3
|
9
| null |
First of all let me tell you that in my opinion the best tool of all by far is R, which has tons of libraries and utilities I am not going to enumerate here.
Let me expand the discussion about Weka.
There is an R library called RWeka, which you can easily install in R and use to combine many of the functionalities of that great program with those of R. Here is a code example for fitting a simple decision tree on a standard dataset that comes with the package (it is also very easy to draw the resulting tree, but I will let you look up how to do that in the RWeka documentation):
```
library(RWeka)
iris <- read.arff(system.file("arff", "iris.arff", package = "RWeka"))
classifier <- J48(class ~ ., data = iris)  # J48 is Weka's C4.5 decision tree learner
summary(classifier)
```
There are also several Python libraries for doing this (Python is very easy to learn).
First let me enumerate the packages you can use; I am not going to go into detail about them:
Weka (yes, there is a library for Python), NLTK (the most famous open-source package for text mining besides data mining), [statPy](http://www.astro.cornell.edu/staff/loredo/statpy/), [scikits](http://scikit-learn.sourceforge.net/index.html), and SciPy.
There is also Orange, which is excellent (I will also talk about it later). Here is a code example that builds a tree from the data in the table cmpart1 and performs 10-fold validation; you can also graph the tree:
```
import orange, orngMySQL, orngTree

data = orange.ExampleTable("c:\\python26\\orange\\cmpart1.tab")
domain = data.domain
n = 10
buck = len(data) / n

# split the data into n buckets
l2 = []
for i in range(n):
    tmp = []
    if i == n - 1:
        tmp = data[n * buck:]
    else:
        tmp = data[buck * i:buck * (i + 1)]
    l2.append(tmp)

# n-fold validation: train a tree on n-1 buckets, test on the remaining one
di = {'yy': 0, 'yn': 0, 'ny': 0, 'nn': 0}
for i in range(n):
    train = []
    test = []
    for j in range(n):
        if j == i:
            test = l2[i]
        else:
            train.extend(l2[j])
    print "-----"
    train = orange.ExampleTable(domain, train)   # turn the list of examples back into a table
    tree = orngTree.TreeLearner(train)
    for ins in test:
        d1 = ins.getclass()                      # true class
        d2 = tree(ins)                           # predicted class
        print d1
        print d2
        ind = str(d1) + str(d2)
        di[ind] = di[ind] + 1                    # confusion counts, e.g. 'yn' = true y, predicted n
print di
```
To end, here are some other packages I have used and found interesting:
[Orange](http://orange.biolab.si/): data visualization and analysis for novices and experts. Data mining through visual programming or Python scripting. Components for machine learning. Extensions for bioinformatics and text mining. (I personally recommend this; I used it a lot, integrating it with Python, and it was excellent.) I can send you some Python code if you want me to.
[ROSETTA](http://bioinf.icm.uu.se/rosetta/): toolkit for analyzing tabular data within the framework of rough set theory. ROSETTA is designed to support the overall data mining and knowledge discovery process: From initial browsing and preprocessing of the data, via computation of minimal attribute sets and generation of if-then rules or descriptive patterns, to validation and analysis of the induced rules or patterns.(This I also enjoyed using very much)
[KEEL](http://www.keel.es): assesses evolutionary algorithms for data mining problems including regression, classification, clustering, pattern mining and so on. It allows a complete analysis of any learning model in comparison to existing ones, including a statistical test module for comparisons.
[DataPlot](http://www.itl.nist.gov/div898/software/dataplot/homepage.htm): for scientific visualization, statistical analysis, and non-linear modeling. The target Dataplot user is the researcher and analyst engaged in the characterization, modeling, visualization, analysis, monitoring, and optimization of scientific and engineering processes.
[Openstats](http://www.statpages.org/miller/openstat/): includes a Statistics and Measurement Primer, Descriptive Statistics, Simple Comparisons, Analyses of Variance, Correlation, Multiple Regression, Interrupted Time Series, Multivariate Statistics, Non-Parametric Statistics, Measurement, Statistical Process Control, Financial Procedures, Neural Networks, and Simulation.
| null |
CC BY-SA 4.0
| null |
2011-02-14T07:39:27.737
|
2022-11-27T23:33:13.540
|
2022-11-27T23:33:13.540
|
11887
|
1808
| null |
7188
|
2
| null |
7115
|
2
| null |
Let's suppose we can calculate the distance from one noun to another in the following way: use WordNet (which I guess you know) and a function — one already exists, but you could build it yourself — that counts how many steps in the taxonomy of words you need to take to get from one word to another (for example, from cat to dog you might need 4, but from nail to music you might need 25). Then, using these numbers computed among the nouns of the sentences, just choose a metric (for example, simply the average distance between the nouns, or the minimum distance between the nouns) that will help you carry out your task.
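As a rough sketch of the idea in R, here is a tiny hand-made taxonomy (made up purely for illustration; with real data you would query WordNet's hypernym hierarchy instead of building the graph by hand):
```
library(igraph)

# A made-up fragment of a noun taxonomy (illustration only, not WordNet)
tax <- graph_from_literal(cat - feline, dog - canine,
                          feline - carnivore, canine - carnivore,
                          carnivore - mammal, mammal - animal)

d <- distances(tax)     # number of taxonomy steps between any two nouns
d["cat", "dog"]         # 4 steps: cat - feline - carnivore - canine - dog

# A crude sentence-level metric: average pairwise distance between the nouns
nouns1 <- c("cat", "dog"); nouns2 <- c("mammal", "animal")
mean(d[nouns1, nouns2])
```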
| null |
CC BY-SA 2.5
| null |
2011-02-14T07:51:24.210
|
2011-02-14T07:51:24.210
| null | null |
1808
| null |
7189
|
2
| null |
7176
|
5
| null |
For large error, the standard error of $A/B$ depends on the distributions of $A$ and $B$, not just on their standard errors. The distribution of $A/B$ is known as a [ratio distribution](http://en.wikipedia.org/wiki/Ratio_distribution), but which ratio distribution depends on the distributions of $A$ and $B$.
If we assume that $A$ and $B$ both have Gaussian (normal) distributions, then $A/B$ has a [Gaussian ratio distribution](http://en.wikipedia.org/wiki/Ratio_distribution#Gaussian_ratio_distribution), for which a closed form exists but is rather complicated. In general, this will be an asymmetric distribution, so it is not well characterised simply by its mean and standard deviation. However, it is possible to find a confidence interval for $A/B$ using [Fieller's theorem](http://en.wikipedia.org/wiki/Fieller%27s_theorem).
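If the closed form is more than you need, a quick Monte Carlo sketch in R illustrates the asymmetry; the means and standard deviations below are made up to roughly mimic the question's +/- 1% and +/- 50%:
```
set.seed(1)
A <- rnorm(1e5, mean = 100, sd = 1)   # A known to about +/- 1%
B <- rnorm(1e5, mean = 2,   sd = 1)   # B known to about +/- 50%
r <- A / B
quantile(r, c(0.025, 0.5, 0.975))     # clearly asymmetric interval
# note the heavy tails: B occasionally comes close to zero, so A/B can explode
```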
| null |
CC BY-SA 2.5
| null |
2011-02-14T07:51:51.503
|
2011-02-14T07:51:51.503
| null | null |
449
| null |
7190
|
1
| null | null |
3
|
285
|
I have a set of integers that I am trying to categorize into four groups using two different methods, and the contingency table for the outcomes of the two methods is displayed below:
```
method_B
method_A 0 1 2 3
0 182 11 0 0
1 41 127 2 0
2 0 12 18 0
3 0 0 0 4
```
I am thinking of using `chisq.test` to assess whether the difference in grouping distributions produced by the two methods is likely to be caused by chance, but I am worried about the many zeroes in the table.
Should I use the chi-squared test? Or is there an equivalent `fisher.test` for my case?
P.S. I tried `fisher.test` but it gives me `FEXACT ERROR 501`, which I have no idea what it is.
|
Statistical test for difference in categorization method
|
CC BY-SA 2.5
| null |
2011-02-14T08:16:36.003
|
2011-02-14T08:35:26.457
|
2011-02-14T08:18:23.990
| null |
588
|
[
"categorical-data",
"chi-squared-test"
] |
7191
|
2
| null |
7139
|
1
| null |
Here is another question from this website that says, among other things, the following:
What can a statistical model say about causation?
This led to his motto:
NO CAUSATION WITHOUT MANIPULATION
which emphasized the importance of restrictions around experiments that consider causation. Andrew Gelman makes a similar point:
"To find out what happens when you change something, it is necessary to change it."...There are things you learn from perturbing a system that you'll never find out from any amount of passive observation.
Check it out: [Statistics and causal inference?](https://stats.stackexchange.com/questions/2245/statistics-and-causal-inference)
| null |
CC BY-SA 2.5
| null |
2011-02-14T08:32:27.437
|
2011-02-14T08:32:27.437
|
2017-04-13T12:44:53.777
|
-1
|
1808
| null |
7192
|
2
| null |
7190
|
2
| null |
I don't think either chi-squared test or Fisher's exact test is useful here. It's pretty obvious that the results from method A and from method B aren't independent of each other. A statistic to quantify how well the two methods agree is more useful. The obvious choice is [Cohen's kappa](http://en.wikipedia.org/wiki/Cohen%27s_kappa).
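For reference, a minimal R sketch that computes Cohen's kappa directly from the table in the question (no extra packages needed):
```
# Agreement table from the question (rows: method A, columns: method B)
tab <- matrix(c(182,  11,  0, 0,
                 41, 127,  2, 0,
                  0,  12, 18, 0,
                  0,   0,  0, 4),
              nrow = 4, byrow = TRUE)

n  <- sum(tab)
po <- sum(diag(tab)) / n                        # observed agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2    # agreement expected by chance
(po - pe) / (1 - pe)                            # Cohen's kappa
```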
| null |
CC BY-SA 2.5
| null |
2011-02-14T08:35:26.457
|
2011-02-14T08:35:26.457
| null | null |
449
| null |
7193
|
2
| null |
7179
|
3
| null |
Your problem can be described as "binary classification problem" with the response variable "become_customer" $\in$ {yes,no}.
As far as I can see, your next steps should be:
- create a tabular dataset with one row = one visitor and columns = predictors + response variable. The predictor variables reflect the input into the forms (as described by you), while the response is "yes" if the visitor became a customer, else "no".
- learn and validate (keyword: cross-validation; metrics like AUC, precision, recall etc.) an expressive (non-black-box) classification model. One suggestion is decision trees, which are widely used in direct marketing / performance marketing since they can give you a description of potential customers (see the sketch after this list).
- by calibrating the output scores of the classification model you can additionally get an estimate of the conversion rate for each customer.
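Here is a minimal sketch of the decision-tree idea in R; the data frame and column names are hypothetical stand-ins for your form fields, and the random response means the fitted tree itself is meaningless, it only shows the mechanics:
```
library(rpart)

# Hypothetical lead data: two form-derived predictors plus the response
set.seed(1)
leads <- data.frame(
  days_ahead      = rpois(500, 30),         # travel date minus date the form was filled in
  answer_length   = rnorm(500, 120, 40),    # characters typed into the subjective questions
  became_customer = factor(sample(c("yes", "no"), 500,
                                  replace = TRUE, prob = c(0.3, 0.7)))
)

tree <- rpart(became_customer ~ days_ahead + answer_length,
              data = leads, method = "class")
predict(tree, leads, type = "prob")[1:5, ]   # estimated conversion probability per lead
```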
| null |
CC BY-SA 2.5
| null |
2011-02-14T09:37:18.190
|
2011-02-14T09:37:18.190
| null | null |
264
| null |
7194
|
2
| null |
7165
|
1
| null |
I like Kennedy's Guide to Econometrics, which treats every topic on three levels, the first of which is a non-technical description, in so far as this is possible.
| null |
CC BY-SA 2.5
| null |
2011-02-14T11:08:24.803
|
2011-02-14T11:08:24.803
| null | null |
1766
| null |
7195
|
2
| null |
7166
|
6
| null |
The field relevant to this problem is [queueing theory](http://en.wikipedia.org/wiki/Queueing_model); a particular sub-field is [birth-death](http://en.wikipedia.org/wiki/Birth-death_process) processes. An article that in my opinion is helpful for your task is R.C. Larson and K. Satsunama (2010), [Congestion Pricing: A Parking Queue Model](http://www.jise.info/issues/volume4no1/01-V4N1-Full.pdf); following the links in its references will give more ideas on where to proceed.
Note that the R package [queueing](http://cran.r-project.org/web/packages/queueing/) has recently been released (with a misprint in the title, however). Finally, I think this link to [queuing software](http://web2.uwindsor.ca/math/hlynka/qsoft.html) could be helpful.
| null |
CC BY-SA 2.5
| null |
2011-02-14T11:47:39.873
|
2011-02-14T11:47:39.873
| null | null |
2645
| null |
7196
|
1
|
7203
| null |
3
|
234
|
The data I want to analyze consist of set of votes similar to the voting system here on stackexchange. Votes are binary, i.e, items can receive up- or down-votes. The data have been gathered in an A/B controlled experiment.
I want to compare the control group to the treatment group according to some gold standard. That is, for some items I know what the "correct" vote (according to the gold standard) should have been. However, the number of votes per item varies and can be different between the control and treatment groups. My hypothesis is that the treatment reduces the voting "error", but I am not sure which statistical test to use.
|
Appropriate test to compare online voting data
|
CC BY-SA 2.5
| null |
2011-02-14T13:47:16.960
|
2011-02-14T16:51:13.537
|
2011-02-14T14:48:20.330
| null |
3191
|
[
"hypothesis-testing"
] |
7197
|
1
| null | null |
4
|
1887
|
I'm looking for sample R code, or pointers to sample R code for the following. (Gentle) critique of the approach would also be appreciated. I'm not a statistician and I'm pretty new to R.
I have duration of hospitalization ("length of stay"=LOS) data for 2,000 patients from my hospital and 50,000 patients from a comparison data set. Each patient has a discharge year (YEAR) and is assigned to a diagnosis group (DG). My goal is to compare (with confidence intervals) the overall average LOS at my hospital to the expected average LOS if those patients had been part of the comparison data set.
Note that for a specific year and specific DG there may be anywhere from a few to 1,000s of patients, and, of course, the distribution of LOS is not normal. There could possibly even be a DG with 1 or more patients for a given year in my hospital's data but none in the comparison data set.
The approach I was considering was to create a comparison group where a random patient from the comparison data set is chosen for each patient from my hospital. The random patient would be matched by YEAR and DG. I would calculate the expected average LOS for this group and then repeat the process 10,000 times to determine the 2.5th to 97.5th percentile. I would repeat for each year and plot my hospital's average LOS versus the 95% CI for the expected average LOS.
To deal with the issue of there not being a match for a patient for a given DG for a given YEAR I was thinking of loosening the match criteria to pick patients from the previous or next year. I could keep broadening the match year until there were at least N patients from which to randomly pick.
Thoughts?
|
Calculating confidence interval for average hospital length of stay, case-mix adjusted, in R
|
CC BY-SA 3.0
| null |
2011-02-14T14:36:39.887
|
2011-06-26T21:39:02.970
|
2011-06-26T13:44:06.477
| null |
690
|
[
"r",
"confidence-interval",
"bootstrap"
] |
7198
|
2
| null |
7185
|
0
| null |
It appears to me that you have multiple measures for each id at each time. You need to aggregate these for the aov because otherwise they unfairly increase power in that analysis. I'm not saying doing the aggregation will make the outcomes the same, but it should make them more similar.
```
dat.agg <- aggregate(UOP.kg ~ time + treat + id, raw3.42, mean)
```
Then run your aov model as before replacing the data with dat.agg.
Also, I believe anova(lme) is more what you want to do to compare the outcomes. The direction and magnitude of an effect is not the same as the ratio of model variance to error.
(BTW, if you do the lme analysis on the aggregate data, which you shouldn't, and check anova(lme) you'll get almost the same results as aov)
| null |
CC BY-SA 2.5
| null |
2011-02-14T15:10:27.500
|
2011-02-14T15:10:27.500
| null | null |
601
| null |
7199
|
2
| null |
7185
|
20
| null |
Based on your description, it appears that you have a repeated-measures model with a single treatment factor. Since I do not have access to the dataset (`raw3.42`), I will use the Orthodont data from the `nlme` package to illustrate what is going on here. The data structure is the same (repeated measurements for two different groups - in this case, males and females).
If you run the following code:
```
library(nlme)
data(Orthodont)
res <- lme(distance ~ age*Sex, random = ~ 1 | Subject, data = Orthodont)
anova(res)
```
you will get the following results:
```
numDF denDF F-value p-value
(Intercept) 1 79 4123.156 <.0001
age 1 79 122.450 <.0001
Sex 1 25 9.292 0.0054
age:Sex 1 79 6.303 0.0141
```
If you run:
```
res <- aov(distance ~ age*Sex + Error(Subject), data = Orthodont)
summary(res)
```
you will get:
```
Error: Subject
Df Sum Sq Mean Sq F value Pr(>F)
Sex 1 140.46 140.465 9.2921 0.005375 **
Residuals 25 377.91 15.117
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
age 1 235.356 235.356 122.4502 < 2e-16 ***
age:Sex 1 12.114 12.114 6.3027 0.01410 *
Residuals 79 151.842 1.922
```
Note that the F-tests are exactly identical.
For `lme()`, you used `list(id=pdDiag(~time))`, which not only adds a random intercept to the model, but also a random slope. Moreover, by using `pdDiag`, you are setting the correlation between the random intercept and slope to zero. This is a different model than what you specified via `aov()` and hence you get different results.
| null |
CC BY-SA 2.5
| null |
2011-02-14T15:35:47.063
|
2011-02-14T15:35:47.063
| null | null |
1934
| null |
7200
|
1
|
7206
| null |
24
|
7340
|
I know that an easy-to-handle formula for the CDF of a normal distribution is somewhat missing, due to the complicated error function in it.
However, I wonder if there is a nice formula for $N(c_{-} \leq x < c_{+} \mid \mu, \sigma^2)$, or what the "state of the art" approximation for this problem might be.
|
Evaluate definite interval of normal distribution
|
CC BY-SA 2.5
| null |
2011-02-14T15:43:39.727
|
2022-05-21T11:21:16.657
| null | null |
2860
|
[
"normal-distribution",
"approximation"
] |
7201
|
2
| null |
7155
|
9
| null |
I don't think it is possible to define an outlier without assuming a model of the underlying process giving rise to the data. Without such a model we have no frame of reference to decide whether the data are anomalous or "wrong". The definition of an outlier that I have found useful is that an outlier is an observation (or observations) that cannot be reconciled to a model that otherwise performs well.
| null |
CC BY-SA 2.5
| null |
2011-02-14T16:36:53.473
|
2011-02-14T16:36:53.473
| null | null |
887
| null |
7202
|
1
| null | null |
2
|
7239
|
### Context:
I have "pooled data" with time and cross section dimensions.
It is unbalanced data without a full range of time observations for each cross section of observations.
It is kind of like unbalanced panel data?
I wish to proceed with the limited number of observations without losing any data.
### Questions:
- How can I get regression results (generalized least squares?) in SPSS with this kind of data?
- What should I pay attention to when evaluating the results?
|
Regression with Pooled data in SPSS
|
CC BY-SA 2.5
| null |
2011-02-14T16:40:18.977
|
2012-04-13T05:03:47.760
|
2011-02-15T11:20:15.173
|
183
| null |
[
"regression",
"spss",
"panel-data"
] |
7203
|
2
| null |
7196
|
2
| null |
If you know what the correct vote should be, you could start with a simple chi-square or KS test, testing for the number of votes misclassified for each group (treatment and control) as compared to the gold standard.
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-02-14T16:51:13.537
|
2011-02-14T16:51:13.537
| null | null |
3489
| null |
7205
|
2
| null |
7202
|
1
| null |
You should be able to weight by duration of observation, perhaps by simple division, in essence turning the data into annualized rates. There will be issues relating to whether persons (or other units of analysis) have systematically higher or lower rates in the early period of observation or enrollment.
| null |
CC BY-SA 2.5
| null |
2011-02-14T16:58:27.340
|
2011-02-14T16:58:27.340
| null | null |
2129
| null |
7206
|
2
| null |
7200
|
44
| null |
It depends on exactly what you are looking for. Below are some brief details and references.
Much of the literature for approximations centers around the function
$$
Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}} \, \mathrm{d}u
$$
for $x > 0$. This is because the function you provided can be decomposed as a simple difference of the function above (possibly adjusted by a constant). This function is referred to by many names, including "upper-tail of the normal distribution", "right normal integral", and "Gaussian $Q$-function", to name a few. You'll also see approximations to Mills' ratio, which is
$$
R(x) = \frac{Q(x)}{\varphi(x)}
$$
where $\varphi(x) = (2\pi)^{-1/2} e^{-x^2 / 2}$ is the Gaussian pdf.
Here I list some references for various purposes that you might be interested in.
Computational
The de-facto standard for computing the $Q$-function or the related complementary error function is
>
W. J. Cody, Rational Chebyshev Approximations for the Error Function, Math. Comp., 1969, pp. 631--637.
Every (self-respecting) implementation uses this paper. (MATLAB, R, etc.)
"Simple" Approximations
[Abramowitz and Stegun](http://people.math.sfu.ca/%7Ecbm/aands/page_299.htm) have one based on a polynomial expansion of a transformation of the input. Some people use it as a "high-precision" approximation. I don't like it for that purpose since it behaves badly around zero. For example, their approximation does not yield $\hat{Q}(0) = 1/2$, which I think is a big no-no. Sometimes [bad things](http://www.codeplanet.eu/files/download/accuratecumnorm.pdf#page=5) happen because of this.
Borjesson and Sundberg give a simple approximation which works pretty well for most applications where one only requires a few digits of precision. The absolute relative error is never worse than 1%, which is quite good considering its simplicity. The basic approximation is
$$
\hat{Q}(x) = \frac{1}{(1-a) x + a \sqrt{x^2 + b}} \varphi(x)
$$
and their preferred choices of the constants are $a = 0.339$ and $b = 5.51$. That reference is
>
P. O. Borjesson and C. E. Sundberg. Simple approximations of the error function Q(x) for communications
applications. IEEE Trans. Commun., COM-27(3):639–643, March 1979.
Here is a plot of its absolute relative error.

The electrical-engineering literature is awash with various such approximations and seems to take an overly intense interest in them. Many of them are poor, though, or expand to very strange and convoluted expressions.
You might also look at
>
W. Bryc. A uniform approximation to the right normal integral. Applied Mathematics and Computation,
127(2-3):365–374, April 2002.
Laplace's continued fraction
Laplace has a beautiful continued fraction which yields successive upper and lower bounds for every value of $x > 0$. It is, in terms of Mills' ratio,
$$
R(x) = \frac{1}{x+}\frac{1}{x+}\frac{2}{x+}\frac{3}{x+}\cdots ,
$$
where the notation I've used is fairly standard for a [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction), i.e., $1/(x+1/(x+2/(x+3/(x+\cdots))))$. This expression doesn't converge very fast for small $x$, though, and it diverges at $x = 0$.
This continued fraction actually yields many of the "simple" bounds on $Q(x)$ that were "rediscovered" in the mid-to-late 1900s. It's easy to see that for a continued fraction in "standard" form (i.e., composed of positive integer coefficients), truncating the fraction at odd (even) terms gives an upper (lower) bound.
Hence, Laplace tells us immediately that
$$
\frac{x}{x^2 + 1} < R(x) < \frac{1}{x} \>,
$$
both of which are bounds that were "rediscovered" in the mid-1900's. In terms of the $Q$-function, this is equivalent to
$$
\frac{x}{x^2 + 1} \varphi(x) < Q(x) < \frac{1}{x} \varphi(x) .
$$
An alternative proof of this using simple integration by parts can be found in S. Resnick, [Adventures in Stochastic Processes](https://rads.stackoverflow.com/amzn/click/com/0817635912), Birkhauser, 1992, in Chapter 6 (Brownian motion). The absolute relative error of these bounds is no worse than $x^{-2}$, as shown in [this related answer](https://stats.stackexchange.com/a/13809/2970).
Notice, in particular, that the inequalities above immediately imply that $Q(x) \sim \varphi(x)/x$. This fact can be established using L'Hopital's rule as well. This also helps explain the choice of the functional form of the Borjesson-Sundberg approximation. Any choice of $a \in [0,1]$ maintains the asymptotic equivalence as $x \to \infty$. The parameter $b$ serves as a "continuity correction" near zero.
Here is a plot of the $Q$-function and the two Laplace bounds.

C-I. C. Lee has a paper from the early 1990's that does a "correction" for small values of $x$. See
>
C-I. C. Lee. On Laplace continued fraction for the normal integral. Ann. Inst. Statist. Math., 44(1):107–120,
March 1992.
---
Durrett's [Probability: Theory and Examples](https://rads.stackoverflow.com/amzn/click/com/0534424414) provides the classical upper and lower bounds on $Q(x)$ on pages 6–7 of the 3rd edition. They're meant for larger values of $x$ (say, $x > 3$) and are asymptotically tight.
Hopefully this will get you started. If you have a more specific interest, I might be able to point you somewhere.
| null |
CC BY-SA 4.0
| null |
2011-02-14T17:01:06.277
|
2022-05-21T11:21:16.657
|
2022-05-21T11:21:16.657
|
79696
|
2970
| null |
7207
|
1
|
7210
| null |
250
|
118263
|
I understand the formal differences between them, what I want to know is when it is more relevant to use one vs. the other.
- Do they always provide complementary insight about the performance of a given classification/detection system?
- When is it reasonable to provide them both, say, in a paper? instead of just one?
- Are there any alternative (maybe more modern) descriptors that capture the relevant aspects of both ROC and precision recall for a classification system?
I am interested in arguments for both binary and multi-class (e.g. as one-vs-all) cases.
|
ROC vs precision-and-recall curves
|
CC BY-SA 3.0
| null |
2011-02-14T17:10:17.143
|
2022-02-23T15:40:56.883
|
2015-03-09T12:32:29.693
|
2798
|
2798
|
[
"machine-learning",
"roc",
"precision-recall"
] |
7208
|
1
|
7338
| null |
8
|
2393
|
I am using [Cohen's Kappa](http://en.wikipedia.org/wiki/Cohen%27s_kappa) to calculate the inter-agreement between two judges.
It is calculated as:
$ \frac{P(A) - P(E)}{1 - P(E)} $
where $P(A)$ is the proportion of agreement and $P(E)$ the probability of agreement by chance.
Now for the following dataset, I get the expected results:
```
User A judgements:
- 1, true
- 2, false
User B judgements:
- 1, false
- 2, false
Proportion agreed: 0.5
Agreement by chance: 0.625
Kappa for User A and B: -0.3333333333333333
```
We can see that both judges have not agreed very well. However in the following case where both judges evaluate one criteria, kappa evaluates to zero:
```
User A judgements:
- 1, false
User B judgements:
- 1, false
Proportion agreed: 1.0
Agreement by chance: 1.0
Kappa for User A and B: 0
```
Now I can see that the agreement by chance is obviously 1, which leads to kappa being zero, but does this count as a reliable result? The problem is that I normally don't have more than two judgements per criterion, so these will never evaluate to any kappa greater than 0, which I think is not very representative.
Am I right with my calculations? Can I use a different method to calculate inter-agreement?
Here we can see that kappa works fine for multiple judgements:
```
User A judgements:
- 1, false
- 2, true
- 3, false
- 4, false
- 5, true
User B judgements:
- 1, true
- 2, true
- 3, false
- 4, true
- 5, false
Proportion agreed: 0.4
Agreement by chance: 0.5
Kappa for User A and B: -0.19999999999999996
```
|
Can one use Cohen's Kappa for two judgements only?
|
CC BY-SA 2.5
| null |
2011-02-14T17:16:52.580
|
2011-02-18T01:26:44.263
|
2011-02-17T22:01:30.213
|
930
|
1205
|
[
"reliability",
"information-retrieval"
] |
7209
|
1
| null | null |
5
|
945
|
My aim is to predict quarterly customer-default probabilities: I have data on ~ 2 million individuals, who default on average with a probability of ~ 0.3 percent.
Therefore I am thinking about undersampling the majority class (non-defaults) to save computing time (kernel methods can be quite costly; I know about correcting the probability predictions) -- the other option being just taking a sub-sample of the data.
What do you think would be a good ratio of defaults to non-defaults in my learning sample?
Thanks for your help!
|
How much undersampling should be done?
|
CC BY-SA 2.5
| null |
2011-02-14T17:40:19.597
|
2011-02-14T23:36:54.893
|
2011-02-14T18:40:11.043
| null |
2549
|
[
"sampling"
] |
7210
|
2
| null |
7207
|
331
| null |
The key difference is that ROC curves will be the same no matter what the baseline probability is, but PR curves may be more useful in practice for needle-in-haystack type problems or problems where the "positive" class is more interesting than the negative class.
To show this, first let's start with a very nice way to define precision, recall and specificity. Assume you have a "positive" class called 1 and a "negative" class called 0. $\hat{Y}$ is your estimate of the true class label $Y$. Then:
$$
\begin{aligned}
&\text{Precision} &= P(Y = 1 | \hat{Y} = 1) \\
&\text{Recall} = \text{Sensitivity} &= P(\hat{Y} = 1 | Y = 1) \\
&\text{Specificity} &= P(\hat{Y} = 0 | Y = 0)
\end{aligned}
$$
The key thing to note is that sensitivity/recall and specificity, which make up the ROC curve, are probabilities conditioned on the true class label. Therefore, they will be the same regardless of what $P(Y = 1)$ is. Precision is a probability conditioned on your estimate of the class label and will thus vary if you try your classifier in different populations with different baseline $P(Y = 1)$. However, it may be more useful in practice if you only care about one population with known background probability and the "positive" class is much more interesting than the "negative" class. (IIRC precision is popular in the document retrieval field, where this is the case.) This is because it directly answers the question, "What is the probability that this is a real hit given my classifier says it is?".
Interestingly, by Bayes' theorem you can work out cases where specificity can be very high and precision very low simultaneously. All you have to do is assume $P(Y = 1)$ is very close to zero. In practice I've developed several classifiers with this performance characteristic when searching for needles in DNA sequence haystacks.
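A small R sketch of that last point: holding sensitivity and specificity fixed (values made up for illustration), precision collapses as the baseline probability $P(Y=1)$ goes to zero:
```
sens <- 0.90    # P(Yhat = 1 | Y = 1), held fixed
spec <- 0.99    # P(Yhat = 0 | Y = 0), held fixed

prev <- c(0.5, 0.1, 0.01, 0.001)    # baseline P(Y = 1)
precision <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))   # Bayes' theorem
round(data.frame(prevalence = prev, precision = precision), 4)
```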
IMHO when writing a paper you should provide whichever curve answers the question you want answered (or whichever one is more favorable to your method, if you're cynical). If your question is: "How meaningful is a positive result from my classifier given the baseline probabilities of my problem?", use a PR curve. If your question is, "How well can this classifier be expected to perform in general, at a variety of different baseline probabilities?", go with a ROC curve.
| null |
CC BY-SA 3.0
| null |
2011-02-14T18:11:52.737
|
2013-06-05T17:48:31.693
|
2013-06-05T17:48:31.693
|
7290
|
1347
| null |
7211
|
1
| null | null |
12
|
12708
|
Can you recommend a text mining package in R that can be used against large volumes of data?
Secondly, is there a GUI available for any of the text mining packages in R?
Thirdly, is there another open source text mining program that is easy and intuitive to use?
|
What are the text-mining packages for R and are there other open source text-mining programs?
|
CC BY-SA 2.5
| null |
2011-02-14T19:29:14.993
|
2016-04-02T16:53:50.907
|
2011-02-14T23:38:31.837
| null | null |
[
"r",
"text-mining"
] |
7212
|
2
| null |
7211
|
4
| null |
See the [tm package](http://tm.r-forge.r-project.org/) and [this presentation](http://www.rinfinance.com/agenda/2010/Theussl+Feinerer+Hornik.pdf) by [Stefan Theussl](http://statmath.wu-wien.ac.at/~theussl/) given at the 2010 [R/Finance conference](http://www.rinfinance.com).
| null |
CC BY-SA 2.5
| null |
2011-02-14T19:35:54.927
|
2011-02-14T19:35:54.927
| null | null |
1657
| null |
7213
|
2
| null |
7165
|
2
| null |
I think Harvey Motulsky's Intuitive Biostatistics is pretty good for non-mathematical "intuitive" explanations to basic statistical methods most commonly employed in the biological and medical sciences.
| null |
CC BY-SA 2.5
| null |
2011-02-14T20:08:32.530
|
2011-02-14T20:08:32.530
| null | null |
3183
| null |
7214
|
2
| null |
7211
|
17
| null |
Please see the [CRAN Task View on Natural Language Processing](http://cran.r-project.org/web/views/NaturalLanguageProcessing.html) which includes, among others, the [tm](http://cran.r-project.org/package=tm) package already mentioned by Josh.
| null |
CC BY-SA 2.5
| null |
2011-02-14T20:50:04.587
|
2011-02-14T20:50:04.587
| null | null |
334
| null |
7215
|
6
| null | null |
0
| null |
I have been one of the pro tem moderators and I would like to continue my mission as a regular mod.
So, if you like what I was doing here (compulsive edits, protecting wide scope, journal club), vote for me!
| null |
CC BY-SA 2.5
| null |
2011-02-14T21:19:12.923
|
2011-02-14T21:19:12.923
|
2011-02-14T21:19:12.923
| null | null | null |
7216
|
2
| null |
7211
|
3
| null |
Sure, RapidMiner with the text mining extension.
There are many videos that show how it is done.
| null |
CC BY-SA 2.5
| null |
2011-02-14T21:24:14.247
|
2011-02-14T21:24:14.247
| null | null | null | null |
7217
|
2
| null |
7211
|
1
| null |
GATE is very comprehensive. It also allows you to work with different languages and has an ontology editor.
| null |
CC BY-SA 3.0
| null |
2011-02-14T21:44:54.347
|
2011-11-21T22:20:49.540
|
2011-11-21T22:20:49.540
|
930
|
3489
| null |
7218
|
1
|
7221
| null |
10
|
3170
|
Suppose I have a large population of data points $(x,y)$ and that the Pearson's correlation is
$$\textrm{corr}(X,Y) = \rho$$
What can I reasonably say about the correlation I expect to observe in a sample of size $n$? If the sample correlation is $\rho_s$, roughly what is the spread of $\rho_s$? Is $\rho_s$ biased?
If we make some assumptions like normality, can we compute the exact likelihood function of $\rho_s$ as a function of $\rho$?
(Ultimately, I am wondering about the problem of whether an observed high correlation is a fluke or not, and all I have is the sample size and the correlation.)
|
Distribution of sample correlation
|
CC BY-SA 2.5
| null |
2011-02-14T22:05:32.307
|
2020-11-29T11:41:58.980
|
2020-11-29T11:41:58.980
|
11887
|
2665
|
[
"distributions",
"correlation"
] |
7219
|
2
| null |
6
|
197
| null |
The biggest difference I see between the communities is that statistics emphasizes inference, whereas machine learning emphasizes prediction. When you do statistics, you want to infer the process by which the data you have were generated. When you do machine learning, you want to know how you can predict what future data will look like w.r.t. some variable.
Of course the two overlap. Knowing how the data was generated will give you some hints about what a good predictor would be, for example. However, one example of the difference is that machine learning has dealt with the p >> n problem (more features/variables than training samples) since its infancy, whereas statistics is just starting to get serious about this problem. Why? Because you can still make good predictions when p >> n, but you can't make very good inferences about what variables are actually important and why.
| null |
CC BY-SA 2.5
| null |
2011-02-14T22:09:41.657
|
2011-02-14T22:09:41.657
| null | null |
1347
| null |
7220
|
2
| null |
770
|
4
| null |
Depends on what you mean.
I've gotten deeper and deeper into statistics based on exposure to machine learning. (I had previously been more of a general AI guy, and hadn't had good experience with statistics, but gained greater understanding and appreciation of statistics as time has gone on.) So it's certainly a useful gateway.
I've been mostly self-taught in statistics, however, and that leaves a lot of holes. I understand and use some fairly advanced techniques, but I wouldn't have to get too far off of the path to get stymied, while someone with a firmer statistical foundation would not. And I think that would apply in your scenario of using Machine Learning as the way to learn statistics: your knowledge would be fragile.
| null |
CC BY-SA 2.5
| null |
2011-02-14T22:29:44.980
|
2011-02-14T23:04:02.267
|
2011-02-14T23:04:02.267
|
1764
|
1764
| null |
7221
|
2
| null |
7218
|
10
| null |
To quote the [Wikipedia article on the Fisher transformation](http://en.wikipedia.org/wiki/Fisher_transformation) :
If $(X, Y)$ has a bivariate normal distribution, and if the $(X_i, Y_i)$ pairs used to form the sample correlation coefficient $r$ are independent for $i = 1, \ldots, n,$ then $$z = {1 \over 2}\ln{1+r \over 1-r} = \operatorname{arctanh}(r)$$ is approximately normally distributed with mean
${1 \over 2}\ln{{1+\rho} \over {1-\rho}},$
and standard error
${1 \over \sqrt{N-3}},$
where $N$ is the sample size.
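A quick R sketch of turning this into an approximate 95% confidence interval for $\rho$ (the observed correlation and sample size below are made-up inputs):
```
r <- 0.6    # observed sample correlation (made-up value)
n <- 30     # sample size (made-up value)

z  <- atanh(r)                       # Fisher z transform
se <- 1 / sqrt(n - 3)
ci <- z + c(-1, 1) * qnorm(0.975) * se
tanh(ci)                             # back-transformed 95% CI for rho
```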
| null |
CC BY-SA 2.5
| null |
2011-02-14T22:34:14.100
|
2011-02-15T08:55:41.067
|
2011-02-15T08:55:41.067
|
449
|
449
| null |
7222
|
2
| null |
7209
|
2
| null |
In a first approximation, 1:1 is a good proportion, but:
- Some methods are more vulnerable to unequal classes, some less so -- a plain decision tree will almost always vote for the much larger class, while 1-NN will not be affected at all. It is a good idea to check this (in the literature or by asking here) in the context of your problem.
- You should beware of possible inhomogeneities (let's say hidden "subclasses") -- subsampling may change the proportions between subclasses and thus lead to strange effects. To this end, it is good to at least try a few subsampling realizations to have a chance of spotting possible problems. (Well, I called it a problem, but it may also be quite an enlightening insight into the data.)
| null |
CC BY-SA 2.5
| null |
2011-02-14T23:36:54.893
|
2011-02-14T23:36:54.893
| null | null | null | null |
7223
|
1
| null | null |
12
|
1833
|
I would like to pose this question in two parts. Both deal with a generalized linear model, but the first deals with model selection and the other deals with regularization.
Background: I utilize GLMs (linear, logistic, gamma regression) models for both prediction and for description. When I refer to the "normal things one does with a regression" I largely mean description with (i) confidence intervals around coefficients, (ii) confidence intervals around predictions and (iii) hypothesis tests concerning linear combinations of the coefficients such as "is there a difference between treatment A and treatment B?".
Do you legitimately lose the ability to do these things using the normal theory under each of the following? And if so, are these things really only good for models used for pure prediction?
I. When a GLM has been fit via some model selection process (for concreteness say its a stepwise procedure based on AIC).
II. When a GLM has been fit via a regularization method (say using glmnet in R).
My sense is that for I. the answer is technically that you should use a bootstrap for the "normal things one does with a regression", but no one really abides by that.
Add:
After getting a few responses and reading elsewhere, here is my take on this (for anyone else benefit as well as to receive correction).
I.
A) RE: Error Generalize. In order to generalize error rates on new data, when there is no hold out set, cross validation can work but you need to repeat the process completely for each fold - using nested loops - thus any feature selection, parameter tuning, etc. must be done independently each time. This idea should hold for any modeling effort (including penalized methods).
B) RE: Hypothesis testing and confidence intervals of GLM. When using model selection (feature selection, parameter tuning, variable selection) for a generalized linear model and a hold out set exists, it is permissible to train the model on a partition and then fit the model on the remaining data or the full data set and use that model/data to perform hypothesis tests etc. If a hold out set does not exist, a bootstrap can be used, as long as the full process is repeated for each bootstrap sample. This limits the hypothesis tests that can be done though since perhaps a variable will not always be selected for example.
C) RE: Not caring about prediction on future data sets: fit a purposeful model guided by theory and a few hypothesis tests, and even consider leaving all variables in the model (significant or not) (along the lines of Hosmer and Lemeshow). This is classical regression modeling with a small variable set, and it allows the use of CIs and hypothesis tests.
D) RE: Penalized regression. No advice, perhaps consider this suitable for prediction only (or as a type of feature selection to then apply to another data set as in B above) as the bias introduced makes CI's and hypothesis tests unwise - even with the bootstrap.
|
GLM after model selection or regularization
|
CC BY-SA 4.0
| null |
2011-02-14T23:51:15.593
|
2018-06-20T09:40:15.533
|
2018-06-20T09:40:15.533
|
128677
|
2040
|
[
"regression",
"model-selection",
"regularization"
] |
7224
|
1
| null | null |
110
|
14811
|
I'm working on a little project involving the faces of twitter users via their profile pictures.
A problem I've encountered is that after I filter out all but the images that are clear portrait photos, a small but significant percentage of twitter users use a picture of Justin Bieber as their profile picture.
In order to filter them out, how can I tell programmatically whether a picture is that of Justin Bieber?
|
Detecting a given face in a database of facial images
|
CC BY-SA 2.5
| null |
2011-02-14T22:41:09.187
|
2012-02-19T18:09:33.437
|
2011-02-15T15:22:25.870
| null |
21466
|
[
"machine-learning",
"clustering",
"image-processing"
] |
7225
|
1
| null | null |
20
|
10320
|
I am using the R package penalized to obtain shrunken estimates of coefficients for a dataset where I have lots of predictors and little knowledge of which ones are important. After I've picked tuning parameters L1 and L2 and I'm satisfied with my coefficients, is there a statistically sound way to summarize the model fit with something like R-squared?
Furthermore, I'm interested in testing the overall significance of the model (i.e. does $R^2 = 0$, or equivalently, are all the coefficients $= 0$?).
I've read through the answers on a similar question asked [here](https://stats.stackexchange.com/questions/2121/how-can-i-estimate-coefficient-standard-errors-when-using-ridge-regression), but it didn't quite answer my question. There's an excellent tutorial on the R package that I'm using [here](http://cran.r-project.org/web/packages/penalized/vignettes/penalized.pdf), and the author Jelle Goeman had the following note at the end of the tutorial regarding confidence intervals from penalized regression models:
>
It is a very natural question to ask for standard errors of regression coefficients or other estimated quantities. In principle such standard errors can easily be calculated, e.g. using the bootstrap.
Still, this package deliberately does not provide them. The reason for this is that standard errors are not very meaningful for strongly biased estimates such as arise from penalized estimation methods. Penalized estimation is a procedure that reduces the variance of estimators by introducing substantial bias. The bias of each estimator is therefore a major component of its mean squared error, whereas its variance may contribute only a small part.
Unfortunately, in most applications of penalized regression it is impossible to obtain a sufficiently precise estimate of the bias. Any bootstrap-based calculations can only give an assessment of the variance of the estimates. Reliable estimates of the bias are only available if reliable unbiased estimates are available, which is typically not the case in situations in which penalized estimates are used.
Reporting a standard error of a penalized estimate therefore tells only part of the story. It can give a mistaken impression of great precision, completely ignoring the inaccuracy caused by the bias. It is certainly a mistake to make confidence statements that are only based on an assessment of the variance of the estimates, such as bootstrap-based confidence intervals do.
|
Estimating R-squared and statistical significance from penalized regression model
|
CC BY-SA 2.5
| null |
2011-02-15T00:38:52.977
|
2018-04-05T13:56:12.613
|
2020-06-11T14:32:37.003
|
-1
|
36
|
[
"regression",
"lasso",
"stepwise-regression",
"ridge-regression"
] |
7226
|
2
| null |
7197
|
1
| null |
Given that you're comparing n groups ($n > 2$) and that your count data are, as you say, likely non-normally distributed, I wonder if perhaps applying Kruskal–Wallis analysis of variance to your data might be an option.
| null |
CC BY-SA 3.0
| null |
2011-02-15T00:58:32.400
|
2011-06-26T21:39:02.970
|
2011-06-26T21:39:02.970
|
930
|
2730
| null |
7227
|
2
| null |
7224
|
49
| null |
A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
| null |
CC BY-SA 2.5
| null |
2011-02-15T01:03:52.237
|
2011-02-15T01:03:52.237
| null | null |
3203
| null |
7228
|
2
| null |
7128
|
0
| null |
A few thoughts
- you might want to rearrange your data frame either in wide (rows are cases and columns are ratings for stimuli pairs) or long format (rows are case by stimuli pair combinations).
Then you could run mixed models lme4 or perhaps aov (see here for info on different ANOVAs).
- You might find useful the tutorial on multidimensional scaling on the Quick-R website.
It shows how to prepare the data and produce a plot.
However, from what I understand of your data, you may have to transform your ratings. At the moment, higher scores seem to indicate greater similarity, whereas MDS will expect that higher scores mean greater dissimilarity.
At a basic level you could use (9 minus rating).
| null |
CC BY-SA 2.5
| null |
2011-02-15T01:14:20.777
|
2011-02-16T01:26:19.933
|
2011-02-16T01:26:19.933
|
183
|
183
| null |
7229
|
2
| null |
7224
|
16
| null |
I have a feeling that [http://www.tineye.com/commercial_api](http://www.tineye.com/commercial_api) may be the solution here.
Simply throw the Twitter profile image to Tineye, see if it returns images (and associated URLs) that can clearly be identified (or automatically scored using simple word-count logic) as being related to (or of) that little sack of **.
Simples!
| null |
CC BY-SA 2.5
| null |
2011-02-15T01:19:42.080
|
2011-02-15T01:19:42.080
| null | null |
3205
| null |
7231
|
2
| null |
7224
|
2
| null |
You could try [locality sensitive hashing](http://en.wikipedia.org/wiki/Locality_sensitive_hashing).
| null |
CC BY-SA 2.5
| null |
2011-02-15T02:28:19.450
|
2011-02-15T02:28:19.450
| null | null |
900
| null |
7232
|
2
| null |
7224
|
12
| null |
Since you are able to filter to only those that are clear portrait photos, I'm assuming you have some method of feature generation to transform the raw images into features that are useful for machine learning purposes. If that's true, you could try to train a classification algorithm (there are lots of them: neural networks, etc.) by feeding the algorithm a bunch of known Bieber photos as well as a bunch of known non-Biebers. Once you have trained the model, it could be used to predict whether a new image is Bieber or not.
This sort of supervised learning technique does require you to have data where you know the correct answer (Bieber or not), but those could probably be found from a Google image search. It also requires that you have the right sorts of features, and I don't know enough about image processing or your algorithm to know if that is a major drawback.
| null |
CC BY-SA 2.5
| null |
2011-02-15T02:52:31.463
|
2011-02-15T02:52:31.463
| null | null |
2485
| null |
7233
|
1
|
7243
| null |
1
|
1898
|
How to interpret residuals in random and fixed effects models? I am a medical doctor. Kindly explain in simple words.
Updated Here is an example:
```
Weight (Fixed) Weight (Random) Residual (Fixed) Residual (Random)
Relative weight Relative weight Std Residual Std Residual
32.27 21.56 -4.61 -1.62
25.14 21.02 -0.41 -0.46
18.52 20.21 2.16 0.52
14.39 19.39 2.10 0.62
9.68 17.81 2.56 1.04
```
|
Interpret residuals in random and fixed effects models in meta-analysis
|
CC BY-SA 3.0
| null |
2011-02-15T03:05:32.217
|
2018-05-26T19:37:57.020
|
2018-01-30T18:08:43.177
|
101426
|
2956
|
[
"residuals",
"meta-analysis"
] |
7235
|
2
| null |
7223
|
6
| null |
You might check out David Freedman's paper, "[A Note on Screening Regression Equations.](http://www.psych.umn.edu/faculty/waller/classes/mult11/readings/freedman83.pdf)" (ungated)
Using completely uncorrelated data in a simulation, he shows that, if there are many predictors relative to the number of observations, then a standard screening procedure will produce a final regression that contains many (more than by chance) significant predictors and a highly significant F statistic. The final model suggests that it is effective at predicting the outcome, but this success is spurious. He also illustrates these results using asymptotic calculations. Suggested solutions include screening on a sample and assessing the model on the full data set and using at least an order of magnitude more observations than predictors.
| null |
CC BY-SA 2.5
| null |
2011-02-15T03:40:52.463
|
2011-02-15T03:40:52.463
| null | null |
401
| null |
7236
|
1
| null | null |
6
|
1998
|
I've got a dataset similar to this:
```
pat_id epis Care Type
1 1722650 Acute Care
1 1723120 Rehabilitation care
2 1584309 Acute Care
2 1585705 Rehabilitation care
3 1726487 GEM
3 1664031 Acute Care
3 1726488 Acute Care
3 1726489 Rehabilitation care
```
Each patient has multiple "episodes/care types". I want to sample 50 patients but also approximately proportional to the "Care Type" from the population (say, 50% acute care, 30% rehabilitation care, 20% GEM).
One way I thought of doing this is to split the data set into, say, "acute", "rehab" and "gem", then sample 25 from "acute", 15 from "rehab" and 10 from "gem". But there would be an overlap using this approach (e.g. patient 1375 would be in gem, acute and rehab).
Is there a R package that would handle this sort of sampling easily?
|
Multistage sampling in R
|
CC BY-SA 3.0
| null |
2011-02-15T04:22:10.493
|
2011-06-17T17:37:47.020
|
2011-06-17T17:37:47.020
| null | null |
[
"r",
"sampling"
] |
7238
|
1
| null | null |
3
|
2169
|
I have two vectors of matching lengths. They are readings from two different sensors (one is from a smartphone and the other is from a wiimote) of the same hand movement. I am trying to find the time offset between them to synchronise the readings for further processing. The readings I get are of the format (Time(ms) Value) for accelerations in the X,Y and Z direction.
For the synchronization, I plotted the cross-correlation function `xcorr2()` between the two sets. I am getting the same graph (a weird triangle peak and a straight line at the bottom) for Accelerations along the x, y and z directions (which I guess is good) but I don't know how to interpret it. What do the axes in the graph represent?
Can anyone explain to me what `xcorr2()` means in a qualitative sense. From the correlation function, how do I determine the offset (i.e. how many seconds is sensor1 behind sensor2)?
Thanks!
Imelza
|
MATLAB interpretation of xcorr2 cross correlation function
|
CC BY-SA 3.0
| null |
2011-02-15T05:53:31.987
|
2013-08-16T13:05:11.887
|
2012-10-19T09:34:45.953
| null | null |
[
"data-visualization",
"correlation",
"matlab",
"interpretation",
"cross-correlation"
] |
7239
|
1
|
7264
| null |
4
|
5943
|
I am fitting a simple linear regression model with R & JAGS:
```
model {
for (i in 1:length(ri)){
ri[i] ~ dnorm(mu[i],tau)
mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i]
}
#posterior predictive distribution
for (j in 1:length(pvr)){
pmu[j] <- alpha + b.vr*pvr[j] + b.ir*pir[j]
pri[j] ~ dnorm(pmu[j],tau)
}
#priors for regression
alpha ~ dnorm(0,0.01)
b.vr ~ dunif(-10,0)
b.ir ~ dunif(0,10)
#prior for precision
tau ~ dgamma(0.001,0.001)
sigma <- 1/sqrt(tau)
}
```
I am trying to calculate the posterior predictive distribution with new data (`pvr` and `pir` in the model). But somehow the reported results (`pri`) do not make sense (the means of `pri` are smaller than expected).
Could someone explain me, is something wrong with the model specification?
|
JAGS: posterior predictive distribution
|
CC BY-SA 2.5
| null |
2011-02-15T08:40:40.117
|
2011-02-18T21:58:49.337
|
2011-02-15T18:23:17.730
|
8
|
1443
|
[
"markov-chain-montecarlo",
"bugs",
"gibbs",
"jags"
] |