Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4821 | 2 | null | 4816 | 24 | null | The sharpest bound I know of is that of [Laurent and Massart](https://projecteuclid.org/journals/annals-of-statistics/volume-28/issue-5/Adaptive-estimation-of-a-quadratic-functional-by-model-selection/10.1214/aos/1015957395.full), Lemma 1, p. 1325.
A corollary of their bound is:
$$P(X-k\geq 2\sqrt{kx}+2x)\leq \exp (-x) $$
$$P(k-X\geq 2\sqrt{kx})\leq \exp (-x) $$
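For a quick numerical sanity check of the first inequality, here is a small simulation sketch in R (the degrees of freedom $k$ and the grid of $x$ values are arbitrary choices of mine):
```
set.seed(1)
k <- 10
X <- rchisq(1e6, df = k)                     # X ~ chi-squared with k degrees of freedom
for (x in c(0.5, 1, 2, 4)) {
  tail <- mean(X - k >= 2*sqrt(k*x) + 2*x)   # empirical left-hand side
  cat(sprintf("x = %.1f: empirical %.5f <= bound %.5f\n", x, tail, exp(-x)))
}
```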
| null | CC BY-SA 4.0 | null | 2010-11-23T12:02:06.880 | 2022-08-12T20:27:22.900 | 2022-08-12T20:27:22.900 | 79696 | 223 | null |
4822 | 2 | null | 4818 | 16 | null | A close-to-intuitive answer:
Take a closer look at the formula for the McNemar test, given the table
```
    | pos | neg
----|-----|-----
pos |  a  |  b
----|-----|-----
neg |  c  |  d
```
The McNemar statistic `M` is calculated as:
$$ M = {(b-c)^2 \over b+c} $$
The definition of a $\chi^2$ distribution with $k$ degrees of freedom is that it consists of the sum of squares of $k$ independent standard normal variables. If the four cell counts are large enough, `b` and `c`, and thus `b-c` and `b+c`, can be approximated by a normal distribution. Given the formula for `M`, it is easy to see that with large enough values `M` will indeed approximately follow a $\chi^2$ distribution with 1 degree of freedom.
---
EDIT :
As onstop rightly indicated, the normal approximation is in fact completely equivalent. That's rather trivial given the argument using the approximation of `b-c` by the normal distribution.
The exact binomial version is also equivalent to the sign test, in the sense that in this version the binomial distribution is used to compare `b` to $Binom(b+c,0.5)$. Or we can say that under the null hypothesis the distribution of `b` can be approximated by $N(0.5\times(b+c),0.5^2\times(b+c))$.
Or, equivalently:
$$\frac{b-(\frac{b+c}{2})}{\frac{\sqrt{b+c}}{2}}\sim N(0,1)$$
which simplifies to
$$ \frac{b-c}{\sqrt{b+c}}\sim N(0,1)$$
or, when taken the square on both sides, to $M \sim \chi^2_1$.
Hence, the normal approximation is used. It is the same as the $\chi^2$ approximation.
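As an illustrative check (a small R sketch with made-up counts), the hand-computed `M` matches the uncorrected statistic reported by `mcnemar.test()`:
```
tab <- matrix(c(40, 10, 25, 30), nrow = 2,
              dimnames = list(test1 = c("pos", "neg"), test2 = c("pos", "neg")))
b  <- tab["pos", "neg"]                        # discordant cell b
cc <- tab["neg", "pos"]                        # discordant cell c (renamed to avoid masking c())
M <- (b - cc)^2 / (b + cc)                     # statistic from the formula above
M
mcnemar.test(tab, correct = FALSE)$statistic   # same value
pchisq(M, df = 1, lower.tail = FALSE)          # p-value from chi^2 with 1 df
```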
| null | CC BY-SA 2.5 | null | 2010-11-23T12:33:32.047 | 2010-11-23T21:09:56.640 | 2010-11-23T21:09:56.640 | 1124 | 1124 | null |
4823 | 1 | 4824 | null | 13 | 43131 | I've got a dataframe like the following:
```
case simulation temp plank oxygen
1 1 1 8 7 11
2 2 1 16 10 15
...
17 17 2 26 12 17
18 18 2 15 8 12
19 19 2 28 11 21
20 20 2 24 6 14
```
I'd like to get summaries split by the levels of the simulation variable. For example, I'd like the mean of `temp` for simulation==1 and simulation==2, and the same for standard deviation.
At the moment I'm using the following code, which is pretty horrible:
```
df <- read.csv("data.csv")
attach(df)
# Create subset variables
temp1 = subset(temp, simulation==1)
temp2 = subset(temp, simulation==2)
plank1 = subset(plank, simulation==1)
plank2 = subset(plank, simulation==2)
oxygen1 = subset(oxygen, simulation==1)
oxygen2 = subset(oxygen, simulation==2)
print(sd(temp1))
print(sd(temp2))
print(sd(plank1))
print(sd(plank2))
```
I'm sure there must be an automatic way to do this in R, but I can't find it. I've tried using `summary(df ~ simulation)`, but that doesn't produce anything useful.
| Automatically produce summary by factor variable in R | CC BY-SA 2.5 | null | 2010-11-23T12:44:40.340 | 2010-11-23T14:31:45.363 | null | null | 261 | [
"r"
]
|
4824 | 2 | null | 4823 | 19 | null | Check out the `by()` or `tapply()` functions. Basically,
```
tapply(y, g, mean)
```
will give you the mean of `y` by levels of `g`. If you want to get a data.frame from the resulting aggregated measures, use `aggregate()`.
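Applied to the data in the question (a brief sketch assuming the data frame `df` with the columns shown above is already loaded):
```
# group means and standard deviations of temp by simulation
tapply(df$temp, df$simulation, mean)
tapply(df$temp, df$simulation, sd)
# all three measurement columns at once, returned as a data frame
aggregate(cbind(temp, plank, oxygen) ~ simulation, data = df,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```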
A more elaborated solution is available through the `summary.formula()` function in the [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) package.
| null | CC BY-SA 2.5 | null | 2010-11-23T12:49:02.623 | 2010-11-23T12:49:02.623 | null | null | 930 | null |
4825 | 2 | null | 3113 | 7 | null | Some people seem to use a concept of Pitman Asymptotic Relative Efficiency (ARE) to inflate the sample size obtained by using a sample size formula for a parametric test. Ironically, in order to compute it, one has to assume a distribution again... see e.g. [Sample size for the Mann-Whitney U test](http://www.childrens-mercy.org/stats/size/mann.asp) There are some links in the end of the article that provide pointers for further reading.
| null | CC BY-SA 2.5 | null | 2010-11-23T14:04:56.840 | 2010-11-23T19:45:50.180 | 2010-11-23T19:45:50.180 | 1573 | 1573 | null |
4826 | 2 | null | 4823 | 3 | null | package `doBy` has a `summaryBy` function that has a formula based syntax like the one you tried.
Also, I think that question would have been better asked on Stack Overflow.
| null | CC BY-SA 2.5 | null | 2010-11-23T14:31:45.363 | 2010-11-23T14:31:45.363 | null | null | 1979 | null |
4827 | 2 | null | 4817 | 7 | null | Good (2005) defines the one-sample sign-test for the location parameter $\theta$ for a continuous symmetric variable $X$ as follows:
- Take the difference $D_i$ of each observation to the location parameter $\theta_0$ under the null hypothesis.
- Define an indicator variable $Z_i$ as $0$ when $D_i < 0$, and as $1$ when $D_i > 0$. Since $X$ is continuous, $P(D_i = 0) = 0$.
- Calculate test statistic $T=\sum_i Z_i$.
- The distribution of $T$ is found by generating all $2^N$ possible outcomes of the $Z_i$ indicator variables (2 possibilities for each observation, each with probability $\frac{1}{2}$ under H0). This leads to the binomial distribution, as in the sign test for 2 dependent samples.
The justification for step 4 is:
>
Suppose we had lost track of the signs of the deviations [...]. We could attach new signs at random [...]. If we are correct in our hypothesis that the variable has a symmetric distribution about $\theta_0$, the resulting values should have precisely the same distribution as the original observations. That is, the absolute values of the deviations are sufficient for regenerating the sample. (p34f)
I agree that this reasoning seems somewhat different from a 2-sample permutation test where you re-assign experimental conditions to observations with the justification of exchangeability under H0.
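A small R sketch of steps 1-4 (my own illustration, assuming a numeric vector `x` and a hypothesized location `theta0`; exact ties with `theta0` are dropped, consistent with the continuity assumption):
```
sign_test <- function(x, theta0 = 0) {
  d <- x - theta0            # step 1: deviations from the hypothesized location
  d <- d[d != 0]             # P(D_i = 0) = 0 for continuous X; drop exact ties anyway
  z <- as.integer(d > 0)     # step 2: indicator variables
  T <- sum(z)                # step 3: test statistic
  # step 4: under H0, T ~ Binomial(N, 1/2); two-sided p-value
  binom.test(T, n = length(z), p = 0.5)$p.value
}
set.seed(1)
sign_test(rnorm(30, mean = 0.4), theta0 = 0)
```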
Good, P. 2005. Permutation, Parametric, and Bootstrap Tests of Hypotheses. New York: Springer.
| null | CC BY-SA 2.5 | null | 2010-11-23T14:35:44.533 | 2010-11-23T15:31:39.493 | 2010-11-23T15:31:39.493 | 1909 | 1909 | null |
4828 | 2 | null | 4807 | 2 | null | When I initially wrote the comments below I had assumed that the Heckman estimator could be used for dichotomous outcomes, but the second paper I cite says there is no direct analog. Hopefully someone can point to different and more applicable resources. I have left my initial comment up as I still feel those papers are helpful. I'm not sure how acceptable it would be viewed to use OLS (as opposed to logistic regression) simply so you can incorporate the Heckman correction estimate.
---
The work of James Heckman would be applicable to your problem, especially if you have an instrument with which you can estimate the probability of being chosen for a C-section independent of trauma risk.
[Sample Selection Bias as a Specification Error](http://dx.doi.org/10.2307/1912352)
by: James J. Heckman
Econometrica, Vol. 47, No. 1. (1979), pp. 153-161.
[PDF version](http://faculty.smu.edu/millimet/classes/eco7321/papers/heckman02.pdf)
Also, as an introduction to the logic of the Heckman selection estimator aimed at a largely non-technical audience, I enjoy this paper:
[Is the Magic Still There? The Use of the Heckman Two-Step Correction for Selection Bias in Criminology](http://dx.doi.org/10.1007/s10940-007-9024-4)
by: Shawn Bushway, Brian Johnson, Lee Slocum
Journal of Quantitative Criminology, Vol. 23, No. 2. (1 June 2007), pp. 151-178.
[PDF version](http://www.ccjs.umd.edu/sites/ccjs.umd.edu/files/pubs/HeckmanCorrection.pdf)
| null | CC BY-SA 3.0 | null | 2010-11-23T15:05:47.490 | 2013-07-25T12:09:30.270 | 2013-07-25T12:09:30.270 | 930 | 1036 | null |
4829 | 2 | null | 4753 | 2 | null | I might suggest non-negative matrix factorization. The iterative algorithm of Lee and Seung is easy to implement and should be amenable to sparse matrices (although it involves Hadamard products, which some sparse matrix packages may not support).
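For reference, here is a bare-bones R sketch of those multiplicative updates for minimizing $\|V - WH\|_F^2$ (dense matrices only; a sparse implementation would keep $V$ in sparse form, and the small `eps` guards the divisions):
```
nmf_ls <- function(V, r, iters = 200, eps = 1e-9) {
  n <- nrow(V); m <- ncol(V)
  W <- matrix(runif(n * r), n, r)
  H <- matrix(runif(r * m), r, m)
  for (i in seq_len(iters)) {
    H <- H * (t(W) %*% V) / (t(W) %*% W %*% H + eps)   # multiplicative update for H
    W <- W * (V %*% t(H)) / (W %*% H %*% t(H) + eps)   # multiplicative update for W
  }
  list(W = W, H = H)
}
set.seed(1)
V <- matrix(rpois(200, 3), 20, 10)      # a toy non-negative matrix
fit <- nmf_ls(V, r = 3)
norm(V - fit$W %*% fit$H, "F")          # reconstruction error
```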
| null | CC BY-SA 2.5 | null | 2010-11-23T17:00:09.463 | 2010-11-23T17:00:09.463 | null | null | 795 | null |
4830 | 1 | 4873 | null | 13 | 7835 | Full Disclosure: This is homework. I've included a link to the dataset ( [http://www.bertelsen.ca/R/logistic-regression.sav](http://www.bertelsen.ca/R/logistic-regression.sav) )
My goal is to maximize the prediction of loan defaulters in this data set.
Every model that I have come up with so far predicts >90% of non-defaulters correctly, but <40% of defaulters, making the overall classification efficiency ~80%. So I wonder whether there are interaction effects between the variables. Within a logistic regression, other than testing each possible combination, is there a way to identify potential interaction effects? Or, alternatively, is there a way to boost the efficiency of classification of defaulters?
I'm stuck, so any recommendations would be helpful, whether in words, R code, or SPSS syntax.
My primary variables are outlined in the following histogram and scatterplot (with the exception of the dichotomous variable)
A description of the primary variables:
```
age: Age in years
employ: Years with current employer
address: Years at current address
income: Household income in thousands
debtinc: Debt to income ratio (x100)
creddebt: Credit card debt in thousands
othdebt: Other debt in thousands
default: Previously defaulted (dichotomous, yes/no, 0/1)
ed: Level of education (No HS, HS, Some College, College, Post-grad)
```
Additional variables are just transformations of the above. I also tried converting a few of the continuous variables into categorical variables and implementing them in the model, no luck there.
If you'd like to pop it into R, quickly, here it is:
```
## R Code
library(foreign)  # read.spss() is in the foreign package
df <- read.spss(file="http://www.bertelsen.ca/R/logistic-regression.sav", use.value.labels=T, to.data.frame=T)
```


| Better Classification of default in logistic regression | CC BY-SA 2.5 | null | 2010-11-23T17:35:51.753 | 2010-12-07T16:30:00.257 | 2010-11-24T11:46:49.317 | 159 | 776 | [
"r",
"logistic",
"spss",
"self-study"
]
|
4831 | 1 | 4833 | null | 51 | 42253 | When transforming variables, do you have to use all of the same transformation? For example, can I pick and choose differently transformed variables, as in:
Let $x_1,x_2,x_3$ be age, length of employment, length of residence, and income.
```
Y = B1*sqrt(x1) + B2*-1/(x2) + B3*log(x3)
```
Or, must you be consistent with your transforms and use all of the same? As in:
```
Y = B1*log(x1) + B2*log(x2) + B3*log(x3)
```
My understanding is that the goal of transformation is to address the problem of normality. Looking at histograms of each variable we can see that they present very different distributions, which would lead me to believe that the transformations required are different on a variable by variable basis.
```
## R Code
library(foreign)  # for read.spss()
df <- read.spss(file="http://www.bertelsen.ca/R/logistic-regression.sav",
                use.value.labels=T, to.data.frame=T)
hist(df[1:7])
```

Lastly, how valid is it to transform variables using $\log(x_n + 1)$ where $x_n$ has $0$ values? Does this transform need to be consistent across all variables, or can it be used ad hoc, even for those variables which do not include $0$'s?
```
## R Code
plot(df[1:7])
```

| Regression: Transforming Variables | CC BY-SA 3.0 | null | 2010-11-23T17:41:19.050 | 2021-05-20T11:40:23.770 | 2017-10-04T13:24:10.083 | 919 | 776 | [
"r",
"regression",
"logistic",
"data-transformation"
]
|
4832 | 1 | 4835 | null | 7 | 7205 | In SPSS output there is a pretty little classification table available when you perform a logistic regression, is the same possible with R? If so, how?

| Logistic Regression: Classification Tables a la SPSS in R | CC BY-SA 2.5 | null | 2010-11-23T17:44:28.190 | 2010-11-23T21:33:23.423 | 2010-11-23T17:52:19.663 | 776 | 776 | [
"r",
"logistic",
"spss"
]
|
4833 | 2 | null | 4831 | 71 | null | One transforms the dependent variable to achieve approximate symmetry and homoscedasticity of the residuals. Transformations of the independent variables have a different purpose: after all, in this regression all the independent values are taken as fixed, not random, so "normality" is inapplicable. The main objective in these transformations is to achieve linear relationships with the dependent variable (or, really, with its logit). (This objective over-rides auxiliary ones such as reducing excess [leverage](http://www.jerrydallal.com/LHSP/diagnose.htm) or achieving a simple interpretation of the coefficients.) These relationships are a property of the data and the phenomena that produced them, so you need the flexibility to choose appropriate re-expressions of each of the variables separately from the others. Specifically, not only is it not a problem to use a log, a root, and a reciprocal, it's rather common. The principle is that there is (usually) nothing special about how the data are originally expressed, so you should let the data suggest re-expressions that lead to effective, accurate, useful, and (if possible) theoretically justified models.
The histograms--which reflect the univariate distributions--often hint at an initial transformation, but are not dispositive. Accompany them with scatterplot matrices so you can examine the relationships among all the variables.
---
Transformations like $\log(x + c)$ where $c$ is a positive constant "start value" can work--and can be indicated even when no value of $x$ is zero--but sometimes they destroy linear relationships. When this occurs, a good solution is to create two variables. One of them equals $\log(x)$ when $x$ is nonzero and otherwise is anything; it's convenient to let it default to zero. The other, let's call it $z_x$, is an indicator of whether $x$ is zero: it equals 1 when $x = 0$ and is 0 otherwise. These terms contribute a sum
$$\beta \log(x) + \beta_0 z_x$$
to the estimate. When $x \gt 0$, $z_x = 0$ so the second term drops out leaving just $\beta \log(x)$. When $x = 0$, "$\log(x)$" has been set to zero while $z_x = 1$, leaving just the value $\beta_0$. Thus, $\beta_0$ estimates the effect when $x = 0$ and otherwise $\beta$ is the coefficient of $\log(x)$.
| null | CC BY-SA 2.5 | null | 2010-11-23T17:55:52.673 | 2010-11-23T18:03:33.650 | 2010-11-23T18:03:33.650 | 919 | 919 | null |
4834 | 1 | 4836 | null | 8 | 2397 | One of the assumptions for using the Wilcoxon sign-rank test is that the underlying distribution is continuous ([see here](http://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test#Assumptions).)
However, there are cases (for example, when analyzing Likert scale data) where this assumption might not necessarily hold. In such cases, what test can you use? And how would you do it with R?
(My only bet here is to use a randomization test on the median - which I imagine can be easily done using the boot package.)
| Alternative to the Wilcoxon test when the distribution isn't continuous? | CC BY-SA 2.5 | null | 2010-11-23T17:59:32.200 | 2010-11-23T18:18:57.223 | 2010-11-23T18:18:57.223 | 919 | 253 | [
"r",
"nonparametric",
"sign-test",
"wilcoxon-signed-rank"
]
|
4835 | 2 | null | 4832 | 8 | null | I'm not aware of a specific command, but this might be a start:
```
# generate some data
> N <- 100
> X <- rnorm(N, 175, 7)
> Y <- 0.4*X + 10 + rnorm(N, 0, 3)
# dichotomize Y
> Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
# logistic regression
> glmFit <- glm(Yfac ~ X, family=binomial(link="logit"))
# predicted probabilities
> Yhat <- fitted(glmFit)
# choose a threshold for dichotomizing according to predicted probability
> thresh <- 0.5
> YhatFac <- cut(Yhat, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi"))
# contingency table and marginal sums
> cTab <- table(Yfac, YhatFac)
> addmargins(cTab)
YhatFac
Yfac lo hi Sum
lo 36 14 50
hi 12 38 50
Sum 48 52 100
# percentage correct for training data
> sum(diag(cTab)) / sum(cTab)
[1] 0.74
```
| null | CC BY-SA 2.5 | null | 2010-11-23T18:00:23.230 | 2010-11-23T18:00:23.230 | null | null | 1909 | null |
4836 | 2 | null | 4834 | 7 | null | I have found that the Wilcoxon statistic is still fine for this purpose and that small simulations do a good job of estimating the size and the power of the test. I suspect this is more powerful than just comparing the two medians. The main concern is lack of power due to extensive numbers of ties, but that concern attaches to any solution you can conceive of: there's no way around it (except to design instruments that offer a wider range of responses!).
To perform the simulation, concatenate the two data arrays (of lengths $n$ and $m$) into a single array (of length $n+m$). In each iteration randomly permute the elements of the array and break the result into the first $n$ and last $m$ elements.
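A rough R sketch of that simulation (assuming two vectors `x` and `y` of heavily tied responses; the Wilcoxon statistic is recomputed on each permuted split):
```
perm_wilcox <- function(x, y, B = 4999) {
  n <- length(x); pooled <- c(x, y)
  stat <- function(a, b) suppressWarnings(wilcox.test(a, b, exact = FALSE)$statistic)
  obs  <- stat(x, y)
  null <- replicate(B, {
    s <- sample(pooled)                  # random permutation of the pooled data
    stat(s[1:n], s[-(1:n)])              # first n vs last m elements
  })
  # two-sided permutation p-value, centred at the permutation mean of the statistic
  mean(abs(null - mean(null)) >= abs(obs - mean(null)))
}
set.seed(1)
x <- sample(1:5, 40, replace = TRUE)     # Likert-type data with many ties
y <- sample(1:5, 35, replace = TRUE, prob = c(1, 1, 2, 3, 3))
perm_wilcox(x, y)
```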
| null | CC BY-SA 2.5 | null | 2010-11-23T18:17:49.580 | 2010-11-23T18:17:49.580 | null | null | 919 | null |
4837 | 2 | null | 4364 | 25 | null | There's an entire book written about this: "Characterizations of the normal probability law", A. M. Mathai & G. Perderzoli. A brief review in [JASA (Dec. 1978)](http://www.jstor.org/pss/2286313) mentions the following:
>
Let $X_1, \ldots, X_n$ be independent random variables. Then $\sum_{i=1}^n{a_i X_i}$ and $\sum_{i=1}^n{b_i X_i}$ are independent, where $a_i b_i \ne 0$, if and only if the $X_i$ [are] normally distributed.
| null | CC BY-SA 3.0 | null | 2010-11-23T18:37:54.900 | 2016-11-04T16:46:51.817 | 2016-11-04T16:46:51.817 | 1569 | 919 | null |
4838 | 2 | null | 47 | 2 | null | This is a problem with clustering: you can't tell in advance what should be considered a cluster. I would work out the reason for clustering the users, specify a threshold value, and use hierarchical clustering.
In my experience, one has to set either the number of clusters needed or the threshold value (the distance that binds two data points together).
| null | CC BY-SA 2.5 | null | 2010-11-23T18:59:58.170 | 2010-11-23T18:59:58.170 | null | null | 2108 | null |
4839 | 1 | null | null | 4 | 1870 | I have a survey where I have asked people which type of computer games they enjoy and whether they consider themselves a hardcore gamer. I allowed people to select multiple genres, but now I am unsure what to do with my data.
I initially thought of factor analysis, the idea being that if there were genres that belonged to a particular type, they would separate out and I would be able to see a pattern. However, since I have the data on whether they consider themselves hardcore or not, it seems like I should use it.
If I did use it, would it make sense to test each genre individually, e.g. RTS-hardcore vs. RTS-non-hardcore, or should I look into combinations of genres?
---
[EDIT] @Srikant: Yes, I was planning to make the answers binary. It didn't make sense to me to score different answers with values depending on, say, the number of genres somebody chose, because they might not necessarily play each game evenly, and I would have no way to determine what share of gameplay each genre had; so binary seemed the fairest option to me.
@mbq & @chl: The aim is just to see if certain genres tend to be more "hardcore" than other genres. I was expecting to find RTS, MOBA and FPS to cluster towards hardcore seeing as they tend to have a higher learning curve than say music/rhythm games.
| Whether to use factor analysis based on binary multiple response data? | CC BY-SA 3.0 | null | 2010-11-23T19:10:52.133 | 2019-07-20T21:48:37.407 | 2019-07-20T21:48:37.407 | 11887 | 2127 | [
"binary-data",
"factor-analysis",
"survey",
"correspondence-analysis"
]
|
4840 | 1 | null | null | 1 | 235 | My friend is a chemist and his problem is to predict the level of ozone concentration at a single site. We have data for the last 12 years.
We want to predict the concentration for the coming years (as much as possible).
I know this data set is small, so my question is: is this possible? And if not, how much data would be required?
What is the tool/method to use?
Here is a plot of my data:

Any suggestions or help?
| How to predict Ozone concentration in few years time? | CC BY-SA 2.5 | null | 2010-11-23T19:11:35.047 | 2011-05-19T21:20:30.913 | 2010-11-27T15:10:15.177 | 2108 | 2108 | [
"machine-learning",
"forecasting",
"predictive-models"
]
|
4841 | 2 | null | 4803 | 3 | null | A Gamma distribution with an integer shape parameter is an [Erlang distribution](http://en.wikipedia.org/wiki/Erlang_distribution) which in turn is a generalization of the Exponential distribution often used to model waiting times. Thus, perhaps one approach is to use an Erlang distribution instead of a Gamma to model your waiting times. One way to justify the Erlang distribution is to assume that inter-event times are triggered by a certain number of underlying exponential inter-event times.
For example, think of water droplets falling into a balloon at some unknown rate. As water droplets keep falling, the balloon will pop at some point. The waiting time for a balloon to pop will be Erlang distributed as long as the inter-arrival times of the water droplets are exponentially distributed.
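A quick simulation of that droplet picture (a sketch; the number of droplets and the rate are arbitrary):
```
set.seed(1)
r <- 5; rate <- 2                                   # droplets needed to pop; arrival rate
pop_time <- replicate(1e4, sum(rexp(r, rate)))      # waiting time = sum of exponential gaps
hist(pop_time, breaks = 50, freq = FALSE, main = "Simulated waiting times")
curve(dgamma(x, shape = r, rate = rate), add = TRUE, lwd = 2)   # Erlang(r, rate) density
```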
A mixture of Erlang distributions can have the following interpretation: the inter-event times now depend on two separate exponentially distributed processes, perhaps with different parameters.
| null | CC BY-SA 2.5 | null | 2010-11-23T20:00:49.273 | 2010-11-23T20:00:49.273 | null | null | null | null |
4842 | 2 | null | 4818 | 8 | null | Won't the two approaches come to the same thing? The relevant chi-square distribution has one degree of freedom so is simply the distribution of the square of a random variable with a standard normal distribution. I'd have to go through the algebra to check, which I haven't got time to do right now, but I'd be surprised if you don't end up with exactly the same answer both ways.
| null | CC BY-SA 2.5 | null | 2010-11-23T20:23:40.877 | 2010-11-23T20:23:40.877 | null | null | 449 | null |
4843 | 1 | 4848 | null | 3 | 2477 | The "standard" way to compute Kappa for a predictive classification model (Witten and Frank page 163) is to construct the random confusion matrix in such a way that the number of predictions for each class is the same as the model predicted.
For a visual, see (right side is the random):

Does anyone know why this is the case, instead of truly creating a random confusion matrix where the prior probabilities drive the number of predictions for each class? That seems the more accurate comparison against a "null model". For example, in this case, the number of actual and predicted classes would coincide (in the image uploaded, this would mean that the columns of the random confusion matrix would be 100, 60 and 40 respectively).
Thanks!
BMiner
| Kappa for Predictive Model | CC BY-SA 2.5 | null | 2010-11-23T20:47:07.533 | 2010-11-23T22:09:40.567 | 2010-11-23T21:30:51.223 | 2040 | 2040 | [
"predictive-models"
]
|
4844 | 1 | null | null | 5 | 1780 | I have a timeseries of data which was gathered by driving a car around randomly.
One data point was gathered every minute and each data point is either "Yes" or "No". Yes = temperature is above a threshold, No = it is below the threshold. Assume the car never stops.
I'm interested in the proportion of Yeses and related confidence intervals.
I'm inclined to use binomial confidence intervals except that it seems to me that the data points are not independent.
Would the fact that the data was gathered by driving around randomly buy me independence? How could I derive meaningful confidence intervals?
| Confidence interval based on time series | CC BY-SA 2.5 | null | 2010-11-23T20:52:02.350 | 2022-08-18T20:59:10.793 | 2010-11-23T20:58:18.350 | null | 1784 | [
"time-series",
"confidence-interval",
"independence"
]
|
4845 | 2 | null | 4830 | 4 | null | I'm not a logistic regression expert, but isn't it just a problem of unbalanced data? You probably have many more non-defaulters than defaulters, which may shift the prediction to favour the larger class. Try leaving out some non-defaulters and see what happens.
| null | CC BY-SA 2.5 | null | 2010-11-23T20:53:52.237 | 2010-11-23T20:53:52.237 | null | null | null | null |
4846 | 2 | null | 4832 | 5 | null | Thomas D. Fletcher has a function called `ClassLog()` (for "classification analysis for a logistic regression model") in his [QuantPsyc](http://cran.r-project.org/web/packages/QuantPsyc/) package. However, I like @caracal's response because it is self-made and easily customizable.
| null | CC BY-SA 2.5 | null | 2010-11-23T21:33:23.423 | 2010-11-23T21:33:23.423 | null | null | 930 | null |
4847 | 2 | null | 4843 | 2 | null | This is just the outer product of the actual and predicted class frequencies, times the total number of objects. In R:
```
A <- c(100, 60, 40)
P <- c(120, 60, 20)
numCases <- sum(A)  # = sum(P) as well
(A/numCases) %o% (P/numCases) * numCases
A %o% P / numCases  # the same, simplified
```
| null | CC BY-SA 2.5 | null | 2010-11-23T21:34:43.747 | 2010-11-23T21:34:43.747 | null | null | null | null |
4848 | 2 | null | 4843 | 6 | null | It might be useful to consider Cohen's $\kappa$ in the context of inter-rater-agreement. Suppose you have two raters individually assigning the same set of objects to the same categories. You can then ask for overall agreement by dividing the sum of the diagonal of the confusion matrix by the total sum. But this does not take into account that the two raters will also, to some extent, agree by chance. $\kappa$ is supposed to be a chance-corrected measure conditional on the baseline frequencies with which the raters use the categories (marginal sums).
The expected frequency of each cell under the assumption of independence given the marginal sums is then calculated just like in the $\chi^2$ test - this is equivalent to Witten & Frank's description (see mbq's answer). For chance-agreement, we only need the diagonal cells. In R
```
# generate the given data
> lvls <- factor(1:3, labels=letters[1:3])
> rtr1 <- rep(lvls, c(100, 60, 40))
> rtr2 <- rep(rep(lvls, nlevels(lvls)), c(88,10,2, 14,40,6, 18,10,12))
> cTab <- table(rtr1, rtr2)
> addmargins(cTab)
rtr2
rtr1 a b c Sum
a 88 10 2 100
b 14 40 6 60
c 18 10 12 40
Sum 120 60 20 200
> library(irr) # for kappa2()
> kappa2(cbind(rtr1, rtr2))
Cohen's Kappa for 2 Raters (Weights: unweighted)
Subjects = 200
Raters = 2
Kappa = 0.492
z = 9.46
p-value = 0
# observed frequency of agreement (diagonal cells)
> fObs <- sum(diag(cTab)) / sum(cTab)
# frequency of agreement expected by chance (like chi^2)
> fExp <- sum(rowSums(cTab) * colSums(cTab)) / sum(cTab)^2
> (fObs-fExp) / (1-fExp) # Cohen's kappa
[1] 0.4915254
```
Note that $\kappa$ is not universally accepted as doing a good job; see, e.g., [here](http://www.john-uebersax.com/stat/kappa.htm), [here](http://www.agreestat.com/research_papers.html), or the literature cited in the Wikipedia article.
| null | CC BY-SA 2.5 | null | 2010-11-23T21:53:33.310 | 2010-11-23T22:09:40.567 | 2010-11-23T22:09:40.567 | 1909 | 1909 | null |
4849 | 2 | null | 4839 | 4 | null | Correspondence analysis might be a good fit. It creates a graph that shows the relationship between hard-core gaming (or not) and the different types of games being played. The result would be the ability to say, with cautious confidence, that hard-core gamers play a certain group of game types (or not).
With respect to factor analysis, I would recommend reviewing some of the answers to a question I asked previously, just so you're clear on what your goal is:
[What are the differences between Factor Analysis and Principal Component Analysis?](https://stats.stackexchange.com/questions/1576/what-are-the-differences-between-factor-analysis-and-principal-component-analysis)
Interpreting and creating correspondence analysis plots is discussed here (with examples of how to create the plots as well; pay close attention to the differences in drawing inferences from horizontal versus vertical axis distances):
[Interpreting 2D correspondence analysis plots](https://stats.stackexchange.com/questions/3270/interpreting-2d-correspondence-analysis-plots)
| null | CC BY-SA 2.5 | null | 2010-11-23T22:12:47.053 | 2010-11-23T22:12:47.053 | 2017-04-13T12:44:46.680 | -1 | 776 | null |
4850 | 2 | null | 1525 | 27 | null | I have hesitated to wade into this discussion, but because it seems to have gotten sidetracked over a trivial issue concerning how to express numbers, maybe it's worthwhile refocusing it. A point of departure for your consideration is this:
>
A probability is a hypothetical property. Proportions summarize observations.
A [frequentist](http://en.wikipedia.org/wiki/Probability_interpretations#Frequentism) might rely on laws of large numbers to justify statements like "the long-run proportion of an event [is] its probability." This supplies meaning to statements like "a probability is an expected proportion," which otherwise might appear merely tautological. Other interpretations of probability also lead to connections between probabilities and proportions but they are less direct than this one.
In our models we usually take probabilities to be definite but unknown. Due to the sharp contrasts among the meanings of "probable," "definite," and "unknown" I am reluctant to apply the term "uncertain" to describe that situation. However, before we conduct a sequence of observations, the [eventual] proportion, like any future event, is indeed "uncertain". After we make those observations, the proportion is both definite and known. (Perhaps this is what is meant by "guaranteed" in the OP.) Much of our knowledge about the [hypothetical] probability is mediated through these uncertain observations and informed by the idea that they might have turned out otherwise. In this sense--that uncertainty about the observations is transmitted back to uncertain knowledge of the underlying probability--it seems justifiable to refer to the probability as "uncertain."
In any event it is apparent that probabilities and proportions function differently in statistics, despite their similarities and intimate relationships. It would be a mistake to take them to be the same thing.
### Reference
Huber, WA [Ignorance is Not Probability](http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2010.01361.x/abstract). Risk Analysis Volume 30, Issue 3, pages 371–376, March 2010.
| null | CC BY-SA 3.0 | null | 2010-11-23T22:38:38.997 | 2013-12-10T15:01:22.863 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
4851 | 1 | 4853 | null | 4 | 397 | I'm trying to precompute the distributions of several random variables. In particular, these random variables are the results of functions evaluated at locations in a genome, so there will be on the order of 10^8 or 10^9 values for each. The functions are pretty smooth, so I don't think I'll lose much accuracy by only evaluating at every 2nd/10th/100th? base or so, but regardless there will be a large number of samples. My plan is to precompute quantile tables (maybe percentiles) for each function and reference these in the execution of my main program to avoid having to compute these distribution statistics in every run.
But I don't really see how I can easily do this: storing, sorting, and reducing an array of 10^9 floats isn't really feasible, but I can't think of another way that doesn't lose information about the distribution. Is there a way of measuring the quantiles of a sample distribution that doesn't require storing the whole thing in memory?
| Efficient Empirical CDF Computation / Storage | CC BY-SA 2.5 | null | 2010-11-23T23:02:12.753 | 2010-11-23T23:29:44.537 | null | null | 2111 | [
"distributions",
"python",
"bioinformatics",
"cumulative-distribution-function",
"biostatistics"
]
|
4852 | 2 | null | 4851 | 2 | null | Here's a possible computing solution to your problem.
If the pdf is sparse (by sparse I mean that long stretches are zero) as your graph indicates, then you could perhaps exploit that structure.
For example, you could have one vector which gives the pdf for non-zero regions and another vector indicating where the non-zero regions begin and end.
You could extend this idea if you have a large number of constant areas in your pdf.
| null | CC BY-SA 2.5 | null | 2010-11-23T23:28:58.350 | 2010-11-23T23:28:58.350 | null | null | 8 | null |
4853 | 2 | null | 4851 | 6 | null | Why not create bins a priori to cover the expected range of a variable, scan through the data once to record all the bin counts, and then estimate the percentiles via interpolation of the EDF? If the bins are evenly distributed, locating the bin for a value is a $O(1)$ operation; in the worst case it's a binary search costing $O(\log(k))$ operations for $k$ bins.
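A rough R sketch of this (purely illustrative: the values are assumed to lie in a known range `[lo, hi]`, and `chunks` is a hypothetical list of numeric blocks standing in for the streamed data):
```
lo <- 0; hi <- 1; k <- 1e4
breaks <- seq(lo, hi, length.out = k + 1)
counts <- numeric(k)
for (chunk in chunks) {
  # findInterval locates each value's bin by binary search: O(log k) per value
  idx <- findInterval(chunk, breaks, all.inside = TRUE)
  counts <- counts + tabulate(idx, nbins = k)
}
cdf <- cumsum(counts) / sum(counts)                 # EDF at the right edge of each bin
# percentile p ~ right edge of the first bin whose cumulative frequency reaches p
pctile <- function(p) breaks[which(cdf >= p)[1] + 1]
sapply(c(0.25, 0.50, 0.75, 0.99), pctile)
```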
| null | CC BY-SA 2.5 | null | 2010-11-23T23:29:44.537 | 2010-11-23T23:29:44.537 | null | null | 919 | null |
4854 | 1 | 4861 | null | 26 | 17819 | In Logistic Regression, is there a need to be as concerned about multicollinearity as you would be in straight up OLS regression?
For example, with a logistic regression, where multicollinearity exists, would you need to be cautious (as you would in OLS regression) with taking inference from the Beta coefficients?
For OLS regression, one "fix" for high multicollinearity is ridge regression; is there something like that for logistic regression? Other options would be dropping variables or combining variables.
What approaches are reasonable for reducing the effects of multicollinearity in a logistic regression? Are they essentially the same as OLS?
(Note: this is not for the purpose of a designed experiment)
| Logistic Regression - Multicollinearity Concerns/Pitfalls | CC BY-SA 2.5 | null | 2010-11-23T23:50:10.213 | 2013-05-31T08:26:39.323 | 2010-11-24T15:01:27.013 | 919 | 776 | [
"regression",
"logistic",
"multicollinearity"
]
|
4855 | 2 | null | 4839 | 6 | null | The first step is to define your research question.
A few possible research questions given your data include:
- How can genres of video games be grouped into a smaller set?
- How are genres of video games or groups of genres related to self-identifying as a hard-core gamer?
Then, you could present a table of frequencies and percentages of genre by hard-core gamer status.
You could also divide the analysis into two steps:
- grouping types of video games; this could be done conceptually (e.g., based on prior knowledge of ways of grouping gaming genres) or using a data-driven approach such as PCA or factor analysis (perhaps on tetrachoric correlations)
- examining differences in endorsement of video game types across the hard-core gamer categories.
As mentioned by Brandon correspondence analysis would also be another nice option.
| null | CC BY-SA 2.5 | null | 2010-11-24T01:22:36.983 | 2010-11-24T07:56:08.127 | 2010-11-24T07:56:08.127 | 930 | 183 | null |
4856 | 1 | 4857 | null | 3 | 2580 | How can I rewrite an AR(p) model in state-space form?
The maximum order is $p = 5$, and I want to use the Kalman predictor.
| Rewriting AR model in State-Space form | CC BY-SA 2.5 | null | 2010-11-24T01:36:32.617 | 2010-11-24T12:49:59.810 | null | null | 1637 | [
"time-series",
"kalman-filter"
]
|
4857 | 2 | null | 4856 | 4 | null | I suggest you buy the excellent book by G. Petris, S. Petrone and P. Campagnoli [Dynamic Linear Models with R](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-77237-0).
You will learn that any ARMA model
$Y_t = \sum_{j=1}^{r}\phi_jY_{t-j} + \epsilon_t + \sum_{j=1}^{r-1}\psi_{j}\epsilon_{t-j}$
can be expressed in the following form:
$
\begin{matrix}
Y_t & = & F\theta_t\\
\theta_{t+1} & = & G\theta_{t}+R\epsilon_t
\end{matrix} $
with
$
F=\begin{bmatrix}
1 & 0 & ... & 0
\end{bmatrix}
$
$
G=\begin{bmatrix}
\phi_1 & 1 & 0 & ... & 0\\
\phi_2 & 0 & 1 & ... & 0\\
... & ... & ... & ... & ...\\
\phi_{r-1} & 0 & ... & 0 & 1\\
\phi_r & 0 & ... & 0 & 0
\end{bmatrix}
$
$
R={\begin{bmatrix}
1 & \psi_1 & ... & \psi_{r-2} & \psi_{r-1}
\end{bmatrix}}'
$
In your specific case, just set $r=5$ and $\psi_{j}=0$ for $j=1,\dots,r-1$.
You can use the package [dlm](http://www.google.com.sg/url?sa=t&source=web&cd=2&sqi=2&ved=0CB0QFjAB&url=http%3A%2F%2Fcran.us.r-project.org%2Fweb%2Fpackages%2Fdlm%2Findex.html&ei=5XzsTIL_GcbirAeT9-TpAw&usg=AFQjCNFU5Fn1Pjk6h3yIrtuUC2ttt4RIVA) to use the Kalman filter on this model.
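For instance, a short sketch with the dlm package (the AR coefficients and the series `y` are placeholders):
```
library(dlm)
phi <- c(0.5, -0.2, 0.1, 0.05, -0.05)    # hypothetical AR(5) coefficients
mod <- dlmModARMA(ar = phi, ma = NULL, sigma2 = 1)
mod$FF                                    # corresponds to F above
mod$GG                                    # corresponds to G (companion form) above
filt <- dlmFilter(y, mod)                 # Kalman filter on a series y
head(filt$f)                              # one-step-ahead predictions
```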
fRed
| null | CC BY-SA 2.5 | null | 2010-11-24T02:11:51.600 | 2010-11-24T10:57:39.583 | 2010-11-24T10:57:39.583 | 1709 | 1709 | null |
4858 | 1 | null | null | 10 | 8633 | I have been reading a good book called [Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence](http://gseacademic.harvard.edu/alda/) by Judith Singer and John Willett. The book shows that by modeling at two levels, we can model individual change at level 1 and, at level 2, model systematic inter-individual differences in change.
The [R codes](http://www.ats.ucla.edu/stat/r/examples/alda/) for the examples only show how to use `lme()` to estimate the fixed and random effects. However, the text suggested that we should test the variance components to determine whether the random effects are significant or not.
For example, one of the codes does only the following:
```
library(nlme)
model.a <- lme(alcuse~ 1, alcohol1, random= ~1 |id)
summary(model.a)
Linear mixed-effects model fit by REML
Data: alcohol1
AIC BIC logLik
679.0049 689.5087 -336.5025
Random effects:
Formula: ~1 | id
(Intercept) Residual
StdDev: 0.7570578 0.7494974
Fixed effects: alcuse ~ 1
Value Std.Error DF t-value p-value
(Intercept) 0.9219549 0.09629638 164 9.574139 0
Standardized Within-Group Residuals:
Min Q1 Med Q3 Max
-1.8892070 -0.3079143 -0.3029178 0.6110925 2.8562135
Number of Observations: 246
Number of Groups: 82
```
But the text lists the following:
- fixed effect: 0.922*** (s = 0.096) -> available in the output
- within person variance: 0.562*** (s = 0.062) -> can be obtained from the output (random effect residual std. dev squared)
- between person variance: 0.564*** (s = 0.119)
My work involves a lot of analysis for longitudinal data so I really need to understand this idea. Your help is very much appreciated.
| How to test random effects in a multilevel model in R | CC BY-SA 3.0 | null | 2010-11-24T04:12:41.550 | 2013-07-01T02:02:33.493 | 2013-07-01T02:02:33.493 | 7290 | 1663 | [
"r",
"hypothesis-testing",
"mixed-model",
"multilevel-analysis",
"panel-data"
]
|
4859 | 2 | null | 4830 | 2 | null | You might just try including all of the interaction effects. You can then use L1/L2-regularized logistic regression to minimize over-fitting and take advantage of any helpful features. I really like Hastie/Tibshirani's glmnet package (http://cran.r-project.org/web/packages/glmnet/index.html).
| null | CC BY-SA 2.5 | null | 2010-11-24T04:31:36.440 | 2010-11-24T04:31:36.440 | null | null | 2077 | null |
4860 | 2 | null | 3061 | 4 | null | This statistic measures a [kind of correlation](http://www.jstor.org/pss/2532051) between two sets of data. Its calculation requires no assumptions about what their scatterplot looks like (if that's what you mean by "tendency").
| null | CC BY-SA 2.5 | null | 2010-11-24T05:05:17.127 | 2010-11-24T05:05:17.127 | null | null | 919 | null |
4861 | 2 | null | 4854 | 21 | null | All of the same principles concerning multicollinearity apply to logistic regression as they do to OLS. The same diagnostics assessing multicollinearity can be used (e.g. VIF, condition number, auxiliary regressions), and the same dimension reduction techniques can be used (such as combining variables via principal components analysis).
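For instance, the usual diagnostics carry over directly (a hedged sketch; the model and variable names are only illustrative):
```
library(car)                                   # for vif()
fit <- glm(y ~ x1 + x2 + x3, family = binomial, data = dat)
vif(fit)                                       # variance inflation factors
kappa(model.matrix(fit), exact = TRUE)         # condition number of the design matrix
```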
This [answer](https://stats.stackexchange.com/questions/4272/when-to-use-regularization-methods-for-regression/4274#4274) by chl will lead you to some resources and R packages for fitting penalized logistic models (as well as a good discussion on these types of penalized regression procedures). But some of your comments about "solutions" to multicollinearity are a bit disconcerting to me. If you only care about estimating relationships for variables that are not collinear, these "solutions" may be fine, but if you're interested in estimating coefficients of variables that are collinear, these techniques do not solve your problem. Although the problem of multicollinearity is technical in that your matrix of predictor variables cannot be inverted, it has a logical analog in that your predictors are not independent and their effects cannot be uniquely identified.
| null | CC BY-SA 2.5 | null | 2010-11-24T05:13:29.350 | 2010-11-24T13:14:11.800 | 2017-04-13T12:44:21.160 | -1 | 1036 | null |
4862 | 2 | null | 4858 | 3 | null | One of the best resources on multilevel analysis in R is John Fox's web appendix to the text "An R and S-PLUS Companion to Applied Regression". It provides a great overview of the method and means to calculate some of the more familiar measures from the R NLME output. The appendix is available on CRAN.
Here is the link:
[http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf)
| null | CC BY-SA 2.5 | null | 2010-11-24T05:32:41.863 | 2010-11-24T05:32:41.863 | null | null | 485 | null |
4863 | 2 | null | 4810 | 2 | null | I guess it depends on what statistics or findings you are going to research, study, or report. I'm assuming you will probably be using these graphs to represent findings for your university topic, right?
For example, if you want to present a finding about, say, 'how long users stay on a certain website', it may be good to show it as a CDF, as it shows the accumulated time spent on the website, across its pages etc.
On the other hand, if you want to simply show the probability of users clicking on an advert link (e.g. a Google AdWords link), then you may want to present it in PDF form, as it will probably be a normal bell curve and you can show the probability of that happening.
Hope this helps,
Jeff
| null | CC BY-SA 2.5 | null | 2010-11-24T05:35:57.333 | 2010-11-24T05:35:57.333 | null | null | null | null |
4864 | 2 | null | 4830 | 2 | null | I know your question is about logistic regression and, as it is a homework assignment, your approach may be constrained. However, if your interest is in interactions and accuracy of classification, it might be interesting to use something like CART to model this.
Here's some R code to produce the basic tree. I've set rpart loose on the entire data frame here. Perhaps not the best approach without some prior knowledge and a cross-validation method:
```
library(foreign)
df <- read.spss(file="http://www.bertelsen.ca/R/logistic-regression.sav", use.value.labels=T, to.data.frame=T)
library(rpart)
fit<-rpart(default~.,method="anova",data=df)
pfit<- prune(fit, cp= fit$cptable[which.min(fit$cptable[,"xerror"]),"CP"])
# plot the pruned tree
plot(pfit, uniform=TRUE,
main="Pruned Classification Tree for Loan Default")
text(pfit, use.n=TRUE, all=TRUE, cex=.8)
```
I'm not sure offhand how to produce the classification table. It shouldn't be too hard to build from the predicted values of the model object and the original values. Anyone have any tips here?
| null | CC BY-SA 2.5 | null | 2010-11-24T05:41:16.503 | 2010-11-24T06:51:58.127 | 2010-11-24T06:51:58.127 | 485 | 485 | null |
4866 | 2 | null | 4858 | 5 | null | The `intervals()` function should provide you with $100(1-\alpha)\%$ confidence intervals for the random effects in your model; see `help(intervals.lme)` for more information.
You can also test whether any of the variance components can be dropped from the model by using `anova()` (which amounts to doing an LRT between two nested models).
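For instance, with the model from the question (a sketch; note that the usual $\chi^2$ reference distribution is conservative here because the null value of the variance lies on the boundary of the parameter space, as discussed in another answer):
```
library(nlme)
m1 <- lme(alcuse ~ 1, data = alcohol1, random = ~ 1 | id, method = "ML")
m0 <- gls(alcuse ~ 1, data = alcohol1, method = "ML")  # same model without the random intercept
anova(m1, m0)      # LRT for the between-person variance component
intervals(m1)      # approximate intervals for fixed effects and variance components
```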
| null | CC BY-SA 2.5 | null | 2010-11-24T07:54:23.777 | 2010-11-24T07:54:23.777 | null | null | 930 | null |
4867 | 2 | null | 4839 | 3 | null | As an alternative to the CA suggested by @Brandon, you could also try [Multiple Correspondence Analysis](http://en.wikipedia.org/wiki/Multiple_correspondence_analysis), which has the advantage of considering all types of games at the same time (unlike CA); the game types are probably scored as binary variables, in which case the MCA solution will be close to the PCA one (factor scores and eigenvalues are linearly related). Basically this will give you an idea of how games group together, if any. At the same time, you can use your "hardcore gamer" status (yes/no) as an illustrative variable (i.e., this variable will not participate in the construction of the factorial axes), which will help you identify how it relates to these clusters of variables.
The [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html) R package offers everything needed for this kind of analysis; see `MCA()`.
As you didn't say anything about your sample size, it's hard to suggest confirmatory or model-based approaches, like logistic regression or latent class regression. But you can look at [Random Forests](http://en.wikipedia.org/wiki/Random_forest) and try to identify the "best" variables that allow you to predict your outcome with a minimal classification error rate (see the [randomForest](http://cran.r-project.org/web/packages/randomForest/index.html) package).
| null | CC BY-SA 2.5 | null | 2010-11-24T08:13:19.087 | 2010-11-24T08:13:19.087 | null | null | 930 | null |
4868 | 1 | 4871 | null | 6 | 2250 | Is it possible to use k-fold cross-validation to test all of the data, rather than using it to find the optimal hypothesis as is typically done?
Example:
Say I want to use an SVM on a dataset of size 1000. Could I use 900 events to train the SVM and test it on the other 100 events, then use a different 900 events to train the SVM for a separate 100 events, repeating this process 10 times until all of the data has been tested?
Would events tested using separately trained SVMs be comparable? That is, through this technique could I then make use of my entire dataset, instead of setting aside a certain fraction of it for training, or is this a statistically unwise thing to do?
Thanks for any help,
Colorado
PS) Any references on why this would (or wouldn't) work are greatly appreciated; I'm discussing this technique in an academic setting.
| Using k-fold cross-validation to test all data | CC BY-SA 2.5 | null | 2010-11-24T08:47:26.607 | 2012-08-19T15:38:39.877 | 2010-11-24T12:47:15.397 | null | 2113 | [
"machine-learning",
"cross-validation"
]
|
4869 | 2 | null | 4184 | 2 | null | The paper
[http://www.ricam.oeaw.ac.at/Groebner-Bases-Bibliography/gbbib_files/publication_582.pdf](http://www.ricam.oeaw.ac.at/Groebner-Bases-Bibliography/gbbib_files/publication_582.pdf)
by Pistone et al seems to be the source paper for this.
| null | CC BY-SA 2.5 | null | 2010-11-24T08:52:57.620 | 2010-11-24T08:52:57.620 | null | null | null | null |
4870 | 2 | null | 4858 | 19 | null | First point:
You need to be careful if you want to test whether the variance of a random effect is 0:
The standard $\chi^2$-asymptotics for the LR that are used in `anova()` do not apply for LRTs or restricted likelihood ratio tests on variance components since the null hypothesis is on the edge of the parameter space. Using this wrong reference distribution will yield ridiculously conservative tests.
Sources: [Self & Liang (1987)](http://www.jstor.org/stable/2289471), [Crainiceanu & Ruppert (2003)](http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2004.00438.x/full), [Greven, Crainiceanu, Küchenhoff (2008)](http://www.stat.uni-muenchen.de/~greven/files/greven_iwsm07.pdf).
For exact LR tests on variances in linear mixed models with uncorrelated random effects you can use my package [RLRsim](http://cran.r-project.org/web/packages/RLRsim/index.html).
For generalized linear mixed models or models with correlated random effects, I would strongly recommend a parametric bootstrap to approximate the correct p-value if the LRT or RLRT (the latter has more power, see sources above) are somewhere in the vicinity of the critical value for the (wrong) standard reference distribution. Code for `lme4` at the end of the post.
Second point:
It may be dangerous to use standard confidence intervals for variance parameter estimates, since their distribution is usually very skewed, so using a symmetric interval is a little simplistic. Douglas Bates has some material on this [here](http://lme4.r-forge.r-project.org/book/).
---
Parametric bootstrap code for `lme4`-models:
```
#m0 is the lmer model under the null hypothesis (i.e. the smaller model)
#mA is the lmer model under the alternative
bootstrapAnova <- function(mA, m0, B=1000){
oneBootstrap <- function(m0, mA){
d <- drop(simulate(m0))
m2 <-refit(mA, newresp=d)
m1 <-refit(m0, newresp=d)
return(anova(m2,m1)$Chisq[2])
}
nulldist <- if(!require(multicore)){
replicate(B, oneBootstrap(m0, mA))
} else {
unlist(mclapply(1:B, function(x) oneBootstrap(m0, mA)))
}
ret <- anova(mA, m0)
ret$"Pr(>Chisq)"[2] <- mean(ret$Chisq[2] < nulldist)
names(ret)[7] <- "Pr_boot(>Chisq)"
attr(ret, "heading") <- c(attr(ret, "heading")[1],
paste("Parametric bootstrap with", B,"samples."),
attr(ret, "heading")[-1])
attr(ret, "nulldist") <- nulldist
return(ret)
}
#use like this (and increase B if you want reviewers to believe you):
(bRLRT <- bootstrapAnova(mA=<BIG MODEL>, m0=<SMALLER MODEL>))
```
| null | CC BY-SA 2.5 | null | 2010-11-24T09:28:58.507 | 2010-11-24T09:28:58.507 | null | null | 1979 | null |
4871 | 2 | null | 4868 | 6 | null | As far as I understand your question, it can be formulated this way:
Instead of calculating a quality measure for each of the k validation folds and then taking the average, may I aggregate all folds and then calculate my quality measure, hence getting only one value instead of k?
This question requires two perspectives:
From the perspective of cross-validation itself it is OK, because training and test samples still have an empty intersection, etc. Since you just aggregate multiple samples drawn without replacement, the test distribution is not spoiled.
From the perspective of the model, it depends on whether the model produces comparable scores. An SVM will work, in my opinion, but imagine a model which min-max normalizes the scores across the test set (yikes); then the calculation of a representative decision threshold across all test samples will be quite hard.
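A sketch of this aggregated-fold evaluation for the SVM case from the question (assuming the e1071 package and a hypothetical data frame `dat` with a factor response `y`):
```
library(e1071)
set.seed(1)
n <- nrow(dat); k <- 10
folds <- sample(rep(1:k, length.out = n))      # random assignment to 10 folds
pred <- character(n)
for (i in 1:k) {
  fit <- svm(y ~ ., data = dat[folds != i, ])  # train on the other 9 folds
  pred[folds == i] <- as.character(predict(fit, newdata = dat[folds == i, ]))
}
mean(pred == dat$y)                            # one accuracy computed over all n cases
```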
In general, a lot of techniques which require the estimation of a parameter that itself depends on the quality of a model utilize this approach. A concrete example is the calculation of an operator to calibrate the scores of a classification model, e.g. Platt scaling.
Furthermore (though this is not a completely satisfying argument), the open-source software RapidMiner has an operator for this approach.
PS:
I want to point out that although this approach is useful for getting reliable quality measures for small datasets, it may be hard to perform statistical tests to compare two models, since CV cannot be repeated endlessly (for example: how do you check whether the assumptions of the so-often-misused t-test are satisfied if you only have 6 data points?).
PPS: Although interested, I was not able to find a paper focusing on the examination of this approach. The papers I have seen so far that use this technique did not bother to reference it.
| null | CC BY-SA 2.5 | null | 2010-11-24T10:43:38.363 | 2010-11-24T12:48:29.310 | 2010-11-24T12:48:29.310 | null | 264 | null |
4872 | 1 | null | null | 2 | 331 | what is the minimum entry degree to be employed as a statistician
| degree to become a statistician | CC BY-SA 2.5 | null | 2010-11-24T10:56:16.390 | 2010-11-24T10:56:16.390 | null | null | null | [
"careers"
]
|
4873 | 2 | null | 4830 | 8 | null | In unbalanced datasets such as this, you can usually improve classification performance by moving away from using a fitted probability of .5 as your cutpoint for classifying cases into defaulters and non-defaulters. For example, I get correct classification rates of .88 and .58 with a cutpoint of .4 for a glm with all 2nd-order interactions. (Which probably leads to overfitting and seems to have some rank issues, but that's another story.)
Code:
```
m <- glm(default ~ (age + employ + address + income + debtinc +
creddebt + othdebt + ed)^2,
family=binomial(), data=df)
p <- predict(m, newdata=df, type="response")
getMisclass <- function(cutoff, p, labels){
pred <- factor(1*(p > cutoff), labels=c("No Default", "Default"))
t <- table(pred, labels)
cat("cutoff ", cutoff, ":\n")
print(t)
cat("correct :", round(sum(t[c(1,4)])/sum(t), 2),"\n")
cat("correct No :", round(t[1]/sum(t[,1]), 2),"\n")
cat("correct Yes:", round(t[4]/sum(t[,2]), 2),"\n\n")
invisible(t)
}
cutoffs <- seq(.1,.9,by=.1)
sapply(cutoffs, getMisclass, p=p, labels=df$default)
```
partial output:
```
cutoff 0.3 :
labels
pred No Yes
No Default 3004 352
Default 740 903
correct : 0.78
correct No : 0.8
correct Yes: 0.72
cutoff 0.4 :
labels
pred No Yes
No Default 3278 532
Default 466 723
correct : 0.8
correct No : 0.88
correct Yes: 0.58
cutoff 0.5 :
labels
pred No Yes
No Default 3493 685
Default 251 570
correct : 0.81
correct No : 0.93
correct Yes: 0.45
cutoff 0.6 :
labels
pred No Yes
No Default 3606 824
Default 138 431
correct : 0.81
correct No : 0.96
correct Yes: 0.34
```
| null | CC BY-SA 2.5 | null | 2010-11-24T11:43:04.703 | 2010-11-24T11:43:04.703 | null | null | 1979 | null |
4874 | 2 | null | 4868 | 1 | null | Yes, it is; this is a very reliable way of reporting error, and I would say it is even encouraged.
| null | CC BY-SA 2.5 | null | 2010-11-24T12:43:13.730 | 2010-11-24T12:43:13.730 | null | null | null | null |
4875 | 2 | null | 4856 | 4 | null | Another good book that covers this is [Time Series Analysis by State Space Methods](http://rads.stackoverflow.com/amzn/click/0198523548) by Durbin and Koopman (pp 46-49.)
| null | CC BY-SA 2.5 | null | 2010-11-24T12:49:59.810 | 2010-11-24T12:49:59.810 | null | null | 439 | null |
4876 | 1 | null | null | 0 | 75 | I am trying to track the performance of a set of homogeneous subsystems versus the system as a whole. The performance metric is a mean value based on a set of tasks performed by the subsystem, for example the mean wait time before scheduling a task.
Each subsystem may be responsible for varying numbers of tasks. I can calculate the mean wait time for each subsystem and for the system as a whole.
My question is how to measure and visualize how each subsystem is performing compared to the whole system average (or maybe there's a better aggregate measure for the whole system?)
Obviously, being below the mean shows the subsystem is performing relatively better (and above the mean, worse), and the distance from the whole-system mean reflects that, but I struggle to extract real meaning or produce a good visualization from it.
Is there a better way? Or if I'm on the right track, how do I assign meaning / visualize the metric?
| Comparing and visualizing rates of subsystems | CC BY-SA 2.5 | null | 2010-11-24T13:20:08.147 | 2010-11-24T16:15:31.593 | null | null | 2122 | [
"data-visualization",
"mean"
]
|
4877 | 1 | 4879 | null | 3 | 13448 | I have imported a data file into R. It has several columns. One column contains the name of the operating system for that row. I would like to get the percentage share of each unique OS (Windows, Linux, iOS, etc.) from this column and plot it. I am very new to R and would like to know whether there is a built-in way to do this.
| Creating plots for String type columns in R | CC BY-SA 2.5 | null | 2010-11-24T14:15:28.017 | 2010-11-24T20:49:04.323 | 2010-11-24T14:56:18.557 | 2101 | 2101 | [
"r",
"data-visualization"
]
|
4878 | 1 | null | null | 2 | 4190 | We're using Excel Solver to model a DEA problem. We've worked with a few resources and feel confident that the model is returning correct results but I need some help interpreting them.
When solver completes its analysis, it provides an answer report that gives the slack found in some of the constraints -- which are binding and which are not.
Here's a sample result for a DMU that was found to be inefficient when Solving to Minimize the Inputs:
```
WeightedSum Outputs = DMU
of Reference Set Inputs = DMU * E Slack
Output1 2,892,230 >= 2,892,230 binding
Output2 66,229,077 >= 66,229,077 binding
Output3 9,724,273 >= 5,372,010 4,352,263
Output4 3,123,300 >= 2,511,864 611,436
Output5 21,609 >= 16,039 5,570
Output6 120,527 >= 33,234 87,292
Output7 8,548 >= 2,769 5,779
Output8 17,314 >= 12,605 4,709
Output9 748 >= 616 132
Input1 19,775 <= 23,314 3,539
Input2 9,132 <= 9,132 binding
Input3 20,028 <= 143,604 123,576
Input4 74,293 <= 74,293 binding
Input5 15,558 <= 15,558 binding
```
Is it correct to assert that those inputs found to be binding could not be further optimized but those that contained slack are where the inefficiencies are found?
Can the same be said for Outputs? or does that work the opposite?
Or is slack just a by-product of the analysis and not relevant?
Thanks for your help!
| Slack values in Data Envelopment Analysis | CC BY-SA 2.5 | null | 2010-11-24T14:21:34.157 | 2010-11-24T15:44:21.667 | 2010-11-24T15:16:06.487 | null | 1256 | [
"econometrics",
"excel"
]
|
4879 | 2 | null | 4877 | 2 | null | It seems the `barplot()` will be your friend in that case, e.g.
```
x <- sample(c("Win","Linux","Mac"), 100, replace=TRUE)
barplot(table(x))
```
This will work for variables of type `character` or `factor`. Another option is to use Cleveland's [dotplot](http://www.b-eye-network.com/view/2468), see `dotchart()` (or `dotplot()`in the [lattice](http://cran.r-project.org/web/packages/lattice/index.html) package).
Update
You could replace `table(x)` by `table(x)/sum(table(x))*100` to express the data as percentages rather than counts. I know there are more elegant solutions in additional packages, but I can't remember their names right now. The `table()` function will also work for two-way classification, and marginal totals can easily be computed in a similar way; e.g. `apply(table(x, y), 1, sum)` gives the row marginal frequencies.
| null | CC BY-SA 2.5 | null | 2010-11-24T14:29:50.333 | 2010-11-24T15:04:50.940 | 2010-11-24T15:04:50.940 | 930 | 930 | null |
4880 | 2 | null | 4877 | 4 | null | Simulating some data for a good start
```
x<-sample(c("Win","Linux","Mac"),721,replace=TRUE)
```
The good practice is to hold such categorical variables as factors; one can convert character vector to factor with `factor` function:
```
x<-factor(x)
```
Then, `plot` called on it will make a barplot with counts; to convert it into per-cents, you can use `table` that makes counts of each level:
```
t<-table(x); barplot(t/sum(t)*100,ylab="Per cent")
```
| null | CC BY-SA 2.5 | null | 2010-11-24T15:06:04.903 | 2010-11-24T20:49:04.323 | 2010-11-24T20:49:04.323 | 930 | null | null |
4881 | 2 | null | 4878 | 3 | null | As I understand data envelopment analysis, you optimize efficiency of the decision making unit (DMU) subject to a set of constraints on input availability and output requirements.
Since you have identified the maximum efficiency the non-binding input constraints indicate that the amount of slack in inputs is unnecessary expenditure and can be avoided without sacrificing efficiency.
However, the slack with respect to output constraints needs to be dealt with more carefully. If you do not care about producing more than the minimum required then the slack on the output constraints is irrelevant. However, if you do care about producing more than the minimum required then perhaps you should either penalize excess output suitably or transform the greater than equal constraints into an equality constraint.
Do note that imposing penalties or converting your output constraints into equality constraints may decrease efficiency but that is a trade-off you need to take keeping the context in mind.
| null | CC BY-SA 2.5 | null | 2010-11-24T15:30:23.087 | 2010-11-24T15:30:23.087 | null | null | null | null |
4882 | 2 | null | 4878 | 3 | null | >
Is it correct to assert that those inputs found to be binding could not be further optimized but those that contained slack are where the inefficiencies are found?
First of all, this is a multivariate problem, so it is really hard to conclude anything about individual inputs/outputs. However I would rather say the opposite - the inputs where the constraints are binding could be further optimized except you are hitting a boundary condition that stops you from doing so, while inputs where there is some slack left found an optimum.
As a simple example, suppose you want to minimize $(x-3)^2+y$ with $x \geq 0$, $y\geq 0$ constraints. You will find the optimum at $x=3$, $y=0$ with $x$ having a slack of 3, and $y$ being a binding constraint. That's because you could get smaller values of your output if $y$ were allowed to be negative, but you prohibited that. On the other hand, $x$ would not benefit from allowing negative values.
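In R, one can verify this toy example directly (just a sketch of the illustration above, not of the DEA model):
```
f <- function(p) (p[1] - 3)^2 + p[2]          # toy objective from the example
fit <- optim(c(1, 1), f, method = "L-BFGS-B", lower = c(0, 0))
fit$par  # roughly x = 3 (slack of 3 on x >= 0) and y = 0 (binding constraint)
```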
Of course, one cannot make far-reaching conclusions, because minimizing $(x-3)^2+y^2$ would give the same results even though having negative $y$'s would not help. Similarly, it is possible to create a function where the global minimum would occur for a negative $x$ even though there is a local minimum at $x=3$. Additionally, with constraints on the output, even inputs with slack might not be local optima because one of the output constraints could have kicked in. You can check the latter by looking at derivatives (I think Solver might show them) - a zero derivative implies a local optimum (or inflection point, but that's less likely).
Sorry for the rambling answer, but the main point is that if your problem is sufficiently complex, then any simple interpretation is likely be wrong in some cases.
| null | CC BY-SA 2.5 | null | 2010-11-24T15:44:21.667 | 2010-11-24T15:44:21.667 | null | null | 279 | null |
4883 | 2 | null | 4876 | 2 | null | It sounds like you should lookup the topic of [statistical process control](http://en.wikipedia.org/wiki/Statistical_process_control). Its origins lie in manufacturing where deviations from expected process output are used to raise flags to ensure that quality of manufactured items is as per pre-established norms. In particular, see the wikipedia description for [control charts](http://en.wikipedia.org/wiki/Control_chart) which should get you started on the visualization aspect.
| null | CC BY-SA 2.5 | null | 2010-11-24T16:15:31.593 | 2010-11-24T16:15:31.593 | null | null | null | null |
4884 | 1 | 9742 | null | 5 | 2212 | Hello fellow number crunchers
I hope this is a valid question for this forum. I am a lonesome quarter-statistician and have trouble finding someone to ask.
Introduction:
- Wikipedia Explanation of AB-Tests
- A ton of examples for typical use of AB-Tests
The AB-Test has become really popular since it is so easy to implement and execute. Additionally the web is floated with blogs explaining how to determine the significance of the results. All in all, it seems that there is less discussion about the control or exclusion of possible "influential" variables (on the other hand, controlling such variables is quite hard on the web).
Most AB-tests compare the outcomes of both groups by simply counting how many clicks or conversions each group has generated. Then a binomial distribution is assumed for each group and statistical tests are performed to see which group got the greater p.
So the question is:
Is it "better" to compare the outcome of both groups without any aggregation or it is "better" to aggregate e.g. on daily basis ?
Example:
The AB-Test is to check whether a landing page creates more newsletter subscriptions (<- conversions in this case). The AB-Test is deployed/online the whole test-time. In group A the landing page's main color is blue, in group B the main color is red. Assume 2000 visitors per day, i.e. each group gets roughly 1000 visitors. In this case "without aggregation" means, that I get 1000 datapoints per group per day meanwhile "aggregation on daily basis" means, that I get one (!) datapoint per group per day.
Discussion
The latter (aggregation on a daily basis) would allow the pairing of values, which in turn can capture daily effects like peaks in user behavior and preferences, but it extends the duration of AB-tests, because it takes longer to collect enough datapoints. On the other hand, the former (no aggregation) seems to be the strategy of the majority, because ... I don't know, maybe because a) with enough traffic you can make nearly anything significant within one day, b) it is the easiest thing to do.
One (possibly influencing) example to stimulate your thoughts:
Assume that on one day during the test the "National Blue Day" is celebrated, so the color blue together with positive emotions is visible in all the media. This day group A has created a ton of conversions more than group B.
This difference clearly affects the test if aggregated on daily basis (increase of variance) or (if not aggregated at all) it either vanishes in the sea of data (in the case that multiple days are collected without aggregation) or it leads to the wrong results (if the day is the first and only day of the test).
Another example: Assume that the landing page belongs to a vegetable company. One day the "National Vegetable Day" is celebrated and now everyone wants to subscribe to the newsletter, no matter what the color is. This short-term effect is captured by aggregation on a daily basis and a paired test, but it increases the variance in the case of no aggregation (because no paired test can be performed here (is this even correct?)).
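To make the two options concrete, here is a rough R sketch with made-up numbers (not my real data):
```
# Option 1: no aggregation -- pool all visitors and compare conversion proportions
convA <- 180; nA <- 7000   # hypothetical totals for group A
convB <- 150; nB <- 7000   # hypothetical totals for group B
prop.test(c(convA, convB), c(nA, nB))

# Option 2: aggregate per day and run a paired test on the daily conversion rates
rateA <- c(0.025, 0.031, 0.024, 0.028, 0.030, 0.022, 0.027)
rateB <- c(0.021, 0.026, 0.020, 0.025, 0.027, 0.019, 0.023)
t.test(rateA, rateB, paired = TRUE)
```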
All in all: Am I on the right track or do I miss something completely ?
| Aggregation-Level in AB-Tests | CC BY-SA 2.5 | null | 2010-11-24T16:23:43.947 | 2022-02-04T12:00:11.743 | 2011-05-09T17:54:04.670 | 1036 | 264 | [
"hypothesis-testing",
"aggregation",
"ab-test"
]
|
4885 | 2 | null | 4810 | 19 | null | It's partly a matter of taste and convention, but theory, attention to your objectives, and a smidgen of cognitive neuroscience [see the references] can provide some guidance.
Because a pdf and a cdf convey the same information, the distinction between them arises from how they do it: a pdf represents probability with areas while a cdf represents probability with (vertical) distances. Studies show that people compare distances faster and more accurately than they compare areas and that they systematically mis-estimate areas. Thus, if your purpose is to provide a graphical tool for reading off probabilities, your should favor using a cdf.
Pdfs and cdfs also represent probability density: the former does so by means of height while the latter represents density by slope. Now the tables are turned, because people are poor estimators of slope (which is the tangent of an angle; we tend to see the angle itself). Densities are good at conveying information about modes, heaviness of tails, and gaps. Favor using pdfs in such situations and anywhere else where local details of the probability distribution need to be emphasized.
Sometimes a pdf or cdf provides useful theoretical information. Its value (or rather the inverse thereof) is involved in formulas for standard errors for quantiles, extremes, and rank statistics. Display a pdf rather than a cdf in such situations. When studying multivariate correlations in a nonparametric setting, such as with [copulas](http://en.wikipedia.org/wiki/Copula_%28statistics%29), the cdf turns out to be more useful (perhaps because it is the function that transforms a continuous probability law into a uniform one).
A pdf or cdf can be intimately associated with a particular statistical test. The [Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) (and the KS statistic) has a simple graphical representation in terms of a vertical buffer around the cdf; it has no simple graphical representation in terms of the pdf (that I know of).
The ccdf (complementary cdf) is used in special applications that focus on survivorship and rare events. Its use tends to be established by convention.
## References
W.S. Cleveland (1994). The Elements of Graphing Data. Summit, NJ, USA: Hobart Press. ISBN 0-9634884-1-4
B.D. Dent (1999). Cartography: Thematic Map Design 5th Ed. Boston, MA, USA: WCB McGraw-Hill.
A.M. MacEachren (2004). How Maps Work. New York, NY, USA: The Guilford Press. ISBN 1-57230-040-X
| null | CC BY-SA 3.0 | null | 2010-11-24T16:29:10.620 | 2012-05-01T19:32:12.390 | 2012-05-01T19:32:12.390 | 919 | 919 | null |
4886 | 1 | 4888 | null | 3 | 292 | I have a data set that I need to analyze in R. A simplified version of it would be like this
```
SessionNo. Objects OtherColumns
A 2 .
A 3
B 4
C 1
D 2
D 1
D 2
D 3
E 5
```
Here each SessionNo. represents one session of a browser, but due to the relation with other columns in the data it is aggregated as shown, so session A is now split over two rows, etc. What I need to find is the average number of objects downloaded per session (or any other statistic over the rows of each session). So, how do I count the number of objects for each session in R? Here, 5 objects in session A, 4 in session B, 8 in session D, etc.
I guess one way would be to sum the whole Objects column and count the number of unique session numbers in SessionNo. But it would be a more general solution if I could group each unique session number with the total number of objects aggregated in it. Any suggestions on how to accomplish that in R?
| Getting aggregated share in R | CC BY-SA 2.5 | null | 2010-11-24T17:33:01.623 | 2010-11-25T13:46:52.060 | null | null | 2101 | [
"r",
"aggregation"
]
|
4887 | 2 | null | 4886 | 2 | null | If I am reading your question correctly, something like the following should do it:
`aggregate(x$Objects,by=list(x$SessionNo.),sum)`
where `x` is the data frame containing your data. This will give you, for each unique session number, the sum of the object counts.
You can of course substitute other functions (including your own) on place of the `sum`.
| null | CC BY-SA 2.5 | null | 2010-11-24T17:45:13.697 | 2010-11-25T07:42:10.487 | 2010-11-25T07:42:10.487 | 439 | 439 | null |
4888 | 2 | null | 4886 | 3 | null | Perhaps this might help:
```
tapply(df$Objects, df$SessionNo., sum)
```
| null | CC BY-SA 2.5 | null | 2010-11-24T17:48:57.147 | 2010-11-25T10:31:03.683 | 2010-11-25T10:31:03.683 | 1050 | 1050 | null |
4889 | 2 | null | 4886 | 1 | null | I personally like using the plyr and or reshape packages for tasks like this. If you're just starting with R, I would highly recommend getting to know them well. They've solved nearly all of my data manipulation tasks.
```
ddply(df, .(sessionNo.) function(x) data.frame(
obj.count=sum(Objects)
))
```
OR, cast
```
colnames(df[1:2]) <- c("variable","value")
cast(df[1:2], variable ~ value, sum)
```
| null | CC BY-SA 2.5 | null | 2010-11-24T17:53:47.613 | 2010-11-25T02:08:57.187 | 2010-11-25T02:08:57.187 | 776 | 776 | null |
4890 | 2 | null | 4884 | 1 | null | The right level of aggregation depends on the time period over which you wish to generalize.
For example, you want to deploy A during nights across several sites but are unsure about its effectiveness relative to the existing option B. Thus, you may deploy A over a small number of sites and see its effects relative to the alternative B. In such a scenario, you need to aggregate the effects of A across all the nights that it was deployed to assess the relative impact of A vs B.
To use your example from the last para: If the interest lies in evaluating the impact of A across all days (possibly because A will be deployed on all days) then the 'right' thing to do is to aggregate across all days so that the test of A's effectiveness is not biased.
| null | CC BY-SA 2.5 | null | 2010-11-24T18:09:45.540 | 2010-11-24T18:09:45.540 | null | null | null | null |
4891 | 2 | null | 4810 | 8 | null | I agree with whuber's answer, but have one additional minor point:
The CDF has a simple non-parametric estimator that needs no choices to be made: the [empirical distribution function](http://en.wikipedia.org/wiki/Empirical_distribution_function). It's not quite so simple to estimate a PDF. If you use a histogram you need to choose the bin width and the starting point for the first bin. If you use [kernel density estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation) you need to choose the kernel shape and bandwidth. A suspicious or cynical reader may wonder if you really chose these entirely a priori or if you tried a few different values and chose the ones that gave the result you most liked.
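A quick R illustration of the difference (any data would do; the bandwidths here are arbitrary choices on my part):
```
set.seed(42)
x <- rgamma(100, shape = 2)
plot(ecdf(x))                      # no tuning choices needed
plot(density(x, bw = 0.1))         # density estimate depends on the chosen bandwidth
lines(density(x, bw = 1), col = "red")   # same data, different bandwidth
```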
This is only a minor point though. The ones whuber made are more important, so i'd probably only use this to choose when I was still undecided after considering those.
| null | CC BY-SA 2.5 | null | 2010-11-24T18:41:20.210 | 2010-11-24T18:41:20.210 | null | null | 449 | null |
4892 | 1 | 5052 | null | 7 | 6672 | What is the [VC dimension](http://en.wikipedia.org/wiki/VC_dimension) of [SVM](http://en.wikipedia.org/wiki/Support_vector_machine) with the polynomial kernel $k(x,x')=(1+<x,x'>_{\mathbb{R^{2}}})^{2}$ for binary classification in $\mathbb{R^{2}}$?
It would be at least v iff there exists a set of v points such that, for any labeling (-1 or +1) of the points, there exists a correct separating border.
It would be strictly less than v iff for all sets of v points, there exists a labeling of the points such that there is no correct separating border.
In this case, the separating border is a conic section or a line, so any idea based on these curves rather than SVM is welcome.
For instance, let $x=(x_{1},x_{2}) \in \mathbb{R^{2}}$. It is mapped to $\mathbb{R^{6}}$ using a certain $\phi$ and an hyperplane in this new space is a conic section in the former space. So my guess would be the VC dimension of a linear classifier in $\mathbb{R^{6}}$, which is 7.
---
Actually, the answer is 6: the equation of an hyperplane in $\mathbb{R^{d}}$ is $$<w,x>_{\mathbb{R^{d}}}+b=0$$ where $w \in \mathbb{R^{d}}$ and $b \in \mathbb{R}$. Here, we have $b=1$ and $w \in \mathbb{R^{5}}$, not $\mathbb{R^{6}}$.
| VC dimension of SVM with polynomial kernel in $\mathbb{R^{2}}$ | CC BY-SA 2.5 | null | 2010-11-24T18:47:26.677 | 2017-05-23T17:29:05.427 | 2015-09-30T03:42:36.400 | 12359 | 1351 | [
"self-study",
"classification",
"svm",
"kernel-trick",
"vc-dimension"
]
|
4893 | 2 | null | 4892 | 4 | null | If I'm not mistaken:
$[1 + \langle x,y\rangle]^2$ = $\langle[1, \sqrt{2}y_1, y_1^2, \sqrt{2}y_2, y_2^2, \sqrt{2}y_1y_2], [1, \sqrt{2}x_1, x_1^2, \sqrt{2}x_2, x_2^2, \sqrt{2}x_1x_2]^T\rangle$
So the separating rule is linear in $R^6$. Consequently, VC-dim is 7.
| null | CC BY-SA 3.0 | null | 2010-11-24T20:21:22.247 | 2017-05-23T17:29:05.427 | 2017-05-23T17:29:05.427 | 86522 | 1725 | null |
4894 | 2 | null | 4858 | 11 | null | (I would post this as a comment to the previous answer by 'chi', but don't see that that is possible here.)
Be very careful with `intervals()` and `anova()` from the `nlme` package, these have exactly the flaws that @fabians points out above -- they rely on the standard chi-squared distribution applied to a quadratic approximation of the shape of the likelihood profile for the variances. The newer `lme4a` package (on r-forge) allows the creation of likelihood profiles for the variance parameters, which takes care of the quadratic approximation part (although not the distributional part).
Also, it is fairly hotly debated whether dropping non-significant variance components is a good idea or not (don't have an immediate reference, but this has been discussed on r-sig-mixed-models).
| null | CC BY-SA 2.5 | null | 2010-11-24T20:48:13.937 | 2010-11-24T20:48:13.937 | null | null | 2126 | null |
4895 | 2 | null | 118 | 30 | null | The answer that best satisfied me is that it falls out naturally from the generalization of a sample to n-dimensional euclidean space. It's certainly debatable whether that's something that should be done, but in any case:
Assume your $n$ measurements $X_i$ are each an axis in $\mathbb R^n$. Then your data $x_i$ define a point $\bf x$ in that space. Now you might notice that the data are all very similar to each other, so you can represent them with a single location parameter $\mu$ that is constrained to lie on the line defined by $X_i=\mu$. Projecting your datapoint onto this line gets you $\hat\mu=\bar x$, and the distance from the projected point $\hat\mu\bf 1$ to the actual datapoint is $\sqrt{\frac{n-1} n}\hat\sigma=\|\bf x-\hat\mu\bf 1\|$.
This approach also gets you a geometric interpretation for correlation, $\hat\rho=\cos \angle(\vec{\bf\tilde x},\vec{\bf\tilde y})$.
| null | CC BY-SA 2.5 | null | 2010-11-24T20:49:39.210 | 2010-11-24T20:49:39.210 | null | null | 2456 | null |
4896 | 2 | null | 4886 | 2 | null | The function that is specifically designed for this task is ave(). By default it returned the mean within a group and returns a vector of the same length as the two input arguments. It is designed to fill in columns of either means or deviations from means. If this is in a dataframe with name "tst":
```
> tst$tmn <- with(tst, ave(Objects, SessionNo.))
> tst$devmn <- tst$Objects- with(tst, ave(Objects, SessionNo.))
> tst
SessionNo. Objects tmn devmn
1 A 2 2.5 -0.5
2 A 3 2.5 0.5
3 B 4 4.0 0.0
4 C 1 1.0 0.0
5 D 2 2.0 0.0
6 D 1 2.0 -1.0
7 D 2 2.0 0.0
8 D 3 2.0 1.0
9 E 5 5.0 0.0
```
| null | CC BY-SA 2.5 | null | 2010-11-25T00:00:21.353 | 2010-11-25T00:00:21.353 | null | null | 2129 | null |
4897 | 2 | null | 4663 | 10 | null | LASSO is not an algorithm per se, but an operator.
There are many different ways to derive efficient algorithms for $\ell_1$ regularized problems. For instance, one can use quadratic programming to tackle them directly. I guess this is what you refer to as LASSO.
Another one is LARS, very popular because of its simplicity, connection with forward procedures (yet not too greedy), very constructive proof and easy generalization.
Even compared with state of the art quadratic programming solvers, LARS can be much more efficient.
| null | CC BY-SA 2.5 | null | 2010-11-25T00:29:06.963 | 2010-11-25T00:29:06.963 | null | null | 358 | null |
4898 | 1 | 4917 | null | 12 | 4150 | If you choose to analyse a pre-post treatment-control design with a continuous dependent variable using a mixed ANOVA, there are various ways of quantifying the effect of being in the treatment group.
The interaction effect is one main option.
In general, I particularly like Cohen's d type measures (i.e., ${\frac{\mu_1 - \mu_2}{\sigma}}$). I don't like variance explained measures because results vary based on irrelevant factors such as relative sample sizes of groups.
Thus, I was thinking I could quantify the effect as follows
- $\Delta\mu_c = \mu_{c2} - \mu_{c1}$
- $\Delta\mu_t = \mu_{t2} - \mu_{t1}$
- Thus, the effect size could be defined as $\frac{\Delta\mu_t - \Delta\mu_c}{\sigma}$
where $c$ refers to control, $t$ to treatment, and 1 and 2 to pre and post respectively.
$\sigma$ could be the pooled standard deviation at time 1.
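For concreteness, a small R sketch of this computation on simulated data (the numbers are invented; $\sigma$ is taken as the pooled pre-test SD):
```
set.seed(1)
n  <- 50
c1 <- rnorm(n, 10, 2); c2 <- rnorm(n, 10.5, 2)   # control: pre, post
t1 <- rnorm(n, 10, 2); t2 <- rnorm(n, 12, 2)     # treatment: pre, post
sd_pool_pre <- sqrt(((n - 1) * var(c1) + (n - 1) * var(t1)) / (2 * n - 2))
d <- ((mean(t2) - mean(t1)) - (mean(c2) - mean(c1))) / sd_pool_pre
d
```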
Questions:
- Is it appropriate to label this effect size measure d?
- Does this approach seem reasonable?
- What is standard practice for effect size measures for such designs?
| Effect size for interaction effect in pre-post treatment-control design | CC BY-SA 2.5 | null | 2010-11-25T01:33:26.563 | 2011-04-19T02:15:25.713 | null | null | 183 | [
"anova",
"mixed-model",
"effect-size",
"cohens-d"
]
|
4899 | 2 | null | 118 | 11 | null | Estimating the standard deviation of a distribution requires choosing a distance.
Any of the following distances can be used:
$$d_n((X_i)_{i=1,\ldots,I},\mu)=\left(\sum_{i=1}^{I} | X_i-\mu|^n\right)^{1/n}$$
We usually use the natural euclidean distance ($n=2$), which is the one everybody uses in daily life.
The distance that you propose is the one with $n=1$.
Both are good candidates but they are different.
One could decide to use $n=3$ as well.
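As a small numerical illustration in R (simulated data; the two distances lead to different scale summaries):
```
set.seed(1)
x <- rnorm(1000, sd = 2)
mean(abs(x - mean(x)))   # scale based on the n = 1 distance (mean absolute deviation)
sd(x)                    # scale based on the usual n = 2 (Euclidean) distance
```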
I am not sure that you will like my answer; my point, contrary to others, is not to demonstrate that $n=2$ is better. I think that if you want to estimate the standard deviation of a distribution, you can absolutely use a different distance.
| null | CC BY-SA 3.0 | null | 2010-11-25T03:01:40.620 | 2014-07-31T17:00:06.640 | 2014-07-31T17:00:06.640 | 5176 | 1709 | null |
4900 | 2 | null | 4898 | 3 | null | I believe that generalized eta-square ([Olejnik & Algina, 2003](http://www.ncbi.nlm.nih.gov/pubmed/14664681); [Bakeman, 2005](http://brm.psychonomic-journals.org/content/37/3/379)) provides a reasonable solution to the quantification of effect size that generalizes across between-Ss and within-Ss designs. If I read those references correctly, generalized eta-square should also generalize across sample sizes.
Generalized eta-square is automatically computed by the ezANOVA() function in the ez package for R.
| null | CC BY-SA 2.5 | null | 2010-11-25T03:49:17.473 | 2010-11-25T03:49:17.473 | null | null | 364 | null |
4901 | 1 | 4940 | null | 44 | 36762 | Other than literally testing each possible combination of variables in a model (`x1:x2` or `x1*x2 ... xn-1 * xn`), how do you identify whether an interaction SHOULD or COULD exist between your independent (hopefully) variables?
What are best practices in attempting to identify interactions?
Is there a graphical technique that you could or do use?
| What are best practices in identifying interaction effects? | CC BY-SA 3.0 | null | 2010-11-25T05:32:53.903 | 2020-08-14T22:49:38.713 | 2011-09-27T05:28:07.813 | 183 | 776 | [
"regression",
"modeling",
"interaction"
]
|
4902 | 2 | null | 4901 | 8 | null | How large is $n$? How many observations do you have? This is crucial ...
Sobol indices will tell you the proportion of variance explained by interaction if you have a lot of observations and a small $n$ (few variables); otherwise you will have to do modelling (linear to start with). You have a nice R package for that called sensitivity. Anyway the idea is quite often that of decomposing the variance (also called generalized ANOVA).
If you want to know if this proportion of variance is significant you will have to do modelling (roughly, you need to know the number of degrees of freedom of your model to compare it to the variance).
Are your variables discrete or continuous? Bounded or not (i.e. you don't know the maximum)?
| null | CC BY-SA 3.0 | null | 2010-11-25T07:07:59.057 | 2016-04-21T09:55:28.640 | 2016-04-21T09:55:28.640 | 17230 | 223 | null |
4903 | 2 | null | 4901 | 17 | null | My best practice would be to think about the problem to hand before fitting the model. What is a plausible model given the phenomenon you are studying? Fitting all possible combinations of variables and interactions sounds like data dredging to me.
| null | CC BY-SA 2.5 | null | 2010-11-25T07:50:59.160 | 2010-11-25T07:50:59.160 | null | null | 1390 | null |
4904 | 1 | 4905 | null | 6 | 1636 | I'm searching for a statistical method to determine if a player is cheating in an online game. The game is a Quake3-like game (a first-person shooter).
Given a number of positive points and a number of negative points per player (score) and given n players (n<=64).
The score comes together like this (positive/negative seen from anti-cheat-perspective):
positive = number of times the player died himself
negative = number of opposite team's players killed
Additional available values:
t = Time in which this score has been accomplished
c = current number of players
m = number of times a player killed someone from the same team (teamkiller, very negative if higher than 1 or 2 over a 15 minutes period)
Each player can cheat independent of each other.
I used the standard deviation (on the value v = (negative+1)/(positive+1) per player, treating anything beyond 3 standard deviations as absolute certainty (100%) of cheating - although in my experience exceeding the standard deviation alone would be sufficient in most cases), which works great on a single player but fails miserably as soon as several players are cheating at the same time. Also, the practice of defining 3 standard deviations as 100% allows for probabilities above 100%. Unfortunately, the percentage of players cheating in an unfiltered game is roughly 70 to 80 percent or even higher, if more subtle cheats are counted.
One more thing to consider: if there is a group of players cheating and another that isn't, the distribution is no longer normal but bimodal (camel-hump-like).
Is there any useful algorithm or formula for this problem ?
| Method to reliably determine abnormal statistical values | CC BY-SA 2.5 | null | 2010-11-25T11:16:41.540 | 2010-12-03T04:58:22.370 | 2010-11-25T11:21:15.593 | 2132 | 2132 | [
"outliers"
]
|
4905 | 2 | null | 4904 | 11 | null | Your use of the stddev indicates you look at every variable seperately. If you look at them together, you might have more chance. An outlier in one dimension can be coincidence, an outlier in more dimensions is more surely an anomaly. I don't know much about games, but I reckon that you could use find extra variables like traveled distance in the game and so on.
You can use outlier theory for detecting anomalies. A very naive way of looking for outliers is using the [mahalanobis distance](http://en.wikipedia.org/wiki/Mahalanobis_distance). This is a measure that takes into account the spread of your data, and calculates the relative distance from the center. It will be less sensitive to outliers in one statistic, but can be seen as a way of finding gamers where the combination of the statistics is odd.
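A minimal R sketch of that idea (the per-player statistics here are invented -- use whatever measurements you actually have):
```
# rows = players, columns = per-player statistics
X <- cbind(kills = rpois(64, 20), deaths = rpois(64, 15), teamkills = rpois(64, 1))
d2 <- mahalanobis(X, center = colMeans(X), cov = cov(X))
# flag players whose combination of statistics is unusual
suspects <- which(d2 > qchisq(0.999, df = ncol(X)))
```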
A similar approach is building a model and taking a look at the error terms. This does essentially the same: it looks for gamers that don't fit the general pattern. It's a technique that's also used in financial services to find fraud cases. The model can go from a basic linear model to more complex models. If you apply your algorithm to the error terms of the model fitted without player i, you essentially calculate something similar to [Cook's distance](http://en.wikipedia.org/wiki/Cook%27s_distance) of a certain player. In combination with the [DFFITS](http://en.wikipedia.org/wiki/DFFITS) measure and the leverage, it is often used for detection of outliers and/or influential points in regression.
You could as well use supervised classification: You train an algorithm with genuine gamers and known cheaters. There's a multitude of techniques available there, starting from neural networks and classification trees to support vector machines and random forests.
Genetic algorithms are used more and more as well, as they can progress in knowledge when time goes by. If you check the assumed cheaters, you could -much like a spam filter- correct the wrongly classified gamers. The algorithm will continuously learn to predict better when a gamer is a cheater.
As mbq mentioned, without example data it's impossible to give you an algorithm; I don't even know on which measurements one can work. But this should give you some ideas about the multitude of available methods, from very naive to pretty complex. Much can be learned from fraud detection if you'd like to google around a bit further.
A starter could be [this article](http://home.cc.gatech.edu/ccl/uploads/45/multiple-algorithms-for-fraud.pdf) of Wheeler and Aitken. Another interesting overview of possible techniques is found in [this article](http://linktoprofessionals.com/files/data/Analysis,%20reports/Survey%20of%20Fraud%20Detection%20Techniques.pdf) by Kou et al ([alternative link to publication](http://ieeexplore.ieee.org/Xplore/defdeny.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D1297040%26userType%3Dinst&denyReason=-133&arnumber=1297040&productsMatched=null&userType=inst))
| null | CC BY-SA 2.5 | null | 2010-11-25T12:01:24.780 | 2010-11-25T12:46:04.647 | 2010-11-25T12:46:04.647 | 1124 | 1124 | null |
4906 | 2 | null | 4886 | 1 | null | I often use `by`, which is a wrapper around `tapply`. Result is a list with a weee different print method compared to `tapply`.
```
df <- data.frame(SessionNo. = sample(x = c("A", "B", "C", "D", "E"),
size = 20, replace = TRUE,
prob = c(0.05, 0.1, 0.4, 0.4, 0.05)),
Objects = sample(x = 1:5, size = 20, replace = TRUE))
df.out <- by(data = df$Objects, INDICES = df$SessionNo.,
FUN = mean, simplify = FALSE)
head(df.out)
$A
[1] 3
$B
[1] 1
$C
[1] 3.222222
$D
[1] 3.833333
$E
[1] 1
```
| null | CC BY-SA 2.5 | null | 2010-11-25T13:46:52.060 | 2010-11-25T13:46:52.060 | null | null | 144 | null |
4907 | 2 | null | 4901 | 17 | null | Fitting a tree model (i.e. using R), will help you identify complex interactions between the explanatory variables. Read the example on page 30 [here](http://www.math.chs.nihon-u.ac.jp/%7Etanaka/files/kenkyuu/MultipleRegression.pdf).
| null | CC BY-SA 4.0 | null | 2010-11-25T13:51:42.120 | 2020-08-14T18:30:10.400 | 2020-08-14T18:30:10.400 | 93702 | 339 | null |
4908 | 1 | null | null | 5 | 272 | Let's imagine that we have some experiments. Each experiment may result in one of the outcomes A, B, C. So we have a probability distribution $P_A, P_B, P_C$ for each experiment, which is context-dependent.
E.g.:
- $Context_1 \Rightarrow \{P_A^1, P_B^1, P_C^1\},$ experimental outcome is A
- $Context_2 \Rightarrow \{P_A^2, P_B^2, P_C^2\},$ experimental outcome is B
- $Context_3 \Rightarrow \{P_A^3, P_B^3, P_C^3\},$ experimental outcome is A
- $Context_4 \Rightarrow \{P_A^4, P_B^4, P_C^4\},$ experimental outcome is C
These probabilities are calculated by some function $F:Context\rightarrow \{P_A, P_B, P_C\}$
I want to estimate an absolute trust rate for this function. In other words, I want to be able to say "we can trust this function at 86%", like we do when we deal with Pearson's chi-square test.
Any suggestions?
| Chi-square analog for context-dependent distributions | CC BY-SA 2.5 | null | 2010-11-25T13:55:10.873 | 2010-11-28T23:09:43.057 | 2010-11-26T08:27:35.030 | 1314 | 1314 | [
"distributions",
"probability",
"chi-squared-test",
"predictor"
]
|
4909 | 1 | null | null | 6 | 3776 | The empirical cumulative distribution function of a random variable, given observations $x_{(k)} > x_{(k-1)}$, $k \in \mathbb N$, $k \le n$, is defined as $F_{emp}(x_{(k)} > X \ge x_{(k-1)}) = \frac k {n+1}$ and $F_{emp}(X \ge x_{(n)})=1$.
Why? As long as we're interpolating, wouldn't it make sense to use some interpolation method with less error? A simple nearest neighbour or piecewise average interpolant would be an improvement, and a cubic interpolant would get us a differentiable empirical density function, too.
The above definition won't even give you the piecewise infimum of the cdf, because the variable is random. It certainly approaches the true function as $n\to\infty$, but then so would any other interpolant. Surely at least linear interpolants were considered.
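For illustration, a piecewise-linear interpolant of the same $k/(n+1)$ values is easy to construct in R (just a sketch):
```
x  <- sort(rexp(20))
Fn <- ecdf(x)                                                  # usual step-function estimate
Fl <- approxfun(x, seq_along(x) / (length(x) + 1), rule = 2)   # linear interpolant
plot(Fn)
curve(Fl(x), add = TRUE, col = "red")
```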
| Interpolating the empirical cumulative function | CC BY-SA 2.5 | null | 2010-11-25T14:13:31.967 | 2019-09-27T16:39:20.190 | 2017-09-28T18:27:46.007 | 60613 | 2456 | [
"estimation",
"random-variable",
"interpolation"
]
|
4910 | 2 | null | 4909 | 4 | null | The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstrapping, jackknifing, cross-validation, etc. Not only that, it's perfectly general: any kind of interpolation would be invalid for discrete distributions.
| null | CC BY-SA 2.5 | null | 2010-11-25T14:34:31.573 | 2010-11-25T14:34:31.573 | null | null | 919 | null |
4911 | 2 | null | 4360 | 2 | null | For the Bayesian explanation, you need a prior probability distribution on the reported results by a lying coin flipper, as well as a prior probability of lying. When you see a value that is much more likely under the lying distribution than the random flips one, that makes your posterior probability of lying much higher.
| null | CC BY-SA 2.5 | null | 2010-11-25T15:18:15.367 | 2010-11-25T15:18:15.367 | null | null | 2134 | null |
4912 | 1 | 4948 | null | 4 | 11715 | Suppose my sample comprises 350 instances. Each instance has two independent binary (`pos` and `neg`) evaluations, A and B. If my alternative hypothesis is that A is more often positive than B, how would I express whether the difference is statistically significant?
---
To be more precise about the data: this question came to mind when looking at patient blood serum data. These are 350 blood samples positive against X. Each sample may or may not also be positive against A or B, where A and B are both more often positive if X is positive but are not biologically dependent on one another.
So my initial thought was to check whether A is more often positive than B. In the example it was: A is positive in ~80% of the samples and B in ~20%. So I thought about assessing this difference for significance, and this is where I got stuck. :)
| Calculating significance of difference between two binary values on one dataset | CC BY-SA 2.5 | null | 2010-11-25T16:32:33.287 | 2010-11-28T19:19:27.373 | 2010-11-25T20:28:48.560 | 1785 | 1785 | [
"hypothesis-testing",
"statistical-significance",
"sample",
"binary-data"
]
|
4913 | 2 | null | 4912 | 5 | null | Your addendum suggests that A and B are dependent samples since they come from the same "instance", i.e., patient. In that case, I propose the McNemar-Test which tests for a (two-sided) hypothesis of unequal proportions:
```
> N <- 350
> A <- factor(rbinom(N, size=1, p=0.6), labels=c("pos", "neg"))
> B <- factor(rbinom(N, size=1, p=0.4), labels=c("pos", "neg"))
> cTab <- table(A, B)
> mcnemar.test(cTab, correct=FALSE) # no continuity correction
McNemar's Chi-squared test
data: cTab
McNemar's chi-squared = 29.1236, df = 1, p-value = 6.79e-08
# alternative - Monte-Carlo permutation test
> library(coin) # for symmetry_test()
> symmetry_test(cTab, teststat="quad", distribution=approximate(B=9999))
Approximative General Independence Test
data: response by groups (A, B)
stratified by block
chi-squared = 29.1236, p-value < 2.2e-16
```
| null | CC BY-SA 2.5 | null | 2010-11-25T17:16:36.760 | 2010-11-25T20:10:00.573 | 2010-11-25T20:10:00.573 | 1909 | 1909 | null |
4914 | 1 | null | null | 5 | 528 | In a data set with thousands of data points, I am testing different short-term and longer-term data outputs based on 5 rolling data points all the way to 100 rolling data points (with each value being a separate column in Excel: 5, 6, ..., 100).
The test I developed (with rudimentary knowledge of the whole thing) is to check which of these rolling data outputs are the better "fits" (kinda like moving averages, but not quite), in other words, whether column 5, 17 or 98. In theory, the better fits should produce lower standard deviations, in the conventional calculation.
An analogy for better understanding is something like this: if I do a 5 point moving average on a five period sine wave, the output should be zero or flat, the same for a 6 point moving average on a six period sine wave, and so on all the way to 100. This is the same as saying that the standard deviation in each case should be zero.
So the test really is choosing the lowest standard deviation in each column from 5 all the way to 100, which should then provide the best "fit" (to go back to the analogy, fit with the "sine wave").
However, as the statistically inclined will quickly note (I am afraid I am not one of them for sure), this method presents two major difficulties:
- The longer data sets, ie closer to 100 rolling data points, have larger absolute deviations, thus producing higher standard deviations. So a lower standard deviation in the shorter data series may not be indicative of better "fitting", just lower absolute deviations;
- Each column has a different number of rolling data points, so once again the numbers and statistical fit may not be comparable between the columns.
So my question is therefore: is there a statistical measure better than the standard deviation I used above, such that it is not sensitive to absolute deviations and number of data points?
Thanks!
| Developing a statistical test to ascertain better "fit" | CC BY-SA 2.5 | null | 2010-11-25T17:26:03.753 | 2010-11-27T18:30:58.310 | 2010-11-25T22:44:35.630 | null | 2137 | [
"time-series",
"standard-deviation",
"goodness-of-fit"
]
|
4915 | 2 | null | 4914 | 3 | null | Based on the information given, I think that you could consider AIC, a measure of likelihood that is penalized by degrees of freedom.
| null | CC BY-SA 2.5 | null | 2010-11-25T19:49:13.047 | 2010-11-25T21:41:36.517 | 2010-11-25T21:41:36.517 | 930 | 1381 | null |
4916 | 2 | null | 4360 | 6 | null | I think your intuition is flawed. It seems you are implicitly comparing a single, "very special" result (exactly 10000 heads) with a set of many results (all "non-special" numbers of heads close to 10000). However, the definition of "special" is an arbitrary choice based on our psychology. How about binary 10000000000000 (decimal 8192) or Hex ABC (decimal 2748) - would that be suspiciously special as well? As Joris Meys commented, the Bayes argument would essentially be the same for any single number of heads, implying that each outcome would be suspicious.
To expand the argument a bit: you want to test a hypothesis ("the experimenter is faking"), and then you choose a test statistic (number of heads). Now, is this test statistic suited to tell you something about your hypothesis? To me, it seems the chosen test statistic is not informative (not a function of a parameter specified as a fixed value in the hypothesis). This goes back to the question what you mean by "cheating". If that means the experimenter controls the coin at will, then this is not reflected in the test statistic. I think you need to be more precise to find a quantifiable indicator, and thus make the question amenable to a statistical test.
| null | CC BY-SA 2.5 | null | 2010-11-25T20:33:30.693 | 2010-11-25T23:48:14.830 | 2010-11-25T23:48:14.830 | 1909 | 1909 | null |
4917 | 2 | null | 4898 | 7 | null | Yes, what you are suggesting is exactly what has been suggested in the literature. See, for example: Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11(2), 364-386 ([link](http://orm.sagepub.com/cgi/content/abstract/11/2/364), but unfortunately, no free access). The article also describes different methods for estimating this effect size measure. You can use the letter "d" to denote the effect size, but you should definitely provide an explanation of what you calculated (otherwise, readers will probably assume that you calculated the standardized mean difference only for the post-test scores).
| null | CC BY-SA 2.5 | null | 2010-11-25T23:37:14.657 | 2010-11-25T23:37:14.657 | null | null | 1934 | null |
4918 | 1 | null | null | -3 | 166 | The input is a bunch of numbers. How can I calculate a threshold such that 10% of the total amount of numbers lie above it? I think this is equivalent to calculating the distribution function, but what I need is the threshold number, not the whole distribution function.
| How to get the required data? | CC BY-SA 2.5 | 0 | 2010-11-26T00:46:30.300 | 2010-11-27T11:49:30.547 | 2010-11-27T11:49:30.547 | 930 | 2141 | [
"distributions"
]
|
4919 | 2 | null | 4918 | 0 | null | My understanding of the question is that given a series of numbers, which is the number that has 10% of all the given numbers larger than this number and 90% smaller? If this understanding is correct, a software percentile function [(see here)](http://en.wikipedia.org/wiki/Percentile) or a decile function [(see here)](http://en.wikipedia.org/wiki/Decile) might be what is needed.
| null | CC BY-SA 2.5 | null | 2010-11-26T00:51:35.250 | 2010-11-27T11:47:55.777 | 2010-11-27T11:47:55.777 | 930 | 226 | null |
4920 | 1 | 4926 | null | 21 | 33440 | Using Pearson's Correlation Coefficient, I have several variables that are highly correlated ($\rho = 0.978$ and $\rho = 0.989$ for 2 pairs of variables that are in my model).
The reason some of the variables are highly correlated is because one variable is used in the calculation for another variable.
Example:
$B = V / 3000$ and
$E = V * D$
$B$ and $E$ have $\rho = 0.989$
Is it possible for me to just "throw away" one of the variables?
| Can I simply remove one of two predictor variables that are highly linearly correlated? | CC BY-SA 2.5 | null | 2010-11-26T00:58:39.783 | 2014-04-16T22:07:23.160 | 2010-11-26T01:03:50.200 | 1894 | 1894 | [
"regression",
"correlation",
"modeling"
]
|
4921 | 2 | null | 4920 | 0 | null | If D is not a constant, then B and E are effectively two different variables because of the variations in D. The high correlation indicates that D is practically constant throughout the training data. If that is the case, then you can discard either B or E.
| null | CC BY-SA 2.5 | null | 2010-11-26T01:17:49.297 | 2010-11-26T01:17:49.297 | null | null | 2071 | null |