Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7796
|
2
| null |
7774
|
4
| null |
Important Note: I've decided to remove the proof I gave originally in this answer. It was longer, more computational, used bigger hammers, and proved a weaker result as compared to the other proof I've given. All around, an inferior approach (in my view). If you're really interested, I suppose you can look at the edits.
The asymptotic results that I originally quoted and which are still found below in this answer do show that as $n \to \infty$ we can do a bit better than the bound proved in the other answer, which holds for all $n$.
---
The following asymptotic results hold
$$
\mathbb{P}(T > n \log n + c n ) \to 1 - e^{-e^{-c}}
$$
and
$$
\mathbb{P}(T \leq n \log n - c n ) \to e^{-e^c} \>.
$$
The constant $c \in \mathbb{R}$ and the limits are taken as $n \to \infty$. Note that, though they're separated into two results, they're pretty much the same result since $c$ is not constrained to be nonnegative in either case.
See, e.g., Motwani and Raghavan, Randomized Algorithms, pp. 60--63 for a proof.
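For intuition, here is a small simulation sketch (my own addition, assuming $T$ is the coupon-collector completion time for $n$ coupons, which is the setting of the Motwani and Raghavan reference):
```
## Compare the empirical tail P(T > n*log(n) + c*n) with the limit 1 - exp(-exp(-c)).
set.seed(1)
n <- 200
c0 <- 1
collect_time <- function(n) {
  seen <- logical(n)
  draws <- 0
  while (!all(seen)) {
    seen[sample.int(n, 1)] <- TRUE
    draws <- draws + 1
  }
  draws
}
T_sim <- replicate(1000, collect_time(n))
mean(T_sim > n * log(n) + c0 * n)   # empirical tail probability
1 - exp(-exp(-c0))                  # asymptotic limit, about 0.31
```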
---
Also: David kindly provides a proof for his stated upper bound in the comments to this answer.
| null |
CC BY-SA 2.5
| null |
2011-03-02T13:50:29.307
|
2011-03-07T13:01:34.917
|
2011-03-07T13:01:34.917
|
2970
|
2970
| null |
7797
|
1
|
7798
| null |
7
|
1930
|
This question might be way off base, as I am just getting to know vector autoregressive models, but I've tried searching through the usual channels, and if this actually is a valid question, having it asked here might help others.
Should I think in the same way about sample size for a VAR-model as for any other regression method? That is, calculate the degrees of freedom that will be used and work from there to establish what a suitable number of observations would be, or is there something else to take into account?
|
Is there a method to find what is a good sample size for a VAR-model?
|
CC BY-SA 2.5
| null |
2011-03-02T13:58:38.210
|
2015-01-26T12:05:11.483
| null | null |
2105
|
[
"time-series",
"multivariate-analysis",
"sample-size",
"econometrics"
] |
7798
|
2
| null |
7797
|
5
| null |
The usual way of estimating a VAR(p) model for an $n$-dimensional vector $X_t=(X_{1t},...,X_{nt})$ is to use OLS for each equation:
\begin{align}
X_{it}=\theta_{0i}+\sum_{s=1}^p\sum_{j=1}^n\theta_{sj}X_{j,t-s}+\varepsilon_{it}
\end{align}
So the answer is yes. For each equation you will need to estimate $np+1$ parameters, so the sample size is chosen as in a usual regression with $np+1$ independent variables.
Things you will need to take into account are whether additional exogenous variables are included and what type of covariance matrix you intend to postulate for $\varepsilon_t=(\varepsilon_{t1},...,\varepsilon_{tn})$. These will increase the number of parameters to be estimated.
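For example, here is a small sketch of the per-equation parameter count; my choice of the `vars` package and its `Canada` dataset is only for illustration, the answer itself names neither:
```
library(vars)
data(Canada)                      # 4-dimensional quarterly macro series shipped with 'vars'
p <- 2
fit <- VAR(Canada, p = p, type = "const")
ncol(Canada) * p + 1              # parameters per equation: np + 1 = 9
length(coef(fit$varresult[[1]]))  # coefficients actually estimated in the first equation
```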
| null |
CC BY-SA 2.5
| null |
2011-03-02T14:15:22.457
|
2011-03-02T14:15:22.457
| null | null |
2116
| null |
7799
|
1
|
7803
| null |
5
|
2702
|
I am a programmer who, in a past life, used the products RS/1, RS/Explore, and RS/Discover in a manufacturing engineering career. Now, years later, I would like to do some multivariate regression on some real-world data (sales data from my wife's store). The point would be to highlight which sales days are truly exceptional, not just part of the normal distribution, after taking into account month, day of week, weather, etc.
If I still had access to RS/1 and associated products, I'd know how to do this, but that is not the case, so I'm thinking I'd want to use R. However, most R tutorials I have found just cover the very basics, and don't get to the point of multivariate regression. Can you recommend an R tutorial that takes one past the basics of plotting a histogram, etc. into in-depth analysis of real-world ad-hoc data, presumably using multi-variate regression?
Thanks!
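For what it's worth, the kind of model described might look like this in R (a rough sketch only; the file and column names are invented for illustration):
```
sales <- read.csv("daily_sales.csv")   # hypothetical data with one row per day
fit <- lm(revenue ~ factor(month) + factor(weekday) + weather, data = sales)
res <- rstandard(fit)                  # standardized residuals
sales[abs(res) > 2, ]                  # days that stand out after adjusting for the model
```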
|
Tutorial for using R to do multivariate regression?
|
CC BY-SA 2.5
| null |
2011-03-02T14:18:10.247
|
2011-03-02T20:06:24.910
|
2011-03-02T14:20:51.750
|
2116
|
2917
|
[
"r",
"data-visualization",
"multivariate-analysis"
] |
7800
|
2
| null |
7782
|
5
| null |
I would probably do the following:
1) Split the data into two roughly equal segments.
2) Perform exploratory analyses on one of these and derive a new model
3) Test the model on the other half of the data.
This at least will be something that's not done all that often, which will make it a better fit for publication (should you want to do so), and will give you an independent test of your model.
You could also fit both models (the prior one and the one you develop) to your test data, and compare the fit of both.
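A rough sketch of that workflow in R (data frame, variable, and model names are illustrative only):
```
set.seed(42)
idx <- sample(nrow(dat), size = floor(nrow(dat) / 2))
explore <- dat[idx, ]                  # half for exploratory analysis and model building
holdout <- dat[-idx, ]                 # half held back for testing
fit_new <- lm(y ~ x1 + x2, data = explore)
mean((holdout$y - predict(fit_new, newdata = holdout))^2)  # out-of-sample error
```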
| null |
CC BY-SA 2.5
| null |
2011-03-02T14:30:59.527
|
2011-03-02T14:30:59.527
| null | null |
656
| null |
7801
|
2
| null |
7795
|
3
| null |
This will not be a complete answer. All the plotting for `gam` objects is done by the function `plot.gam`. You can look at its code by simply typing
```
> plot.gam
```
in R console. As you will see the code is huge. What I've gleaned from it, that all the plotting is done by gathering relevant information in `pd` object which is a list. So one of the possible solution would be to edit `plot.gam`, using `edit` for example, so that it returns that object. Adding `pd` before last `}` will be enough. I would advise adding `invisible(pd)`, so that this object is returned only if you ask for it:
```
> pd <- plot(b,seWithMean = TRUE)
```
Then inspect this object and search the code of `plot.gam` for the lines with `plot` and `lines`; you will then see which of the relevant `x` and `y` values appear in the plot.
| null |
CC BY-SA 2.5
| null |
2011-03-02T14:45:37.597
|
2011-03-02T14:45:37.597
| null | null |
2116
| null |
7803
|
2
| null |
7799
|
7
| null |
This is my favorite one: [Quick-R](http://www.statmethods.net/stats/regression.html)
| null |
CC BY-SA 2.5
| null |
2011-03-02T14:49:04.513
|
2011-03-02T14:49:04.513
| null | null |
1499
| null |
7804
|
2
| null |
7768
|
6
| null |
Instead of assigning a special value to the non-existent previous lap time of a first-time runner, simply use an interaction term between the previous lap time and the complement of the first-time-runner dummy:
$$Y_i=\beta_0+\beta_1 FTR_i+\beta_2 (NFTR_i)\times PLT_i+...$$
where
- $Y_i$ is your response variable,
- $...$ stands for your other variables,
- $FTR_i$ is the dummy for a first-time runner,
- $PLT_i$ is the previous lap time, and
- $NFTR_i$ is the dummy for a non-first-time runner, equaling 1 when $FTR_i=0$ and 0 otherwise.
Then the model for first time runners will be:
$$Y_i=(\beta_0+\beta_1) + ...$$
and for non first time runners:
$$Y_i=\beta_0+ \beta_2 PLT_i + ...$$
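A sketch of the same coding as an R model formula (the data frame and variable names are illustrative, not from the original answer):
```
dat$NFTR <- 1 - dat$FTR                        # dummy for non-first-time runners
fit <- lm(Y ~ FTR + I(NFTR * PLT), data = dat)
coef(fit)                                      # beta0, beta1, beta2 as in the equations above
```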
| null |
CC BY-SA 2.5
| null |
2011-03-02T15:01:25.117
|
2011-03-02T15:01:25.117
| null | null |
2116
| null |
7805
|
1
|
7812
| null |
6
|
166
|
Let's say you have a jointly Gaussian vector random variable $\mathbf{x}$, with mean $\mathbf{M}$ and covariance $\mathbf{S}$. I now transform each scalar element of $\mathbf{x}$, say $x_j$, with a sigmoid:
$$y_j = 1/(1+\exp(-x_j))$$
I am interested in the expectation of the product of two variables $y_j$ and $y_{j'}$ of the resulting distribution, that is
$$E\{y_j y_{j'}\}$$
where $E\{\}$ is the expectation operator. Note:
- The whole PDF of $y$ can be computed just by applying a change of variable. Unfortunately the integrals leading to the expectations are intractable (to the best of my knowledge).
- I'm looking for closed-form approximations, no Markov Chain-Monte Carlo, no variational stuff. That is, approximations to the variable change, to the expectation integrals, to the sigmoid, to the resulting PDF, that allow computing $E\{y_j y_j'\}$.
- Some dead ends: Taylor expansion, often used in papers on the topic, is inaccurate by a mile. Gradshteyn and Ryzhik does not seem to contain the integrals.
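Just to pin down the target quantity numerically (this is plain simulation, not the closed-form approximation being asked for; `MASS::mvrnorm` and the parameter values are my own choices):
```
library(MASS)
S <- matrix(c(1, 0.5, 0.5, 2), 2, 2)          # example covariance for two components
x <- mvrnorm(1e5, mu = c(0.3, -0.2), Sigma = S)
y <- 1 / (1 + exp(-x))                        # element-wise sigmoid
mean(y[, 1] * y[, 2])                         # reference value for E{y_j y_j'}
```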
|
Correlation between two nodes of a single layer MLP for joint-Gaussian input
|
CC BY-SA 2.5
| null |
2011-03-02T15:03:56.520
|
2011-03-02T19:39:10.920
|
2011-03-02T16:00:10.463
|
919
|
3509
|
[
"machine-learning",
"bayesian",
"normal-distribution",
"data-transformation",
"neural-networks"
] |
7806
|
2
| null |
7795
|
21
| null |
Starting with `mgcv` 1.8-6, `plot.gam` invisibly returns the data it uses to generate the plots, i.e. doing
`pd <- plot(<some gam() model>)`
gives you a list with the plotting data in `pd`.
---
ANSWER BELOW FOR `mgcv` <= 1.8-5:
I've repeatedly cursed the fact that the plot functions for `mgcv` don't return the stuff they are plotting -- what follows is ugly but it works:
```
library(mgcv)
set.seed(0)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
plotData <- list()
trace(mgcv:::plot.gam, at = list(c(27, 1)),
      ## tested for mgcv_1.8-4. other versions may need different at-argument.
      quote({
        message("ooh, so dirty -- assigning into globalenv()'s plotData...")
        plotData <<- pd
      }))
mgcv::plot.gam(b, seWithMean = TRUE, pages = 1)
par(mfrow = c(2, 2))
for (i in 1:4) {
  plot(plotData[[i]]$x, plotData[[i]]$fit, type = "l", xlim = plotData[[i]]$xlim,
       ylim = range(plotData[[i]]$fit + plotData[[i]]$se,
                    plotData[[i]]$fit - plotData[[i]]$se))
  matlines(plotData[[i]]$x, cbind(plotData[[i]]$fit + plotData[[i]]$se,
                                  plotData[[i]]$fit - plotData[[i]]$se),
           lty = 2, col = 1)
  rug(plotData[[i]]$raw)
}
```
| null |
CC BY-SA 3.0
| null |
2011-03-02T15:15:49.453
|
2015-04-27T09:54:09.650
|
2015-04-27T09:54:09.650
|
1979
|
1979
| null |
7807
|
2
| null |
7790
|
5
| null |
You are calculating the mean of a variable that is 0 if there is no event and 1 if there is an event. The sum of $N$ such independent binomial (i.e., Bernoulli) random variables has variance $N\times p(1-p)$, so the mean has variance $p(1-p)/N$. We can use a two-sample difference-in-means test to see whether the difference in proportions between the groups is significant. Calculate:
$$\begin{equation*}\frac{p - q}{\sqrt{p(1-p)/N + q(1-q)/M}}\end{equation*}$$
where $p$ is the proportion from group 1, which has $N$ observations, and $q$ is the proportion from group 2 with $M$ observations. If this number is large in absolute value (bigger than 1.96 is a typical norm, giving a hypothesis test with a significance level of 5%), then you can reject the claim that the two groups have the same proportion of events.
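For example, in R (the numbers are made up):
```
## Two-proportion z statistic as defined above
p <- 0.30; N <- 120     # group 1: proportion and sample size
q <- 0.18; M <- 130     # group 2: proportion and sample size
z <- (p - q) / sqrt(p * (1 - p) / N + q * (1 - q) / M)
z                       # compare |z| with 1.96 for a 5% two-sided test
2 * pnorm(-abs(z))      # corresponding p-value
```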
This assumes that each person in group 1 has the same probability of having an event and each person in group 2 has the same probability of event, but these probabilities can differ across groups. Since you are randomly assigning people to the groups (e.g., they aren't self-selecting into them), this is a reasonably good assumption.
Unfortunately, I can't help with your PHP coding, but I hope that this gets you started.
| null |
CC BY-SA 2.5
| null |
2011-03-02T15:20:12.543
|
2011-03-02T17:14:47.650
|
2011-03-02T17:14:47.650
|
919
|
401
| null |
7808
|
2
| null |
7791
|
5
| null |
Your data will give partial answers by means of the Hansen-Hurwitz or [Horvitz-Thompson](http://www.math.umt.edu/patterson/549/Horvitz-Thompson.pdf) estimators.
The model is this: represent this individual's attendance as a sequence of indicator (0/1) variables $(q_i)$, $i=1, 2, \ldots$. You randomly observe a two-element subset out of each weekly block $(q_{5k+1}, q_{5k+2}, \ldots, q_{5k+5})$. (This is a form of systematic sampling.)
- How often does he train? You want to estimate the weekly mean of the $q_i$. The statistics you gather tell you the mean observation is 0.9. Let's suppose this was collected over $w$ weeks. Then the Horvitz-Thompson estimator of the total number of the individual's visits is $\sum \frac{q_i}{\pi_i} = \frac{5}{2} \sum q_i = \frac{5}{2} (2 w)(0.9) = 4.5 w$, where $\pi_i$ is the chance of observing $q_i$ and the sum is over your actual observations. That is, you should estimate he trains 4.5 days per week (a small numeric check appears below). See the reference for how to compute the standard error of this estimate. As an extremely good approximation you can use the usual (Binomial) formulas.
- Does he train randomly? There is no way to tell. You would need to maintain totals by day of week.
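A tiny numeric check of the Horvitz-Thompson arithmetic above (the number of weeks is made up): two of the five weekdays are observed each week, so $\pi_i = 2/5$ for every day.
```
w <- 10                      # number of weeks observed (illustrative)
obs_mean <- 0.9              # mean of the observed indicators
total_q <- 2 * w * obs_mean  # sum of the observed q_i
total_q / (2/5)              # HT estimate of total visits = 4.5 * w
(total_q / (2/5)) / w        # estimated training days per week = 4.5
```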
| null |
CC BY-SA 2.5
| null |
2011-03-02T17:06:53.537
|
2011-03-02T17:06:53.537
| null | null |
919
| null |
7809
|
1
|
7811
| null |
4
|
356
|
I am trying to find a simple similarity metric that will compare vectors of indeterminate length. These vectors will be populated with values between 1 and 5. In this situation a 1 is closer to a 2 than it is to a 5, and so on.
I am new to this type of math. I naively considered using cosine similarity. However, when I looked into this further I realized that this probably wasn't the right metric for this sort of calculation.
Thanks in advance and I apologize for the newbie question.
Note: I will be programming this in PHP.
|
Simple similarity metric
|
CC BY-SA 2.5
| null |
2011-03-02T17:30:26.333
|
2011-03-02T19:01:44.663
| null | null |
1514
|
[
"correlation"
] |
7810
|
1
|
7814
| null |
11
|
857
|
What is the median of the [non-central t distribution](http://en.wikipedia.org/wiki/Noncentral_t-distribution) with non-centrality parameter $\delta \ne 0$? This may be a hopeless question because the CDF appears to be expressed as an infinite sum, and I cannot find any information about the inverse CDF function.
|
What is the median of a non-central t distribution?
|
CC BY-SA 2.5
| null |
2011-03-02T17:47:10.903
|
2015-09-14T09:27:07.320
| null | null |
795
|
[
"distributions",
"median",
"non-central",
"t-distribution"
] |
7811
|
2
| null |
7809
|
5
| null |
You could just use a norm: given vectors $\mathbf{x}$ and $\mathbf{y}$, we define the "2-norm" by
$$
||\mathbf{x} - \mathbf{y}||_2 = \sqrt{\sum_i (x_i - y_i)^2}
$$
Similarly we define the"p-norm" by
$$
||\mathbf{x} - \mathbf{y}||_p = \left(\sum_i |x_i - y_i|^p\right)^{1/p}
$$
In the limit, as $p \to \infty$, we get the "infinity norm"
$$
||\mathbf{x} - \mathbf{y}||_\infty = \max_i |x_i - y_i|
$$
Note the two vectors are assumed to be the same length! If you want your "similarity metric" to be sensibly defined when, say, comparing two vectors of length 3 or two vectors of length 300, you probably have to normalize by the size. That is, you should choose a similarity metric of the form
$$d(\mathbf{x},\mathbf{y}) = \frac{||\mathbf{x} - \mathbf{y}||_p}{n^{1/p}}$$
where the vectors are of length $n$.
edit: (I changed $d$ above slightly) to turn this distance metric into a similarity metric, I would abuse the fact that the vectors are known to range between $1$ and $5$. This tells me the maximum value that $d(\mathbf{x},\mathbf{y})$ can take occurs when one is a vector of all $1$s and the other is a vector of all $5$s. For the definition of $d$ above, the maximum value it takes is $4$. The proposed similarity metric is then:
$$f(\mathbf{x},\mathbf{y}) = 4 - \frac{||\mathbf{x} - \mathbf{y}||_p}{n^{1/p}}$$
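The asker mentions PHP, but just to make the formulas concrete, here is a short R sketch of the normalized distance and the resulting similarity (my own illustration):
```
p_dist <- function(x, y, p = 2) sum(abs(x - y)^p)^(1/p) / length(x)^(1/p)
similarity <- function(x, y, p = 2) 4 - p_dist(x, y, p)
similarity(c(1, 2, 5), c(2, 2, 4))   # vectors of ratings between 1 and 5
similarity(rep(1, 3), rep(5, 3))     # most dissimilar case gives 0
```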
| null |
CC BY-SA 2.5
| null |
2011-03-02T17:52:27.593
|
2011-03-02T19:01:44.663
|
2011-03-02T19:01:44.663
|
795
|
795
| null |
7812
|
2
| null |
7805
|
2
| null |
The question really concerns pairs of normal variates. Let's call them $x_1$ and $x_2$ with means $\mu_i$, standard deviations $\sigma_i$, and correlation $\rho$. Whence their joint pdf is
$$\frac{1}{2 \pi \sqrt{1 - \rho^2} \sigma_1 \sigma_2}
e^{-\frac{1}{1-\rho^2} \left(\frac{(x_1 - \mu_1)^2}{2 \sigma_1^2} + \frac{(x_2 - \mu_2)^2}{2 \sigma_2^2} - \frac{\rho (x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1 \sigma_2}\right)} dx_1 dx_2\text{.}$$
Let $f(x_1,x_2)$ be the product of this with the $y_i$ (as functions of the $x_i$). The first component of the gradient of $\log(f)$ is
$$\frac{\partial \log(f)}{\partial x_1}
= \frac{1}{1 + e^{x_1}} + \frac{\rho(\mu_2 - x_2) \sigma_1 + (x_1 - \mu_1)\sigma_2}{(\rho^2-1)\sigma_1^2 \sigma_2},$$
with a similar expression for the second component (via the symmetry achieved by exchanging the subscripts 1 and 2). There will be a unique global maximum, which we can detect by setting the gradient to zero. This pair of nonlinear equations has no closed form solution. It is rapidly found by a few Newton-Raphson iterations. Alternatively, we can linearize these equations. Indeed, through second order, the first component equals
$$\frac{1}{2} + x_1\left(\frac{-1}{4} + \frac{1}{(\rho^2-1)\sigma_1^2}\right) + \frac{-\rho x_2 \sigma_1 + \rho \mu_2 \sigma_1 - \mu_1 \sigma_2}{(\rho^2 -1)\sigma_1^2 \sigma_2}.$$
This gives a pair of linear equations in $(x_1, x_2)$, which therefore do have a closed form solution, say $\hat{x}_i(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, which obviously are rational polynomials.
The Jacobian at this critical point has 1,1 coefficient
$$\frac{e^\hat{x_1}\left(2 - (\rho^2-1)\sigma_1^2 + 2\cosh(\hat{x_1})\right)}{(1+e^\hat{x_1})^2(\rho^2-1)\sigma_1^2},$$
1,2 and 2,1 coefficients
$$\frac{\rho}{\sigma_1 \sigma_2(1 - \rho^2)},$$
and 2,2 coefficient obtained from the 1,1 coefficient by symmetry. Because this is a critical point (at least approximately), we can substitute
$$e^\hat{x_1} = \frac{(\rho^2-1)\sigma_1^2 \sigma_2}{(\mu_2 - \hat{x_2})\rho \sigma_1 + (\hat{x_1} - \mu_1)\sigma_2} - 1$$
and use that also to compute $\cosh(\hat{x_1}) = \frac{e^{\hat{x_1}} + e^{-\hat{x_1}}}{2}$, with a similar manipulation for $e^{\hat{x_2}}$ and $\cosh(\hat{x_2})$. This enables evaluation of the Hessian (the determinant of the Jacobian) as a rational function of the parameters.
The rest is routine: the Hessian tells us how to approximate the integral as a binormal integral (a [saddlepoint approximation](http://www.questia.com/googleScholar.qst?docId=5001894379)). The answer equals $\frac{1}{2\pi}$ times a rational function of the five parameters: that's your closed form (for what it's worth!).
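As a rough numerical companion to the Newton-Raphson step mentioned above, a sketch only: the parameter values are made up, and using the `numDeriv` package for the Jacobian is my assumption, not part of the answer.
```
mu <- c(0.3, -0.2); sig <- c(1, 1.5); rho <- 0.4
grad <- function(x) {                # gradient of log(f) given in the answer
  g1 <- 1/(1 + exp(x[1])) +
    (rho*(mu[2]-x[2])*sig[1] + (x[1]-mu[1])*sig[2]) / ((rho^2-1)*sig[1]^2*sig[2])
  g2 <- 1/(1 + exp(x[2])) +
    (rho*(mu[1]-x[1])*sig[2] + (x[2]-mu[2])*sig[1]) / ((rho^2-1)*sig[2]^2*sig[1])
  c(g1, g2)
}
x <- mu                              # starting value
for (i in 1:20) {                    # Newton-Raphson with a numerical Jacobian
  J <- numDeriv::jacobian(grad, x)
  x <- x - solve(J, grad(x))
}
x                                    # approximate critical point (x1-hat, x2-hat)
```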
| null |
CC BY-SA 2.5
| null |
2011-03-02T17:59:22.200
|
2011-03-02T19:39:10.920
|
2011-03-02T19:39:10.920
|
919
|
919
| null |
7813
|
1
|
7877
| null |
8
|
12266
|
I am trying to read up about AdaBoost from [Tibshirani](http://www.stanford.edu/~hastie/local.ftp/Springer/ESLII_print5.pdf) (page 337 onwards), and would appreciate some help in understanding it better.
The book says that "For each successive iteration m = 2, 3, . . . , M the observation weights are individually modified and the classification algorithm is reapplied to the weighted observations. At step m, those observations that were misclassified by the classifier Gm−1 (x) induced at the previous step have their weights increased, whereas the weights are decreased for those that were classified correctly. Thus as iterations proceed, observations that are difficult to classify correctly receive ever-increasing influence. Each successive classifier is thereby forced to concentrate on those training observations that are missed by previous ones in the sequence."
I am not able to understand what it means to "reapply the algorithm to the weighted observations". Say for example if I am doing 2-class text classification, what am I doing to my observations (document vectors)? How does this "force" the classifier to concentrate on some samples vs others?
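For concreteness, here is a toy sketch of the weight update that the quoted passage from ESL describes; the 0/1 "miss" vector is simulated rather than coming from a real classifier:
```
## One AdaBoost.M1 iteration for 100 observations
w <- rep(1/100, 100)                       # start with equal observation weights
miss <- sample(c(TRUE, FALSE), 100, TRUE, prob = c(0.3, 0.7))  # did G_m misclassify?
err <- sum(w * miss) / sum(w)              # weighted training error of G_m
alpha <- log((1 - err) / err)              # weight given to classifier G_m
w <- w * exp(alpha * miss)                 # misclassified observations are up-weighted
w <- w / sum(w)                            # renormalize; G_{m+1} is then fit to
                                           # minimize error under these new weights
```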
|
Adjusting sample weights in AdaBoost
|
CC BY-SA 2.5
| null |
2011-03-02T18:06:19.327
|
2018-01-05T10:16:06.940
|
2011-03-03T06:12:11.953
|
-1
|
3301
|
[
"machine-learning",
"boosting"
] |
7814
|
2
| null |
7810
|
11
| null |
You can approximate it.
For example, I made the following nonlinear fits for $\nu$ (degrees of freedom) from 1 through 20 and $\delta$ (noncentrality parameter) from 0 through 5 (in steps of 1/2). Let
$$a(\nu) = 0.963158 + \frac{0.051726}{\nu-0.705428} + 0.0112409\log(\nu),$$
$$b(\nu) = -0.0214885+\frac{0.406419}{0.659586 +\nu}+0.00531844 \log(\nu),$$
and
$$g(\nu, \delta) = \delta + a(\nu) \exp(b(\nu) \delta) - 1.$$
Then $g$ estimates the median to within 0.15 for $\nu=1$, 0.03 for $\nu=2$, 0.015 for $\nu=3$, and 0.007 for $\nu = 4, 5, \ldots, 20$.
The estimation was done by computing the values of $a$ and $b$ for each value of $\nu$ from 1 through 20 and then separately fitting $a$ and $b$ to $\nu$. I examined plots of $a$ and $b$ to determine an appropriate functional form for these fits.
You can do better by focusing on the intervals of these parameters of interest to you. In particular, if you're not interested in really small values of $\nu$ you could easily improve these estimates, likely to within 0.005 consistently.
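The approximation is easy to code and check against R's built-in quantile function, which handles the noncentral t directly (the test values of $\nu$ and $\delta$ below are arbitrary):
```
a <- function(nu) 0.963158 + 0.051726/(nu - 0.705428) + 0.0112409*log(nu)
b <- function(nu) -0.0214885 + 0.406419/(0.659586 + nu) + 0.00531844*log(nu)
g <- function(nu, delta) delta + a(nu)*exp(b(nu)*delta) - 1
nu <- 5; delta <- 2
g(nu, delta)                      # approximate median
qt(0.5, df = nu, ncp = delta)     # exact median for comparison
```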
Here are plots of the median versus $\delta$ for $\nu=1$, the hardest case, and the negative residuals (true median minus approximate value) versus $\delta$:


The residuals are truly small compared to the medians.
BTW, for all but the smallest degrees of freedom the median is close to the noncentrality parameter. Here's a graph of the median, for $\delta$ from 0 to 5 and $\nu$ (treated as a real parameter) from 1 to 20.

For many purposes using $\delta$ to estimate the median might be good enough. Here is a plot of the error (relative to $\delta$) made by assuming the median equals $\delta$ (for $\nu$ from 2 through 20).

| null |
CC BY-SA 2.5
| null |
2011-03-02T19:00:15.790
|
2011-03-02T19:32:53.217
|
2011-03-02T19:32:53.217
|
919
|
919
| null |
7815
|
1
|
7933
| null |
116
|
20750
|
Many statistical jobs ask for experience with large-scale data. What are the sorts of statistical and computational skills needed for working with large data sets? For example, how about building regression models given a data set with 10 million samples?
|
What skills are required to perform large scale statistical analyses?
|
CC BY-SA 2.5
| null |
2011-03-02T19:05:46.350
|
2011-11-29T19:37:05.097
|
2011-03-06T21:39:42.707
|
930
|
3026
|
[
"regression",
"machine-learning",
"multivariate-analysis",
"large-data"
] |
7816
|
2
| null |
7815
|
18
| null |
Your question should yield some good answers. Here are some starting points.
- An ability to work with the tradeoffs between precision and the demands placed on computing power.
- Facility with data mining techniques that can be used as preliminary screening tools before conducting regression. E.g., chaid, cart, or neural networks.
- A deep understanding of the relationship between statistical significance and practical significance. A wide repertoire of methods for variable selection.
- The instinct to cross-validate.
| null |
CC BY-SA 2.5
| null |
2011-03-02T19:27:50.943
|
2011-03-02T19:27:50.943
| null | null |
2669
| null |
7817
|
1
|
7867
| null |
7
|
272
|
For a paper, I need an example of a dataset $(x_i,y_i)$ where the residuals are i.i.d. (the $x$ do not represent time) with a discontinuity in the effect of $x$ on $y$.
- I have already found a dataset in Berger and Pope 2010 but the jump there is not clearly visible
- The 'LifeCycleSavings' dataset has been suggested, but it has only 50 observations (ideally, I would be looking at 100).
Does anyone know of other datasets?
|
Example of discontinuous effect of x on y dataset (for paper)
|
CC BY-SA 2.5
| null |
2011-03-02T19:32:52.250
|
2017-10-26T18:15:15.700
|
2017-10-26T18:15:15.700
|
7071
|
603
|
[
"dataset",
"regression-discontinuity"
] |
7818
|
2
| null |
7799
|
5
| null |
+1 for Quick-R.
Another great resource that I (re)turn to regularly is the website of the [UCLA Statistical Consulting Group](http://www.ats.ucla.edu/stat/R/). In particular, it sounds like you might find their [data analysis examples](http://www.ats.ucla.edu/stat/R/dae/default.htm) useful. Many of the cases walk through the logic of inquiry and model design steps in addition to providing sample code and datasets. They also have a separate section of [textbook examples and code](http://www.ats.ucla.edu/stat/examples/default.htm), which I have found useful for self-teaching purposes.
| null |
CC BY-SA 2.5
| null |
2011-03-02T20:06:24.910
|
2011-03-02T20:06:24.910
| null | null |
3396
| null |
7819
|
2
| null |
7772
|
4
| null |
Might we reformulate the question as: 'I have N M-variate observations which I assume to be generated by N corresponding P-variate latent variables, i.e. for each case/row, M observed numbers are generated by P unobserved numbers. I have an idea that this mapping is linear, with an M x P matrix of coefficients, and I want to know what the latent matrix values should be.'?
If that's accurate then you have a multivariate version of the regression calibration problem. Normally one knows X and Y and estimates beta, whereas here one knows Y and beta and estimates / 'backs-out' X.
This is what motivates suncoolsu's question about control - the question is about what distributional assumptions can be made about the marginal distribution of X (if any). Your EM idea will make sense if you are happy to make distributional assumptions about P(Y | X; beta) and P(X) to apply Bayes' theorem (although you won't need to iterate).
Or maybe that's not the problem you're facing and I just don't understand your description.
| null |
CC BY-SA 2.5
| null |
2011-03-02T20:42:51.933
|
2011-03-02T20:42:51.933
| null | null |
1739
| null |
7820
|
2
| null |
6455
|
0
| null |
... h should be as small as possible to preserve whatever power the LB test may have under the circumstances. As h increases, the power drops. The LB test is a dreadfully weak test; you must have a lot of samples; n must be ~> 100 to be meaningful. Unfortunately, I have never seen a better test, but perhaps one exists. Anyone know of one?
Paul3nt
| null |
CC BY-SA 2.5
| null |
2011-03-02T23:55:45.330
|
2011-03-02T23:55:45.330
| null | null | null | null |
7821
|
2
| null |
7815
|
5
| null |
- Framing the problem in the Map-reduce framework.
- The engineering side of the problem, e.g., how much does it hurt to use lower precision for the parameters, or model selection based not only on generalization but also on storage and computation costs.
| null |
CC BY-SA 3.0
| null |
2011-03-03T02:00:03.717
|
2011-11-29T19:37:05.097
|
2011-11-29T19:37:05.097
|
2728
|
2728
| null |
7824
|
2
| null |
7542
|
0
| null |
Perhaps you should randomly permute the callers, then, per your example, claim that the first 163 made 30 calls, the next 163 made 31 calls, etc. By 'permutation', I mean sort them in a random order. The simple way to do this is to use your random number generator to assign each caller a number, then sort them by that number.
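For instance, in R (the data frame name is illustrative), the random-key sort described above is one line:
```
callers <- callers[order(runif(nrow(callers))), ]   # sort callers by a random key
## or, equivalently, a direct random permutation of the rows
callers <- callers[sample(nrow(callers)), ]
```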
| null |
CC BY-SA 2.5
| null |
2011-03-03T05:39:05.227
|
2011-03-03T05:39:05.227
| null | null |
795
| null |
7825
|
1
| null | null |
8
|
7058
|
I'm looking at the effect that defeat- and entrapment-inducing conditions have on subjective ratings of defeat and entrapment at three different time points (among other things).
However, the subjective ratings are not normally distributed. I've tried several transformations, and the square root transformation seems to work best, but there are still some aspects of the data that have not normalized. This non-normality manifests itself as negative skewness in the high-entrapment, high-defeat conditions at the time point where I expected the highest defeat and entrapment ratings. Consequently, I think it could be argued that this skew is due to the experimental manipulation.
Would it be acceptable to run ANOVAs on these data despite the lack of normality, given the manipulations? Or would non-parametric tests be more appropriate? If so, is there a non-parametric equivalent of a 4x3 mixed ANOVA?
|
How to do ANOVA on data which is still not normal after transformations?
|
CC BY-SA 2.5
| null |
2011-03-03T09:41:24.050
|
2011-03-03T12:27:03.903
|
2011-03-03T10:34:07.277
|
2116
| null |
[
"nonparametric",
"data-transformation"
] |
7826
|
1
| null | null |
6
|
1696
|
I have a dataset about a population in a hospital and what types of infections the patients have.
Let's say the number of patients is 100; 10 of them have pneumonia `(group A)` and 20 of them have urinary tract infection `(group B)`. Keep in mind that groups A and B can overlap, that is, a patient suffering from pneumonia can also have a urinary tract infection.
I need to estimate the prevalence of different infection type in this population (i.e., prevalence of pneumonia, prevalence of urinary tract infection). I'm not sure whether assuming binomial distribution, like the one below from [here](http://www.promesa.co.nz/Help/EP_est_simple_random_sample.htm), is appropriate:
$$\text{SE}=\sqrt{\frac{p\times (1-p)}{n}}\times\sqrt{1-f}$$
Using this formula, I will compute multiple "binomial" estimates (i.e., one for each type of infection). I would feel comfortable using this if I only needed to describe the prevalence of one type of infection, but in this case I need to describe several from the same population. I am not sure whether using the formula is appropriate in this case. Can anyone here enlighten me? Thanks!
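A quick computation of the SE formula above for one infection type; the population size N (and hence the sampling fraction f = n/N) is made up here, since the question doesn't give one:
```
n <- 100; N <- 1000
p <- 10 / n                          # pneumonia prevalence in the sample
f <- n / N                           # sampling fraction
se <- sqrt(p * (1 - p) / n) * sqrt(1 - f)
p + c(-1, 1) * 1.96 * se             # approximate 95% confidence interval
```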
|
Computing confidence intervals for prevalence for several types of infection
|
CC BY-SA 2.5
| null |
2011-03-03T09:52:03.557
|
2011-08-15T12:48:21.707
|
2011-03-03T10:08:24.523
|
930
|
588
|
[
"estimation",
"confidence-interval",
"epidemiology"
] |
7827
|
2
| null |
7825
|
3
| null |
I believe that with negatively skewed data you may have to reflect the data, so that it becomes positively skewed, before applying another transformation (e.g. log or square root). However, this tends to make interpretation of your results difficult.
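A common recipe for the reflection step, sketched in R (the variable name is illustrative):
```
y_reflected <- max(y) + 1 - y        # reflect: now positively skewed
y_trans <- sqrt(y_reflected)         # or log(y_reflected)
```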
What is your sample size? Depending on how large it is exactly, parametric tests may give fairly good estimates.
Otherwise, for a non-parametric alternative, maybe you can try the [Friedman test](http://en.wikipedia.org/wiki/Friedman_test).
In addition, you may try conducting a MANOVA for repeated measures, with an explicit time variable included, as an alternative to a 4x3 Mixed ANOVA. A major difference is that the assumption of sphericity is relaxed (or rather, it is estimated for you), and that all time-points of your outcome variable are fitted at once.
| null |
CC BY-SA 2.5
| null |
2011-03-03T10:47:45.707
|
2011-03-03T10:47:45.707
| null | null |
3309
| null |
7828
|
2
| null |
7825
|
16
| null |
It's the residuals that should be normally distributed, not the marginal distribution of your response variable.
I would try using transformations, do the ANOVA, and check the residuals. If they look noticeably non-normal regardless of what transformation you use, I would switch to a non-parametric test such as the Friedman test.
| null |
CC BY-SA 2.5
| null |
2011-03-03T11:20:05.713
|
2011-03-03T11:40:26.200
|
2011-03-03T11:40:26.200
|
159
|
159
| null |
7829
|
1
| null | null |
7
|
2681
|
I used [BIRCH](http://en.wikipedia.org/wiki/BIRCH_%28data_clustering%29) and [HAC](http://en.wikipedia.org/wiki/Cluster_analysis#Agglomerative_hierarchical_clustering) to do clustering on my data.
I want to know what type of information I can include in reports that my users can generate to get more insight into the clusters. I would have to dumb down the statistical terms and represent them as much as possible visually in these reports for my users.
|
How to generate user-friendly summaries of cluster analysis?
|
CC BY-SA 2.5
| null |
2011-03-03T11:26:58.357
|
2011-03-03T18:01:32.537
|
2011-03-03T12:29:01.593
|
930
| null |
[
"data-visualization",
"clustering",
"teaching"
] |
7830
|
2
| null |
7825
|
2
| null |
A Box-Cox transformation (there's one in the MASS package) works as well on negatively skewed data as on positively skewed data. FYI, you need to enter a formula in that function, like `y~1`, and make sure all of `y` is strictly positive first (if it's not, just add a constant like `abs(min(y)) + 1`). You may have to adjust the lambda range in the function to find the peak of the curve. It will give you the best lambda value to choose, and then you just apply this transform:
```
library(MASS)                    # boxcox() lives in MASS
b <- boxcox(y ~ 1)
lambda <- b$x[which.max(b$y)]    # lambda at the peak of the profile
yt <- (y^lambda - 1) / lambda
# you can transform back with
ytb <- (yt * lambda + 1)^(1/lambda)
```
See if your data are normal then.
```
# you can transform back with
ytb <- (yt * lambda + 1)^(1/lambda)
# maybe put back the min (undo the constant added above)
ytb <- ytb - (abs(min(y)) + 1)
```
| null |
CC BY-SA 2.5
| null |
2011-03-03T12:27:03.903
|
2011-03-03T12:27:03.903
| null | null |
601
| null |
7831
|
1
| null | null |
7
|
899
|
I am working with the TwoStep cluster procedure in SPSS Modeler (Clementine) and trying to get a sense of the distance function used. It is a log-likelihood function (as stated in the docs), but I am unsure, even for the continuous variables (the function handles continuous and nominal variables), how this is a log-likelihood (it is missing most of the elements of a Gaussian). Below is a screenshot of the documentation describing the distance formula.
Has anyone seen the derivation of this distance function?

|
Derivation of distance in TwoStep clustering
|
CC BY-SA 3.0
| null |
2011-03-03T12:43:42.567
|
2017-09-11T11:46:26.327
|
2017-09-11T11:46:26.327
|
3277
|
2040
|
[
"clustering",
"spss",
"distance-functions"
] |
7832
|
2
| null |
7829
|
2
| null |
The best method I have found for a non-technical audience is to present a table or plots of the centroids of each cluster, along with a description of that cluster. In the business world (not sure of your domain) it helps to give each cluster a name describing its principal characteristics. An example when clustering customers would be "Long-time loyals" for a cluster generally composed of long-tenured customers.
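For example, a simple centroid table in R, assuming a data frame `dat` of the clustering variables and a vector `cluster` of cluster labels (both names are illustrative):
```
aggregate(dat, by = list(cluster = cluster), FUN = mean)  # one row of means per cluster
```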
| null |
CC BY-SA 2.5
| null |
2011-03-03T12:58:25.523
|
2011-03-03T12:58:25.523
| null | null |
2040
| null |
7833
|
2
| null |
7829
|
3
| null |
Bubble charts are a good visual device that you can use to represent your clusters. Pick your 4 most important variables and plot each cluster using the x and y axes, and the size and color of the bubble, to represent the 4 factors. If you have many variables, you can perform a principal components analysis first to reduce them to 4 factors.
[http://www.google.com/images?um=1&hl=en&rlz=1I7GGLD_en&tbs=isch:1&aq=f&aqi=g6&oq=&q=bubble%20chart](http://www.google.com/images?um=1&hl=en&rlz=1I7GGLD_en&tbs=isch:1&aq=f&aqi=g6&oq=&q=bubble%20chart)
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-03T13:43:30.573
|
2011-03-03T13:43:30.573
| null | null |
3489
| null |
7834
|
2
| null |
6835
|
3
| null |
I recommend using ARIMA with checks for level shift, trends, and interventions.
You can't just try to model the data with the sole intent of identifying a trend; it is more complex than a single focus. Let me explain...
In order to determine whether there is a trend, you need to be careful, as the trend may be just a change in the intercept, also known as a "level shift". You also need to be aware that you can't assume the trend started at the beginning of the time series, as we are taught in economics classes, so this is an iterative process to determine the beginning and end of the trend. You will also need to determine your threshold for how many periods are required before you can "call" a trend, and you will need to adjust for interventions, as they can skew the t-test when trying to identify a trend.
| null |
CC BY-SA 2.5
| null |
2011-03-03T15:26:59.480
|
2011-03-03T15:32:02.253
|
2011-03-03T15:32:02.253
|
3411
|
3411
| null |
7835
|
1
|
7874
| null |
5
|
190
|
I have a bunch of points that belong to one of the populations P1, P2, ..., Pn AND to class A or B.
Within each population I'll be doing classification between A and B, and I want to select the features that discriminate best between A and B. Now, my features are also correlated with population membership, but I don't care about that; I only want to know how well the measures discriminate within a population.
So, is there a "proper" way to measure that? Within a population, I can just check how much the metric is correlated with class membership (biserial correlation), but if I get the correlation on the whole population, Simpson's paradox gets in the way and I may end up measuring the wrong thing.
I could just calculate correlations for each population and average those, or something, but that sounds ugly to me (this is for programming real-world applications, not for publishing in a paper, so I don't mind if my method isn't that rigorous as long as the result is OK).
So, what's the proper way to control for population membership?
|
Feature selection for classification, controlling for sub-population
|
CC BY-SA 2.5
| null |
2011-03-03T15:28:29.620
|
2011-03-04T10:29:42.123
| null | null |
1737
|
[
"classification",
"feature-selection",
"simpsons-paradox",
"confounding"
] |
7836
|
1
|
7883
| null |
21
|
15203
|
In regression analysis what's the difference between 'data-generation process' and 'model'?
|
In regression analysis what's the difference between data-generation process and model?
|
CC BY-SA 2.5
| null |
2011-03-03T15:29:29.913
|
2015-11-12T22:32:35.560
|
2011-03-03T19:40:41.407
| null |
3525
|
[
"econometrics"
] |
7837
|
2
| null |
7836
|
5
| null |
The DGP is the true model. The model is what we, using our best skills, have constructed to represent the true state of nature. The DGP is influenced by "noise". Noise can be of many kinds:
- One time interventions
- Level shifts
- Trends
- Changes in Seasonality
- Changes in Model Parameters
- Changes in Variance
If you don't control for these 6 items, then your ability to identify the true DGP is reduced.
| null |
CC BY-SA 2.5
| null |
2011-03-03T15:36:16.930
|
2011-03-03T19:40:02.140
|
2011-03-03T19:40:02.140
| null |
3411
| null |
7838
|
2
| null |
7835
|
1
| null |
A bit more research on Wikipedia indicates that [ANCOVA](http://en.wikipedia.org/wiki/Analysis_of_covariance) may be what I need.
| null |
CC BY-SA 2.5
| null |
2011-03-03T15:40:09.830
|
2011-03-03T15:40:09.830
| null | null |
1737
| null |
7839
|
1
| null | null |
1
|
1893
|
- Using either percent error or percent difference, I want to compare one of my measured values from a set to the mean of the set. After reading the Wikipedia article on percent difference, it's still not quite clear which I should choose.
- What if I wanted to compare an instantaneous slope to the slope of a linear regression: do I use percent error or percent difference?
If it isn't clear yet, my misunderstanding lies in the comparison of one value from a set to some computed aggregated value from that set.
Thanks in advance.
-JP
|
Percent error or percent difference?
|
CC BY-SA 2.5
| null |
2011-03-03T15:43:53.747
|
2011-03-03T22:12:41.800
|
2011-03-03T19:45:16.987
| null |
3527
|
[
"regression",
"sampling",
"mean"
] |
7840
|
2
| null |
7835
|
0
| null |
I think that linear discriminant analysis is a better option:
[http://en.wikipedia.org/wiki/Linear_discriminant_analysis](http://en.wikipedia.org/wiki/Linear_discriminant_analysis)
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-03T15:48:13.713
|
2011-03-03T15:48:13.713
| null | null |
3489
| null |
7844
|
2
| null |
7829
|
9
| null |
I like a 2D plot that shows the clusters and the actual data points so readers can get an idea of the quality of the clustering. If there are more than two factors, you can put the principal components on the axes, as in my example:

The equivalent 3D plots are only good if the viewer can interact with them to get a sense of depth and obscured pieces. Here's a 3D example with the same data.

| null |
CC BY-SA 2.5
| null |
2011-03-03T16:28:00.933
|
2011-03-03T18:01:32.537
|
2011-03-03T18:01:32.537
|
1191
|
1191
| null |
7845
|
1
|
7847
| null |
14
|
6967
|
Let's say I have two distributions I want to compare in detail, i.e. in a way that makes shape, scale and shift easily visible. One good way to do this is to plot a histogram for each distribution, put them on the same X scale, and stack one underneath the other.
When doing this, how should binning be done? Should both histograms use the same bin boundaries even if one distribution is much more dispersed than the other, as in Image 1 below? Should binning be done independently for each histogram before zooming, as in Image 2 below? Is there even a good rule of thumb on this?


|
Best way to put two histograms on same scale?
|
CC BY-SA 2.5
| null |
2011-03-03T16:28:11.247
|
2011-03-08T14:43:57.757
| null | null |
1347
|
[
"data-visualization",
"histogram",
"density-function",
"binning"
] |
7846
|
2
| null |
7784
|
6
| null |
The Pearson correlation coefficient measures linear association. Being based on empirical second central moments, it is influenced by extreme values. Therefore:
- Evidence of nonlinearity in a scatterplot of actual-vs-predicted values would suggest using an alternative such as the rank correlation (Spearman) coefficient;
If the relationship looks monotonic on average (as in the upper row of the illustration), a rank correlation coefficient will be effective;
Otherwise, the relationship is curvilinear (as in some examples from the lower row of the illustration, such as the leftmost or the middle u-shaped one) and likely any measure of correlation will be an inadequate description; using a rank correlation coefficient won't fix this.
- The presence of outlying data in the scatterplot indicates the Pearson correlation coefficient may be overstating the strength of the linear relationship. It might or might not be correct; use it with due caution. The rank correlation coefficient might or might not be better, depending on how trustworthy the outlying values are.

(Image copied from the Wikipedia article on [Pearson product-moment correlation coefficient](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient).)
| null |
CC BY-SA 2.5
| null |
2011-03-03T16:45:19.690
|
2011-03-03T16:45:19.690
| null | null |
919
| null |
7847
|
2
| null |
7845
|
8
| null |
I think you need to use the same bins. Otherwise the mind plays tricks on you. Normal(0,2) looks more dispersed relative to Normal(0,1) in Image #2 than it does in Image #1. Nothing to do with statistics. It just looks like Normal(0,1) went on a "diet".
-Ralph Winters
Midpoint and histogram end points can also alter perception of the dispersion.
Notice that in this applet a maximum bin selection implies a range of >1.5 - ~5 while a minimum bin selection implies a range of <1 - > 5.5
[http://www.stat.sc.edu/~west/javahtml/Histogram.html](http://www.stat.sc.edu/~west/javahtml/Histogram.html)
| null |
CC BY-SA 2.5
| null |
2011-03-03T16:55:22.073
|
2011-03-08T14:43:57.757
|
2011-03-08T14:43:57.757
|
3489
|
3489
| null |
7848
|
2
| null |
2077
|
9
| null |
Logs and reciprocals and other power transformations often yield unexpected results.
As for detrending residuals (i.e. Tukey), this may have some application in some cases but could be dangerous. On the other hand, detecting level shifts and trend changes is systematically available to researchers employing intervention detection methods. Since a level shift is the difference of a time trend, just as a pulse is the difference of a level shift, the methods employed by Ruey Tsay easily cover this problem.
If a series exhibits level shifts (i.e. changes in intercept), the appropriate remedy to make the series stationary is to "demean" the series. Box-Jenkins erred critically by assuming that the remedy for non-stationarity was a differencing operator. So, sometimes differencing is appropriate and other times adjusting for the mean shift(s) is appropriate. In either case, the autocorrelation function can exhibit non-stationarity. This is a symptom of the state of the series (i.e. stationary or non-stationary). In the case of evident non-stationarity the causes can be different: for example, the series truly has a continuously varying mean, or the series has had a temporary change in mean.
The suggested approach was first proposed by Tsay in 1982 and has been added to some software. Researchers should refer to Tsay's Journal of Forecasting article "Outliers, Level Shifts, and Variance Changes in Time Series", Journal of Forecasting, Vol. 7, 1-20 (1988).
As usual, textbooks are slow to incorporate leading-edge technology, but this material is referenced in the Wei book (i.e. Time Series Analysis); DeLurgio and Makridakis cover incorporating interventions, but not how to detect them, as Wei's text does.
| null |
CC BY-SA 2.5
| null |
2011-03-03T17:46:47.633
|
2011-03-03T19:41:27.587
|
2011-03-03T19:41:27.587
|
3411
|
3411
| null |
7849
|
2
| null |
7771
|
2
| null |
- Yes. $D = Z/\sqrt{n}$ for the one-sample test. $D = Z/\sqrt{\frac{n_1 n_2}{n_1 + n_2}}$ for the two-sample test. $D$ should also be the "Most Extreme Differences - Absolute" entry in the output graphic (double-click the table shown in the SPSS output viewer). $Z$ might be labeled "Test Statistic," "Kolmogorov-Smirnov Z," or something else depending on which test and version of SPSS you're using.
- It depends. Mann-Whitney tests for a difference in the central tendencies by comparing average ranks; K-S tests for a difference in distributions by comparing the maximum difference in empirical cumulative distribution functions. If you expect strong shape differences, such as only low and high values in one group but middle values for the other group (this would be atypical for most data), K-S is a better choice. If you expect just a location shift, Mann-Whitney is more powerful.
| null |
CC BY-SA 2.5
| null |
2011-03-03T18:32:32.497
|
2011-03-03T18:32:32.497
| null | null |
3408
| null |
7850
|
1
|
7854
| null |
6
|
26730
|
I am working on a data set with marital status and age. I want to plot the percentage of never-married men versus age. Could you please help me figure out how to do this in R? So far I have created two separate arrays, one for men never married and one for men ever married, and I know how many cases of each I have. What I need to do is count the number of people who were never married at each age and divide it by the total number of never-married people to get a percentage. I hope I was clear.
Thank you
|
Calculating proportions by age in R
|
CC BY-SA 2.5
| null |
2011-03-03T18:48:35.460
|
2017-01-30T15:59:33.653
|
2017-01-30T15:59:33.653
|
919
|
3008
|
[
"r",
"data-visualization"
] |
7851
|
2
| null |
7839
|
0
| null |
I do not necessarily agree with the Wikipedia article; however, of the two, percent error would be the better statistic, since you are essentially treating your average as the "theoretical" value. The article seems to state (though not clearly) that "percent difference" involves two experimental values x1 and x2, which you do not have.
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-03T19:07:46.923
|
2011-03-03T19:07:46.923
| null | null |
3489
| null |
7852
|
2
| null |
7850
|
2
| null |
Probably what you need is `table` or `aggregate`. If you add more details I can give you a more in-depth explanation.
| null |
CC BY-SA 2.5
| null |
2011-03-03T19:29:42.647
|
2011-03-03T19:29:42.647
| null | null |
582
| null |
7853
|
1
|
7856
| null |
40
|
5123
|
I understand two-tailed hypothesis testing. You have $H_0 : \theta = \theta_0$ (vs. $H_1 = \neg H_0 : \theta \ne \theta_0$). The $p$-value is the probability that $\theta$ generates data at least as extreme as what was observed.
I don't understand one-tailed hypothesis testing. Here, $H_0 : \theta\le\theta_0$ (vs. $H_1 = \neg H_0 : \theta > \theta_0$). The definition of p-value shouldn't have changed from above: it should still be the probability that $\theta$ generates data at least as extreme as what was observed. But we don't know $\theta$, only that it's upper-bounded by $\theta_0$.
So instead, I see texts telling us to assume that $\theta = \theta_0$ (not $\theta \le \theta_0$ as per $H_0$) and calculate the probability that this generates data at least as extreme as what was observed, but only on one end. This seems to have nothing to do with the hypotheses, technically.
Now, I understand that this is frequentist hypothesis testing, and that frequentists place no priors on their $\theta$s. But shouldn't that just mean the hypotheses are then impossible to accept or reject, rather than shoehorning the above calculation into the picture?
|
Justification of one-tailed hypothesis testing
|
CC BY-SA 3.0
| null |
2011-03-03T19:35:51.870
|
2014-08-16T19:09:10.283
|
2014-08-16T19:02:38.267
|
44269
|
1720
|
[
"hypothesis-testing"
] |
7854
|
2
| null |
7850
|
10
| null |
Your approach seems way too complicated to me. Let's start with some data:
```
## make up some data
status <- factor(rbinom(1000, 1, 0.3), labels = c("single", "married"))
age <- sample(20:50, 1000, replace = TRUE)
df <- data.frame(status, age)
head(df)
```
Print the first six cases:
```
> head(df)
status age
1 married 21
2 single 50
3 single 43
4 single 28
5 married 28
6 single 40
```
Next, we need to calculate row-wise percentages, even though I doubt that this makes sense (it refers to your statement: "What i need to do is to count number of people that were never married at each age and divide it by the total number of never married people to get a percentage.").
```
## calculate row wise percentages (is that what you are looking for?)
(tab <- prop.table(table(df), 1)*100)
```
The resulting table looks like this:
```
> (tab <- prop.table(table(df), 1)*100)
age
status 20 21 22 23 24 25 26
single 1.857143 3.142857 3.428571 2.285714 2.142857 2.857143 3.428571
married 2.333333 2.333333 5.666667 1.333333 3.333333 5.333333 2.000000
age
status 27 28 29 30 31 32 33
single 2.857143 3.142857 3.428571 3.285714 2.714286 3.714286 3.571429
married 5.000000 4.333333 2.666667 4.000000 1.666667 4.666667 3.000000
age
status 34 35 36 37 38 39 40
single 3.000000 2.857143 5.000000 3.571429 2.857143 3.571429 3.000000
married 3.333333 4.000000 4.000000 2.333333 2.000000 2.000000 2.000000
age
status 41 42 43 44 45 46 47
single 4.285714 3.000000 3.714286 3.857143 2.857143 3.714286 1.714286
married 2.333333 3.333333 2.000000 4.333333 3.666667 5.333333 2.666667
age
status 48 49 50
single 2.857143 3.428571 4.857143
married 2.333333 3.000000 3.666667
```
That is, if you sum up row wise, it gives 100%
```
> sum(tab[1,])
[1] 100
```
Finally, plot it.
```
## plot it
plot(as.numeric(dimnames(tab)$age), tab[1,],
     xlab = "Age", ylab = "Single [%]")
```

| null |
CC BY-SA 2.5
| null |
2011-03-03T19:44:17.863
|
2011-03-03T19:44:17.863
| null | null |
307
| null |
7855
|
2
| null |
7850
|
5
| null |
I did something similar recently. There are quite a few ways to aggregate data like this in R, but the `ddply` function from the package `plyr` is my security blanket, and I turn to it for things like this.
I'm assuming that you have individual records for each person in your dataset, with age, sex, and marital status. There's no need to split up the data into multiple tables for this approach - if you have women in the original table, just leave them in and add sex as a grouping variable.
```
require(plyr)
results.by.age <- ddply(.data = yourdata, .var = c("sex", "age"), .fun = function(x) {
  data.frame(n = nrow(x),
             ever.married.n = nrow(subset(x, marital.status %in%
                                               c("Married", "Divorced"))),
             ever.married.prop = nrow(subset(x, marital.status %in%
                                                  c("Married", "Divorced"))) / nrow(x))
})
```
This splits the data.frame `yourdata` by unique combinations of the variables `sex` and `age`. Then, for each of those chunks (referred to as `x`), it calculates the number of people who belong to that group (`n`), how many of them are married (`ever.married.n`), and what proportion of them are married (`ever.married.prop`). It will then return a data.frame called `results.by.age` with rows like
```
sex age n ever.married.n ever.married.prop
"Male" 25 264 167 0.633
```
This is perhaps not the most elegant or efficient way to do this, but this general pattern has been very helpful for me. One advantage of this is that you can easily and transparently collect whatever statistics you want from the subset, which can be helpful if you want to, say, add a regression line to the plot (weight by `n`) or have both male and female proportions on the same plot and color the points by sex.
---
Here's a revised version using the `summarise()` function from plyr - the effect is the same, but `summarise()` has a couple of key advantages:
- It works within the environment of the current subset - so rather than typing `x$marital.status`, I can just type `marital.status`.
- It lets me refer to other variables I've already created, which makes percentages, transformations and the like much easier - if I've already made `num` and `denom`, the proportion of `num` is just `num / denom`.
```
results.by.age <- ddply(.data = yourdata, .var = c("sex", "age"), .fun = summarise,
                        n = length(marital.status),
                        ever.married = sum(marital.status %in% c("Married", "Divorced")),
                        ever.married.prop = ever.married / n  # Referring to vars I just created
)
```
| null |
CC BY-SA 3.0
| null |
2011-03-03T19:57:08.010
|
2014-07-20T20:40:20.357
|
2014-07-20T20:40:20.357
|
71
|
71
| null |
7856
|
2
| null |
7853
|
39
| null |
That's a thoughtful question. Many texts (perhaps for pedagogical reasons) paper over this issue. What's really going on is that $H_0$ is a composite "hypothesis" in your one-sided situation: it's actually a set of hypotheses, not a single one. It is necessary that for every possible hypothesis in $H_0$, the chance of the test statistic falling in the critical region must be less than or equal to the test size. Moreover, if the test is actually to achieve its nominal size (which is a good thing for achieving high power), then the supremum of these chances (taken over all the null hypotheses) should equal the nominal size. In practice, for simple one-parameter tests of location involving certain "nice" families of distributions, this supremum is attained for the hypothesis with parameter $\theta_0$. Thus, as a practical matter, all computation focuses on this one distribution. But we mustn't forget about the rest of the set $H_0$: that is a crucial distinction between two-sided and one-sided tests (and between "simple" and "composite" tests in general).
This subtly influences the interpretation of results of one-sided tests. When the null is rejected, we can say the evidence points against the true state of nature being any of the distributions in $H_0$. When the null is not rejected, we can only say there exists a distribution in $H_0$ which is "consistent" with the observed data. We are not saying that all distributions in $H_0$ are consistent with the data: far from it! Many of them may yield extremely low likelihoods.
| null |
CC BY-SA 2.5
| null |
2011-03-03T20:05:43.230
|
2011-03-03T20:05:43.230
| null | null |
919
| null |
7857
|
2
| null |
7845
|
0
| null |
So it's a question of maintaining the same bin size or maintaining the same number of bins? I can see arguments for both sides. A work-around would be to [standardize](http://en.wikipedia.org/wiki/Standard_score) the values first. Then you could maintain both.
| null |
CC BY-SA 2.5
| null |
2011-03-03T20:23:45.953
|
2011-03-03T20:23:45.953
| null | null |
1191
| null |
7858
|
2
| null |
7839
|
2
| null |
Well, neither of them; you are approaching this from the wrong side.
Namely, you need some hypothesis -- to what end are you comparing individual measurements to the mean? This is important, since it determines what method you should use; for instance:
- you may have a million normally distributed numbers and 3 outliers that you want to find -- in that case a Z-score may be a good idea;
- you may have some numbers from an unknown distribution and want to find some outliers -- then you should think about some IQR-based alternative to the Z-score or some other nonparametric method;
- you may want to check whether your sample is from a certain distribution -- making a qqplot is one way to go.
Without such a hypothesis, you will just get 15.3%, 93.4%, 7.532% or some other number that will be equally useless regardless of whether it is a percent error or a percent difference.
| null |
CC BY-SA 2.5
| null |
2011-03-03T20:26:17.933
|
2011-03-03T22:12:41.800
|
2011-03-03T22:12:41.800
|
930
| null | null |
7859
|
2
| null |
7835
|
0
| null |
If you're only classifying within a population, don't worry about it. Just make sure you present your results in separate tables (or charts or whatever), one for each class.
If you're trying to classify observations whose class you do not know, then you need to be more careful.
Also, try a randomForest if you know how to use R, because I love random forests.
| null |
CC BY-SA 2.5
| null |
2011-03-03T21:06:23.557
|
2011-03-03T21:06:23.557
| null | null |
2817
| null |
7860
|
1
|
7862
| null |
33
|
12828
|
Is it possible to visualize the output of Principal Component Analysis in ways that give more insight than just summary tables? Is it possible to do it when the number of observations is large, say ~1e4? And is it possible to do it in R [other environments welcome]?
|
Visualizing a million, PCA edition
|
CC BY-SA 2.5
| null |
2011-03-03T22:59:23.757
|
2014-09-09T13:58:14.083
|
2014-09-09T13:58:14.083
|
28666
|
30
|
[
"r",
"data-visualization",
"pca",
"biplot"
] |
7861
|
2
| null |
7853
|
6
| null |
I see the $p$-value as the maximum probability of a type I error. If $\theta \ll \theta_0$, the probability of a type I error may be effectively zero, but so be it. When looking at the test from a minimax perspective, an adversary would never draw from deep in the 'interior' of the null hypothesis anyway, and the power should not be affected. For simple situations (the $t$-test, for example) it is possible to construct a test with a guaranteed maximum type I rate, allowing such one-sided null hypotheses.
| null |
CC BY-SA 3.0
| null |
2011-03-04T00:15:13.317
|
2014-08-16T19:07:38.497
|
2014-08-16T19:07:38.497
|
44269
|
795
| null |
7862
|
2
| null |
7860
|
59
| null |
The [biplot](http://en.wikipedia.org/wiki/Biplot) is a useful tool for visualizing the results of PCA. It allows you to visualize the principal component scores and directions simultaneously. With 10,000 observations you’ll probably run into a problem with over-plotting. Alpha blending could help there.
Here is a PC biplot of the [wine data from the UCI ML repository](http://archive.ics.uci.edu/ml/datasets/Wine):

The points correspond to the PC1 and PC2 scores of each observation.
The arrows represent the correlation of the variables with PC1 and PC2. The white circle indicates the theoretical maximum extent of the arrows. The ellipses are 68% data ellipses for each of the 3 wine varieties in the data.
I have made the [code for generating this plot available here](http://vince.vu/software/#ggbiplot).
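If you just want a quick base-R version without the ellipses and styling, something along these lines should work (using the built-in USArrests data as a stand-in):
```
pca <- prcomp(USArrests, scale. = TRUE)   # centre and scale the variables first
biplot(pca, cex = 0.6)                    # scores as labelled points, loadings as arrows
summary(pca)                              # proportion of variance per component
```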
| null |
CC BY-SA 3.0
| null |
2011-03-04T00:28:50.527
|
2011-10-23T16:46:41.983
|
2011-10-23T16:46:41.983
|
1670
|
1670
| null |
7863
|
2
| null |
7853
|
2
| null |
You would use a one-sided hypothesis test if only results in one direction are supportive of the conclusion you are trying to reach.
Think of this in terms of the question you are asking. Suppose, for example, you want to see whether obesity leads to increased risk of heart attack. You gather your data, which might consist of 10 obese people and 10 non-obese people. Now let's say that, due to unrecorded confounding factors, poor experimental design, or just plain bad luck, you observe that only 2 of the 10 obese people have heart attacks, compared to 8 of the non-obese people.
Now if you were to conduct a 2-sided hypothesis test on this data, you would conclude that there was a statistically significant association (p ~ 0.02) between obesity and heart attack risk. However, the association would be in the opposite direction to that which you were actually expecting to see, hence the test result would be misleading.
(In real life, an experiment that produced such a counterintuitive result could lead to further questions that are interesting in themselves: for example, the data collection process might need to be improved, or there might be previously-unknown risk factors at work, or maybe conventional wisdom is simply mistaken. But these issues aren't really related to the narrow question of what sort of hypothesis test to use.)
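For the record, the p-value quoted above can be checked directly with a two-sided Fisher exact test on the 2x2 table implied by those counts:
```
# rows: obese / non-obese; columns: heart attack / no heart attack
tab <- matrix(c(2, 8, 8, 2), nrow = 2, byrow = TRUE)
fisher.test(tab)$p.value   # about 0.023, two-sided
```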
| null |
CC BY-SA 2.5
| null |
2011-03-04T00:34:19.993
|
2011-03-04T00:34:19.993
| null | null |
1569
| null |
7865
|
1
| null | null |
4
|
834
|
How large should the sample size be for the Ljung-Box statistic to achieve power $\ge 0.5$ when the number of lags tested is 1? How does the power fall (assuming an AR(1) process) with an increasing number of lags? The LB test is more than 40 years old; surely someone must have answered these questions by now.
|
What is the power of the Ljung-Box Test for auto-correlation?
|
CC BY-SA 2.5
| null |
2011-03-04T02:44:35.870
|
2011-03-04T07:43:57.670
|
2011-03-04T07:28:34.923
|
2116
|
3533
|
[
"time-series",
"autocorrelation"
] |
7867
|
2
| null |
7817
|
2
| null |
In economics, this is called "regression discontinuity." For one example, check out
David Card & Carlos Dobkin & Nicole Maestas, 2008. "The Impact of Nearly Universal Insurance Coverage on Health Care Utilization: Evidence from Medicare," American Economic Review, American Economic Association, vol. 98(5), pages 2242-58. [[data](http://www.e-aer.org/data/dec08/20051296_data.zip)]
Here's a summary piece:
Lee, David S., and Thomas Lemieux. 2010. "Regression Discontinuity Designs in Economics." Journal of Economic Literature, 48(2): 281–355. [[article page](http://www.aeaweb.org/articles.php?doi=10.1257/jel.48.2.281)]
Hope that helps.
| null |
CC BY-SA 2.5
| null |
2011-03-04T04:29:51.180
|
2011-03-04T04:29:51.180
| null | null |
401
| null |
7868
|
1
| null | null |
3
|
249
|
I have a database of some 10,000 cell numbers together with a mapping to the geographical areas they are associated with. I believe the first 5-6 digits in a number can point to the geographical area the cell number belongs to.
I want to build the rules for such a mapping so that the geographical area for a new number can be calculated. Can someone point out how I should approach the problem?
Update: I am not sure how many digits are specific to geo. location. It is only my guess that 5-6 digits may be involved.
Also, more than one set of digits can be associated with a particular geographical area. Every operator will have their own digit set to refer to an area, and my database has cell numbers from several operators.
|
Building a classification rule for geographical mapping of cell phone number
|
CC BY-SA 2.5
| null |
2011-03-04T06:28:00.747
|
2014-08-06T10:33:42.837
|
2011-03-07T06:01:22.580
|
1261
|
1261
|
[
"classification"
] |
7869
|
1
| null | null |
4
|
660
|
Background:
I have the following data (an example):
```
headings = {
:heading1 => { :weight => 25, :views => 0, :conversions => 0}
:heading2 => { :weight => 25, :views => 0, :conversions => 0}
:heading3 => { :weight => 25, :views => 0, :conversions => 0}
:heading4 => { :weight => 25, :views => 0, :conversions => 0}
}
total_views = 0
```
I have to serve these headings based on their weights. Every time a heading is served, its `views` is incremented by one and `total_views` is also incremented. And whenever a user clicks on a served heading, its `conversions` is incremented by one. I've written a program (in Ruby) which performs this well.
Question:
I need to `Auto Optimize` best converting heading. Consider the following views and conversions for all headings:
```
heading1: views => 50, conversions => 30
heading2: views => 50, conversions => 10
heading3: views => 50, conversions => 15
heading4: views => 50, conversions => 5
```
I need to automatically increase the weights of heading(s) which is/are converting more and vice versa. The sum of weights will always be 100.
Is there any standard algorithm/formula/technique to do this? There might be some other parameters that need to be predefined before making these calculations, but I haven't been able to figure this out.
|
What statistical technique would be appropriate for optimising the weights?
|
CC BY-SA 2.5
| null |
2011-03-04T06:54:53.380
|
2014-01-02T11:49:27.267
|
2011-03-04T21:35:45.670
|
264
|
3535
|
[
"optimization",
"reinforcement-learning"
] |
7870
|
2
| null |
7860
|
4
| null |
A Wachter plot can help you visualize the eigenvalues of your PCA. It is essentially a Q-Q plot of the eigenvalues against the Marchenko-Pastur distribution. I have an example here: There is one dominant eigenvalue which falls outside the Marchenko-Pastur distribution. The usefulness of this kind of plot depends on your application.
| null |
CC BY-SA 2.5
| null |
2011-03-04T06:56:34.357
|
2011-03-04T06:56:34.357
| null | null |
795
| null |
7871
|
2
| null |
7865
|
4
| null |
A little bit of googling would have answered your question. I am posting this as an answer because [the document](http://lib.stat.cmu.edu/S/Spoetry/Working/ljungbox.pdf) answering the question is number 2 in the google search "power of Ljung-Box test" and this question is number 3.
The answer to your question is in the graphs in Exhibit 14 and Exhibit 15 on page 12 of said document. The power depends on the parameter of the AR(1) process: the smaller it is, the lower the power, which is perfectly natural.
You can replicate these graphs yourself; here is an [example of power analysis](https://stats.stackexchange.com/questions/7319/two-sample-t-test-for-data-maybe-time-series-with-autocorrelation/7331#7331) for AR(1) processes with another test, and I think it is not that hard to adapt it for the Ljung-Box test.
As for the literature, the usual time series books either do not mention the power of the Ljung-Box test at all or mention it only briefly. This means that purely theoretical results are probably not available.
| null |
CC BY-SA 2.5
| null |
2011-03-04T07:43:57.670
|
2011-03-04T07:43:57.670
|
2017-04-13T12:44:39.283
|
-1
|
2116
| null |
7872
|
2
| null |
7860
|
0
| null |
You could also use the psych package.
This contains a plot.factor method, which will plot the different components against one another in the style of a scatterplot matrix.
| null |
CC BY-SA 2.5
| null |
2011-03-04T08:37:56.817
|
2011-03-04T08:37:56.817
| null | null |
656
| null |
7873
|
1
|
7926
| null |
9
|
382
|
Today I have a question about binomial/logistic regression; it's based on an analysis that a group in my department has done and is seeking comments on. I made up the example below to protect their anonymity, but they were keen to see the responses.
Firstly, the analysis began with a simple 1 or 0 binomial response (e.g. survival from one breeding season to the next) and the goal was to model this response as a function of some co-variates.
However, multiple measurements of some co-variates were available for some individuals, but not for others. For example, imagine variable x is a measure of metabolic rate during labour and individuals vary in the number of offspring that they have (e.g. variable x was measured 3 times for individual A, but only once for individual B). This imbalance is not due to the sampling strategy of the researchers per se, but reflects the characteristics of the population they were sampling from; some individuals have more offspring than others.
I should also point out that measuring the binomial 0\1 response between labour events was not possible because the interval between these events was quite short. Again, imagine the species in question has a short breeding season, but can give birth to more than one offspring during the season.
The researchers chose to run a model in which they used the mean of variable x as one covariate and the number of offspring an individual gave birth to as another covariate.
Now, I was not keen on this approach for a number of reasons
1) Taking the average of x means losing information about the within-individual variability of x.
2) The mean is itself a statistic, so by putting it in the model we end up doing statistics on statistics.
3) The number of offspring an individual had is in the model, but it is also used to calculate the mean of variable x, which I think could cause trouble.
So, my question is how would people go about modeling this type of data?
At the moment, I would probably run separate models for individuals that had one offspring, then for individuals that had two offspring etc. Also, I would not use the mean of variable x and just use the raw data for each birth, but I am not convinced this is much better either.
Thank you for your time
(PS: I apologize that it's quite a long question, and I hope that the example is clear)
|
Discussing binomial regression and modeling strategies
|
CC BY-SA 2.5
| null |
2011-03-04T10:09:32.567
|
2011-04-29T01:03:49.467
|
2011-04-29T01:03:49.467
|
3911
|
3136
|
[
"logistic",
"modeling",
"binomial-distribution",
"model-selection"
] |
7874
|
2
| null |
7835
|
2
| null |
Regarding "Calculating one measure across all populations":
If you want to find one set of features so that the discriminative measure is good for each population, then calculating an aggregation of the measures (e.g. avg) is certainly a way to go. Depending on what you want to achieve, you can vary how this aggregation is calculated. Examples:
- All populations are weighted equal => avg
- The quality shall be as good as possible for every population, but is not allowed to drop beneath a certain threshold => min
- The population with greater size shall be weighted more => weighted avg with weight = size of population
Regarding "General Model-Building":
As the hands-on machine learner that I am, I recommend to
- merge the populations and calculate a good feature subset (this is what you want to do, if I understood you correctly)
- merge the populations + add a feature "population", which contains something like the "population id", and then calculate a good feature subset
- calculate a feature subset for each population separately
This will show you whether the division into subpopulations is really an issue or can be skipped anyway. Sometimes correlations between predictors and population do not influence the correlation between predictors and response.
| null |
CC BY-SA 2.5
| null |
2011-03-04T10:29:42.123
|
2011-03-04T10:29:42.123
| null | null |
264
| null |
7875
|
2
| null |
7873
|
2
| null |
I think you could explore a nonlinear mixed model; this should allow you to use the data you have effectively. But if relatively few subjects have multiple measures, it won't matter much and may not work well (I think there could be convergence problems).
If you are using SAS, you could use PROC GLIMMIX; if using R, I think lme4 should be useful.
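As a rough sketch of the lme4 route (the data frame and variable names here are made up: one row per birth, with `surv` the 0/1 outcome, `x` the metabolic rate at that birth and `id` the individual), a random intercept per individual lets you keep the raw per-birth values of x instead of averaging them:
```
library(lme4)
# dat: hypothetical data frame with one row per birth
fit <- glmer(surv ~ x + (1 | id), data = dat, family = binomial)
summary(fit)
```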
| null |
CC BY-SA 2.5
| null |
2011-03-04T10:55:33.093
|
2011-03-04T10:55:33.093
| null | null |
686
| null |
7876
|
1
|
7880
| null |
15
|
8904
|
I have trouble figuring out what exactly the range bars in `plot.stl` mean. I found Gavin's post on this question and read the documentation as well; I understand that they indicate the relative magnitude of the decomposed components, but I am still not entirely sure how they work.
E.g.:
- data: tiny bar, no scale
- seasonal: full bar, with scale ranging from -0.6 to 0.2
- trend: another tiny bar (seems to be equal to data), no scale
- remainder: medium sized bar with scale from -1.5 to 0.5
I do not understand what the basis of the relation is, and why the trend panel has no scale. I tried `stl` and `decompose`, with identical results for multiplicative and additive methods.
|
Interpreting range bars in R's plot.stl?
|
CC BY-SA 2.5
| null |
2011-03-04T11:01:08.517
|
2019-10-08T06:54:18.540
|
2011-03-04T13:31:52.003
|
2116
|
704
|
[
"r",
"time-series"
] |
7877
|
2
| null |
7813
|
14
| null |
mpq has already explained [boosting in plain english](https://stats.stackexchange.com/questions/256/how-boosting-works).
A picture may substitute for a thousand words ... (stolen from [R. Meir and G. Rätsch. An introduction to boosting and leveraging](http://www.boosting.org/papers/MeiRae03.ps.gz))

Image remark: In the 1st iteration the classifier based on all datapoints classifies all points correctly except those in x<0.2/y>0.8 and the point around 0.4/0.55 (see the circles in the second picture). In the second iteration exactly those points gain a higher weight, so that the classifier based on that weighted sample classifies them correctly (2nd iteration, added dashed line). The combined classifiers (i.e. "the combination of the dashed lines") result in the classifier represented by the green line. Now the second classifier produces other misclassifications (x in [0.5,0.6] / y in [0.3,0.4]), which gain more focus in the third iteration, and so on. In every step, the combined classifier gets closer and closer to the best shape (although not continuously). The final classifier (i.e. the combination of all single classifiers) in the 100th iteration classifies all points correctly.
Now it should be clearer how boosting works. Two questions remain about the algorithmic details.
1. How to estimate misclassifications?
In every iteration, only a sample of the training data available in that iteration is used for training the classifier; the rest is used for estimating the error / the misclassifications.
2. How to apply the weights?
You can do this in the following ways:
- You sample the data using an appropriate sampling algorithm which can handle weights (e.g. weighted random sampling or rejection sampling) and build a classification model on that sample. The resulting sample contains misclassified examples with a higher probability than correctly classified ones; hence the model learned on that sample is forced to concentrate on the misclassified part of the data space.
- You use a classification model which is capable of handling such weights implicitly, e.g. decision trees. Decision trees simply count, so instead of using 1 as the counter/increment when an example with certain predictor and class values is presented, they use the specified weight w. If w = 0, the example is practically ignored. As a result, the misclassified examples have more influence on the class probabilities estimated by the model (see the sketch after this list).
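To make the weight mechanics concrete, here is a bare-bones AdaBoost-style loop in R (simulated data, rpart stumps as the weak learner via its case-weights argument; this is a sketch of the general idea, not of any particular boosting implementation):
```
library(rpart)
set.seed(1)
n <- 200
d <- data.frame(x1 = runif(n), x2 = runif(n))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(n, sd = 0.2) > 1, 1, -1))

w <- rep(1 / n, n)                 # start with uniform example weights
M <- 10                            # number of boosting iterations
alphas <- numeric(M)
stumps <- vector("list", M)

for (m in 1:M) {
  stumps[[m]] <- rpart(y ~ x1 + x2, data = d, weights = w,
                       control = rpart.control(maxdepth = 1))
  pred <- predict(stumps[[m]], d, type = "class")
  err  <- sum(w * (pred != d$y)) / sum(w)    # weighted misclassification rate
  alphas[m] <- 0.5 * log((1 - err) / err)    # vote of this weak learner
  w <- w * exp(alphas[m] * (pred != d$y))    # up-weight the misclassified examples
  w <- w / sum(w)                            # renormalise the weights
}
```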
Regarding your document example:
Imagine a certain word separates the classes perfectly, but it only appears in a certain part of the data space (i.e. near the decision boundary). Then the word has no power to separate all documents (hence its expressiveness for the whole dataset is low), but only those near the boundary (where its expressiveness is high). Hence the documents containing this word will be misclassified in the first iteration(s), and therefore gain more focus in later iterations. Restricting the data space to the boundary (by document weighting), your classifier will/should detect the expressiveness of the word and classify the examples in that subspace correctly.
(God help me, I can't think of a more precise example. If the muse later decides to spend some time with me, I will edit my answer).
Note that boosting assumes weak classifiers. E.g. boosting applied together with Naive Bayes will have no significant effect (at least in my experience).
edit: Added some details on the algorithm and explanation for the picture.
| null |
CC BY-SA 2.5
| null |
2011-03-04T11:46:01.650
|
2011-03-08T07:41:14.860
|
2017-04-13T12:44:33.237
|
-1
|
264
| null |
7878
|
2
| null |
7869
|
3
| null |
So, if I understand correctly, you are trying to arrive at a weighting scheme that maximizes the amount of total conversions. I am assuming that you are going to reset the conversion values for each heading after each weight change.
A simple solution then could be something like this:
(I'm not familiar with your syntax so hopefully you understand this pseudocode)
```
heading1: {ConvPView = conversions / views}
heading2: {ConvPView = conversions / views}
heading3: {ConvPView = conversions / views}
heading4: {ConvPView = conversions / views}
AvgeConvPView = (h1.cpv + h2.cpv +h3.cpv +h4.cpv) / 4
heading1: {weight = weight + ((ConvPView - AvgeConvPView) * weightChangeConstant)}
heading2: {etc..}
```
So, basically, you take a heading's conversions-per-view (conversions / views) and then subtract the average conversions-per-view for all headings. Then you multiply that by a weight-change constant and add the result to the previous weight. Or, in brief:
```
Weight = weight + (((conversions / views) - (mean of (conversions / views))) * constant)
```
This will give a negative number if below average and a positive number if above. A larger constant will give you faster modification of weights but will also be more volatile and less able to settle on an optimal weighting distribution.
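In R-flavoured code the same update might look like this (the numbers are the ones from the question; `weight_change` is the tuning constant mentioned above):
```
views         <- c(50, 50, 50, 50)
conversions   <- c(30, 10, 15,  5)
weights       <- c(25, 25, 25, 25)
weight_change <- 100                      # larger = faster but more volatile

cpv     <- conversions / views            # conversions per view
weights <- weights + (cpv - mean(cpv)) * weight_change
weights <- 100 * weights / sum(weights)   # keep the weights summing to 100
round(weights, 1)                         # e.g. 55, 15, 25, 5
```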
| null |
CC BY-SA 2.5
| null |
2011-03-04T12:05:12.543
|
2011-03-04T13:51:43.177
|
2011-03-04T13:51:43.177
|
3443
|
3443
| null |
7879
|
2
| null |
7869
|
12
| null |
Your problem is a standard problem in the area of Reinforcement Learning and can be reformulated as [n-armed bandit problem](http://en.wikipedia.org/wiki/N_armed_bandit), where you have to find the best arms to pull to optimize the overall gain. In this case one arm = one header and gain is equivalent to 1 (if conversion), else 0.
I really recommend to read the book of [Sutton and Barto](http://webdocs.cs.ualberta.ca/~sutton/book/ebook/the-book.html), especially [chapter 2](http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node14.html), where the basic technique to solve the n-armed-bandit problem is explained in detail (how to select, how to increase weights, how to cope with gains changing over time etc.). It is truly a great (and not unnecessarily complicated) read.
Edit: Here are some detailed explanations as an outline how RL works in the case of the OP. Some parts are rather similar to Matt's answer, but not all.
In Reinforcement Learning (RL) we differentiate between "exploration" and "exploitation". During exploration we search the space of available actions (aka headings to show) to find a better or the best one, while in exploitation we just use the actions/headings that we already know are good. To measure how good an action is, we calculate the reward the action gained when used, and hence use this value to estimate the reward of further usage of this action. In our case the expected mean reward of an action/heading is simply the conversion rate.
Some definitions:
$h_i$= Heading i
$Q(h_i)$ = Expected reward function of heading i = $conversion_i / views_i$
How to select an action or heading ?
One option is to greedily select the action with the highest reward / conversion-rate estimate. However, we are not able to find new or better solutions this way (no exploration at all). What we actually want is a balance between exploration and exploitation. So we use a procedure called softmax selection:
$weight(h_i)=\frac{exp(Q(h_i)/\tau)}{\sum_j exp(Q(h_j)/\tau)}$
(see [softmax-selection in the book of sutton](http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node17.html))
Calculate these weights and then select an action randomly with respect to them (see e.g. the function sample() in R).
By setting the parameter $\tau$, one can control the balance between exploration and exploitation. If $\tau = 0$, then we are back to pure greedy action selection; if $\tau$ reaches infinity (or is big enough), all weights become equal and we are reduced to pure random sampling (no exploitation).
How to update the rewards ?
One can use this formula ...
$Q_{k+1}(h)=Q_k(h) + \alpha*(r_{k+1}-Q_k(h))$
(see [see the formula in the book of Sutton](http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node20.html))
where $k$ denotes the $k$-th time the heading $h$ has been shown, and $r_k = 1$ if a conversion happened the $k$-th time header $h$ was shown, or $r_k = 0$ otherwise.
For step size $\alpha$ you can e.g. choose:
- $1/k$ which in your case should lead to convergence (more recent rewards / conversions are weighted less)
- a constant, which will result in no convergence (all the rewards are weighted equal), but allow the system to adapt to changes. E.g. imagine that the concept what the best header is changes over time.
Final remark
How to set the parameters $\tau$ and $\alpha$ is problem dependent. I suggest to run some simulations to see which are the best ones for you. For this and in general, I can only recommend to read chapter 2 of the linked book. It is worth it.
Another remark from practical experience with this formula (if the headers to show are changing constantly) is to not always use the softmax-selection formula. I rather suggest choosing the best header p% of the time a header is to be shown and selecting another header (100-p)% of the time to find a possibly better one. On the other hand, if your goal is to find the best header among a fixed number of headers, it is better to always use softmax selection and set $\alpha=1/k$ and $\tau$ close to zero.
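Here is a minimal R simulation of the scheme above (four headings with made-up 'true' conversion rates, softmax selection with a fixed $\tau$ and a constant step size $\alpha$):
```
set.seed(1)
true_rates <- c(0.30, 0.10, 0.15, 0.05)   # unknown in practice
Q     <- rep(0, 4)                        # conversion-rate estimates per heading
tau   <- 0.1
alpha <- 0.05
shown <- integer(4)

for (t in 1:5000) {
  w <- exp(Q / tau); w <- w / sum(w)      # softmax weights
  h <- sample(1:4, 1, prob = w)           # serve a heading
  r <- rbinom(1, 1, true_rates[h])        # 1 if the user converted
  Q[h] <- Q[h] + alpha * (r - Q[h])       # incremental update of the estimate
  shown[h] <- shown[h] + 1
}
round(Q, 2)   # should be close to true_rates for headings that were served often
shown         # most views should have gone to the best-converting heading
```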
| null |
CC BY-SA 3.0
| null |
2011-03-04T12:05:25.883
|
2014-01-02T11:49:27.267
|
2014-01-02T11:49:27.267
|
264
|
264
| null |
7880
|
2
| null |
7876
|
23
| null |
Here is an example to discuss specifics against:
```
> plot(stl(nottem, "per"))
```

So on the upper panel, we might consider the bar as 1 unit of variation. The bar on the seasonal panel is only slightly larger than that on the data panel, indicating that the seasonal signal is large relative to the variation in the data. In other words, if we shrunk the seasonal panel such that the box became the same size as that in the data panel, the range of variation on the shrunk seasonal panel would be similar to but slightly smaller than that on the data panel.
Now consider the trend panel; the grey box is now much larger than either of the ones on the data or seasonal panel, indicating the variation attributed to the trend is much smaller than the seasonal component and consequently only a small part of the variation in the data series. The variation attributed to the trend is considerably smaller than the stochastic component (the remainders). As such, we can deduce that these data do not exhibit a trend.
Now look at another example:
```
> plot(stl(co2, "per"))
```
which gives

If we look at the relative sizes of the bars on this plot, we note that the trend dominates the data series and consequently the grey bars are of similar size. Of next greatest importance is variation at the seasonal scale, although variation at this scale is a much smaller component of the variation exhibited in the original data. The residuals (remainder) represent only small stochastic fluctuations as the grey bar is very large relative to the other panels.
So the general idea is that if you scaled all the panels such that the grey bars were all the same size, you would be able to determine the relative magnitude of the variations in each of the components and how much of the variation in the original data they contained. But because the plot draws each component on its own scale, we need the bars to give us a relative scale for comparison.
Does this help any?
| null |
CC BY-SA 2.5
| null |
2011-03-04T13:03:27.320
|
2011-03-04T13:03:27.320
| null | null |
1390
| null |
7881
|
2
| null |
7845
|
2
| null |
Another approach would be to plot the different distributions on the same plot and use something like the `alpha` parameter in `ggplot2` to address the overplotting issues. The utility of this method will be dependent on the differences or similarities in your distribution as they will be plotted with the same bins. Another alternative would be to display smoothed density curves for each distribution. Here's an example of these options and the other options discussed in the thread:
```
library(ggplot2)
library(reshape2)  # provides melt()

df <- melt(
  data.frame(
    x = rnorm(1000)
  , y = rnorm(1000, 0, 2)
  )
)

# Overlaid histograms with alpha blending:
ggplot(data = df) + geom_bar(aes(x = value, fill = variable), alpha = 1/2)

# Alternatives -- swap one of these in for the geom above:
# geom_bar(aes(x = value)) + facet_grid(variable ~ .)
# geom_density(aes(x = value, colour = variable))
# stat_qq(aes(sample = value, colour = variable))
```
| null |
CC BY-SA 2.5
| null |
2011-03-04T13:32:57.760
|
2011-03-04T13:32:57.760
| null | null |
696
| null |
7882
|
1
| null | null |
8
|
4519
|
Does anyone know how to construct a confidence interval for predicting a new test value given a trained Relevance Vector Machine (`rvm`) and/or Gaussian Process Regression (`gausspr`) using the [kernlab](http://cran.r-project.org/web/packages/kernlab/index.html) R package?
More specifically, how do I get:
- The standard error/deviation (variance) of a new test point;
- The parameters estimates of posterior distribution of the parameters?
I would appreciate if anyone could point to a document that discuss how to obtain/calculate the above, from the output of function call (`rvm` or `gausspr`).
|
How to compute prediction error from Relevance Vector Machine and Gaussian Process Regression?
|
CC BY-SA 2.5
| null |
2011-03-04T16:32:05.223
|
2012-03-10T16:40:32.110
|
2011-03-04T20:02:12.327
|
930
|
21360
|
[
"r",
"machine-learning",
"bayesian"
] |
7883
|
2
| null |
7836
|
18
| null |
We all have a good sense of what "model" might mean, although its technical definition will vary among disciplines. To compare this to DGP, I began by looking at the top five hits (counting two hits with the same author as one) in Googling "data generation process".
- A paper on how the US Air Force actually creates data in logistics support.
- Abstract of a paper published in Environment and Planning A concerning how "synthetic micropopulations" are created via computer "simulation models."
- A Web page on "synthetic data generation"; that is, simulation "to explore the effects of certain data characteristics on ... models."
- Abstract of a conference paper in data mining, asserting that "data in databases is the outcome of an underlying data generation process (dgp)."
- A book chapter that characterizes the data of interest as "arising from some transformation $W_t$ of an underlying [stochastic] process $V_t$ ... some or all [of which] may be unobserved..."
These links exhibit three slightly different but closely related uses of the term "data generation process." The commonest is in a context of statistical simulation. The others refer to the actual means by which data are created in an ongoing situation (logistics) and to a probability model for an ongoing data creation procedure, intended not to be analyzed directly. In the last case the text is differentiating an unobservable stochastic process, which nevertheless is modeled mathematically, from the actual numbers that will be analyzed.
These suggest two slightly different answers are tenable:
- In the context of simulation or creating "synthetic" data for analysis, the "data generation process" is a way to make data for subsequent study, usually by means of a computer's pseudo random number generator. The analysis will implicitly adopt some model that describes the mathematical properties of this DGP.
- In the context of statistical analysis, we may want to distinguish a real-world phenomenon (the DGP) from the observations that will be analyzed. We have models for both the phenomenon and the observations as well as a model for how the two are connected.
In regression, then, the DGP would normally describe how a set of data $(\mathbf{X}, Y)_i$ = $(X_{1i}, X_{2i}, \dots, X_{pi}, Y_{i})$, $i=1, 2, \ldots, n$ is assumed to be produced. E.g., the $X_{ji}$ could be set by the experimenter or they could be observed in some way and then be presumed to cause or be related to the values of the $Y_i$. The model would describe the possible ways in which these data could be mathematically related; e.g., we might say that each $Y_{i}$ is a random variable with expectation $\mathbf{X} \mathbf{\beta}$ and variance $\sigma^2$ for unknown parameters $\beta$ and $\sigma$.
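In the simulation sense of the term, a toy DGP for such a regression might be written out like this in R (all numbers arbitrary):
```
set.seed(123)
n <- 100
X <- cbind(1, x1 = runif(n), x2 = rnorm(n))        # design: intercept plus two predictors
beta  <- c(2, -1, 0.5)                             # 'true' parameters of the DGP
sigma <- 0.3
y <- as.vector(X %*% beta + rnorm(n, sd = sigma))  # the data generation process itself
coef(lm(y ~ X[, -1]))                              # the model approximately recovers beta
```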
| null |
CC BY-SA 2.5
| null |
2011-03-04T16:48:38.367
|
2011-03-04T16:48:38.367
| null | null |
919
| null |
7884
|
1
|
7889
| null |
14
|
33875
|
Sometimes I need to get only the first row of a data set grouped by an identifier, as when retrieving age and gender when there are multiple observations per individual. What's a fast (or the fastest) way to do this in R? I used aggregate() below and suspect there are better ways. Before posting this question I searched a bit on google, found and tried ddply, and was surprised that it was extremely slow and gave me memory errors on my dataset (400,000 rows x 16 cols, 7,000 unique IDs), whereas the aggregate() version was reasonably fast.
```
(dx <- data.frame(ID = factor(c(1,1,2,2,3,3)), AGE = c(30,30,40,40,35,35), FEM = factor(c(1,1,0,0,1,1))))
# ID AGE FEM
# 1 30 1
# 1 30 1
# 2 40 0
# 2 40 0
# 3 35 1
# 3 35 1
ag <- data.frame(ID=levels(dx$ID))
ag <- merge(ag, aggregate(AGE ~ ID, data=dx, function(x) x[1]), "ID")
ag <- merge(ag, aggregate(FEM ~ ID, data=dx, function(x) x[1]), "ID")
ag
# ID AGE FEM
# 1 30 1
# 2 40 0
# 3 35 1
#same result:
library(plyr)
ddply(.data = dx, .var = c("ID"), .fun = function(x) x[1,])
```
UPDATE: See Chase's answer and Matt Parker's comment for what I consider to be the most elegant approach. See @Matthew Dowle's answer for the fastest solution which uses the `data.table` package.
|
Fast ways in R to get the first row of a data frame grouped by an identifier
|
CC BY-SA 2.5
| null |
2011-03-04T17:17:28.997
|
2015-04-28T08:01:38.093
|
2011-03-14T15:33:12.630
|
3432
|
3432
|
[
"r",
"dataset",
"aggregation",
"plyr"
] |
7885
|
2
| null |
7884
|
4
| null |
You could try
```
agg <- by(dx, dx$ID, FUN = function(x) x[1, ])
# Which returns a list that you can then convert into a data.frame thusly:
do.call(rbind, agg)
```
I have no idea if this will be any faster than `plyr`, though.
| null |
CC BY-SA 2.5
| null |
2011-03-04T17:26:41.523
|
2011-03-04T17:26:41.523
| null | null |
71
| null |
7886
|
2
| null |
7884
|
10
| null |
You can try to use the [data.table](http://cran.r-project.org/web/packages/data.table/index.html) package.
For your particular case, the upside is that it's (insanely) fast. The first time I was introduced to it, I was working on data.frame objects with hundreds of thousands of rows. "Normal" `aggregate` or `ddply` methods were taking ~1-2 mins to complete (this was before Hadley introduced the `idata.frame` mojo into `ddply`). Using `data.table`, the operation was literally done in a matter of seconds.
The downside is that it's so fast because it will re-sort your data.table (it's just like a data.frame) by "key columns" and use a smart searching strategy to find subsets of your data. This will result in a reordering of your data before you collect stats over it.
Given that you will just want the first row of each group -- maybe the reordering will mess up which row is first, which is why it might not be appropriate in your situation.
Anyway, you'll have to judge whether or not `data.table` is appropriate here, but this is how you would use it with the data you've presented:
```
install.packages('data.table') ## if you don't have it already
library(data.table)
dxt <- data.table(dx, key='ID')
dxt[, .SD[1,], by=ID]
ID AGE FEM
[1,] 1 30 1
[2,] 2 40 0
[3,] 3 35 1
```
Update: Matthew Dowle (the main developer of the data.table package) has provided a better/smarter/(extremely) more efficient way to use data.table to solve this problem as one of the answers here ... definitely check that out.
| null |
CC BY-SA 2.5
| null |
2011-03-04T17:33:51.057
|
2011-03-10T04:01:35.727
|
2011-03-10T04:01:35.727
|
227
|
227
| null |
7887
|
2
| null |
7884
|
4
| null |
Try reshape2
```
library(reshape2)
dx <- melt(dx,id=c('ID','FEM'))
dcast(dx,ID+FEM~variable,fun.aggregate=mean)
```
| null |
CC BY-SA 2.5
| null |
2011-03-04T17:42:50.517
|
2011-03-04T17:42:50.517
| null | null |
2817
| null |
7888
|
2
| null |
7884
|
10
| null |
You don't need multiple `merge()` steps, just `aggregate()` both variables of interest:
```
> aggregate(dx[, -1], by = list(ID = dx$ID), head, 1)
ID AGE FEM
1 1 30 1
2 2 40 0
3 3 35 1
> system.time(replicate(1000, aggregate(dx[, -1], by = list(ID = dx$ID),
+ head, 1)))
user system elapsed
2.531 0.007 2.547
> system.time(replicate(1000, {ag <- data.frame(ID=levels(dx$ID))
+ ag <- merge(ag, aggregate(AGE ~ ID, data=dx, function(x) x[1]), "ID")
+ ag <- merge(ag, aggregate(FEM ~ ID, data=dx, function(x) x[1]), "ID")
+ }))
user system elapsed
9.264 0.009 9.301
```
Comparison timings:
1) Matt's solution:
```
> system.time(replicate(1000, {
+ agg <- by(dx, dx$ID, FUN = function(x) x[1, ])
+ # Which returns a list that you can then convert into a data.frame thusly:
+ do.call(rbind, agg)
+ }))
user system elapsed
3.759 0.007 3.785
```
2) Zach's reshape2 solution:
```
> system.time(replicate(1000, {
+ dx <- melt(dx,id=c('ID','FEM'))
+ dcast(dx,ID+FEM~variable,fun.aggregate=mean)
+ }))
user system elapsed
12.804 0.032 13.019
```
3) Steve's data.table solution:
```
> system.time(replicate(1000, {
+ dxt <- data.table(dx, key='ID')
+ dxt[, .SD[1,], by=ID]
+ }))
user system elapsed
5.484 0.020 5.608
> dxt <- data.table(dx, key='ID') ## one time step
> system.time(replicate(1000, {
+ dxt[, .SD[1,], by=ID] ## try this one line on own
+ }))
user system elapsed
3.743 0.006 3.784
```
4) Chase's fast solution using numeric, not factor, `ID`:
```
> dx2 <- within(dx, ID <- as.numeric(ID))
> system.time(replicate(1000, {
+ dy <- dx[order(dx$ID),]
+ dy[ diff(c(0,dy$ID)) != 0, ]
+ }))
user system elapsed
0.663 0.000 0.663
```
and 5) Matt Parker's alternative to Chase's solution, for character or factor `ID`, which is slightly faster than Chase's numeric `ID` one:
```
> system.time(replicate(1000, {
+ dx[c(TRUE, dx$ID[-1] != dx$ID[-length(dx$ID)]), ]
+ }))
user system elapsed
0.513 0.000 0.516
```
| null |
CC BY-SA 2.5
| null |
2011-03-04T17:43:21.490
|
2011-03-04T20:16:34.900
|
2011-03-04T20:16:34.900
|
1390
|
1390
| null |
7889
|
2
| null |
7884
|
11
| null |
Is your ID column really a factor? If it is in fact numeric, I think you can use the `diff` function to your advantage. You could also coerce it to numeric with `as.numeric()`.
```
dx <- data.frame(
ID = sort(sample(1:7000, 400000, TRUE))
, AGE = sample(18:65, 400000, TRUE)
, FEM = sample(0:1, 400000, TRUE)
)
dx[ diff(c(0,dx$ID)) != 0, ]
```
| null |
CC BY-SA 2.5
| null |
2011-03-04T17:56:24.290
|
2011-03-05T14:12:49.310
|
2011-03-05T14:12:49.310
|
696
|
696
| null |
7890
|
1
| null | null |
3
|
1616
|
I have data relative to image intensity, which range from 0 to 255. For some models I need to run, I must transform this intensity data to a different scale, which must have a certain mean (for example, 1.5) and a certain maximum (for this example, 2). I have found information on how to convert data to a scale based on a min and a max, and also to a scale which has a given mean and variance, but these don't apply to my case - I just want to specify the mean and the maximum, deriving other properties from the data distribution.
Any ideas?
Many thanks!
|
How to transform data to have a new mean and maximum value?
|
CC BY-SA 2.5
| null |
2011-03-04T18:16:51.187
|
2011-03-04T21:53:32.937
|
2011-03-04T19:43:24.580
| null |
3374
|
[
"data-transformation"
] |
7891
|
1
| null | null |
3
|
156
|
Given a dataset $D$ and a distance measure, I want to split the dataset into two disjoint subsets $X, Y$ of a specified size (say 80% and 20% of the original size), so that the minimum distance of all pairs $(x, y)$ with $x \in X$ and $y \in Y$ is maximized. I found references to max-margin clustering without the size constraint, however my feeling is that the constraint should make the problem easier - does it? Is there some straightforward way to solve this problem?
|
Max-margin clustering with size constraint
|
CC BY-SA 2.5
| null |
2011-03-04T18:24:38.790
|
2011-03-05T00:03:34.177
|
2011-03-05T00:03:34.177
| null |
3336
|
[
"clustering",
"max-margin"
] |
7892
|
2
| null |
5733
|
2
| null |
(1) It's the CDF you'll need to generate your simulated time-series. To build it, first histogram your price changes/returns. Take a cumulative sum of bin population starting with your left-most populated bin. Normalize your new function by dividing by the total bin population. What you are left with is a CDF. Here is some numpy code that does the trick:
```
import numpy as np

# 'deltas' is your array of price changes/returns, 'numbins' the number of bins
# Make a histogram of price changes
counts, bin_edges = np.histogram(deltas, numbins, normed=False)   # raw bin counts
# Make a CDF of the price changes
n_counts, bin_edges2 = np.histogram(deltas, numbins, normed=True)
cdf = np.cumsum(n_counts)   # cumulative sum; not yet normalized, despite 'normed' above
scale = 1.0 / cdf[-1]
ncdf = scale * cdf          # normalized CDF, ends at 1
```
(2) To generate correlated picks, use a copula. See [this answer](https://stats.stackexchange.com/questions/7515/what-are-some-techniques-for-sampling-two-correlated-random-variables) to my previous question on generating correlated time series.
| null |
CC BY-SA 2.5
| null |
2011-03-04T19:06:54.170
|
2011-03-04T19:06:54.170
|
2017-04-13T12:44:52.277
|
-1
|
2260
| null |
7893
|
2
| null |
7890
|
12
| null |
Let the mean of your sample be $\bar{x}$ and the desired mean be $\bar{y}$. Also, $x_{(n)}$ is your sample maximum and $y_{(n)}$ is the desired maximum. Then, create a transformation of your data $y=ax + b$ such that $\bar{x}$ corresponds to $\bar{y}$ and $x_{(n)}$ corresponds to $y_{(n)}$:
$$\begin{align*}
a &= \frac{y_{(n)}-\bar{y}}{x_{(n)}-\bar{x}} \quad \text{and} \\
b&=\bar{y}-a\bar{x}.
\end{align*}$$
Note that this is just a line through the points $(\bar{x},\bar{y})$ and $(x_{(n)},y_{(n)})$. It requires that $x_{(n)} \neq \bar{x}$, essentially implying that $x$ must have positive variance.
Since this is a linear transformation, deriving properties of the distribution of $y$ using the distribution of $x$ (other than the mean and maximum) should be easy.
Hope that helps.
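A quick R sketch of this transformation, using arbitrary stand-in intensities and the target mean 1.5 and maximum 2 from the question:
```
set.seed(7)
x <- sample(0:255, 1000, replace = TRUE)   # stand-in for the image intensities
y_bar <- 1.5                               # desired mean
y_max <- 2                                 # desired maximum
a <- (y_max - y_bar) / (max(x) - mean(x))
b <- y_bar - a * mean(x)
y <- a * x + b
c(mean(y), max(y))                         # 1.5 and 2, up to floating point
```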
| null |
CC BY-SA 2.5
| null |
2011-03-04T19:09:05.393
|
2011-03-04T21:53:32.937
|
2011-03-04T21:53:32.937
|
919
|
401
| null |
7894
|
1
|
7904
| null |
5
|
889
|
If my response variable has 2 levels, 0 and 1, I can use a [ROC curve](http://rocr.bioinf.mpi-sb.mpg.de/) to assess the accuracy of my predictive model. But what if my response variable has 3 levels, -1, 0, and 1? Is there a way to plot "[ROC surfaces](http://users.dsic.upv.es/grupos/elp/cferri/vus-ecml03-camera-ready4.pdf)" in R?
I started trying to code this up myself, but I couldn't figure out how to calculate and plot the surface.
```
library(rpart)
library(randomForest)
y <- as.factor(c(-1,-1,-1,0,0,0,1,1,1,1,1,0,0,0,1,1,1,-1,0,1,0,1,0,-1,0,-1,0,-1,1,0))
x1 <- rnorm(1:30)
x2 <- rnorm(1:30)
x3 <- rnorm(1:30)
model <- randomForest(y~x1+x2+x3)
model
y1 <- predict(model,type='prob')
DF <- data.frame(y=as.numeric(as.character(y)),'negone'=y1[,'-1'],'zero'=y1[,'0'],'one'=y1[,'1'])
DF$FPrOne <-
DF$FPrNegOne <-
DF$TPR <-
library(rgl)
plot3d(DF$FPrOne,DF$FPrNegOne,DF$TPR,type=1)
```
|
ROC surfaces in R
|
CC BY-SA 2.5
| null |
2011-03-04T19:10:07.560
|
2011-03-04T23:27:51.500
|
2011-03-04T19:17:36.493
|
2817
|
2817
|
[
"r",
"data-visualization",
"roc"
] |
7896
|
2
| null |
4155
|
2
| null |
Perhaps your ARIMA model needs to be tempered with identifiable deterministic structure such as "changes in intercept" or changes in trend. These models would then be classified as robust ARIMA models or Transfer Function Models. If there is a True Limiting Value then the data might suggest that as it grows towards that limit. Good analysis might lead to a forecast that reaches its zenith at that limit. It might help the list if you actually posted one of these troublesome series.
| null |
CC BY-SA 2.5
| null |
2011-03-04T20:38:15.360
|
2011-03-04T20:38:15.360
| null | null |
3382
| null |
7897
|
1
| null | null |
10
|
1319
|
A newbie question here. I am currently performing a nonparametric regression using the np package in R. I have 7 features and using a brute force approach I identified the best 3. But, soon I will have many more than 7 features!
My question is: what are the current best methods for feature selection for nonparametric regression? And which packages, if any, implement these methods? Thank you.
|
Best methods of feature selection for nonparametric regression
|
CC BY-SA 2.5
| null |
2011-03-04T21:17:23.477
|
2011-03-23T18:01:20.987
|
2011-03-04T21:25:50.763
| null |
3550
|
[
"r",
"machine-learning",
"nonparametric",
"feature-selection"
] |
7898
|
2
| null |
7890
|
1
| null |
Transforming based on maximum/minimums is dangerous because outliers can drastically affect the scaling. For example, if you have an image mostly in the 10-50 range, with a couple of noisy points in the 250 range, you will be adjusting the image based on a questionable maximum, hurting the dynamic range.
I think it would be safer to adjust your image based on mean/median and variance, and then cap the maximum (clipping). For example, you can set the variance so the vast majority of data points in your typical image would fall in the range you desire, and cap anything that goes above it.
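For example, a rough sketch of that idea (the spread of 0.25 is an arbitrary choice; pick it so your typical intensities land in the desired range):
```
x <- c(rnorm(1000, mean = 30, sd = 10), 250, 255)  # mostly 10-50, plus a few noisy points
z <- (x - mean(x)) / sd(x)                         # adjust on mean and variance
y <- 1.5 + 0.25 * z                                # aim for the desired mean of 1.5
y <- pmin(y, 2)                                    # cap (clip) at the desired maximum of 2
summary(y)
```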
| null |
CC BY-SA 2.5
| null |
2011-03-04T21:22:36.553
|
2011-03-04T21:22:36.553
| null | null |
2965
| null |
7899
|
1
|
7911
| null |
11
|
4175
|
I need to draw a complex graphic for visual data analysis.
I have 2 variables and a large number of cases (>1000). For example (the number is 100 here, to make the dispersion less "normal"):
```
x <- rnorm(100,mean=95,sd=50)
y <- rnorm(100,mean=35,sd=20)
d <- data.frame(x=x,y=y)
```
1) I need to plot the raw data with point size corresponding to the relative frequency of coinciding points, so `plot(x,y)` is not an option - I need point sizes. What should be done to achieve this?
2) On the same plot I need to plot a 95% confidence interval ellipse and a line representing the change of correlation (I do not know how to name it correctly) - something like this:
```
library(corrgram)
corrgram(d, order=TRUE, lower.panel=panel.ellipse, upper.panel=panel.pts)
```

but with both graphs at one plot.
3) Finally, I need to draw a resulting linear regression model on top of all this:
```
r<-lm(y~x, data=d)
abline(r,col=2,lwd=2)
```
but with error range... something like on QQ-plot:

but for fitting errors, if it is possible.
So the question is:
How to achieve all of this at one graph?
|
Complex regression plot in R
|
CC BY-SA 2.5
| null |
2011-03-04T21:25:00.717
|
2011-03-06T15:50:56.043
|
2011-03-04T22:51:29.580
|
3376
|
3376
|
[
"r",
"data-visualization",
"regression"
] |
7900
|
1
|
7906
| null |
8
|
3118
|
I'm trying to generate random textual data based on regular expressions. I'd like to be able to do this in R, as I know that R does have regex capabilities. Any leads?
This question has come up before in forums ([StackOverflow Post 1](https://stackoverflow.com/questions/274011/random-text-generator-based-on-regex), [StackOverflow Post 2](https://stackoverflow.com/questions/2905229/reverse-regular-expressions-to-generate-data), etc.), but they always mention solutions based on other programming platforms (perl, .NET, ...), not R.
|
Generate random strings based on regular expressions in R
|
CC BY-SA 2.5
| null |
2011-03-04T22:29:50.427
|
2018-06-08T06:19:10.153
|
2017-05-23T12:39:26.523
|
-1
|
3551
|
[
"r",
"data-mining",
"random-generation",
"text-mining"
] |
7901
|
2
| null |
7899
|
2
| null |
For point 1 just use the `cex` parameter on plot to set the point size.
For instance
```
x = rnorm(100)
plot(x, pch=20, cex=abs(x))
```
To have multiple graphs in one plot use `par(mfrow=c(numrows, numcols))` to have an evenly spaced layout or `layout` to make more complex ones.
| null |
CC BY-SA 2.5
| null |
2011-03-04T22:35:44.150
|
2011-03-04T22:35:44.150
| null | null |
582
| null |
7902
|
1
| null | null |
3
|
1676
|
I will simplify our problem in this way. Say there are 100,000 cases in total to examine. Due to the time limitation, we randomly selected 2,000 of them. Then we found 1,000 of them are invalid, so we have only 1,000 valid cases left. Finally we categorize these 1,000 valid cases into 2 categories, A and B; they have 300 and 700 cases respectively.
We want to calculate the confidence interval, in order to check the statistical significance of the results. In other words, if the result shows that there are 30% of the cases in category A, how trustworthy this percentage is when talking about the whole population. We used this website, [http://www.surveysystem.com/sscalc.htm](http://www.surveysystem.com/sscalc.htm), to calculate the confidence interval, so the percentages will be like $30\pm 3.7\%$ and $70\pm 3.7\%$.
So there are two ways of deciding the population and the sample size.
(1) Population is 100,000 (real population) and sample size is 2,000;
(2) Population is 50,000 (estimated) and sample size is 1,000.
I think we should use option (2), because we found that roughly only 50% of the original cases are valid. But this ratio is itself estimated from the 2,000 cases we sampled. How does this estimation affect the confidence of the result?
Would somebody recommend other ways to check the statistical significance of the result in our case?
Thanks for your help!
|
How to calculate confidence interval when only a part of the samples are valid?
|
CC BY-SA 2.5
| null |
2011-03-04T22:58:38.217
|
2011-03-05T01:53:21.813
|
2011-03-04T23:03:13.063
|
919
|
3545
|
[
"confidence-interval",
"sampling",
"sample-size",
"non-response"
] |
7903
|
1
|
7908
| null |
26
|
1983
|
QUESTION:
I have binary data on exam questions (correct/incorrect). Some individuals might have had prior access to a subset of questions and their correct answers. I don’t know who, how many, or which. If there were no cheating, suppose I would model the probability of a correct response for item $i$ as $logit(Pr(p_i = 1 | z)) = \beta_i + z$, where $\beta_i$ represents question difficulty and $z$ is the individual’s latent ability. This is a very simple item response model that can be estimated with functions like ltm’s rasch() in R. In addition to the estimates $\hat{z}_j$ (where $j$ indexes individuals) of the latent variable, I have access to separate estimates $\hat{q}_j$ of the same latent variable which were derived from another dataset in which cheating was not possible.
The goal is to identify individuals who likely cheated and the items they cheated on. What are some approaches you might take? In addition to the raw data, $\hat{\beta}_i$, $\hat{z}_j$, and $\hat{q}_j$ are all available, although the first two will have some bias due to cheating. Ideally, the solution would come in the form of probabilistic clustering/classification, although this is not necessary. Practical ideas are highly welcomed as are formal approaches.
So far, I have compared the correlation of question scores for pairs of individuals with higher vs. lower $\hat{q}_j -\hat{z}_j $ scores (where $\hat{q}_j - \hat{z}_j $ is a rough index of the probability that they cheated). For example, I sorted individuals by $\hat{q}_j - \hat{z}_j $ and then plotted the correlation of successive pairs of individuals’ question scores. I also tried plotting the mean correlation of scores for individuals whose $\hat{q}_j - \hat{z}_j $ values were greater than the $n^{th}$ quantile of $\hat{q}_j - \hat{z}_j $, as a function of $n$. No obvious patterns for either approach.
---
UPDATE:
I ended up combining ideas from @SheldonCooper and the helpful [Freakonomics paper](http://pricetheory.uchicago.edu/levitt/Papers/JacobLevitt2003.pdf) that @whuber pointed me toward. Other ideas/comments/criticisms welcome.
Let $X_{ij}$ be person $j$’s binary score on question $i$. Estimate the item response model $$logit(Pr(X_{ij} = 1 | z_j)) = \beta_i + z_j,$$ where $\beta_i$ is the item’s easiness parameter and $z_j$ is a latent ability variable. (A more complicated model can be substituted; I’m using a 2PL in my application). As I mentioned in my original post, I have estimates $\hat{q_j } $ of the ability variable from a separate dataset $\{y_{ij}\}$ (different items, same persons) on which cheating was not possible. Specifically, $\hat{q_j} $ are empirical Bayes estimates from the same item response model as above.
The probability of the observed score $x_{ij}$, conditional on item easiness and person ability, can be written $$p_{ij} = Pr(X_{ij} = x_{ij} | \hat{\beta_i }, \hat{q_j }) = P_{ij}(\hat{\beta_i }, \hat{q_j })^{x_{ij}} (1 - P_{ij}(\hat{\beta_i }, \hat{q_j }))^{1-x_{ij}},$$ where $P_{ij}(\hat{\beta_i }, \hat{q_j }) = ilogit(\hat{\beta_i} + \hat{q_j})$ is the predicted probability of a correct response, and $ilogit$ is the inverse logit. Then, conditional on item and person characteristics, the joint probability that person $j$ has the observations $x_j$ is $$p_j = \prod_i p_{ij},$$ and similarly, the joint probability that item $i$ has the observations $x_i$ is $$p_i = \prod_j p_{ij}.$$ Persons with the lowest $p_j$ values are those whose observed scores are conditionally least likely -- they are possibly cheaters. Items with the lowest $p_i$ values are those which are conditionally least likely -- they are the possible leaked/shared items. This approach relies on the assumptions that the models are correct and that person $j$’s scores are uncorrelated conditional on person and item characteristics. A violation of the second assumption isn’t problematic though, as long as the degree of correlation does not vary across persons, and the model for $p_{ij}$ could easily be improved (e.g., by adding additional person or item characteristics).
An additional step I tried is to take r% of the least likely persons (i.e. persons with the lowest r% of sorted p_j values), compute the mean distance between their observed scores x_j (which should be correlated for persons with low r, who are possible cheaters), and plot it for r = 0.001, 0.002, ..., 1.000. The mean distance increases for r = 0.001 to r = 0.025, reaches a maximum, and then declines slowly to a minimum at r = 1. Not exactly what I was hoping for.
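For what it's worth, a small sketch of the flagging step described above, assuming `x` is the persons-by-items 0/1 score matrix and `beta_hat`, `q_hat` are the estimated item and person parameters (working on the log scale to avoid underflow):
```
P <- plogis(outer(q_hat, beta_hat, "+"))   # model-implied P(correct) for each person x item
ll <- x * log(P) + (1 - x) * log(1 - P)    # log-likelihood of each observed score
log_p_person <- rowSums(ll)                # low values: candidate cheaters
log_p_item   <- colSums(ll)                # low values: candidate leaked items
head(order(log_p_person))                  # indices of the least likely persons
```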
|
Detecting patterns of cheating on a multi-question exam
|
CC BY-SA 2.5
| null |
2011-03-04T23:19:54.100
|
2011-03-15T15:26:35.357
|
2011-03-15T15:26:35.357
|
3432
|
3432
|
[
"r",
"clustering",
"classification",
"psychometrics"
] |
7904
|
2
| null |
7894
|
6
| null |
One idea for squeezing this into 3D is a surface defined by the true positive rate of each class ([He, Xin, B.D. Gallas, and E.C. Frey. 2009.](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5306177)), so I'll go this way.
Let's say the `g` variable holds the vote proportions:
```
g<-model$votes;
```
First you need two thresholds, one for each of two of the classes (let's say for 1 and -1) -- I'll call them `qP` and `qN`; together they will define the surface:
```
qP<-qN<-seq(0,1,length.out=30)  # a grid of 30 threshold values in [0,1]
```
Now, let's calculate a true positive rate for positive class (and negative, which is mostly the same):
```
tpP<-function(qP,qN,g) sum(g[,"1"]>qP & y=="1")/sum(y=="1")
tpN<-function(qP,qN,g) sum(g[,"-1"]>qN & y=="-1")/sum(y=="-1")
```
The zero class gets the rest of votes:
```
tpZ<-function(qP,qN,g) sum(!(g[,"-1"]>qN) & !(g[,"1"]>qP) & y=="0")/sum(y=="0")
```
Now we just have to apply it over the whole qP x qN grid; the easiest way is to use `outer` on `Vectorize`d functions (note that this is not the most efficient method, though I don't see any obvious optimizations that would not degrade readability):
```
X<-outer(qP,qN,Vectorize(tpP,vec=c("qP","qN")),g)
outer(qP,qN,Vectorize(tpN,vec=c("qP","qN")),g)->Y
Z<-outer(qP,qN,Vectorize(tpZ,vec=c("qP","qN")),g)
```
Now all points (`X[[i]]`,`Y[[i]]`,`Z[[i]]`) should lie on the surface; the matrix format is suitable for running `rgl.surface`:
```
rgl.surface(X,Z,Y)  # Yup, the order is weird
```

Definitely does not seem too good, but this is what one can expect with small random data ;-)
| null |
CC BY-SA 2.5
| null |
2011-03-04T23:22:09.147
|
2011-03-04T23:27:51.500
|
2011-03-04T23:27:51.500
| null | null | null |