Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5683 | 2 | null | 5682 | 3 | null | I have never used it, but I know for new versions of IML, you can call R routines. Maybe start by looking at [http://support.sas.com/rnd/app/studio/statr.pdf](http://support.sas.com/rnd/app/studio/statr.pdf).
| null | CC BY-SA 2.5 | null | 2010-12-22T01:26:07.817 | 2010-12-22T01:26:07.817 | null | null | 2040 | null |
5684 | 2 | null | 5664 | 3 | null | In addition to the existing answers, you may find it useful to read up about test norms, a well established topic in psychology and education.
- Test Norms: Their Use and Interpretation.
- http://psychassessment.com.au/PDF/Chapter%2004.pdf
- Google 'test norms'
| null | CC BY-SA 2.5 | null | 2010-12-22T03:23:21.193 | 2010-12-22T03:23:21.193 | null | null | 183 | null |
5685 | 2 | null | 5681 | 1 | null | It may not make sense to take an unweighted average of all $N$ observations if the subjects are divided among $k$ subpopulations [=groups] - unless the sample sizes in the groups are proportional to the sizes of the subpopulations. Otherwise, what characteristic[s] of interest would an unweighted average estimate?
Your sample design seems to be that of a 1-way MANOVA - so you could finesse the issue of unequal treatment [=group] means by including [vector] group-effect terms in a MANOVA model. Then the logical thing is to test first for no group differences [with the appropriate MANOVA procedure] - and if it appears there are none - the $N$ observations can then be averaged and tested using Hotelling's-$T^2$.
| null | CC BY-SA 2.5 | null | 2010-12-22T04:48:57.077 | 2010-12-22T11:47:26.107 | 2010-12-22T11:47:26.107 | null | 1112 | null |
5686 | 1 | null | null | 54 | 75312 | I would like to know why some languages like R have both NA and NaN. What is the difference, or are they the same? Is NA really needed?
| What is the difference between NaN and NA? | CC BY-SA 2.5 | null | 2010-12-22T06:52:16.957 | 2022-11-24T14:03:06.527 | null | null | 2479 | [
"r"
]
|
5687 | 2 | null | 5686 | 46 | null | >
?is.nan
?is.na
?NA
?NaN
Should answer your question.
But, in short:
NaN means $\frac{0}{0}$ -- it stands for "Not a Number"
NA is generally interpreted as a missing value and has various forms - NA_integer_, NA_real_, etc.
Therefore, NaN $\neq$ NA and there is a need for NaN and NA.
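A quick illustration at the R prompt (the output below is what one would expect a current R session to print):
```
> 0/0
[1] NaN
> NA_real_ + 1
[1] NA
> is.na(NaN)   # NaN also counts as missing for is.na()
[1] TRUE
> is.nan(NA)
[1] FALSE
```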
| null | CC BY-SA 2.5 | null | 2010-12-22T07:08:13.557 | 2010-12-22T07:08:13.557 | 2020-06-11T14:32:37.003 | -1 | 1307 | null |
5688 | 2 | null | 5686 | 12 | null | NA is for missing data. NaN, as J.M. said, is for arithmetic purposes. NaN is usually the product of some arithmetic operation, such as `0/0`. NA is usually declared in advance, or is the product of an operation when you try to access something that is not there:
```
> a <- c(1,2)
> a[3]
[1] NA
```
| null | CC BY-SA 2.5 | null | 2010-12-22T07:09:16.243 | 2010-12-22T07:09:16.243 | null | null | 2116 | null |
5690 | 1 | 9097 | null | 12 | 7815 | I'm looking into median survival using Kaplan-Meier in different states for a type of cancer. There are quite big differences between the states. How can I compare the median survival between all the states and determine which ones are significantly different from the mean median survival all across the country?
| How to compare median survival between groups? | CC BY-SA 2.5 | null | 2010-12-22T09:55:50.503 | 2012-04-29T18:11:51.660 | 2011-04-29T01:09:38.913 | 3911 | 1291 | [
"multiple-comparisons",
"survival"
]
|
5691 | 1 | 5692 | null | 5 | 4201 | I wish to create a confidence interval for a statistic calculated on a table (let's say the p.value of chisq.test).
For this, I might draw a bootstrap sample of the same size from a table with the proportions of the table I have. My questions are:
- Is this a correct procedure to use, or is there some catch here that I might be missing?
- How can a table be "sampled" in R? (my first instinct is to "open up" the table using melt, then expand the table to rows, sample them, and fold the results back into a table. Is there a better way?)
p.s: I am aware of the "simulate.p.value" switch in chisq.test - my question is more general - does it make sense to use the bootstrap in such a way?
Sample table if someone wishes to show something:
```
(x <- matrix(c(12, 5, 7, 7), ncol = 2))
prop.table(x)
# undoing a table
as.data.frame(as.table(x)) # but how do we open up the rows now? - **update**: this was answered in the comments thanks to chl.
```
| Is it o.k. to bootstrap sample of a table from its proportions - and how to do so (in R)? | CC BY-SA 3.0 | null | 2010-12-22T11:13:56.733 | 2016-03-28T23:09:59.270 | 2016-03-28T23:09:59.270 | 2910 | 253 | [
"r",
"chi-squared-test",
"bootstrap",
"contingency-tables"
]
|
5692 | 2 | null | 5691 | 4 | null | Let me see if I understand you correctly:
You have a contingency table, say $M$ by $N$, and you calculate a statistic based on this table. You want a CI for this statistic and you don't have a theoretical CI (or maybe you don't want to use it for some reason).
Generally, if you have a contingency table, the columns represent the possible levels of the response variable (here: $Y$ will have $N$ levels) and the rows represent the levels of the explanatory variables (here: $X$ will have $M$ levels).
Now if you want to bootstrap, you need to make sure that you couple $Y$s and $X$s together. That is, shuffle $(X_i, Y_i)$ together for $i$ from $1$ to total number of observations (note this will be greater than $MN$ unless all the entries in the table are $1$).
Basically, what you can do is translate your table to a "hypothetical" data set by using row numbers as levels for your $X$s and the column number as the value for the corresponding $Y$s.
To make it mathematically clear:
Let's say your $(i, j)^{th}$ entry in the table is $a$; then you need to create $a$ data points for which the value of $Y$ is $j$ (remember, the column number is the $Y$ value) and the $X$ value is $i$. Do the same for $i = 1 \ldots m$ and $j = 1 \ldots n$. Now you have a "hypothetical" data set. Bootstrap from this data set as you would do in linear regression, say $B$ times. Each of the newly generated data sets is a bootstrap sample. For each bootstrapped sample calculate the corresponding contingency table (say for $Y=1$ how many $X$s=1, how many $X$s=2, ...; this will give you the (1,1), (2,1), ... entries for your table). Calculate the statistic for each of the $B$ tables and you have the desired bootstrap distribution.
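As a rough sketch of that recipe in R (the table is the 2x2 example from the question, the number of replicates B is arbitrary, and the chi-squared statistic merely stands in for whatever statistic you care about):
```
x    <- matrix(c(12, 5, 7, 7), ncol = 2)                  # example table from the question
long <- as.data.frame(as.table(x))
dat  <- long[rep(seq_len(nrow(long)), long$Freq), 1:2]    # one row per observation

B <- 1000
stats <- replicate(B, {
  idx <- sample(nrow(dat), replace = TRUE)                # resample (X, Y) pairs together
  tab <- table(dat[idx, 1], dat[idx, 2])                  # rebuild the bootstrap table
  suppressWarnings(chisq.test(tab)$statistic)             # statistic for this bootstrap sample
})
quantile(stats, c(0.025, 0.975))                          # bootstrap interval for the statistic
```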
I don't know if this is what you meant by bootstrapping the table. Please let me know if you thought otherwise.
HTH
S.
| null | CC BY-SA 2.5 | null | 2010-12-22T11:46:08.653 | 2010-12-22T11:46:08.653 | null | null | 1307 | null |
5693 | 2 | null | 4687 | 6 | null | First of all, your usage of the term "prior probability" seems to be wrong. For any node N with discrete values $n_i$, the probability that a certain value of N occurs a priori is $p(N=n_i)$. If a node has no parents, one is interested in calculating this probability. But if a node has parents P, one is interested in calculating the conditional probability, i.e. the probability that a certain value of N occurs given its parents. This is $p(N=n_i|P)$.
Regarding your actual question: how to calculate the conditional probabilities of a child node if the parents are continuous and the child node is a) continuous or b) discrete. I will explain one approach using a concrete example, but I hope the general idea will be clear.
Consider this network:

where Subsidy and Buys are discrete nodes while Harvest and Cost are continuous.
a) p(Cost|Subsidy, Harvest)
Two options:
- Discretize Harvest and treat it as discrete => maybe information loss
- Model a mapping from the current value of harvest to a parameter describing the distribution of Cost.
Details to 2.:
Let's assume Cost can be modeled using a normal distribution. In this case it is common to fix the variance and map the value of Harvest linearly to the mean of the Gaussian. The parent Subsidy (binary) only adds the constraint that a separate distribution is created for subsidy=true and subsidy=false.
Result:
$p(Cost=c|Harvest=h,Subsidy=true)=N(\alpha_1 * h + \beta_1,\sigma_1^2)(c)$
$p(Cost=c|Harvest=h,Subsidy=false)=N(\alpha_2 * h + \beta_2,\sigma_2^2)(c)$
for some $\alpha$s and $\beta$s.
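For concreteness, here is a small R sketch of sampling from such a conditional linear Gaussian node; the $\alpha$, $\beta$ and $\sigma$ values are invented for illustration only:
```
# Hypothetical parameters: Cost falls linearly with Harvest; Subsidy shifts the line
simulate_cost <- function(harvest, subsidy) {
  if (subsidy) rnorm(1, mean = -0.5 * harvest + 10, sd = 1.0)   # alpha1, beta1, sigma1
  else         rnorm(1, mean = -0.5 * harvest + 12, sd = 1.5)   # alpha2, beta2, sigma2
}
simulate_cost(harvest = 7, subsidy = TRUE)
```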
b) p(Buys|Cost)
In this case one needs to map the probability of occurrence of certain costs to the probability of Buys=true (p(Buys=false) = 1 - p(Buys=true)). (Note that this is the same task as in logistic regression.) One approach, if the parent has a normal distribution, is to calculate the integral from 0 to x of a standard normal distribution, where x is the z-transformed value of the parent. In our example:
$p(Buys=true|Cost=c) = \mathrm{integral}(0,\frac{-c+\mu}{\sigma})$ with $\mu$ = mean and $\sigma$ = standard deviation of the Cost distribution. The argument is $-c+\mu$ (i.e. $\mu-c$) rather than $c-\mu$, because an observation (knowledge extracted from the data!) is that the lower the cost, the more probable a buy.
In the case of a non-binary discrete child-node, one approach is to transform the one multi-value-problem into multiple binary problems.
Example-Source / Further reading:
"Artificial Intelligence: A modern approach" by Russell and Norvig
| null | CC BY-SA 2.5 | null | 2010-12-22T12:52:08.763 | 2010-12-22T13:04:42.997 | 2010-12-22T13:04:42.997 | 264 | 264 | null |
5694 | 2 | null | 5691 | 1 | null | I once asked similar [question on stackoverflow](https://stackoverflow.com/questions/2281561/random-sample-from-given-bivariate-discrete-distribution). Basically you sample from table the same way you sample vector.
| null | CC BY-SA 2.5 | null | 2010-12-22T13:05:52.840 | 2010-12-22T13:05:52.840 | 2017-05-23T12:39:26.523 | -1 | 2116 | null |
5695 | 2 | null | 2891 | 2 | null | If these are exclusive behaviours - they must either turn left or right and can't go straight on, stop or anything else - then you have data that you might assume are binomially distributed: in the example you give there are then 1042 left turns (the 'ones') in 1084 'runs' which you are implicitly assuming to be independent observations.
Testing
You could either use a binomial distribution to test whether this is consistent with a particular true proportion of left turns, say 90%. In R, the test against 90% is
`binom.test(1042, n=1084, p=.90)`
and is rejected (and `chisq.test` agrees). You probably have covariates though and perhaps non-independent draws via clustering etc. For these, switch to a binomial logistic regression framework.
Plotting
For these data you should probably plot the empirical logits: here log(1042/42) = 3.211228, or their posterior means (see below). Visually, an equally sized increment up or down on this scale represents a proportional increase or decrease in the counts. A symmetrical representation in proportional rather than absolute terms is usually what you want for this sort of data.
You can get quite a well-behaved confidence interval for the empirical logit via a Bayesian argument: assuming an invariant 'Jeffreys' prior of $\text{Beta}(0.5,0.5)$, the posterior distribution of the empirical logit is approximately Normal with mean $\mu = \log{(\text{left}/\text{right})}$ and variance $\sigma_\mu^2 = (\text{left} + 0.5)^{-1} + (\text{right} + 0.5)^{-1}$.
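In R that interval might be computed along the following lines, using the counts from this example (purely illustrative):
```
left  <- 1042                                   # left turns
right <- 42                                     # right turns
mu    <- log(left / right)                      # empirical logit, about 3.21
sigma <- sqrt(1/(left + 0.5) + 1/(right + 0.5)) # posterior sd of the logit
mu + c(-1.96, 1.96) * sigma                     # approximate 95% interval, logit scale
plogis(mu + c(-1.96, 1.96) * sigma)             # back on the proportion scale
```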
| null | CC BY-SA 2.5 | null | 2010-12-22T14:21:26.737 | 2010-12-22T14:21:26.737 | null | null | 1739 | null |
5696 | 1 | null | null | 14 | 1271 | I have multiple independent coders who are trying to identify events in a time series -- in this case, watching video of face-to-face conversation and looking for particular nonverbal behaviors (e.g., head nods) and coding the time and category of each event. This data could reasonably be treated as a discrete-time series with a high sampling rate (30 frames/second) or as a continuous-time series, whichever is easier to work with.
I'd like to compute some measure of inter-rater reliability, but I expect there to be some uncertainty in when events occurred; that is, I expect that one coder might, for example, code that a particular movement began a quarter second later than other coders thought it started. These are rare events, if that helps; typically at least several seconds (hundreds of video frames) between events.
Is there a good way of assessing inter-rater reliability that looks at both of these kinds of agreement and disagreement: (1) do raters agree on what event occurred (if any), and (2) do they agree on when it occurred? The second is important to me because I'm interested in looking at the timing of these events relative to other things happening in the conversation, like what people are saying.
Standard practice in my field seems to be to divide things up into time slices, say 1/4 of a second or so, aggregate the events each coder reported per time slice, then compute Cohen's kappa or some similar measure. But the choice of slice duration is ad-hoc, and I don't get a good idea of uncertainty in time of events.
Best thought I have so far is that I could compute some kind of reliability curve; something like kappa as a function of the size of the window within which I consider two events as being coded at the same time. I'm not really sure where to go from there, though...
| Interrater reliability for events in a time series with uncertainty about event time | CC BY-SA 2.5 | null | 2010-12-22T15:41:17.387 | 2013-07-26T14:41:13.317 | 2010-12-22T15:59:33.153 | 930 | null | [
"time-series",
"reliability",
"agreement-statistics"
]
|
5698 | 1 | 5699 | null | 3 | 5794 | Please forgive me if this is not the right Stack Exchange (and for inventing terms).
For discrete random variables X and Y, the mutual information of X and Y can be defined as follows:
$$I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log{ \left( \frac{p(x,y)}{p_1(x)\,p_2(y)} \right) }$$
I will define the mutual information of a "cell" $x_0$ to be:
$$CI(x_0,Y) = \sum_{y \in Y} p(x_0,y) \log{ \left( \frac{p(x_0,y)}{p_1(x_0)\,p_2(y)} \right) }$$
I'm not sure if this quantity goes by another name. Essentially I'm restricting focus to a single state of variable X (and then the full MI can be calculated by summing all the cell MIs).
My question: is it guaranteed that $CI(x_0,Y) \ge 0$? We know $I(X;Y)\ge0$ and we know that the pointwise mutual information can be negative. I feel like CI should be nonnegative and that I might be missing some obvious proof.
| Can the mutual information of a "cell" be negative? | CC BY-SA 2.5 | 0 | 2010-12-22T16:16:54.973 | 2016-02-25T12:23:41.740 | 2010-12-22T16:51:18.393 | 2485 | 2485 | [
"mutual-information",
"information-theory",
"probability"
]
|
5699 | 2 | null | 5698 | 1 | null | Answered on [Theoretical Computer Science](https://cstheory.stackexchange.com/questions/3939/can-the-mutual-information-of-a-cell-be-negative/3943#3943). CI cannot be negative.
The gist is that $CI=p(x_0) KL(p (y|x_0), p(y))$, and the Kullback-Leibler divergence is non-negative.
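A quick numerical spot-check of the identity (the joint distribution below is made up):
```
p  <- matrix(c(0.1, 0.2, 0.3, 0.4), nrow = 2)   # p(x, y): rows index x, columns index y
px <- rowSums(p); py <- colSums(p)
ci <- sum(p[1, ] * log(p[1, ] / (px[1] * py)))  # CI(x_0 = first row, Y)
kl <- sum((p[1, ] / px[1]) * log((p[1, ] / px[1]) / py))
c(CI = ci, p_x0_times_KL = px[1] * kl)          # both about 0.0025, and non-negative
```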
| null | CC BY-SA 3.0 | null | 2010-12-22T17:16:24.587 | 2016-02-25T12:23:41.740 | 2017-04-13T12:32:33.667 | -1 | 2485 | null |
5700 | 1 | 444148 | null | 13 | 12085 | Greetings,
I'm performing research that will help determine the size of observed space and the time elapsed since the big bang. Hopefully you can help!
I have data conforming to a piecewise linear function on which I want to perform two linear regressions. There is a point at which the slope and intercept change, and I need to (write a program to) find this point.
Thoughts?
| Finding the change point in data from a piecewise linear function | CC BY-SA 2.5 | null | 2010-12-22T17:44:21.793 | 2023-05-04T11:52:10.127 | 2011-02-28T12:34:24.513 | 2116 | 2488 | [
"regression",
"change-point"
]
|
5701 | 2 | null | 5700 | 6 | null | If the number of points is not too big, you may try all possibilities. Let's assume that the points are $X_i=(x_i,y_i)$ where $i=1,..,N$. Then you may loop with $j$ from $2$ to $N-2$ and fit two lines, one to $\{X_1,...,X_j\}$ and one to $\{X_{(j+1)},...,X_N\}$. Finally, you pick the $j$ for which the total of the two residual sums of squares is minimal.
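A minimal R sketch of this brute-force search (illustrative, not optimized; it assumes at least two points per segment):
```
breakpoint_fit <- function(x, y) {
  n   <- length(x)
  rss <- sapply(2:(n - 2), function(j) {
    f1 <- lm(y[1:j] ~ x[1:j])               # line for the first j points
    f2 <- lm(y[(j + 1):n] ~ x[(j + 1):n])   # line for the remaining points
    sum(resid(f1)^2) + sum(resid(f2)^2)     # combined residual sum of squares
  })
  (2:(n - 2))[which.min(rss)]               # index j after which the slope changes
}
```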
| null | CC BY-SA 2.5 | null | 2010-12-22T17:56:04.617 | 2010-12-22T17:56:04.617 | null | null | null | null |
5702 | 2 | null | 5700 | 8 | null | R package [strucchange](http://cran.r-project.org/web/packages/strucchange/index.html) might help you. Look at the vignette, it has a nice overview how to solve similar problems.
| null | CC BY-SA 2.5 | null | 2010-12-22T17:56:27.893 | 2010-12-22T17:56:27.893 | null | null | 2116 | null |
5703 | 1 | null | null | 8 | 589 | Given $n$-vectors $x, y_1, y_2$ such that the Spearman correlation coefficient of $x$ and $y_i$ is $\rho_i = \rho(x,y_i)$, are there known bounds on the Spearman coefficient of $x$ with $y_1 + y_2$, in terms of the $\rho_i$ (and $n$, presumably)? That is, can one find (non-trivial) functions $l(\rho_1,\rho_2,n), u(\rho_1,\rho_2,n)$ such that
$$l(\rho_1,\rho_2,n) \le \rho(x,y_1+y_2) \le u(\rho_1,\rho_2,n)$$
edit: per @whuber's example in the comment, it appears that in the general case, only the trivial bounds $l = -1, u = 1$ can be made. Thus, I would like to further impose the constraint:
- $y_1, y_2$ are permutations of the integers $1 \ldots n$.
| Are there bounds on the Spearman correlation of a sum of two variables? | CC BY-SA 2.5 | null | 2010-12-22T17:57:02.523 | 2010-12-23T20:53:49.907 | 2010-12-22T21:15:59.063 | 795 | 795 | [
"correlation",
"spearman-rho",
"bounds"
]
|
5705 | 2 | null | 4687 | 2 | null | Consider two simple cases,
1) a real valued variable X is the parent of another real valued variable Y
2) a real valued variable X is the parent of a discrete valued variable Y
Assume that the Bayes net is a directed graph X -> Y. The Bayes net is fully specified, in both cases, when P(X) and P(Y | X) are specified. Or, strictly speaking when P(X | a) and P(Y | X, b) are specified where a and b are the parameters governing the marginal distribution of X and the conditional distribution of Y given X, respectively.
Parametric Assumptions
If we are happy for some reason to assume that P(X | a) is Normal, then a contains its mean and its variance.
Consider case 1. Perhaps we are happy to assume that P(Y | X, b) is also Normal and that the conditional relationship is linear in X. Then b contains regression coefficients, as we normally understand them. That means here: an intercept, a slope parameter, and a conditional variance.
Now consider case 2. Perhaps we are willing to assume that P(Y | X, b) is Binomial and that the conditional relationship is linear in X. Then b contains the coefficients from a logistic regression model: here that means an intercept and a slope parameter.
In case 1 if we observe Y and try to infer X then we simply use Bayes theorem with the parametric assumptions noted above. The result is rather well known: the (posterior) mean of X given Y is a weighted average of the prior mean of X (gotten from a) and the observed value of Y, where the weights are a function of the standard deviations of X (from a) and Y given X (from b). More certainty about X pulls the posterior mean towards its mean.
In case 2 there may be no handy closed form for P(X | Y) so we might proceed by drawing a sample from P(X), making use of your assumptions about P(Y | X) and normalising the result. That is then a sample from P(X | Y).
In both cases, all we are doing is computing Bayes theorem, for which we need some parametric assumptions about the relevant distributions.
All this gets more complicated when you want to learn a and b, rather than assuming them, but that is, I think, beyond your question.
Sampling
When thinking about continuous parents it may be helpful to think about generating data from the network. To make one data point from the network in case 1 (with the assumptions as above) we would first sample from P(X) - a Normal distribution - then take that value and generate a conditional distribution over Y by plugging the value we generated into the regression model of P(Y | X) as an independent variable. That conditional distribution is also Normal (by assumption), so we then sample a Y value from that distribution. We now have a new data point. The same process applies when P(Y | X) is a logistic regression model. The model gives us a probability of 1 vs. 0 given the value of X we sampled. We then sample from that distribution - essentially tossing a coin biased according to the model probabilities - and we have a new data point. When we think about inference in a Bayes net, we are, conceptually speaking, just reversing this process.
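As an illustration of the case 2 generation step in R (the intercept and slope of the logistic model are invented for the example):
```
x <- rnorm(1, mean = 0, sd = 1)        # draw X from its (assumed Normal) marginal
p <- plogis(-0.5 + 1.2 * x)            # logistic regression gives P(Y = 1 | X)
y <- rbinom(1, size = 1, prob = p)     # toss the biased coin
c(X = x, Y = y)                        # one new data point
```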
Perhaps that helps?
| null | CC BY-SA 2.5 | null | 2010-12-22T18:30:25.820 | 2010-12-22T18:30:25.820 | null | null | 1739 | null |
5706 | 1 | null | null | 3 | 199 | Suppose I have a fleet of two-car trains riding around, and that each car is equipped with a data recording device. Unfortunately, some of the recording devices aren't working. I don't know either the exact size of the fleet or the percentage of failed recording devices. I'd like to make a reasonable guess about how many cars I'm missing data from.
In particular:
- Total fleet size $S$ is unknown.
- Failure rate (failed units / total cars) $F$ is unknown.
- I have data from both cars from $S_2$ trains and from exactly one car from $S_1$ trains.
- Therefore, I know that there are at least $S_1$ failed units. What I don't know is $S_0$, the number of trains with failed units on both cars.
- Let's assume that the distribution of cars with failed units is random, and whether one car has a failed unit is independent of whether its mate does.
Does the following make sense for a first-order approximation?
- Guess that the failure rate $F'$ is equal to the proportion of missing cars that I know about to total cars that I know about: $F' = S_1 / (2*(S_2 + S_1))$
- Assume that the likelihood of a train having two failed units is $F'^2$.
- Therefore $S_0 = F'^2 * (S_0 + S_1 + S_2)$
- Therefore $S_0 = (S_2 + S_1) * F'^2 / (1 - F'^2)$
| Extrapolating the amount of data missing from the amount of data partially missing | CC BY-SA 2.5 | null | 2010-12-22T18:52:56.557 | 2010-12-23T14:41:47.153 | 2010-12-23T14:41:47.153 | null | null | [
"data-mining",
"missing-data",
"transportation"
]
|
5707 | 2 | null | 5700 | 5 | null | This is an (offline) changepoint detection problem. Our [previous discussion](https://stats.stackexchange.com/questions/2432/loess-that-allows-discontinuities/2445#2445) provides references to journal articles and R code. Look first at the [Barry and Hartigan](http://www.jstor.org/pss/2290726) "product partition model," because it handles changes in slope and has efficient implementations.
| null | CC BY-SA 2.5 | null | 2010-12-22T19:16:53.440 | 2010-12-22T19:16:53.440 | 2017-04-13T12:44:20.943 | -1 | 919 | null |
5708 | 2 | null | 5664 | 1 | null | I'd like to add another cautionary note, and a suggestion. When asked for a 1-5 rating, I usually think up some scale, like:
- worst ever
- real bad
- ok
- great
- awesome
If your raters did similarly, taking an average is somewhat questionable; the difference between "worst ever" and "real bad" may be larger than the difference between "great" and "awesome". In jargon, your data may be somewhere between ordinal and interval (see [http://en.wikipedia.org/wiki/Level_of_measurement](http://en.wikipedia.org/wiki/Level_of_measurement)).
For this reason, I'd suggest that you give your speakers a histogram of their ratings as part of their feedback. And maybe even rank speakers by percent of their ratings above, say, 3, rather than by their mean rating.
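For example, in R (the ratings are made up for one hypothetical speaker):
```
ratings <- c(2, 4, 5, 3, 4, 5, 1, 4)     # hypothetical ratings for one speaker
table(factor(ratings, levels = 1:5))     # the histogram to report back
mean(ratings > 3)                        # share of ratings above 3, for ranking
```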
| null | CC BY-SA 2.5 | null | 2010-12-22T19:37:14.630 | 2010-12-22T19:37:14.630 | null | null | 2489 | null |
5709 | 2 | null | 5706 | 6 | null | This is a quick partial response to outline some options and correct some errors.
You are implicitly seeking a [method of moments estimator](http://en.wikipedia.org/wiki/Method_of_moments_%28statistics%29). Under your assumptions, letting $f$ be the failure rate and $n$ be the fleet size, the expectations of the $S_i$ (which are governed by a multinomial distribution) are
$$\eqalign{
\mathbb{E}_{f;n}[S_0] = &f^2 n \cr
\mathbb{E}_{f;n}[S_1] = &2 f (1-f) n \cr
\mathbb{E}_{f;n}[S_2] = &(1-f)^2 n.
}$$
From the algebraic relation
$$\mathbb{E}_{f;n}[S_0] = \frac{\mathbb{E}_{f;n}[S_1]^2}{4 \mathbb{E}_{f;n}[S_2]}$$
(which implies the failure rate should be about twice your estimate) and assuming $S_2 \ne 0$ you can derive the method of moments estimator
$$\hat{S}_0 = \frac{S_1^2}{4 S_2}.$$
This is likely to be biased, especially if $S_2$ is small, which impels us to consider other estimators.
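For a sense of scale, here is a quick numeric illustration of the method of moments estimate above (the counts are made up):
```
S1 <- 40; S2 <- 150          # trains with exactly one / with both recorders reporting
S0_hat <- S1^2 / (4 * S2)    # estimated number of trains with both recorders failed
S0_hat                       # about 2.7
```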
More generally, this problem can be thought of as looking for a "good" estimator for $S_0$ of the form $\hat{S}_0 = t(S_1, S_2)$ in an experiment in which the outcomes are the sum of $n$ Binomial($1-f$, $2$) variables and $S_1$ and $S_2$ are the counts of the single and double "successes," respectively. To this end you need to supply a [loss function](http://en.wikipedia.org/wiki/Loss_function) $\Lambda(s,t)$ and analyze the statistical risk $r$, which is the expected loss
$$r_t(f,n) = \mathbb{E}_{f;n}[\Lambda(S_0, t(S_1, S_2))].$$
The loss function quantifies the cost of estimating that $S_0$ equals $t(S_1, S_2)$; usually the estimate is not perfect and there is a cost associated with that. The risk for a particular estimator $t$ is the expected loss; it depends on the unknown failure rate $f$ and the unknown fleet size $n$. Thus the exercise comes down to finding procedures with acceptable risk functions. If you have some quantitative information about the likely values of $f$ and $n$ you can exploit that either directly, by limiting the domain of the risk to the likely values, or with a Bayesian analysis in which you compute the expected value of $r_t$ under some assumed prior distribution of $(f,n)$. At this point the risk becomes solely a function of the procedure $t$ and it's "merely" a matter of finding a risk-minimizing procedure.
In any event, to make further progress you need to supply a loss function (or some reasonable approximation thereof). I would hesitate to recommend or use the method of moments estimator (derived above) without knowing something about your loss.
| null | CC BY-SA 2.5 | null | 2010-12-22T20:17:21.017 | 2010-12-22T20:17:21.017 | null | null | 919 | null |
5710 | 2 | null | 423 | 27 | null | From [SMBC](http://www.smbc-comics.com):

| null | CC BY-SA 2.5 | null | 2010-12-22T23:06:03.883 | 2010-12-22T23:06:03.883 | null | null | 1106 | null |
5712 | 2 | null | 5686 | 6 | null | I think of NA standing for 'Not Available', while NaN is 'Not a Number', although this is more mnemonic than explanation. By the way, I know of no language other than R (perhaps Splus?) that has both. Matlab, for example, has only NaN.
| null | CC BY-SA 2.5 | null | 2010-12-23T01:12:45.503 | 2010-12-23T17:23:28.107 | 2010-12-23T17:23:28.107 | 795 | 795 | null |
5713 | 1 | null | null | 9 | 13376 | Occasionally I see in literature that a categorical variable such as sex is “partialled” or “regressed” out in (fixed-effects or mixed-effects) regression analysis. I'm troubled with the following practical issues involved in such a statement:
(1) Usually the coding method is not mentioned in the paper. Such a variable has to be coded with quantitative values, and I feel the sensible way should be effect coding (e.g., male = 1, female = -1) so that partialling can be achieved with other effects interpreted at the grand mean of both sex groups. A different coding may render a different (and unwanted) interpretation. For example, dummy coding (e.g., male = 0, female = 1) would leave other effects associated with males, not the grand mean. Even centering this dummy-coded variable might not work well for their partialling purpose if there are unequal numbers of subjects across the two groups. Am I correct?
(2) If the effect of such a categorical variable is included in the model, examining its effects first seems necessary and should be discussed in the context because of its consequence on the interpretation of other effects. What troubles me is that sometimes the authors don't even mention the significance of sex effect, let alone any model building process. If the sex effect exists, a natural follow-up question is whether any interactions exist between sex and other variables in the model? If no sex effect and no interactions exist, sex should be removed from the model.
(3) If sex is considered of no interest to those authors, what is the point of including it in the model in the first place without checking its effects? Does the inclusion of such a categorical variable (and costing one degree of freedom on the fixed effect of sex) gain anything for their partialling purpose when sex effect exists (my limited experience says essentially no)?
| Partialling or regressing out a categorical variable? | CC BY-SA 2.5 | null | 2010-12-23T04:43:03.667 | 2010-12-24T00:48:56.240 | 2010-12-23T04:53:54.900 | 1513 | 1513 | [
"regression"
]
|
5714 | 2 | null | 2419 | 6 | null | I had good success with the tree-based learners in [Milk: Machine Learning Toolkit for Python](http://luispedro.org/software/milk). It seems to be under active development, but the documentation was a bit sparse when I was using it. The test suite (github.com/luispedro/milk/blob/master/tests/test_adaboost.py) contains a "boosted stump" though, which could get you going pretty quickly:
```
import numpy as np
import milk.supervised.tree
import milk.supervised.adaboost
def test_learner():
    from milksets import wine
    learner = milk.supervised.adaboost.boost_learner(milk.supervised.tree.stump_learner())
    features, labels = wine.load()
    features = features[labels < 2]
    labels = labels[labels < 2] == 0
    labels = labels.astype(int)
    model = learner.train(features, labels)
    train_out = np.array(map(model.apply, features))
    assert (train_out == labels).mean() > .9
```
| null | CC BY-SA 2.5 | null | 2010-12-23T05:18:33.843 | 2010-12-23T05:18:33.843 | null | null | 2498 | null |
5715 | 2 | null | 5700 | 3 | null | Also the [segmented](http://cran.r-project.org/web/packages/segmented/index.html) package has helped me with similar problems in the past.
| null | CC BY-SA 2.5 | null | 2010-12-23T11:04:55.210 | 2010-12-23T21:31:31.840 | 2010-12-23T21:31:31.840 | 930 | 1291 | null |
5716 | 2 | null | 5713 | 4 | null | I don't think (1) makes any difference. The idea is to partial out from the response and the other predictors the effects of Sex. It doesn't matter if you code 0, 1 (Treatment contrasts) or 1, -1 (Sum to zero contrasts) as the models represent the same "amount" of information which is then removed. Here is an example in R:
```
set.seed(1)
dat <- data.frame(Size = c(rnorm(20, 180, sd = 5),
rnorm(20, 170, sd = 5)),
Sex = gl(2,20,labels = c("Male","Female")))
options(contrasts = c("contr.treatment", "contr.poly"))
r1 <- resid(m1 <- lm(Size ~ Sex, data = dat))
options(contrasts = c("contr.sum", "contr.poly"))
r2 <- resid(m2 <- lm(Size ~ Sex, data = dat))
options(contrasts = c("contr.treatment", "contr.poly"))
```
From these two models, the residuals are the same and it is this information one would then take into the subsequent model (plus the same thing removing Sex effect form the other covariates):
```
> all.equal(r1, r2)
[1] TRUE
```
I happen to agree with (2), but on (3), if Sex is of no interest to the researchers, they might still want to control for Sex effects, so my null model would be one that includes Sex, and I would test alternatives with additional covariates plus Sex. Your point about interactions and testing for effects of the non-interesting variables is an important and valid observation.
| null | CC BY-SA 2.5 | null | 2010-12-23T11:47:46.440 | 2010-12-23T11:47:46.440 | null | null | 1390 | null |
5717 | 2 | null | 5713 | 1 | null | It looks like I can't add a long comment directly to Dr. Simpson's answer. Sorry I have to put my response here.
I really appreciate your response, Dr. Simpson! I should clarify my arguments a little bit. What I'm having trouble with in the partialling business is not a theoretical but a practical issue. Suppose a linear regression model is of the following form
y = a + b * Sex + other fixed effects + residuals
I totally agree that, from the theoretical perspective, regardless of how we quantify the Sex variable, we would have the same residuals. Even if I code the subjects with some crazy numbers such as male = 10.7 and female = 53.65, I would still get the same residuals as `r1` and `r2` in your example. However, what matters in those papers is not the residuals. Instead, the focus is on the interpretation of the intercept `a` and other fixed effects in the model above, and this may invite problems when partialling. With such a focus in mind, how Sex is coded does seem to have a big consequence for the interpretation of all other effects in the above model. With dummy coding (`options(contrasts = c("contr.treatment", "contr.poly"))` in R), all other effects except `b` should be interpreted as being associated with the sex group with code "0" (males). With effect coding (`options(contrasts = c("contr.sum", "contr.poly"))` in R), all other effects except `b` are the average effects for the whole population regardless of sex.
Using your example, the model simplifies to
y = a + b * Sex + residuals.
The problem can be clearly seen with the following about the estimate of the intercept `a`:
```
> summary(m1)
Call: lm(formula = Size ~ Sex, data = dat)
...
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 180.9526 0.9979 181.332 < 2e-16 ***
> summary(m2)
Call: lm(formula = Size ~ Sex, data = dat)
...
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 175.4601 0.7056 248.659 < 2e-16 ***
```
Finally it looks like I have to agree that my original argument (3) might not be valid. Continuing your example,
```
> options(contrasts = c("contr.sum", "contr.poly"))
> m0 <- lm(Size ~ 1, data = dat)
> summary(m0)
Call: lm(formula = Size ~ 1, data = dat)
...
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 175.460 1.122 156.4 <2e-16 ***
```
It seems that including Sex in the model does not change the effect estimate, but it does increase the statistical power since more variability in the data is accounted for through the Sex effect. My previous illusion in argument (3) may have come from a dataset with a huge sample size in which adding Sex in the model didn't really change much for the significance of other effects.
However, in the conventional balanced ANOVA-type analysis, a between-subjects factor such as Sex has no consequence for the effects unrelated to that factor, because of the orthogonal partitioning of the variances?
| null | CC BY-SA 2.5 | null | 2010-12-23T15:31:46.357 | 2010-12-23T21:28:57.707 | 2010-12-23T21:28:57.707 | 930 | 1513 | null |
5718 | 2 | null | 4172 | 1 | null | Another [question](https://stats.stackexchange.com/questions/527/what-ways-are-there-to-show-two-analytical-methods-are-equivalent/) seems to have covered similar ground, at least in regards to similarity of discrete values. There I suggest regressing regression would be usable in theory for the GPS position, though I imagine a solution that respects the two dimensionality of the data would be preferable. @chi has a better answer on that same question (that includes citations). Similarity of distributions seems to be a question for a K-S test (I wonder if there is a multivariate version). For the categorization data it seems like Baaysean approach could be useful. Given categorization A at time slice 1 using Method 1... what is the Pr(Cat B) @ time slice 1 using Method 1? You might also do a lagged version of this to look at whether one method is picking up categorization changes before or after the other.
| null | CC BY-SA 2.5 | null | 2010-12-23T16:59:38.090 | 2010-12-23T16:59:38.090 | 2017-04-13T12:44:29.923 | -1 | 196 | null |
5719 | 2 | null | 5713 | 1 | null | Remember, though, that error will be reduced by adding any additional factors. Even if gender is insignificant in your model it may still be useful in the study. Significance can be found in any factor if the sample size is large enough. Conversely, if the sample size is not large enough a significant effect may not be testable. Hence good model building and power analysis.
| null | CC BY-SA 2.5 | null | 2010-12-23T17:05:00.970 | 2010-12-23T17:05:00.970 | null | null | null | null |
5720 | 2 | null | 5603 | 3 | null | Frankly, what you have is still a Markov Chain, as vqv has very rightly pointed out. (PS I tried to add this as a comment but where is the button??)
| null | CC BY-SA 2.5 | null | 2010-12-23T17:10:23.330 | 2010-12-23T17:10:23.330 | null | null | 2472 | null |
5722 | 2 | null | 5703 | 4 | null | Spearman's rank correlation is just the Pearson product-moment correlation between the ranks of the variables. Shabbychef's extra constraint means that $y_1$ and $y_2$ are the same as their ranks and that there are no ties, so they have equal standard deviation $\sigma_y$ (say). If we also replace x by its ranks, the problem becomes the equivalent problem for the Pearson product-moment correlation.
By definition of the Pearson product-moment correlation,
$$\begin{align}
\rho(x,y_1+y_2)
&= \frac{\operatorname{Cov}(x,y_1+y_2)}
{\sigma_x \sqrt{\operatorname{Var}(y_1+y_2)}} \\
&= \frac{\operatorname{Cov}(x,y_1) + \operatorname{Cov}(x,y_2)}
{\sigma_x \sqrt{\operatorname{Var}(y_1)+\operatorname{Var}(y_2)
+ 2\operatorname{Cov}(y_1,y_2)}} \\
&= \frac{\rho_1\sigma_x\sigma_y + \rho_2\sigma_x\sigma_y}
{\sigma_x \sqrt{2\sigma_y^2 + 2\sigma_y^2\rho(y_1,y_2)}} \\
&= \frac{\rho_1 + \rho_2}
{\sqrt{2}\left(1+\rho(y_1,y_2)\right)^{1/2}}. \\
\end{align}$$
For any set of three variables, if we know two of their three correlations we can put bounds on the third correlation (see e.g. [Vos 2009](http://www.informaworld.com/smpp/content~content=a906010843), or from [the formula for partial correlation](http://en.wikipedia.org/wiki/Partial_correlation#Using_recursive_formula)):
$$\rho_1\rho_2 - \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2} \leq \rho(y_1,y_2) \leq
\rho_1\rho_2 + \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2} $$
Therefore
$$\frac{\rho_1 + \rho_2}
{\sqrt{2}\left(1+\rho_1\rho_2 + \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2}\right)^{1/2}}
\leq \rho(x,y_1+y_2) \leq
\frac{\rho_1 + \rho_2}
{\sqrt{2}\left(1+\rho_1\rho_2 - \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2}\right)^{1/2}}
$$
if $\rho_1 + \rho_2 \geq 0$; if $\rho_1 + \rho_2 \le 0$ you need to switch the bounds around.
| null | CC BY-SA 2.5 | null | 2010-12-23T19:11:42.133 | 2010-12-23T20:53:49.907 | 2010-12-23T20:53:49.907 | 449 | 449 | null |
5723 | 2 | null | 5682 | 4 | null | You might want to pick up (or look at) a copy of Rick Wicklin's book: Statistical Programming with SAS IML software
[https://support.sas.com/content/dam/SAS/support/en/books/statistical-programming-with-sas-iml-software/63119_excerpt.pdf](https://support.sas.com/content/dam/SAS/support/en/books/statistical-programming-with-sas-iml-software/63119_excerpt.pdf)
He also has a blog about IML.
And, on SAS' site, there is a section about IML:
[http://support.sas.com/forums/forum.jspa?forumID=47](http://support.sas.com/forums/forum.jspa?forumID=47)
And you will want IMLStudio, which offers a multiple window view that is much easier to integrate with Base SAS than the old IML was.
I have used Base SAS and SAS Stat a lot. I've only barely looked at IML. But, from what I've seen, your knowledge of R should help.
| null | CC BY-SA 4.0 | null | 2010-12-23T20:42:30.510 | 2019-02-14T17:43:29.250 | 2019-02-14T17:43:29.250 | 74358 | 686 | null |
5724 | 1 | null | null | 1 | 197 | I'm new here and wondering if anyone could give me some hints on how to estimate the time varying coefficient and state variable together. Here is my model:
observation equation: $Y(t)= A(t)X(t)+ w(t)$,
state equation: $X(t)=\phi X(t-1)+v(t)$,
Here I have a time-varying coefficient $A(t)$; it doesn't depend on any predetermined parameter $\theta$, for example. If I treat $A(t)$ as another state variable, then the model becomes a nonlinear state space model, and I have no idea how to estimate multiplicative state variables. Any hints? Thank you.
| Multiplicative unobservable component in state space model | CC BY-SA 2.5 | null | 2010-12-23T22:25:19.767 | 2010-12-24T13:05:52.223 | 2010-12-24T13:05:52.223 | 2510 | 2510 | [
"time-series"
]
|
5725 | 2 | null | 5713 | 2 | null | It's true that the choice of coding method influences how you interpret the model coefficients. In my experience though (and I realise this can depend on your field), dummy coding is so prevalent that people don't have a huge problem dealing with it.
In this example, if male = 0 and female = 1, then the intercept is basically the mean response for males, and the Sex coefficient is the impact on the response due to being female (the "female effect"). Things get more complicated once you are dealing with categorical variables with more than two levels, but the interpretation scheme extends in a natural way.
What this ultimately means is that you should be careful that any substantive conclusions you draw from the analysis don't depend on the coding method used.
| null | CC BY-SA 2.5 | null | 2010-12-24T00:48:56.240 | 2010-12-24T00:48:56.240 | null | null | 1569 | null |
5726 | 2 | null | 5603 | 1 | null | You might find Scott and Smyth '03 of interest: [http://www.datalab.uci.edu/papers/ScottSmythV7.pdf](http://www.datalab.uci.edu/papers/ScottSmythV7.pdf).
They discuss Markov-modulated Poisson processes, which vary the rate parameter based on which Markov state the process is in.
| null | CC BY-SA 2.5 | null | 2010-12-24T02:13:25.680 | 2010-12-24T02:13:25.680 | null | null | 2073 | null |
5727 | 1 | 5741 | null | 23 | 6384 | I have a question on calculating James-Stein Shrinkage factor in the [1977 Scientific American paper by Bradley Efron and Carl Morris, "Stein's Paradox in Statistics"](http://www-stat.stanford.edu/~ckirby/brad/other/Article1977.pdf).
I gathered the data for the baseball players and it is given below:
```
Name, avg45, avgSeason
Clemente, 0.400, 0.346
Robinson, 0.378, 0.298
Howard, 0.356, 0.276
Johnstone, 0.333, 0.222
Berry, 0.311, 0.273
Spencer, 0.311, 0.270
Kessinger, 0.289, 0.263
Alvarado, 0.267, 0.210
Santo, 0.244, 0.269
Swoboda, 0.244, 0.230
Unser, 0.222, 0.264
Williams, 0.222, 0.256
Scott, 0.222, 0.303
Petrocelli, 0.222, 0.264
Rodriguez, 0.222, 0.226
Campaneris, 0.200, 0.285
Munson, 0.178, 0.316
Alvis, 0.156, 0.200
```
`avg45` is the average after $45$ at bats and is denoted as $y$ in the article. `avgSeason` is the end of the season average.
The James-Stein estimator for the average ($z$) is given by
$$z = \bar{y} + c (y-\bar{y})$$
and the shrinkage factor $c$ is given by (page 5 of the Scientific American 1977 article)
$$ c = 1 - \frac{(k-3) \sigma^2} {\sum (y - \bar{y})^2}, $$
where $k$ is the number of unknown means. Here there are 18 players so $k = 18$. I can calculate $\sum (y - \bar{y})^2$ using `avg45` values. But I don't know how to calculate $\sigma^2$. The authors say $c = 0.212$ for the given data set.
I tried using both $\sigma_{x}^2$ and $\sigma_{y}^2$ for $\sigma^2$ but they don't give the correct answer of $c = 0.212$
Can anybody be kind enough to let me know how to calculate $\sigma^2$ for this data set?
| James-Stein estimator: How did Efron and Morris calculate $\sigma^2$ in shrinkage factor for their baseball example? | CC BY-SA 3.0 | null | 2010-12-24T02:40:14.103 | 2018-07-01T01:05:25.077 | 2014-11-03T17:52:18.700 | 28666 | 2513 | [
"estimation",
"regularization",
"steins-phenomenon"
]
|
5728 | 1 | 5764 | null | 9 | 20123 | I am running a model for a problem in the insurance domain. The final results show some number of false positives, x, and some number of false negatives, y. I am using SAS Enterprise Miner for this. Can somebody suggest how to reduce the false positives? I know that for this I have to increase the false negatives. I want to know two things:
- Is there any option in e-miner where I can give more weight to false negatives and less to false positives?
- Is there any general approach in modeling which tells us any ways to reduce false negatives or is it just a hit and trial approach?
| Reducing false positive rate | CC BY-SA 2.5 | null | 2010-12-24T06:49:14.123 | 2021-06-15T09:41:06.170 | 2010-12-24T09:52:56.380 | null | 1763 | [
"data-mining",
"sas"
]
|
5729 | 2 | null | 5728 | 2 | null | If you do not find the weight option, you can create the same effect yourself by increasing the number of positives: for example, you can give the algorithm each of the known positives twice as input and leave the negatives as they were. You can even increase it 10 times; it is a matter of experimenting to get as near as you can to the best possible result.
| null | CC BY-SA 2.5 | null | 2010-12-24T08:47:24.223 | 2010-12-24T08:47:24.223 | null | null | 1808 | null |
5730 | 2 | null | 4753 | 0 | null | I had the same problem with a sparce matrix in NLP and what we did was select the columns that where more useful to clasify our rows (that gave more information for discerning the result), if you want I can explain it to you in more detail but it is really simple you can figure it out.
But your problem does not seem to be a classification one, I actually am a little confused about what you said about above the diagonal and below it. But I was thinking that you can use the [Apriori](http://en.wikipedia.org/wiki/Apriori_algorithm) data mining algorithm to discover the more important alliances between any number of items.
| null | CC BY-SA 2.5 | null | 2010-12-24T09:01:39.010 | 2010-12-24T09:01:39.010 | null | null | 1808 | null |
5731 | 2 | null | 5724 | 1 | null | I think this is related to [a previously asked question](https://stats.stackexchange.com/questions/4334/state-space-form-of-time-varying-ar1). One of the answers suggested to use the [AD Model Builder software](http://admb-project.org). Although I haven't used it myself, looking at the manual it looks like an alternative.
I wonder though if your problem is sufficiently specified. How does the coefficient At change? You need to put some structure on it, perhaps a smoothness constraint, it it is to be estimated at all.
| null | CC BY-SA 2.5 | null | 2010-12-24T09:22:15.193 | 2010-12-24T09:22:15.193 | 2017-04-13T12:44:44.530 | -1 | 892 | null |
5732 | 1 | null | null | 8 | 1050 | Why does the test statistic in the Kolmogorov-Smirnov test take the supremum of the set of differences, not the maximum? When the maximum (greatest element) of a set does not exist, the supremum may still exist, but if the greatest element exists then the supremum is the same as the greatest element. I want to know why the greatest element may not exist here.
Thanks
Prasenjit
| Why "supremum", not "maximum" in Kolmogorov-Smirnov test? | CC BY-SA 2.5 | null | 2010-12-24T09:55:08.067 | 2010-12-24T12:34:18.007 | 2010-12-24T10:03:05.737 | null | 2516 | [
"kolmogorov-smirnov-test"
]
|
5733 | 1 | 7892 | null | 10 | 5492 | I'm working on a Monte Carlo function for valuing several assets with partially correlated returns. Currently, I just generate a covariance matrix and feed it to the `rmvnorm()` function in R, which generates correlated random values.
However, looking at the distribution of returns for an asset, it is not normally distributed.
This is really a two part question:
1) How can I estimate some kind of PDF or CDF when all I have is some real-world data without a known distribution?
2) How can I generate correlated values like rmvnorm, but for this unknown (and non-normal) distribution?
Thanks!
---
The distributions do not appear to fit any known distribution. I think it would be very dangerous to assume a parametric form and then use that for Monte Carlo estimation.
Isn't there some kind of bootstrap or "empirical monte carlo" method I can look at?
| Generate random multivariate values from empirical data | CC BY-SA 3.0 | null | 2010-12-24T10:35:28.497 | 2018-03-05T11:47:46.557 | 2018-03-05T11:47:46.557 | 7290 | 2566 | [
"markov-chain-montecarlo",
"monte-carlo",
"density-function"
]
|
5734 | 1 | null | null | 2 | 5550 | I am trying to implement a time series data analysis project, but I have to do it in Java, C#, or Python. Is there any good library you can recommend, such as LOESS or ARIMA in R?
| Is there any library like LOESS or ARIMA in Java/C# or Python? | CC BY-SA 4.0 | null | 2010-12-24T10:46:30.457 | 2019-09-17T11:39:24.123 | 2019-09-17T11:39:24.123 | 11887 | 2454 | [
"time-series",
"software"
]
|
5735 | 2 | null | 5733 | 4 | null | Regarding the first question, you might consider resampling your data. There would be a problem in case your data were correlated over time (rather than contemporaneously correlated), in which case you would need something like a block bootstrap. But for returns data, a simple bootstrap is probably fine.
I guess the answer to the second question is very much dependent on the target distribution.
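A minimal sketch of the simple (row-wise) bootstrap for a matrix of returns, which preserves the contemporaneous correlation across assets; the data here are just simulated placeholders:
```
set.seed(1)
returns <- matrix(rnorm(1000 * 3), ncol = 3)                 # placeholder for real asset returns
boot    <- returns[sample(nrow(returns), replace = TRUE), ]  # resample whole dates (rows)
round(cor(boot), 2)                                          # cross-asset correlation is carried over
```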
| null | CC BY-SA 2.5 | null | 2010-12-24T12:16:09.760 | 2010-12-24T12:16:09.760 | null | null | 892 | null |
5736 | 2 | null | 5733 | 3 | null | The answer to the first question is that you build a model. In your case this means choosing a distribution and estimating its parameters.
When you have the distribution you can sample from it using Gibbs or Metropolis algorithms.
On a side note, do you really need to sample from this distribution? Usually the interest is in some characteristic of the distribution. You can estimate it from the empirical distribution via the bootstrap, or again build a model for this characteristic.
| null | CC BY-SA 2.5 | null | 2010-12-24T12:23:59.530 | 2010-12-24T12:23:59.530 | null | null | 2116 | null |
5737 | 2 | null | 5732 | 10 | null | Suppose the difference is an increasing continuous function on an open interval and zero everywhere else. Then the maximum will not exist, but the supremum will.
| null | CC BY-SA 2.5 | null | 2010-12-24T12:34:18.007 | 2010-12-24T12:34:18.007 | null | null | 2116 | null |
5738 | 2 | null | 5727 | 14 | null | I am not quite sure about the $c = 0.212$, but the following article provides a much more detailed description of these data:
Efron, B., & Morris, C. (1975). Data analysis using Stein's estimator and its generalizations. Journal of the American Statistical Association, 70(350), 311-319 [(link to pdf)](http://www.medicine.mcgill.ca/epidemiology/hanley/bios602/MultilevelData/EfronMorrisJASA1975.pdf)
or more detailed
Efron, B., & Morris, C. (1974). Data analysis using Stein's estimator and its generalizations. R-1394-OEO, The RAND Corporation, March 1974 [(link to pdf)](https://www.rand.org/pubs/reports/R1394.html).
On page 312, you will see that Efron & Morris use an arc-sin transformation of these data, so that the variance of the batting averages is approximately unity:
```
> dat <- read.table("data.txt", header=T, sep=",")
> yi <- dat$avg45
> k <- length(yi)
> yi <- sqrt(45) * asin(2*yi-1)
> c <- 1 - (k-3)*1 / sum((yi - mean(yi))^2)
> c
[1] 0.2091971
```
Then they use c=.209 for the computation of the $z$ values, which we can easily back-transform:
```
> zi <- mean(yi) + c * (yi - mean(yi))
> round((sin(zi/sqrt(45)) + 1)/2,3) ### back-transformation
[1] 0.290 0.286 0.282 0.277 0.273 0.273 0.268 0.264 0.259
[10] 0.259 0.254 0.254 0.254 0.254 0.254 0.249 0.244 0.239
```
So these are the values of the Stein estimator. For Clemente, we get .290, which is quite close to the .294 from the 1977 article.
| null | CC BY-SA 4.0 | null | 2010-12-24T12:55:41.967 | 2018-07-01T01:05:25.077 | 2018-07-01T01:05:25.077 | 196461 | 1934 | null |
5739 | 2 | null | 4334 | 1 | null | Would it be possible to treat Y(t-1) as an exogenous variable, and estimate this state space model with the Kalman filter? Then the routine for estimating the coefficient is very standard.
| null | CC BY-SA 2.5 | null | 2010-12-24T13:12:37.723 | 2010-12-24T13:12:37.723 | null | null | 2510 | null |
5740 | 2 | null | 5734 | 7 | null | R is an open source project, so you can look at the file `src/library/stats/src/loessc.c` which implements the C-level computation behind `loess()`. You should be able to use that for an extension module to the other languages you listed. Or, and this may be easier, you some of the existing ways to access R from Java, Python or C#.
| null | CC BY-SA 2.5 | null | 2010-12-24T14:10:09.720 | 2010-12-24T14:10:09.720 | null | null | 334 | null |
5741 | 2 | null | 5727 | 22 | null | The parameter $\sigma^2$ is the (unknown) common variance of the vector components, each of which we assume are normally distributed. For the baseball data we have $45 \cdot Y_i \sim \mathsf{binom}(45,p_i)$, so the normal approximation to the binomial distribution gives (taking $ \hat{p_{i}} = Y_{i}$)
$$
\hat{p}_{i}\approx \mathsf{norm}(\mathtt{mean}=p_{i},\mathtt{var} = p_{i}(1-p_{i})/45).
$$
Obviously in this case the variances are not equal, yet if they had been equal to a common value then we could estimate it with the pooled estimator
$$
\hat{\sigma}^2 = \frac{\hat{p}(1 - \hat{p})}{45},
$$
where $\hat{p}$ is the grand mean
$$
\hat{p} = \frac{1}{18\cdot 45}\sum_{i = 1}^{18}45\cdot{Y_{i}}=\overline{Y}.
$$
It looks as though this is what Efron and Morris have done (in the 1977 paper).
You can check this with the following R code. Here are the data:
```
y <- c(0.4, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244, 0.222, 0.222, 0.222, 0.222, 0.222, 0.2, 0.178, 0.156)
```
and here is the estimate for $\sigma^2$:
```
s2 <- mean(y)*(1 - mean(y))/45
```
which is $\hat{\sigma}^2 \approx 0.004332392$. The shrinkage factor in the paper is then
```
1 - 15*s2/(17*var(y))
```
which gives $c \approx 0.2123905$. Note that in the second paper they made a transformation to sidestep the variance problem (as @Wolfgang said). Also note in the 1975 paper they used $k - 2$ while in the 1977 paper they used $k - 3$.
| null | CC BY-SA 2.5 | null | 2010-12-24T16:06:05.953 | 2010-12-24T18:23:29.703 | 2010-12-24T18:23:29.703 | null | null | null |
5742 | 1 | 5743 | null | 3 | 277 |
## Background
I have a data set $Y$:
```
set.seed(0)
predictor <- c(rep(5,10), rep(10,10), rep(15,10), rep(20,10)) + rnorm(40)
response <- c(rnorm(10,1), rnorm(10,4), rnorm(10,2), rnorm(10,1))
plot(predictor, response)
```
and a set of models g$_i$:
```
fits <- list()
fits[['null']] <- lm(response ~ 1)
fits[['linear']] <- lm(response ~ poly(predictor, 1))
fits[['quadratic']] <- lm(response ~ poly(predictor, 2))
fits[['cubic']] <- lm(response ~ poly(predictor, 3))
fits[['Monod']] <- nls(response ~ a*predictor/(b+predictor),
start = list(a=1, b=1))
fits[['log']] <- lm(response ~ poly(log(predictor + 1), 1))
```
## Problem
I can find the best fit using
```
library(plyr)
ldply(fits, AIC)
```
I am revising a manuscript with results originally presented as raw AIC values, but I find the actual AIC values fairly uninformative and difficult to interpret.
## Question
Can I calculate the probability of each model given the set of models e.g. g$_i$ $$\frac{P(Y|\textrm{g}_i)}{\sum_{j=1:n}P(Y|\textrm{g}_j)}$$
An approach that most similar to using AIC wins, because I would like to minimize the amount of revisions to the methods and results that will be required.
## Other considerations:
- I have considered using ANOVA although this does not appear to have as straightforward of an interpretation as the above, or does it?:
anova(fits[[1]],fits[[2]],fits[[3]],fits[[4]],fits[[5]],fits[[6]])
- A Bayesian approach would require more work, both in implementing the statistics and because it would fundamentally change the interpretation from the likelihood interpretation of $P(Y|g)$ to the Bayesian $P(g|Y)$.
| How can I calculate the probability of model $g_i$ given a set of $n$ models and AIC values? | CC BY-SA 2.5 | null | 2010-12-24T17:07:49.823 | 2010-12-25T15:35:17.650 | 2010-12-25T15:35:17.650 | 1381 | 1381 | [
"probability",
"model-selection",
"maximum-likelihood"
]
|
5743 | 2 | null | 5742 | 2 | null | Although AIC may not be suitable in this context, Aikake weights provide the ratio of $$\frac{L(g_i|Y)}{\sum_{j=1:n}{L(g_j|Y)}}$$
## Solution
This can be calculated from AIC in this way for each model $i$ (closely following [Burnham and Anderson, 2002](http://books.google.com/books?id=BQYR6js0CC8C&pg=PA75&lpg=PA75&dq=burnham+and+anderson+akaike&source=bl&ots=i8bVidhbWF&sig=8-a_gaD1QiL_JoPICrzqA_gDyJM&hl=en&ei=AwYVTYunPIeenAfSyIi8CQ&sa=X&oi=book_result&ct=result&resnum=4&ved=0CC8Q6AEwAw#v=onepage&q&f=false)):
$$\Delta_i = AIC_i - AIC_{min}$$
where $AIC_{min}$ is the AIC of the best-fitting model.
Normalizing the corresponding relative likelihoods, $\exp(-\Delta_i/2)$, by their sum gives the Akaike weight ($W_i$) for each model:
$$W_i=\frac{\exp(-\Delta_i/2)}{\sum_{j=1:n}{\exp(-\Delta_j/2)}}$$
which [Johnson and Omland (2004)](http://faculty.washington.edu/skalski/classes/QERM597/papers/Johnson%20and%20Omland.pdf) interpret as
>
the probability that model i is the best model for the observed data given the candidate set of models.
Burnham and Anderson state that this approach applies for AIC$_c$, QAIC, QAIC$_c$, and TIC.
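For the models in the question, a rough sketch of this computation in R (reusing the `fits` list from above; the object names are just for illustration) could be:
```
aics  <- sapply(fits, AIC)                    # AIC of each candidate model
delta <- aics - min(aics)                     # Delta_i
w     <- exp(-delta/2) / sum(exp(-delta/2))   # Akaike weights
round(sort(w, decreasing = TRUE), 3)
```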
## References
- Johnson and Omland, 2004. Model selection in ecology and evolution. TREE 19(2) dx.doi.org/doi:10.1016/j.tree.2003.10.013
- Burnham, K.P. and Anderson, D.R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Springer
| null | CC BY-SA 2.5 | null | 2010-12-24T20:56:03.353 | 2010-12-24T21:25:57.317 | 2020-06-11T14:32:37.003 | -1 | 1381 | null |
5744 | 2 | null | 5691 | 0 | null | How to bootstrap a table ought to depend on how it was obtained. There are a few ways this can be done.
For example, one can sample $n$ individuals and classify all of them into an $r\times c$ table. In this scenario, neither row nor column totals are fixed. It would then seem plausible that a bootstrap sample would take a sample of size $n$ with replacement from the original $n$ respondents and classify them into an $r\times c$ table.
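A minimal sketch of that scheme in R (with a hypothetical 2 x 2 table of counts):
```
tab <- matrix(c(20, 30, 25, 25), nrow = 2)   # hypothetical table; no margins fixed
# expand the table back into one (row, column) record per respondent
cells <- expand.grid(r = 1:nrow(tab), c = 1:ncol(tab))
obs   <- cells[rep(seq_len(nrow(cells)), times = as.vector(tab)), ]
boot_table <- function() {
  i <- sample(nrow(obs), replace = TRUE)     # resample the n respondents
  table(factor(obs$r[i], 1:nrow(tab)), factor(obs$c[i], 1:ncol(tab)))
}
boot_tabs <- replicate(1000, boot_table(), simplify = FALSE)
```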
In another scenario, the row totals for the table could have been fixed in advance. In that case, one would draw with replacement a sample of $n_{i+}$ individuals from the $n_{i+}$ individuals in row $i$, separately for each row.
Does this make sense? [Apologies if this restates what has already been said; I don't quite grasp all of it.]
I would like to reiterate @whuber's query on why you are asking this question. One usually does not speak of obtaining a CI for a statistic [such as a p-value]. Can you clarify this a bit?
| null | CC BY-SA 2.5 | null | 2010-12-25T04:24:25.403 | 2010-12-25T04:24:25.403 | null | null | 1112 | null |
5745 | 2 | null | 5629 | 5 | null | See Wikipedia's
[List of uncertainty propagation software](http://en.wikipedia.org/wiki/List_of_uncertainty_propagation_software),
in particular the Python package
[uncertainties](http://packages.python.org/uncertainties).
There's even a conference:
[http://probabilistic-programming.org/wiki/NIPS*2008_Workshop](http://probabilistic-programming.org/wiki/NIPS*2008_Workshop)
(Beware -- random variables that propagate through a network and converge
and correlate get messy.)
| null | CC BY-SA 3.0 | null | 2010-12-25T13:15:31.367 | 2016-09-07T11:45:57.263 | 2016-09-07T11:45:57.263 | -1 | 557 | null |
5746 | 1 | null | null | 4 | 936 | The [Secretary problem](http://en.wikipedia.org/wiki/Secretary_problem)
has an algorithm for fixed N and immediate accept/reject
(that is, reject reject ... accept one, stop).
There are several variants;
in mine, secretaries or samples come from a real-valued source Xj,
payoff is from best-so-far not last,
and each sample costs $ c:
maximize payoff = Xbestsofar - c * Nsample
Here's a picture of random walks of this kind: either go up to a
high-water mark (new maximum) of X1 X2 ..., or down c:

Can anyone point me to stop-or-keep-looking rules for optimal stopping
for problems like this ?
Perhaps one could combine two kinds of rule (a toy simulation sketch follows this list):
- sample at least ..., take the best after that (Secretary problem)
- stop when the peak - current exceeds some Δ (Allaart)
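A toy simulation of such a combined rule (purely illustrative: it assumes i.i.d. standard normal samples and arbitrary parameter values, neither of which matches my actual, unknown source):
```
# look at at least n0 samples, then stop once the walk has fallen Delta below its
# peak, i.e. no new maximum for Delta/c consecutive draws
sim_rule <- function(c = 0.01, n0 = 10, Delta = 0.2, nmax = 1000) {
  best <- -Inf; since_best <- 0
  for (n in 1:nmax) {
    x <- rnorm(1)                                   # assumed N(0,1) source
    if (x > best) { best <- x; since_best <- 0 } else since_best <- since_best + 1
    if (n >= n0 && c * since_best >= Delta) break   # peak - current >= Delta
  }
  best - c * n                                      # payoff
}
mean(replicate(5000, sim_rule()))                   # average payoff of this ad hoc rule
```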
As @whuber points out, one needs some model of the distribution of the samples
to define the problem. This I don't have.
Nonetheless, statisticians must have looked at problems of this kind --
sequential sampling, sequential design of experiments, optimal stopping,
optimal learning ?
Then please help me rephrase my question to ask for a tutorial on ...
Added links 29Dec, thanks @James; readers please extend —
[Optimal stopping](http://en.wikipedia.org/wiki/Optimal_stopping)
Hill, [Knowing when to stop](http://www.americanscientist.org/issues/feature/2009/2/knowing-when-to-stop/1)
2009, 3p, excellent
Allaart,
[Stopping the maximum of a correlated random walk with cost for observation](http://www.cas.unt.edu/~allaart/papers/crwmax.pdf)
2004, 12p
(Sorry to keep changing this — trying to converge to a standard formulation.)
| Optimal stopping from an unknown distribution | CC BY-SA 2.5 | null | 2010-12-25T15:29:04.023 | 2022-08-30T14:09:50.687 | 2010-12-29T13:00:33.357 | 557 | 557 | [
"sampling",
"search-theory",
"optimal-stopping"
]
|
5747 | 1 | 5753 | null | 74 | 39505 | I know empirically that this is the case. I have just developed models that run into this conundrum. I also suspect it is not necessarily a yes/no answer. By that I mean that if both A and B are correlated with C, this may have some implication for the correlation between A and B. But this implication may be weak. It may be just a sign direction and nothing else.
Here is what I mean... Let's say A and B both have a 0.5 correlation with C. Given that, the correlation between A and B could well be 1.0. I think it also could be 0.5 or even lower. But, I think it is unlikely that it would be negative. Do you agree with that?
Also, is there an implication if you are considering the standard Pearson Correlation Coefficient or instead the Spearman (rank) Correlation Coefficient? My recent empirical observations were associated with the Spearman Correlation Coefficient.
| If A and B are correlated with C, why are A and B not necessarily correlated? | CC BY-SA 2.5 | null | 2010-12-25T19:24:45.540 | 2021-11-29T04:37:55.217 | 2011-01-12T16:15:01.780 | 919 | 1329 | [
"correlation",
"cross-correlation"
]
|
5748 | 2 | null | 5747 | 16 | null | I will leave the statistical demonstration to those who are better suited for it than I am... but intuitively, say that event A generates a process X that contributes to the generation of event C. Then A is correlated with C (through X). B, on the other hand, generates Y, which also shapes C. Therefore A is correlated with C and B is correlated with C, but A and B are not correlated.
| null | CC BY-SA 2.5 | null | 2010-12-25T19:30:52.613 | 2010-12-26T08:17:28.360 | 2010-12-26T08:17:28.360 | 582 | 582 | null |
5749 | 1 | null | null | 3 | 197 | If you want to regress y on x, where multiple y's are observed at each x, is it ever better to instead take the mean at each x, and then use those means for the regression? Does it depend on the distributional assumptions?
| When is it better to average observations at the same abscissa? | CC BY-SA 2.5 | null | 2010-12-25T21:13:06.783 | 2010-12-26T13:45:12.120 | 2010-12-26T13:45:12.120 | null | null | [
"regression"
]
|
5750 | 1 | 5754 | null | 71 | 23023 | I have several hundred measurements. Now, I am considering utilizing some kind of software to correlate every measure with every measure. This means that there are thousands of correlations. Among these there should (statistically) be some high correlations, even if the data are completely random (each measure has only about 100 data points).
When I find a correlation, how do I incorporate the information about how hard I looked for a correlation?
I am not at a high level in statistics, so please bear with me.
| Look and you shall find (a correlation) | CC BY-SA 3.0 | null | 2010-12-25T22:16:04.897 | 2017-02-15T22:44:30.453 | 2017-02-15T22:44:30.453 | 28666 | 888 | [
"correlation",
"multiple-comparisons",
"permutation-test"
]
|
5751 | 2 | null | 5747 | 17 | null | I think it's better to ask "why SHOULD they be correlated?" or, perhaps, "Why should they have any particular correlation?"
The following R code shows a case where x1 and x2 are both correlated with Y but have (essentially) zero correlation with each other:
```
x1 <- rnorm(100)                      # two independent predictors
x2 <- rnorm(100)
y <- 3*x1 + 2*x2 + rnorm(100, 0, .3)  # y depends strongly on both
cor(x1,y)   # large
cor(x2,y)   # large
cor(x1,x2)  # close to zero
```
The correlation with $Y$ can be made stronger by reducing the .3 to .1 or whatever
| null | CC BY-SA 4.0 | null | 2010-12-25T22:17:12.493 | 2021-11-29T02:48:36.597 | 2021-11-29T02:48:36.597 | 11887 | 686 | null |
5752 | 2 | null | 5750 | 7 | null | This is an example of multiple comparisons. There's a large literature on this.
If you have, say, 100 variables, then you will have 100*99/2 =4950 correlations.
If the data are just noise, then you would expect 1 in 20 of these to be significant at p = .05. That's 247.5
Before going farther, though, it would be good if you could say WHY you are doing this. What are these variables, why are you correlating them, what is your substantive idea?
Or, are you just fishing for high correlations?
| null | CC BY-SA 2.5 | null | 2010-12-25T22:22:20.943 | 2010-12-25T22:22:20.943 | null | null | 686 | null |
5753 | 2 | null | 5747 | 62 | null | Because correlation is a mathematical property of multivariate distributions, some insight can be had purely through calculations, regardless of the statistical genesis of those distributions.
For the Pearson correlations, consider multinormal variables $X$, $Y$, $Z$. These are useful to work with because any non-negative definite matrix actually is the covariance matrix of some multinormal distributions, thereby resolving the existence question. If we stick to matrices with $1$ on the diagonal, the off-diagonal entries of the covariance matrix will be their correlations. Writing the correlation of $X$ and $Y$ as $\rho$, the correlation of $Y$ and $Z$ as $\tau$, and the correlation of $X$ and $Z$ as $\sigma$, we compute that
- $1 + 2 \rho \sigma \tau - \left(\rho^2 + \sigma^2 + \tau^2\right) \ge 0$ (because this is the determinant of the correlation matrix and it cannot be negative).
- When $\sigma = 0$ this implies that $\rho^2 + \tau^2 \le 1$. To put it another way: when both $\rho$ and $\tau$ are large in magnitude, $X$ and $Z$ must have nonzero correlation.
- If $\rho^2 = \tau^2 = 1/2$, then any non-negative value of $\sigma$ (between $0$ and $1$ of course) is possible.
- When $\rho^2 + \tau^2 \lt 1$, negative values of $\sigma$ are allowable. For example, when $\rho = \tau = 1/2$, $\sigma$ can be anywhere between $-1/2$ and $1$.
These considerations imply there are indeed some constraints on the mutual correlations. The constraints (which depend only on the non-negative definiteness of the correlation matrix, not on the actual distributions of the variables) can be tightened depending on assumptions about the univariate distributions. For instance, it's easy to see (and to prove) that when the distributions of $X$ and $Y$ are not in the same location-scale family, their correlations must be strictly less than $1$ in size. (Proof: a correlation of $\pm 1$ implies $X$ and $Y$ are linearly related a.s.)
As far as Spearman rank correlations go, consider three trivariate observations $(1,1,2)$, $(2,3,1)$, and $(3,2,3)$ of $(X, Y, Z)$. Their mutual rank correlations are $1/2$, $1/2$, and $-1/2$. Thus even the sign of the rank correlation of $Y$ and $Z$ can be the reverse of the signs of the correlations of $X$ and $Y$ and $X$ and $Z$.
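A quick numerical check of that last example:
```
m <- rbind(c(1, 1, 2), c(2, 3, 1), c(3, 2, 3))   # the three (X, Y, Z) observations
cor(m, method = "spearman")                      # off-diagonal entries: 0.5, 0.5, -0.5
```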
| null | CC BY-SA 2.5 | null | 2010-12-25T23:54:38.263 | 2011-01-12T16:17:07.083 | 2011-01-12T16:17:07.083 | 919 | 919 | null |
5754 | 2 | null | 5750 | 76 | null | This is an excellent question, worthy of someone who is a clear statistical thinker, because it recognizes a subtle but important aspect of multiple testing.
There are [standard methods to adjust the p-values](http://www.technion.ac.il/docs/sas/stat/chap43/sect14.htm) of multiple correlation coefficients (or, equivalently, to broaden their confidence intervals), such as the Bonferroni and Sidak methods (q.v.). However, these are far too conservative with large correlation matrices due to the inherent mathematical relationships that must hold among correlation coefficients in general. (For some examples of such relationships see the [recent question and the ensuing thread](https://stats.stackexchange.com/q/5747/919).) One of the best approaches for dealing with this situation is to conduct a [permutation (or resampling) test](http://en.wikipedia.org/wiki/Resampling_%28statistics%29). It's easy to do this with correlations: in each iteration of the test, just randomly scramble the order of values of each of the fields (thereby destroying any inherent correlation) and recompute the full correlation matrix. Do this for several thousand iterations (or more), then summarize the distributions of the entries of the correlation matrix by, for instance, giving their 97.5 and 2.5 percentiles: these would serve as mutual symmetric two-sided 95% confidence intervals under the null hypothesis of no correlation. (The first time you do this with a large number of variables you will be astonished at how high some of the correlation coefficients can be even when there is no inherent correlation.)
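A minimal sketch of that permutation scheme in R, with made-up data standing in for the real measurements:
```
set.seed(1)
X <- matrix(rnorm(100 * 20), nrow = 100)    # 100 observations of 20 unrelated variables
perm_cors <- replicate(2000, {
  Xp <- apply(X, 2, sample)                 # scramble each column, destroying any correlation
  R  <- cor(Xp)
  R[upper.tri(R)]
})
# per-entry 2.5 and 97.5 percentiles: null reference limits for each correlation
null_limits <- apply(perm_cors, 1, quantile, probs = c(0.025, 0.975))
range(null_limits)                          # note how far from zero these can be
```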
When reporting the results, no matter what computations you do, you should include the following:
- The size of the correlation matrix (i.e., how many variables you have looked at).
- How you determined the p-values or "significance" of any of the correlation coefficients (e.g., left them as-is, applied a Bonferroni correction, did a permutation test, or whatever).
- Whether you looked at alternative measures of correlation, such as Spearman rank correlation. If you did, also indicate why you chose the method you are actually reporting on and using.
| null | CC BY-SA 2.5 | null | 2010-12-26T00:13:23.897 | 2010-12-26T00:13:23.897 | 2017-04-13T12:44:32.747 | -1 | 919 | null |
5755 | 2 | null | 5749 | 6 | null | When you use the means you are removing much of the variation of the y's around their averages. This would be incorrect if you are assessing the relationships between the individual y's and the x's. In particular it will cause you to overestimate the correlation between the y's and the x's and give you too much confidence in the estimated regression coefficients. If your purpose is to assess the relationship between the average y and the x's, that might be ok. (One subtlety occurs when the numbers of y's involved in the averages vary, because then you should be using [weighted least squares](http://www.itl.nist.gov/div898/handbook/pmd/section1/pmd143.htm) methods rather than ordinary least squares.) Just beware of the [ecological fallacy](http://en.wikipedia.org/wiki/Ecological_fallacy), which is the mistake of using the aggregated (i.e., averaged) y's in place of their individual values.
These considerations are independent of any distributional assumptions.
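A small illustration of the weighting point (made-up data): weighting each group mean by its group size reproduces the OLS coefficient estimates from the raw data, while unweighted means do not.
```
set.seed(2)
x <- rep(1:5, times = c(3, 8, 2, 10, 4))   # unequal numbers of y's at each x
y <- 2 + 0.5 * x + rnorm(length(x))
xs   <- sort(unique(x))
ybar <- tapply(y, x, mean)                 # mean response at each x
n    <- as.vector(table(x))                # how many y's went into each mean
coef(lm(y ~ x))                            # OLS on the raw data
coef(lm(ybar ~ xs, weights = n))           # WLS on the means: same estimates
coef(lm(ybar ~ xs))                        # unweighted means: not the same
```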
| null | CC BY-SA 2.5 | null | 2010-12-26T00:19:39.053 | 2010-12-26T00:19:39.053 | null | null | 919 | null |
5756 | 2 | null | 5747 | 22 | null | Correlation is the cosine of the angle between two vectors. In the situation described, (A,B,C) is a triple of observations, made n times, each observation being a real number. The correlation between A and B is the cosine of the angle between $V_A=A-E(A)$ and $V_B=B-E(B)$ as measured in n-dimensional euclidean space. So our situation reduces to considering 3 vectors $V_A$, $V_B$ and $V_C$ in n dimensional space. We have 3 pairs of vectors and therefore 3 angles. If two of the angles are small (high correlation) then the third one will also be small. But to say "correlated" is not much of a restriction: it means that the angle is between 0 and $\pi/2$. In general this gives no restriction at all on the third angle. Putting it another way, start with any angle less than $\pi$ between $V_A$ and $V_B$ (any correlation except -1). Let $V_C$ bisect the angle between $V_A$ and $V_B$. Then C will be correlated with both A and B.
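A small numerical illustration of the bisector construction, with arbitrary simulated A and B:
```
set.seed(3)
A <- rnorm(50); B <- rnorm(50)
C <- as.vector(scale(A) + scale(B))   # sum of the centred, unit-scaled vectors bisects their angle
cor(A, B)                             # near zero
cor(A, C); cor(B, C)                  # both clearly positive (and equal)
```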
| null | CC BY-SA 2.5 | null | 2010-12-26T07:26:29.060 | 2010-12-26T07:54:47.530 | 2010-12-26T07:54:47.530 | 2526 | 2526 | null |
5757 | 1 | null | null | 7 | 2643 | In a double-blind study, when are deviations from the control group considered statistically significant? And is this related to the number of samples?
I realise that every experiment is different and statistical significance should depend on the deviations in measurements and the size of the sample group, but I'm hoping there is an intuitive rule-of-thumb formula that can "flag" an event as interesting.
I have a light stats background in engineering, but am by no means a stats guru. A worked example would be much appreciated to help me understand and apply it to every-day things.
---
Update with example: OK, here's a (not so simple) thought experiment of what I mean. Suppose I want to measure the toxicity of additives in a village's water supply by comparing mortality rates over time. Given the village's population, natality and mortality rates over several years and the date upon which an additive was introduced into the water supply (disregard quantity), when would a rise in the mortality rate become interesting?
Intuitively, if the mortality rate remains between 0.95% and 1.25% for 10 years, and suddenly spikes to 2.00%, then surely this would be an interesting event if an additive was added that year (assume short-term toxic effects). Obviously there could be other explanations for the rise, but let's focus on statistical significance. Now, how about if it rises to 1.40%? Is that statistically significant? Where do you draw the line?
I'm starting to get the feeling that I need to "choose" a critical region, which feels less authoritative. Can a Gaussian distribution guide me on this? What other information do I need to determine statistical significance?
| When is a deviation statistically significant? | CC BY-SA 2.5 | null | 2010-12-26T11:41:21.370 | 2010-12-26T16:39:54.303 | 2010-12-26T16:39:54.303 | 1497 | 1497 | [
"algorithms",
"statistical-significance"
]
|
5758 | 2 | null | 4949 | 1 | null | It seems that you can go about this in two ways, depending on what model assumptions you are happy to make.
Generative Approach
Assuming a generative model for the data, you also need to know the prior probabilities of each class for an analytic statement of the classification error. Look up [Discriminant Analysis](http://en.wikipedia.org/wiki/Linear_discriminant_analysis) to get the optimal decision boundary in closed form, then compute the areas on the wrong sides of it for each class to get the error rates.
I assume this is the approach intended by your invocation of the Bayes classifier, which is defined only when everything about the data generating process is specified. Since this is seldom possible it is always also worth considering the
Discrimination Approach
If you don't want to or cannot specify the prior class probabilities, you can take advantage of the fact that the discriminant function can under many circumstances (roughly, exponential family class conditional distributions) be modelled directly by a logistic regression model. The error rate calculation is then the one for the relevant logistic regression model.
For a comparison of approaches and a discussion of error rates, [Jordan 1995](http://www.cs.berkeley.edu/~jordan/papers/uai.ps) and [Jordan 2001](http://www.cs.berkeley.edu/~jordan/papers/ng-jordan-nips01.ps) and references may be of interest.
| null | CC BY-SA 2.5 | null | 2010-12-26T12:51:17.367 | 2010-12-26T12:51:17.367 | null | null | 1739 | null |
5759 | 2 | null | 4396 | 0 | null | I think one option is to split an SPSS file into groups. The menu approach is Data > Split file.
I have worked with the rank option, but I think it is better to split the file in order to run other analyses by group.
| null | CC BY-SA 2.5 | null | 2010-12-26T13:37:32.393 | 2010-12-26T13:37:32.393 | null | null | null | null |
5760 | 2 | null | 5757 | 7 | null | This question gets to the heart of statistical thinking by recognizing that both (a) "every experiment is different," implying no single "cookbook" recipe will suffice to assess experimental results in all cases and (b) "significance should depend on the deviations in measurements," pointing towards the importance of probability theory in modeling the deviations.
Unfortunately (a) indicates that a universal "simple formula" is not possible. However, some things can be said in general. These include
- Deviations in measurements can be attributed partly to predictable phenomena as determined by properties of the subjects or the experiments. For example, weights of people depend (on average) partly on their gender. Deviations that are not predictable or determinate are usually modeled as random variables. How you analyze the deviations into these deterministic and random components is a probability model. Probability models can be as simple as textbook descriptions of dice and coins, but for realistic situations they can be quite complex.
- Statistical "significance" measures the chance, when (hypothetically) the treatment has no effect, that some measure of difference between treatment and control groups would cause us to infer the groups are indeed different. This description is lengthy because so much is involved in its preparation: measuring the results, expressing the differences between the groups in some way (the test statistic), and selecting an inference procedure based on that statistic. Each of these things is under the control of the experimenter or observer and each is important.
A simple textbook example concerns a controlled experiment in which only two outcomes are possible for each subject, such as "dies" and "lives". With good experimental design--double-blinding, randomization of subjects, careful measurement, etc.--we can view the experimental outcomes as behaving like random draws of [tickets from a box](http://www.quantdec.com/envstats/notes/class_06/tutorial.htm), where each ticket is labeled with one of the two outcomes. This is the probability model. The test statistic is usually the difference in proportions between the two groups (e.g., the difference between their mortality rates). Statistical theory, in the form of the [Neyman-Pearson Lemma](http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma), tells us to base our determination of the experimental result on whether this difference exceeds some predetermined threshold. Probability theory allows us to analyze this tickets-in-a-box model to come up with an appropriate threshold (the test's [critical region](http://www.stats.gla.ac.uk/steps/glossary/hypothesis_testing.html#cr)). The theory shows precisely how that threshold depends on the sizes of the control and treatment groups.
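A quick illustration of that textbook comparison in R, with entirely hypothetical counts:
```
# say 12 deaths out of 1000 subjects in the treatment group and 10 out of 1000 controls
prop.test(x = c(12, 10), n = c(1000, 1000))   # two-sample test of equal proportions
```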
To go further, you need to learn some basic probability theory and see applications in some exemplary cases. This will accustom you to the habit of "thinking statistically" about everyday things. Two great resources are [Smith & Gonick's Cartoon Guide to Statistics](http://www.larrygonick.com/html/pub/books/sci7.html) and the classic Freedman et al. textbook [Statistics](http://rads.stackoverflow.com/amzn/click/0393970833), used at Berkeley for several generations. (The link goes to an older edition which you can inspect online and buy used for almost nothing.)
| null | CC BY-SA 2.5 | null | 2010-12-26T15:52:51.497 | 2010-12-26T15:52:51.497 | null | null | 919 | null |
5761 | 2 | null | 5746 | 5 | null | Not an answer, but maybe this helps to clarify the question...
I don't think the secretary problem is without dependence on the underlying distribution. For example, if the observations aren't stationary, the 37% approach is not likely to be optimal.
For a concrete example, a person making monthly purchases in an index fund for their retirement account could use the 37% rule to try to buy in at the low of each month. In practice, this strategy doesn't work well because prices tend to trend upward. If some poster can cite work which deals with optimal stopping rules when the observations display trends, I'd be grateful.
Perhaps a more helpful response to the original question is this paper [Skip the Square Root of n: A New Secretary Problem](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.510.9156&rep=rep1&type=pdf) which deals with the objective of maximizing the expected rank of the chosen element, rather than the probability of choosing the top ranked element as in the classical secretary problem.
| null | CC BY-SA 4.0 | null | 2010-12-26T17:02:26.383 | 2022-08-30T14:09:50.687 | 2022-08-30T14:09:50.687 | 69508 | null | null |
5762 | 1 | 5784 | null | 2 | 494 | My project is about whether early language scores (on a reading task) can predict later scores on a social attribution task. I have to conduct a logistic regression for my project but I am stuck on which one is appropriate. This is because I have conducted a binary logistic and multinomial logistic regression but I always get warnings and I do not know what to do. Any help will be much appreciated.
It's just that my supervisor specifically instructed me to use a logistic regression. Sorry, I forgot to add that there are three groups involved: autistic, language-impaired, and typically developing kids, and I need to look at whether the language scores of the autistic and language-impaired kids are also a significant predictor of scoring low on a social attribution task in later life. Please can you help with this?
| What logistic regression is best to use? | CC BY-SA 2.5 | null | 2010-12-26T17:08:51.420 | 2010-12-28T15:30:54.830 | 2010-12-26T23:14:08.420 | null | null | [
"logistic",
"self-study"
]
|
5763 | 2 | null | 5762 | 6 | null | Logistic regression is to be used when the outcome of interest is a binary variable (e.g., success/failure), whereas multinomial logistic is reserved for the case of a multi-category response variable (e.g., blue/red/green). In both cases, the response variable to be predicted is a categorical variable. The predictors might be categorical or continuous variables.
From what you described, you are interested in predicting scores on a social attribution task based on observed scores on a reading task. If both sets of scores are numerical, then it is simply a linear regression model. In R, it is something like
```
# fake data
x <- rnorm(100)
y <- 1.2*x + rnorm(100)
# model fit
summary(lm(y ~ x))
```
The output reads:
```
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-2.6635 -0.6227 -0.1589 0.6360 2.2494
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.03769 0.10042 -0.375 0.708
x 1.31590 0.10796 12.188 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.003 on 98 degrees of freedom
Multiple R-squared: 0.6025, Adjusted R-squared: 0.5985
F-statistic: 148.6 on 1 and 98 DF, p-value: < 2.2e-16
```
Here an increase of one unit in $x$ is associated with an increase of 1.32 units in $y$. This is basically what the regression coefficient tells you when you assume a model like $y=\beta_0+\beta_1x+\varepsilon$, where the $\varepsilon$ are independent and identically distributed as a centered Gaussian with unknown variance $\sigma^2$.
Now, you may still be interested in a binary outcome. For instance, I can define "Success" as $y>0$. In this case, a logistic regression would look like
```
yb <- ifelse(y>0, 1, 0)
summary(glm(yb ~ x, family=binomial))
```
where the regression coefficient gives you the change in the log odds of "success" ($y>0$, as opposed to failing to reach a score of 0) for a one-unit increase in $x$:
```
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.5047 0.3194 -1.580 0.114
x 3.3959 0.6871 4.943 7.7e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
Edit
Some pointers with multinomial logistic regression have already been given. What I wonder is the outcome you want to model: If you're interested in whether the autistic group has lower scores on the attribution task (as would be expected from the literature), then in a logistic model both scores will be continuous predictors, and reading scores will be considered as a covariate; in other words, you model the odds of being in one of the diagnostic classes as a function of the attribution scores, after adjusting for baseline differences in language proficiency. But it seems it would make sense only if the diagnostic categories are not defined a priori, and it does not directly answer the question you are asking (whether language is a predictor of attribution that may act differentially according to the diagnostic group); otherwise, I would rather model the attribution scores as a function of a grouping variable (diagnostic category) + reading scores, which is basically an [ANCOVA](http://en.wikipedia.org/wiki/Analysis_of_covariance) model.
Both models are available in R: `mlogit()` in the package [mlogit](http://cran.r-project.org/web/packages/mlogit/index.html) for multinomial logistic regression, and `lm()` for ANCOVA.
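A rough sketch of the ANCOVA suggestion, with entirely made-up variable names and data:
```
set.seed(42)
dat <- data.frame(group   = factor(rep(c("ASD", "LI", "TD"), each = 30)),
                  reading = rnorm(90, mean = 50, sd = 10))
grp_shift <- c(ASD = -8, LI = -4, TD = 0)                 # fake group differences
dat$attribution <- 20 + 0.4 * dat$reading +
                   grp_shift[as.character(dat$group)] + rnorm(90, sd = 5)
# ANCOVA: group differences in attribution, adjusting for reading scores
summary(lm(attribution ~ group + reading, data = dat))
# add the interaction to ask whether reading predicts attribution differently by group
summary(lm(attribution ~ group * reading, data = dat))
```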
| null | CC BY-SA 2.5 | null | 2010-12-26T17:29:24.403 | 2010-12-28T08:33:23.127 | 2010-12-28T08:33:23.127 | 930 | 930 | null |
5764 | 2 | null | 5728 | 12 | null | Regarding the first (and second) question: A general approach to reduce misclassification error by iteratively training models and reweighting rows (based on classification error) is [Boosting](http://en.wikipedia.org/wiki/Boosting). I think you might find that technique interesting.
Regarding the second question: The question sounds a little bit naive to me (but maybe I did not understand your true intention), since reducing misclassification error = improving model performance is one of the challenges in Data Mining / Machine Learning. So if there were a general, always-working strategy, we would all have been replaced by machines (earlier than we will be anyway). So I think that yes, the general approach here is educated trial and error. I suggest this question, [Better Classification of default in logistic regression](https://stats.stackexchange.com/questions/4830/better-classification-of-default-in-logistic-regression), which may give you some ideas for both questioning and model improvement.
I suggest playing around a little bit and then coming back to ask more specific questions. General questions regarding model improvement are hard to answer without data and/or additional information about the circumstances. Good luck!
| null | CC BY-SA 2.5 | null | 2010-12-26T20:51:31.283 | 2010-12-26T20:58:12.193 | 2017-04-13T12:44:36.927 | -1 | 264 | null |
5765 | 1 | null | null | 3 | 2293 | Given a list of numbers, is it possible to find out (or in other words, is there a statistical measure that tells) the closeness of the numbers?
(Do note that I am not talking about correlation; that would be for 2 sequences, something like the correlation between height and weight.)
I am looking for something like a closeness coefficient for a given series of numbers,
so given a series [0,10,20,30,40] - the 'closeness coefficient' gives me the spread of the numbers.
It would also be nice if the 'closeness coefficient' depicted the 'density' of the numbers, but if that is a different computable statistical measure, then it shouldn't be a problem.
| Measure of closeness | CC BY-SA 2.5 | null | 2010-12-27T05:22:20.193 | 2017-06-12T14:02:49.980 | 2017-06-12T14:02:49.980 | 11887 | 2535 | [
"descriptive-statistics",
"measurement",
"winsorizing"
]
|
5766 | 2 | null | 5765 | 8 | null | The simplest would be the standard deviation. Other measures of 'scale' include the MAD (median absolute deviation), the IQR (interquartile range), Winsorized standard deviation, etc. You might also be interested in the [Index of Dispersion](http://en.wikipedia.org/wiki/Index_of_dispersion), the [Coefficient of Variation](http://en.wikipedia.org/wiki/Coefficient_of_variation), or other measures of ['dispersion'](http://en.wikipedia.org/wiki/Statistical_dispersion).
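For the series in the question, a few of these in R:
```
x <- c(0, 10, 20, 30, 40)
sd(x)               # standard deviation
mad(x)              # median absolute deviation (scaled)
IQR(x)              # interquartile range
sd(x) / mean(x)     # coefficient of variation
var(x) / mean(x)    # index of dispersion
```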
| null | CC BY-SA 2.5 | null | 2010-12-27T05:34:18.727 | 2010-12-27T05:47:20.770 | 2010-12-27T05:47:20.770 | 795 | 795 | null |
5767 | 2 | null | 5765 | 1 | null | I'd use standard deviation, like shabbychef said. See [http://en.wikipedia.org/wiki/Variance](http://en.wikipedia.org/wiki/Variance) to learn more.
| null | CC BY-SA 2.5 | null | 2010-12-27T06:36:10.327 | 2010-12-27T06:36:10.327 | null | null | 2489 | null |
5768 | 1 | null | null | 3 | 1557 | I have been confused with the problem for a very long time, and hope that somebody here can help me out.
I have an experiment set up in a split-plot design, with 3 temporal groups as blocks, a factorial combination of 2 factors (each with 3 fixed levels) within each block, and 4 replicates of each treatment. I have been building the model like this:
```
model <- aov(Var ~ Block+A*B+Error(Block/A/B), data)
```
where `A` and `B` are the fixed factors of interest. It seems that R treats Block as an error term, for which only the DF, SS and MS are reported, but not the F-ratio and p-value. My question is: does it make sense for me to estimate the effect of block (i.e. the different time periods) in R?
Actually, the 4 replicates were sampled on four consecutive days to develop a temporal sequence for another analysis. I would like to know how I could test the sampling effect: is it another block, or could it be treated as a fixed factor nested within the treatment?
| Can I test the effect of Block in a split plot design in R? | CC BY-SA 4.0 | null | 2010-12-27T09:00:13.790 | 2019-04-05T12:14:07.653 | 2019-04-05T12:14:07.653 | 11887 | 2536 | [
"r",
"experiment-design",
"split-plot"
]
|
5769 | 2 | null | 5734 | 2 | null | With respect to loess the [BioPython](http://biopython.org/wiki/Biopython) project has a [lowess() function.](http://www.koders.com/python/fid5A91A606E15507B6823DEC7A059488A6624C4832.aspx?s=python)
As for ARIMA model fitting, [PyIMSL Studio](http://www.vni.com/campaigns/pyimslstudioeval/) contains a number of very useful time series analysis functions, including ones for [automatically fitting the best ARIMA model](http://www.vni.com/products/imsl/documentation/PyIMSL/stat/wwhelp/wwhimpl/js/html/wwhelp.htm#href=Csch8.9.9.html) and [identifying outliers](http://www.vni.com/products/imsl/documentation/PyIMSL/stat/wwhelp/wwhimpl/js/html/wwhelp.htm#href=Csch8.9.7.html) from it. These same functions are available in the other IMSL libraries if you choose to instead code in C, C#, Java, or Fortran.
| null | CC BY-SA 2.5 | null | 2010-12-27T15:20:52.803 | 2010-12-27T16:36:11.353 | 2010-12-27T16:36:11.353 | 1080 | 1080 | null |
5770 | 2 | null | 5768 | 2 | null | I think you know this already, but you can calculate the F statistic as the ratio of the mean squares; the F statistic follows $F_{df1, df2}$, where $df1$ and $df2$ are the degrees of freedom of the numerator and denominator.
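For example, pulling the relevant mean squares out of the `summary(model)` output by hand (hypothetical numbers):
```
MS_block <- 4.2; df_block <- 2     # from the block stratum
MS_error <- 1.3; df_error <- 6     # from the appropriate error stratum
F_block  <- MS_block / MS_error
pf(F_block, df_block, df_error, lower.tail = FALSE)   # p-value for the block effect
```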
About your estimating the block effect question:
First, I don't quite understand what your block effect is. Normally in a split-plot design time is not the block, but sometimes it does introduce correlation (like a block). Generally, the blocks are the subjects (whether they are blocks as in an RBD or samples as in a CRD depends on the design).
If I understand your question correctly:
your whole-plot treatment is the fixed factor treatment, your split-plot treatment is t, and you have four observations per individual (let's assume a block). You have got a longitudinal mixed-effects model (aka repeated measures). If you have access to SAS, you can easily handle that using the GLIMMIX (or MIXED) procedure. All you need to do is impose a temporal correlation structure (generally AR(1)). If you want to use R, please refer to George Casella's [website](http://www.stat.ufl.edu/~casella/StatDesign/WebRPrograms/), an incredible resource for R programs for experimental design. The relevant example for you will be Hypertension.R. His book's 5th chapter is my favorite piece on split-plot design.
Please let me know if I misunderstood you.
| null | CC BY-SA 2.5 | null | 2010-12-27T16:46:32.043 | 2010-12-27T16:46:32.043 | null | null | 1307 | null |
5771 | 1 | 5780 | null | 5 | 1084 | Suppose I am doing some experimental procedure on two treatment groups. The procedure has several stages, each of which may fail. Failure at any stage halts the experiment. If all stages are passed then there is some useful result.
Although I'm primarily interested in the final result, the treatments might also entail different failure rates along the way. I'd like to quantify this, and since we're looking at simple counts it seems like a chi square or Fisher exact test would be appropriate.
If I want to use such a test as it were recursively, to the groups passing each stage, do I need to apply some correction for multiple comparisons?
That is, supposing the groups progressed like this:
```
Group_A Group_B
Start 100 100
Stage_1 90 95
Stage_2 80 85
Stage_3 60 75
Stage_4 55 30
Results ... ...
```
Does it make sense to do a sequence of 2x2 tests of the form:
```
Group_A Group_B
Passed_N X Y
Failed_N Started_N-X Started_N-Y
```
I feel like I should just know the answer, but I can't figure out whether this counts as doing repeated tests or not. The populations are somewhat distinct each time, but heavily overlapping.
Also, would it make a difference if I had physical reasons to suppose that only stage 4 should be at all affected by the treatments? Could I just choose to ignore any differences in passage through the other stages in that case?
(Feel free also to post answers like "ZOMG, don't use that sort of test here, use XXXX, in manner YYYY, for reasons ZZZZ.")
| Multiple comparisons on nested subsets of data | CC BY-SA 2.5 | null | 2010-12-27T19:50:12.313 | 2010-12-28T10:48:09.420 | null | null | 174 | [
"hypothesis-testing",
"multiple-comparisons",
"chi-squared-test"
]
|
5772 | 2 | null | 5762 | 1 | null | Good explanation. Remember that the program output for the multinomial logistic model will only give estimates for two of the three categories, each relative to the reference category. The category with the largest linear predictor has the highest odds of being the one an observation comes from, and the probability of the reference category is 1 minus the other two probabilities. Off the top of my head, with linear predictors $\eta_1$ and $\eta_2$, the category probabilities are $\exp(\eta_1)/(1+\exp(\eta_1)+\exp(\eta_2))$, $\exp(\eta_2)/(1+\exp(\eta_1)+\exp(\eta_2))$, and $1/(1+\exp(\eta_1)+\exp(\eta_2))$ for the reference category.
| null | CC BY-SA 2.5 | null | 2010-12-28T02:35:22.147 | 2010-12-28T02:35:22.147 | null | null | 2539 | null |
5773 | 2 | null | 5747 | 105 | null | I'm on an annual fishing trip right now. There is a correlation between the time of day I fish and the amount of fish I catch. There is also a correlation between the size of the bait I use and the amount of fish I catch. There is no correlation between the size of the bait and the time of day.
| null | CC BY-SA 2.5 | null | 2010-12-28T02:45:45.653 | 2010-12-28T02:45:45.653 | null | null | 2539 | null |
5774 | 1 | null | null | 215 | 248909 | I have a dataset that has both continuous and categorical data. I am analyzing by using PCA and am wondering if it is fine to include the categorical variables as a part of the analysis. My understanding is that PCA can only be applied to continuous variables. Is that correct? If it cannot be used for categorical data, what alternatives exist for their analysis?
| Can principal component analysis be applied to datasets containing a mix of continuous and categorical variables? | CC BY-SA 3.0 | null | 2010-12-28T03:47:52.190 | 2020-11-19T10:01:35.460 | 2018-04-20T16:31:14.647 | 11887 | 2540 | [
"categorical-data",
"pca",
"correspondence-analysis",
"mixed-type-data"
]
|
5775 | 2 | null | 5750 | 11 | null | From your follow up response to Peter Flom's question, it sounds like you might be better served by techniques that look at higher level structure in your correlation matrix.
Techniques like factor analysis, PCA, multidimensional scaling, and cluster analysis of variables can be used to group your variables into sets of relatively more related variables.
Also, you may want to think theoretically about what kind of structure should be present. When your number of variables is large and the number of observations is small, you are often better off relying more on prior expectations.
| null | CC BY-SA 2.5 | null | 2010-12-28T05:36:53.667 | 2010-12-28T05:36:53.667 | null | null | 183 | null |
5776 | 1 | null | null | 2 | 660 | There are 8 people playing poker.
So, the odds of winning the entire round = 1/8
2 rounds are played, and Bill wins both rounds.
What are the odds this was random? (Hypothesis test?)
NullH = Bill has no added skill. (Got lucky)
AltH = Bill has skill.
```
p = .13 = 1/8
q = .87 = 7/8
n = 2
SD = sqrt(pq/n) = .23
actual (p-hat) = 1
z = 3.74
p-value = 0%
```
Conclusion: Odds of winning 2 out of 2 rounds randomly is unlikely.
Reject null hypothesis. Bill has skill.
Is this right?
Thanks!!
| Odds of winning 2 trials in a row | CC BY-SA 4.0 | null | 2010-12-28T06:35:28.500 | 2020-09-05T14:26:16.770 | 2020-09-05T14:26:16.770 | 56211 | 1279 | [
"hypothesis-testing"
]
|
5777 | 2 | null | 5774 | 119 | null | Although a PCA applied on binary data would yield results comparable to those obtained from a [Multiple Correspondence Analysis](http://en.wikipedia.org/wiki/Multiple_correspondence_analysis) (factor scores and eigenvalues are linearly related), there are more appropriate techniques to deal with mixed data types, namely Multiple Factor Analysis for mixed data available in the [FactoMineR](http://cran.r-project.org/web/packages/FactoMineR/index.html) R package (`FAMD()`). If your variables can be considered as structured subsets of descriptive attributes, then [Multiple Factor Analysis](http://factominer.free.fr/factomethods/multiple-factor-analysis.html) (`MFA()`) is also an option.
The challenge with categorical variables is to find a suitable way to represent distances between variable categories and individuals in the factorial space. To overcome this problem, you can look for a non-linear transformation of each variable--whether it be nominal, ordinal, polynomial, or numerical--with optimal scaling. This is well explained in [Gifi Methods for Optimal Scaling in R: The Package homals](http://www.jstatsoft.org/v31/i04/paper), and an implementation is available in the corresponding R package [homals](http://cran.r-project.org/web/packages/homals/index.html).
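A minimal sketch with FactoMineR on made-up mixed data (only the basic arguments are shown; see the package documentation for the full interface):
```
library(FactoMineR)
set.seed(4)
dat <- data.frame(height = rnorm(50, 170, 10),
                  weight = rnorm(50, 70, 8),
                  smoker = factor(sample(c("yes", "no"), 50, replace = TRUE)),
                  group  = factor(sample(LETTERS[1:3], 50, replace = TRUE)))
res <- FAMD(dat, graph = FALSE)   # factor analysis of mixed data
res$eig                           # variance accounted for by each dimension
```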
| null | CC BY-SA 4.0 | null | 2010-12-28T07:09:51.967 | 2020-11-19T09:42:47.060 | 2020-11-19T09:42:47.060 | 930 | 930 | null |
5778 | 2 | null | 5774 | 35 | null | A Google search "pca for discrete variables" gives this [nice overview](http://staskolenikov.net/talks/Gustavo-Stas-PCA-generic.pdf) by S. Kolenikov (@StasK) and G. Angeles. To add to chl's answer, PC analysis is really an analysis of the eigenvectors of the covariance matrix. So the problem is how to calculate the "correct" covariance matrix. One of the approaches is to use
[polychoric correlation](http://en.wikipedia.org/wiki/Polychoric_correlation).
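For instance, a rough sketch of that approach with the polycor package (made-up data), feeding the heterogeneous correlation matrix into an eigendecomposition:
```
library(polycor)
set.seed(5)
d <- data.frame(x = rnorm(200),
                y = cut(rnorm(200), 3, labels = c("low", "mid", "high"), ordered_result = TRUE),
                z = factor(rbinom(200, 1, 0.4), ordered = TRUE))
R <- hetcor(d)$correlations   # Pearson / polyserial / polychoric, as appropriate
eigen(R)$values               # "principal components" of this correlation matrix
```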
| null | CC BY-SA 3.0 | null | 2010-12-28T07:41:44.453 | 2017-06-26T13:26:28.997 | 2017-06-26T13:26:28.997 | 28845 | 2116 | null |
5779 | 2 | null | 5776 | 1 | null | What you are observing is a random variable with a [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution), where the number of trials is 2 and the success probability is $1/8$. You can then calculate the p-value directly, without resorting to a normal approximation. Your statistic is the number of successes, which under the null hypothesis is distributed as $Bin(n,p)$, with $n=2$ and $p=1/8$. The p-value is then
$$P(\hat{X}\ge 2)=P(\hat{X}=2)={2\choose 2}\left(\frac{1}{8}\right)^2\left(1-\frac{1}{8}\right)^0=\left(\frac{1}{8}\right)^2=0.015625$$
So yes the null hypothesis should be rejected.
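For reference, the same number drops out of a one-line check in R:
```
binom.test(2, 2, p = 1/8, alternative = "greater")$p.value   # 0.015625
# equivalently: pbinom(1, size = 2, prob = 1/8, lower.tail = FALSE)
```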
| null | CC BY-SA 2.5 | null | 2010-12-28T07:59:47.490 | 2010-12-28T09:10:10.303 | 2010-12-28T09:10:10.303 | 2116 | 2116 | null |
5780 | 2 | null | 5771 | 2 | null | Assuming I understood your question correctly, I think what you are doing is fine. However, it does sound like you should correct your P values for multiple comparisons (for example using the Holm method with the p.adjust function in R). The reason for the adjustment is that you are searching for "interesting results" over the entire sequence of stages.
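For instance, with the p-values from the four stage-wise 2x2 tests (made-up numbers here):
```
p_stages <- c(0.21, 0.35, 0.04, 0.0008)   # one p-value per stage comparison
p.adjust(p_stages, method = "holm")
```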
Also, instead of using Fisher's test you might want to use [Barnard's exact test](http://www.r-statistics.com/2010/02/barnards-exact-test-a-powerful-alternative-for-fishers-exact-test-implemented-in-r/) (which is more powerful than Fisher's).
Another way I can think of doing this is to record, for each subject, the maximum stage reached, and then do a wilcox.test on the ranks of the stages subjects reached (between groups A and B).
Hope it helps...
| null | CC BY-SA 2.5 | null | 2010-12-28T10:48:09.420 | 2010-12-28T10:48:09.420 | null | null | 253 | null |
5781 | 2 | null | 5733 | 3 | null | I'm with @mpiktas in that I also think you need a model.
I think the standard method here would be to estimate a copula to capture the dependence structure between the different assets and use e.g. skew-normal- or t-distributed marginal distributions for the different assets. That gives you a very general model class (more general than assuming e.g. a multivariate t-distribution) that is pretty much the standard for your kind of task (e.g. I think Basel II requires financial institutions to use copula methods to estimate their VaR). There's a `copula` package for R.
| null | CC BY-SA 2.5 | null | 2010-12-28T10:48:46.020 | 2010-12-28T10:48:46.020 | null | null | 1979 | null |
5782 | 1 | 5790 | null | 40 | 52835 | Lets say we have random variable $X$ with known variance and mean. The question is: what is the variance of $f(X)$ for some given function f. The only general method that I'm aware of is the delta method, but it gives only aproximation. Now I'm interested in $f(x)=\sqrt{x}$, but it'd be also nice to know some general methods.
Edit 29.12.2010
I've done some calculations using Taylor series, but I'm not sure whether they are correct, so I'd be glad if someone could confirm them.
First we need to approximate $E[f(X)]$
$E[f(X)] \approx E[f(\mu)+f'(\mu)(X-\mu)+\frac{1}{2}\cdot f''(\mu)(X-\mu)^2]=f(\mu)+\frac{1}{2}\cdot f''(\mu)\cdot Var[X]$
Now we can approximate $D^2 [f(X)]$
$E[(f(X)-E[f(X)])^2] \approx E[(f(\mu)+f'(\mu)(X-\mu)+\frac{1}{2}\cdot f''(\mu)(X-\mu)^2 -E[f(X)])^2]$
Using the approximation of $E[f(X)]$ we know that $f(\mu)-Ef(x) \approx -\frac{1}{2}\cdot f''(\mu)\cdot Var[X]$
Using this we get:
$D^2[f(X)] \approx \frac{1}{4}\cdot f''(\mu)^2\cdot Var[X]^2-\frac{1}{2}\cdot f''(\mu)^2\cdot Var[X]^2 + f'(\mu)^2\cdot Var[X]+\frac{1}{4}f''(\mu)^2\cdot E[(X-\mu)^4] + f'(\mu)f''(\mu)E[(X-\mu)^3]$
$D^2 [f(X)] \approx \frac{1}{4}\cdot f''(\mu)^2 \cdot [D^4 X-(D^2 X)^2]+f'(\mu)^2\cdot D^2 X + f'(\mu)f''(\mu)D^3 X$
| Variance of a function of one random variable | CC BY-SA 3.0 | null | 2010-12-28T14:13:16.227 | 2020-03-01T13:35:12.273 | 2013-05-11T22:13:54.820 | -1 | 1643 | [
"variance",
"random-variable",
"delta-method"
]
|
5783 | 2 | null | 5782 | 9 | null | To know the first two moments of X (mean and variance) is not enough, if the function f(x) is arbitrary (non linear). Not only for computing the variance of the transformed variable Y, but also for its mean.
To see this -and perhaps to attack your problem- you can assume that your transformation function has a Taylor expansion around the mean of X and work from there.
| null | CC BY-SA 2.5 | null | 2010-12-28T14:59:28.877 | 2010-12-28T14:59:28.877 | null | null | 2546 | null |
5784 | 2 | null | 5762 | 2 | null | It sounds like either your professor gave you an incorrect recommendation, or you misunderstood, or you have not communicated what you want in a clear way. Something is wrong, in any case.
If you are trying to predict a numeric score (like IQ, SAT, etc) then you do not want logistic regression of any type. You probably want linear regression.
But perhaps that is not what you want to do. Perhaps you want to use the scores to predict the group that the person is in; since there are three groups, you do not want "ordinary" or binomial logistic regression, but you might want either ordinal or multinomial logistic, depending on whether the outcome is ordered or not.
One additional possibility is that your professor expects you to turn the numerical value of score into a categorical one; this is nearly always a bad idea, but it is often done, nonetheless. In this case, you would probably want ordinal logistic regression
If you are using SAS, you may find a presentation of mine informative; it is at www.nesug.org/Proceedings/nesug10/sa/sa03.pdf
| null | CC BY-SA 2.5 | null | 2010-12-28T15:30:54.830 | 2010-12-28T15:30:54.830 | null | null | 686 | null |
5785 | 2 | null | 5776 | 6 | null | No, this is not right, because no observational context has been provided. All the following scenarios are consistent with the information given:
- Two games will be played. Beforehand, you hypothesize that Bill will win both. Assuming the results are independent and Bill has 1/8 chance of winning, the chance of this occurring is 1/64 = about 1.5%.
- Again two games will be played. Beforehand, you hypothesize that somebody will win both games. Under the same assumptions the chance of this occurring is 1/64 for each player. Because all eight possibilities (for the eight players) are mutually exclusive, the chances add, giving a probability of 1/8 = 12.5% for this outcome.
- This time an indefinite number of games will be played, with the series stopping when Bill wins two in a row. The chance that the series does end is 100% and we observe that Bill won the last two games.
- Again an indefinite number of games will be played, with the series stopping when anybody wins two in a row. The chance the series ends is 100% unless there is perfect anti-correlation among the winners. Assuming all players have equal chances of winning, the chance that Bill is the one to end it is 1/8 = 12.5%.
Because neither a clear null hypothesis nor an alternative have been specified, this is not a well-defined hypothesis testing situation. Therefore the idea of "p-value" is meaningless (and is unnecessary anyway). In scenario #1 the low probability provides some evidence--not much--that Bill is not just winning independently at random. Some alternative explanations include (a) some other players have essentially no chances of winning and Bill just got lucky; (b) the cards are not shuffled well between games, causing the winner of one to be the likely winner of the next, and Bill just got lucky in the first game; (c) in some games certain players have higher chances of winning not through skill but due to their position in the deal, and Bill happened to be in a good position in one or both games.
We cannot generally conclude that the winner of two successive games has "skill". For instance, he might have a partner who is setting him up to win. Or he might be the mark in a group of card sharks who are letting him win to get his confidence, etc.
| null | CC BY-SA 2.5 | null | 2010-12-28T16:24:26.547 | 2010-12-28T16:24:26.547 | null | null | 919 | null |
5786 | 1 | null | null | 11 | 1752 | Recently I have used Platt's scaling of SVM-outputs to estimate probabilities of default-events. More direct alternatives seem to be "Kernel logistic Regression" (KLR) and the related "Import Vector Machine".
Can anyone tell which kernel method giving probability-outputs is currently state of the art? Does an R-implementation of KLR exist?
Thank you very much for your help!
| Which kernel method gives the best probability outputs? | CC BY-SA 2.5 | null | 2010-12-28T16:32:06.750 | 2015-04-14T19:15:54.360 | 2015-04-14T19:15:54.360 | 9964 | 2549 | [
"logistic",
"svm",
"kernel-trick"
]
|
5787 | 2 | null | 5734 | 0 | null | As far as I know there aren't many options. Libraries for time series analysis free or otherwise are hard to come by.
If you are doing this for work you can have a look at [NAG](http://www.nag.com). It's commercial and not Java native. But you can use JNI with their C library.
They have a [time series chapter](http://www.nag.com/numeric/CL/nagdoc_cl09/xhtml/G13/g13conts.xml#g13) which does quite a bit.
On the free side, [gretl](http://gretl.sourceforge.net) looks good, but I have no direct experience with it. If anyone does, I'd like to hear their opinions.
Have a look at [this question](https://stackoverflow.com/questions/950004/java-library-to-do-time-series-analysis) on stackoverflow as well.
| null | CC BY-SA 2.5 | null | 2010-12-28T16:45:03.330 | 2010-12-28T16:45:03.330 | 2017-05-23T12:39:26.143 | -1 | 300 | null |