Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8013
|
1
| null | null |
9
|
1911
|
I have data that is equivalent to:
```
shopper_1 = ['beer', 'eggs', 'water',...]
shopper_2 = ['diapers', 'beer',...]
...
```
I would like to do some analysis on this data set to get a correlation matrix that would have an implication similar to: if you bought x, you are likely to buy y.
Using python (or perhaps anything but MATLAB), how can I go about that? Some basic guidelines, or pointers to where I should look would help.
Thank you,
Edit - What I have learned:
- These kinds of problems are known as association rule discovery. Wikipedia has a good article covering some of the common algorithms. The classic algorithm seems to be Apriori, due to Agrawal et al.
- That led me to Orange, a data mining package with a Python interface. For Linux, the best way to install it seems to be from source using the supplied setup.py.
- Orange by default reads input from files, formatted in one of several supported ways.
- Finally, simple Apriori association rule learning is straightforward in Orange (a sketch of the underlying counting idea follows below).
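Independent of Orange, here is a minimal pure-Python sketch of the counting behind such rules; the baskets and the printed rules are made-up illustrations, not my actual data:
```
from itertools import combinations
from collections import Counter

# Toy baskets standing in for the shopper_1, shopper_2, ... lists above
baskets = [
    ['beer', 'eggs', 'water'],
    ['diapers', 'beer'],
    ['beer', 'diapers', 'water'],
    ['eggs', 'water'],
]

item_counts = Counter(item for b in baskets for item in set(b))
pair_counts = Counter(frozenset(p) for b in baskets for p in combinations(set(b), 2))

# Confidence of the rule "if you bought x, you are likely to buy y":
# count(x and y together) / count(x)
for pair, count in pair_counts.items():
    x, y = tuple(pair)
    print(f"{x} -> {y}: confidence {count / item_counts[x]:.2f}")
    print(f"{y} -> {x}: confidence {count / item_counts[y]:.2f}")
```
Apriori does essentially this counting, but prunes itemsets whose support is already too low before extending them.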
|
How to do a 'beer and diapers' correlation analysis
|
CC BY-SA 2.5
| null |
2011-03-08T12:51:04.077
|
2011-04-11T16:13:02.430
|
2011-03-09T05:54:19.903
|
3618
|
3618
|
[
"correlation",
"econometrics",
"python",
"cross-correlation"
] |
8014
|
2
| null |
8000
|
51
| null |
What you describe is in fact a "sliding time window" approach and is different from recurrent networks. You can use this technique with any regression algorithm. There is a huge limitation to this approach: events in the inputs can only be correlated with other inputs/outputs which lie at most t timesteps apart, where t is the size of the window.
E.g. you can think of a Markov chain of order t. RNNs don't suffer from this in theory; in practice, however, learning is difficult.
It is best to illustrate an RNN in contrast to a feedforward network. Consider the (very) simple feedforward network $y = Wx$ where $y$ is the output, $W$ is the weight matrix, and $x$ is the input.
In a recurrent network we instead have a sequence of inputs, so we will denote the inputs by $x^{i}$ for the $i$th input. The corresponding $i$th output is then calculated via $y^{i} = Wx^i + W_ry^{i-1}$.
Thus, we have another weight matrix $W_r$ which incorporates the output at the previous step linearly into the current output.
This is of course a simple architecture. Most common is an architecture where you have a hidden layer which is recurrently connected to itself. Let $h^i$ denote the hidden layer at timestep i. The formulas are then:
$$h^0 = 0$$
$$h^i = \sigma(W_1x^i + W_rh^{i-1})$$
$$y^i = W_2h^i$$
Where $\sigma$ is a suitable non-linearity/transfer function like the sigmoid. $W_1$ and $W_2$ are the connecting weights between the input and the hidden and the hidden and the output layer. $W_r$ represents the recurrent weights.
Here is a diagram of the structure:
[diagram of the recurrent network structure]
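A minimal NumPy sketch of that forward pass (randomly initialised weights, made-up sizes, no training loop):
```
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 3, 5, 2, 10      # made-up sizes and sequence length

W1 = rng.normal(size=(n_hidden, n_in))       # input -> hidden
Wr = rng.normal(size=(n_hidden, n_hidden))   # hidden -> hidden (recurrent)
W2 = rng.normal(size=(n_out, n_hidden))      # hidden -> output

xs = rng.normal(size=(T, n_in))              # a toy input sequence x^1 ... x^T
h = np.zeros(n_hidden)                       # h^0 = 0

outputs = []
for x in xs:
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + Wr @ h)))  # h^i = sigma(W1 x^i + Wr h^{i-1})
    outputs.append(W2 @ h)                         # y^i = W2 h^i

print(np.array(outputs).shape)  # (T, n_out)
```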
| null |
CC BY-SA 2.5
| null |
2011-03-08T12:51:29.233
|
2011-03-08T15:43:55.373
|
2011-03-08T15:43:55.373
|
2860
|
2860
| null |
8015
|
1
| null | null |
12
|
3115
|
I was wondering if anyone could help me with information about kurtosis (i.e. is there any way to transform your data to reduce it?)
I have a questionnaire dataset with a large number of cases and variables. For a few of my variables, the data shows pretty high kurtosis values (i.e. a leptokurtic distribution) which is derived from the fact that many of the participants gave the exact same score for the variable. I do have a particularly large sample size, so according to the central limit theorem, violations of normality should still be fine.
The problem, however, is the fact that the particularly high levels of kurtosis are producing a number of univariate outliers in my dataset. As such, even if I transform the data, or remove/adjust the outliers, the high levels of kurtosis mean that the next most extreme scores automatically become outliers. I aim to use discriminant function analysis (DFA). DFA is said to be robust to departures from normality provided that the violation is caused by skewness and not outliers. Furthermore, DFA is also said to be particularly influenced by outliers in the data (Tabachnick & Fidell).
Any ideas of how to get around this? (My initial thought was some way of controlling the kurtosis, but isn't it kind of a good thing if most of my sample are giving similar ratings?)
|
Treatment of outliers produced by kurtosis
|
CC BY-SA 4.0
| null |
2011-03-08T13:16:00.873
|
2020-09-27T09:03:05.620
|
2020-09-27T09:03:05.620
|
22047
|
3619
|
[
"distributions",
"assumptions",
"discriminant-analysis",
"kurtosis"
] |
8016
|
2
| null |
8010
|
4
| null |
The initial problem is:
$$\min_{f\in H}\frac{1}{n}\sum_{i=1}^{n}\phi\left(y_{i}f\left(x_{i}\right)\right)+\lambda\left\Vert f\right\Vert _{H}^{2}$$
$$\lambda\geq0$$
$$\phi\left(u\right)=\max\left(1-u,\,0\right)^{2}$$
---
Since $f=\sum_{i=1}^{n}\alpha_i K_{x_i}$ and we are working in an RKHS, the primal problem is:
$$\min_{\alpha\in\mathbb{R}^{n},\zeta\in\mathbb{R}^{n}}\frac{1}{n}\sum_{i=1}^{n}\zeta_{i}^{2}+\lambda\alpha^{T}K\alpha$$
$$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\zeta_{i}\geq0$$
$$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\zeta_{i}-1+y_{i}\left(K\alpha\right)_{i}\geq0$$
---
Using Lagrangian multipliers, we get the result mentioned in the question. The computations are right: $\nu=0$ gives the result we want to achieve since there is a special link between $\alpha$ and $\mu$, thanks to the Lagrangian formulation:
$$\forall i\in\left\{ 1,\ldots,n\right\} ,\,\alpha_{i}^{*}=\frac{\mu_{i}y_{i}}{2\lambda}$$
where:
$$\forall i\in\left\{ 1,\ldots,n\right\} ,\, y_i\in\left\{ -1,1\right\}$$
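For completeness, a sketch of that Lagrangian step in the same notation, with multipliers $\nu_{i}\geq0$ and $\mu_{i}\geq0$ attached to the two constraint families and assuming $K$ is invertible:
$$L\left(\alpha,\zeta,\mu,\nu\right)=\frac{1}{n}\sum_{i=1}^{n}\zeta_{i}^{2}+\lambda\alpha^{T}K\alpha-\sum_{i=1}^{n}\nu_{i}\zeta_{i}-\sum_{i=1}^{n}\mu_{i}\left(\zeta_{i}-1+y_{i}\left(K\alpha\right)_{i}\right)$$
$$\nabla_{\alpha}L=2\lambda K\alpha-K\,\mathrm{diag}\left(y\right)\mu=0\;\Longrightarrow\;\alpha_{i}^{*}=\frac{\mu_{i}y_{i}}{2\lambda}$$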
---
Why is $\nu=0$?
Of course, $\nu=0$ since:
- there is only one term depending on $\nu$
- we want the max
- $\mu$ and $\nu$ are positive.
| null |
CC BY-SA 2.5
| null |
2011-03-08T13:19:37.123
|
2011-03-09T18:19:41.447
|
2011-03-09T18:19:41.447
|
1351
|
1351
| null |
8017
|
2
| null |
7989
|
5
| null |
One thing that you must recognise is that the term "noise" is relative, not absolute. Thinking about an obvious example (nothing to do with statistics per se), imagine you are at a night club. What is "noise" here?
If you are trying to have a conversation with someone, then the music is "noise", and so are the other conversations going on inside the night club. The "signal" is the dialogue of the conversation.
If you are dancing, then the music is no longer "noise", but it becomes the "signal" to which you react to. The conversation has changed from "signal" into "noise" merely by a change in your state of mind!
Statistics works in exactly the same way (you could in theory develop a statistical model which describes both these "noise" processes).
In a regression setting, take the simple linear case with 1 covariate, X, and 1 dependent variable Y. What you are effectively saying here is that you want to extract the linear component of X that is related to Y. The general condition for "additive noise", so that you have:
$$Y_{i}=a+b X_{i} + n_{i},$$
is "small noise"; or, more precisely in a mathematical sense, that the conditions for Taylor series linearisation are good enough for your purposes. To show this in the multiplicative case, suppose the actual distribution is:
$$Y_{i}=(a+b X_{i})n_{i}^{(T)}$$
Which we can consider as a function of $n_{i}^{(T)}$; a Taylor series expansion about the value 1 gives:
$$(a+b X_{i})n_{i}^{(T)}=a+b X_{i}+(a+b X_{i})(n_{i}^{(T)}-1)$$
If the noise is "small" then it should not differ much from 1, and so the second term will be much smaller than the first whenever the noise is "small" compared to the "signal", which is given by the regression line. So we can make the approximation
$$(a+b X_{i})n_{i}^{(T)} \approx a+b X_{i}+n_{i}^{(A)}$$
Where the approximate noise ignores the dependence on the actual regression line. This dependence will only matter when the noise is large compared to the slope. If the slope does not vary appreciably over the range of the model, then the "fanning" of the true noise will be indistinguishable from independent noise. This also applies in the general case, for any function g satisfying $g(1)=1$:
$$(a+bX_{i})g(n_{i}^{(T)}) \approx a+b X_{i}+(n_{i}^{(T)}-1)(a+bX_{i})g^{(1)}(1) $$
$$\approx a+b X_{i}+n_{i}^{(A)}$$
Where
$$g^{(1)}(x)=\frac{\partial g(x)}{\partial x}$$
But note that this approximation will only apply in the case of "small noise", i.e. when $g(n_{i}^{(T)}) \approx 1$. This "smallness" makes all the details of the particular function g irrelevant for all practical purposes. Going through the laborious calculations using g directly will only matter in the decimal places (estimate is 1.0189, using the true g it is 1.0233). The more the function g departs from 1, the further up the decimal values will be affected. This is why "small noise" is required.
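A small simulation sketch of this point; the intercept, slope and noise scale below are made up for illustration:
```
import numpy as np

rng = np.random.default_rng(1)
a, b, sd = 2.0, 0.5, 0.02          # made-up intercept, slope, "small" noise scale
x = np.linspace(1, 10, 200)

y_mult = (a + b * x) * rng.normal(1.0, sd, size=x.size)   # multiplicative noise around 1
y_add  = (a + b * x) + rng.normal(0.0, sd, size=x.size)   # additive "approximate" noise

# Ordinary least-squares fits for both versions (returns [slope, intercept])
print("multiplicative-noise fit:", np.polyfit(x, y_mult, 1))
print("additive-noise fit:      ", np.polyfit(x, y_add, 1))
# With sd this small the two fits agree closely; increase sd to see the approximation degrade.
```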
| null |
CC BY-SA 2.5
| null |
2011-03-08T13:35:37.283
|
2011-03-08T13:35:37.283
| null | null |
2392
| null |
8018
|
2
| null |
8015
|
10
| null |
The obvious "common sense" way to resolve your problem is to
- Get the conclusion using the full data set. i.e. what results will you declare ignoring intermediate calculations?
- Get the conclusion using the data set with said "outliers" removed. i.e. what results will you declare ignoring intermediate calculations?
- Compare step 2 with step 1
- If there is no difference, forget you even had a problem. Outliers are irrelevant to your conclusion. The outliers may influence some other conclusion that may have been drawn using these data, but this is irrelevant to your work. It is somebody else's problem.
- If there is a difference, then you have basically a question of "trust". Are these "outliers" real in the sense that they genuinely represent something about your analysis? Or are the "outliers" bad in that they come from some "contaminated source"?
In situation 5 you basically have a case where whatever "model" you have used to describe the "population" is incomplete: there are details which have been left unspecified, but which matter to the conclusions. There are two ways to resolve this, corresponding to the two "trust" scenarios:
- Add some additional structure to your model so that it describes the "outliers". So instead of $P(D|\theta)$, consider $P(D|\theta)=\int P(\lambda|\theta)P(D|\theta,\lambda) d\lambda$.
- Create a "model-model", one for the "good" observations, and one for the "bad" observations. So instead of $P(D|\theta)$ you would use $P(D|\theta)=G(D|\theta)u+B(D|\theta)(1-u)$, where u is the probability of obtaining a "good" observation in your sample, and G and B represent the models for the "good" and "bad" data.
Most of the "standard" procedures can be shown to be approximations to these kind of models. The most obvious one is by considering case 1, where the variance has been assumed constant across observations. By relaxing this assumption into a distribution you get a mixture distribution. This is the connection between "normal" and "t" distributions. The normal has fixed variance, whereas the "t" mixes over different variances, the amount of "mixing" depends on the degrees of freedom. High DF means low mixing (outliers are unlikely), low DF means high mixing (outliers are likely). In fact you could take case 2 as a special case of case 1, where the "good" observations are normal, and the "bad" observations are Cauchy (t with 1 DF).
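A tiny sketch of steps 1-3 with made-up numbers (the last value plays the "outlier"):
```
import numpy as np
from scipy import stats

data = np.array([4.1, 3.9, 4.3, 4.0, 4.2, 9.5])   # made-up sample; 9.5 is the "outlier"
trimmed = data[:-1]

for label, d in [("full data", data), ("outlier removed", trimmed)]:
    m = d.mean()
    ci = stats.t.interval(0.95, df=len(d) - 1, loc=m, scale=stats.sem(d))
    print(f"{label}: mean = {m:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# If the substantive conclusion is the same in both rows, the outlier does not matter here.
```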
| null |
CC BY-SA 2.5
| null |
2011-03-08T14:11:17.500
|
2011-03-08T14:11:17.500
| null | null |
2392
| null |
8019
|
1
|
8023
| null |
26
|
34009
|
Let's say I test how variable `Y` depends on variable `X` under different experimental conditions and obtain the following graph:
[scatter plot of Y against X for each experimental condition, with per-series regression lines]
The dashed lines in the graph above represent the linear regression for each data series (experimental setup) and the numbers in the legend denote the Pearson correlation of each data series.
I would like to calculate the "average correlation" (or "mean correlation") between `X` and `Y`. May I simply average the `r` values? What about the "average coefficient of determination", $R^2$? Should I calculate the average `r` and then take the square of that value, or should I compute the average of the individual $R^2$'s?
|
Averaging correlation values
|
CC BY-SA 2.5
| null |
2011-03-08T15:06:44.510
|
2019-01-11T05:47:12.990
|
2011-03-08T15:57:47.940
|
919
|
1496
|
[
"regression",
"correlation",
"mean"
] |
8021
|
1
| null | null |
12
|
13977
|
I have posted a [previous question](https://stats.stackexchange.com/questions/7977/how-to-generate-uniformly-distributed-points-on-the-surface-of-the-3-d-unit-spher), this is related but I think it is better to start another thread. This time, I am wondering how to generate uniformly distributed points inside the 3-d unit sphere and how to check the distribution visually and statistically too? I don't see the strategies posted there directly transferable to this situation.
|
How to generate uniformly distributed points in the 3-d unit ball?
|
CC BY-SA 3.0
| null |
2011-03-08T15:34:34.147
|
2018-06-26T19:35:56.567
|
2017-04-13T12:44:23.203
|
-1
|
3552
|
[
"random-generation"
] |
8022
|
2
| null |
8021
|
16
| null |
The easiest way is to sample points uniformly in the corresponding hypercube and discard those that do not lie within the sphere. In 3D this rejection is not too wasteful: the volume of the unit cube is 1 and the volume of the inscribed ball (radius $r = 1/2$) is $\frac{4}{3}\pi r^3 \approx 0.524$, so roughly 52% of the candidate points are accepted.
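A short NumPy sketch of this rejection scheme (using the cube $[-1,1]^3$ and the ball of radius 1, which gives the same $\pi/6 \approx 0.52$ acceptance rate):
```
import numpy as np

def sample_ball(n, rng=None):
    """Uniform points in the 3-d unit ball via rejection from the cube [-1, 1]^3."""
    rng = rng or np.random.default_rng()
    points = np.empty((0, 3))
    while len(points) < n:
        cand = rng.uniform(-1.0, 1.0, size=(2 * n, 3))      # oversample candidates
        accepted = cand[(cand ** 2).sum(axis=1) <= 1.0]      # keep those inside the ball
        points = np.vstack([points, accepted])
    return points[:n]

pts = sample_ball(10_000)
# Quick check: for a uniform ball, r^3 is uniform on [0, 1], so its mean should be near 0.5.
print((np.linalg.norm(pts, axis=1) ** 3).mean())
```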
| null |
CC BY-SA 2.5
| null |
2011-03-08T15:52:11.650
|
2011-03-08T15:52:11.650
| null | null |
2860
| null |
8023
|
2
| null |
8019
|
17
| null |
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. This conducts all five regressions at once. Its $R^2$ is what you want.
To see why averaging individual $R$ values may be wrong, suppose the direction of the slope is reversed in some of the experimental conditions. You would average a bunch of 1's and -1's out to around 0, which wouldn't reflect the quality of any of the fits. To see why averaging $R^2$ (or any fixed transformation thereof) is not right, suppose that in most experimental conditions you had only two observations, so that their $R^2$ all equal $1$, but in one experiment you had a hundred observations with $R^2=0$. The average $R^2$ of almost 1 would not correctly reflect the situation.
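A sketch of that pooled fit using Python's statsmodels formula interface (equivalent to the $y \sim z + x\#z$ specification above; the data frame and column names are made up):
```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: five "experimental conditions" z, each with its own slope
rng = np.random.default_rng(0)
frames = []
for z, slope in enumerate([1.0, 0.8, 1.2, -0.5, 0.3]):
    x = rng.uniform(0, 10, 30)
    frames.append(pd.DataFrame({"x": x, "y": slope * x + rng.normal(0, 1, 30), "z": z}))
df = pd.concat(frames, ignore_index=True)

# Separate intercept and slope for each condition, fitted in one regression
fit = smf.ols("y ~ C(z) * x", data=df).fit()
print(fit.rsquared)   # the single R^2 for all five regressions at once
```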
| null |
CC BY-SA 3.0
| null |
2011-03-08T15:57:22.620
|
2018-03-13T17:55:04.707
|
2018-03-13T17:55:04.707
|
919
|
919
| null |
8025
|
1
|
8026
| null |
45
|
63590
|
Precision is defined as:
>
p = true positives / (true positives + false positives)
What is the value of precision if (true positives + false positives) = 0? Is it just undefined?
Same question for recall:
>
r = true positives / (true positives + false negatives)
In this case, what is the value of recall if (true positives + false negatives) = 0?
P.S. This question is very similar to the question [What are correct values for precision and recall in edge cases?](https://stats.stackexchange.com/questions/1773/what-are-correct-values-for-precision-and-recall-in-edge-cases).
|
What are correct values for precision and recall when the denominators equal 0?
|
CC BY-SA 2.5
| null |
2011-03-08T16:31:51.660
|
2017-10-02T11:17:57.327
|
2017-04-13T12:44:40.807
|
-1
|
3604
|
[
"precision-recall"
] |
8026
|
2
| null |
8025
|
21
| null |
The answers to the linked earlier question apply here too.
If (true positives + false negatives) = 0, then there are no positive cases in the input data, so any analysis of this case carries no information about how positive cases are handled. You want N/A or something similar as the ratio result, avoiding a division-by-zero error.
If (true positives + false positives) = 0, then all cases have been predicted to be negative: this is one end of the ROC curve. Again, you want to recognise and report this possibility while avoiding a division-by-zero error.
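One way to code this convention is a pair of small helpers that return `None` instead of raising a division-by-zero error (a sketch, not a standard library function):
```
def precision(tp, fp):
    """Return tp / (tp + fp), or None when there were no positive predictions."""
    return tp / (tp + fp) if (tp + fp) > 0 else None

def recall(tp, fn):
    """Return tp / (tp + fn), or None when there were no positive cases at all."""
    return tp / (tp + fn) if (tp + fn) > 0 else None

print(precision(0, 0), recall(5, 0))   # None 1.0
```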
| null |
CC BY-SA 2.5
| null |
2011-03-08T17:02:35.940
|
2011-03-08T17:02:35.940
| null | null |
2958
| null |
8028
|
1
| null | null |
0
|
128
|
My second question is about Test Statistics.
The questions and answers are on this PDF: [http://www.mediafire.com/?b74e633lxdb49rb](http://www.mediafire.com/?b74e633lxdb49rb)
I understand that they work out the joint density and I know how to calculate this.
What I really don’t understand is how they figure out the test statistic. They randomly multiply by 1 in some cases and in some cases they just pick a sum. I don’t understand WHY that’s the test statistic - is it a guess?
Also, how do you check a test statistic you have calculated is correct?
|
Test Statistics
|
CC BY-SA 2.5
| null |
2011-03-08T18:35:12.617
|
2011-03-08T19:20:35.300
| null | null | null |
[
"descriptive-statistics"
] |
8029
|
1
|
8030
| null |
3
|
2370
|
I am writing a program in C# that requires me to use the Ttest formula. I have to effectively interpret the Excel formula:
```
=TTEST(range1,range2,1,3)
```
I am using the formula given [here](http://www.monarchlab.umn.edu/lab/research/stats/2SampleT.aspx)
and have interpreted into code as such:
```
double TStatistic = (mean1 - mean2) / Math.Sqrt((Math.Pow(variance1, 2) /
count1) + (Math.Pow(variance2, 2) / count2));
```
However, I don't fully understand t-test and the values I am getting are completely different than those calculated within Excel.
I have been using the following ranges:
```
R1:
91.17462277,
118.3936425,
96.6746393,
102.488785,
91.26831043
R2:
17.20546254,
19.56969811,
19.2831241,
13.03360631,
13.86577314
```
The value I am getting using my attempt is 1.8248, however that from Excel is 1.74463E-05. Could somebody please point me in the right direction?
|
How to implement formula for a independent groups t-test in C#?
|
CC BY-SA 3.0
| null |
2011-03-08T18:17:59.787
|
2013-12-18T11:17:36.437
|
2011-09-23T05:42:13.263
|
183
|
3624
|
[
"t-test"
] |
8030
|
2
| null |
8029
|
5
| null |
There are at least two problems with what you have done.
- You have misinterpreted the formula
$$t = \frac{\bar{x}_1-\bar{x}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}}$$ since $s^2$ is already a variance (square of standard deviation) and does not need to be squared again.
- You are comparing eggs and omelettes: you need to compare your "calculated $t$-value, with $k$ degrees of freedom ... to the $t$ distribution table". Excel has already done this with TTEST().
There are other possible issues such as using a population variance or sample variance formula.
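To see the second point numerically with the ranges from the question, a Welch (unequal-variance) t-test in Python returns both pieces: the $t$ statistic (what the hand formula gives) and the p-value (which is what Excel's `TTEST(range1,range2,1,3)` reports, one-tailed):
```
from scipy import stats

r1 = [91.17462277, 118.3936425, 96.6746393, 102.488785, 91.26831043]
r2 = [17.20546254, 19.56969811, 19.2831241, 13.03360631, 13.86577314]

# equal_var=False gives Welch's t-test (Excel's "type 3")
t, p_two_sided = stats.ttest_ind(r1, r2, equal_var=False)
print(t)                 # the t statistic, comparable to the hand computation
print(p_two_sided / 2)   # one-tailed p-value, comparable to Excel's TTEST(...,1,3)
```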
| null |
CC BY-SA 2.5
| null |
2011-03-08T19:04:14.603
|
2011-03-08T19:04:14.603
| null | null |
2958
| null |
8032
|
2
| null |
8028
|
2
| null |
They don't randomly multiply by 1. What they do is split the joint density into the product of two functions: $g(\text{sufficient statistic},\text{parameter})$ and $h(\text{data})$.
The advantage of $h()$ is that it removes sometimes complicated parts of the density function which provide no useful information about estimating the parameter. At other times this is unnecessary, in which case $h()$ can be set to 1 and ignored. In either case you can concentrate on the [sufficient statistic](http://en.wikipedia.org/wiki/Sufficient_statistic).
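As a concrete illustration (a standard textbook example, not taken from the linked PDF): for $n$ i.i.d. Bernoulli($p$) observations,
$$f(x_{1},\ldots,x_{n}\mid p)=\prod_{i=1}^{n}p^{x_{i}}(1-p)^{1-x_{i}}=\underbrace{p^{t}(1-p)^{n-t}}_{g(t,\;p)}\cdot\underbrace{1}_{h(x)},\qquad t=\sum_{i=1}^{n}x_{i},$$
so $T=\sum_i X_i$ is the sufficient statistic and this is a case where $h()$ is simply 1.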
| null |
CC BY-SA 2.5
| null |
2011-03-08T19:20:35.300
|
2011-03-08T19:20:35.300
| null | null |
2958
| null |
8033
|
1
| null | null |
4
|
4607
|
A lecturer wishes to "grade on the curve". The students' marks seem to be normally distributed with mean 70 and standard deviation 8. If the lecturer wants to give 20% A's, what should be the threshold between an A grade and a B grade?
|
How to find percentiles of a Normal distribution?
|
CC BY-SA 3.0
| null |
2011-03-08T20:00:17.407
|
2011-11-16T20:37:44.703
|
2011-11-16T20:37:44.703
|
919
| null |
[
"self-study",
"normal-distribution"
] |
8034
|
2
| null |
8013
|
7
| null |
In addition to the links that were given in comments, here are some further pointers:
- Association rules and frequent itemsets
- Survey on Frequent Pattern Mining -- look around Table 1, p. 4
About Python, I guess now you have an idea of what you should be looking for, but the [Orange](http://orange.biolab.si/) data mining package features a package on [Association rules](http://orange.biolab.si/doc/reference/associationRules.htm) and Itemsets (although for the latter I cannot find any reference on the website).
Edit:
I recently came across [pysuggest](http://code.google.com/p/pysuggest/) which is
>
a Top-N recommendation engine that implements a variety of recommendation algorithms. Top-N recommender systems, a personalized information filtering technology, are used to identify a set of N items that will be of interest to a certain user. In recent years, top-N recommender systems have been used in a number of different applications, such as to recommend products a customer will most likely buy; recommend movies, TV programs, or music a user will find enjoyable; identify web-pages that will be of interest; or even suggest alternate ways of searching for information.
| null |
CC BY-SA 3.0
| null |
2011-03-08T20:59:01.840
|
2011-04-11T16:13:02.430
|
2011-04-11T16:13:02.430
|
930
|
930
| null |
8035
|
2
| null |
7975
|
4
| null |
When you have supporting/causal/helping/right-hand side/exogenous/predictor series, the preferred approach is to construct a single-equation, multiple-input Transfer Function. One needs to examine possible model residuals both for unspecified/omitted deterministic inputs (i.e. do Intervention Detection a la Ruey Tsay, Journal of Forecasting, 1988) and for unspecified stochastic inputs via an ARIMA component. Thus you can explicitly include not only the user-suggested causals (and any needed lags!) but two kinds of omitted structures (dummies and ARIMA).
Care should be taken to ensure that the parameters of the final model do not change significantly over time (otherwise data segmentation might be in order) and that the residuals from the final model cannot be shown to have heterogeneous variance.
The trend in the original series may be due to trends in the predictor series or due to Autoregressive dynamics in the series of interest or potentially due to an omitted deterministic series proxied by a steady state constant or even one or more local time trends.
| null |
CC BY-SA 2.5
| null |
2011-03-08T21:06:10.590
|
2011-03-08T21:06:10.590
| null | null |
3382
| null |
8036
|
1
| null | null |
3
|
170
|
I have a gazillion documents (e.g. your tax return) that I need to check for correctness, but I have neither the manpower nor the willpower to read through all of them. Even if I did, I could not guarantee the quality and consistency of the proofreading process.
The only thing I can do is to pick a sample of documents to proofread and assign `accept` or `reject` to each. From that I want to determine the confidence level for a certain confidence interval... I am clueless about what I should do next, or whether I am using the right approach.
I have no experience with this problem domain, perhaps someone with more QA experience can point me in the right direction. Like what question to ask ...
Thank you for reading :)
|
Proofreading lots of documents based on small sample
|
CC BY-SA 2.5
| null |
2011-03-08T21:08:10.473
|
2011-03-31T12:47:51.767
|
2011-03-31T07:36:08.540
| null |
3625
|
[
"confidence-interval",
"quality-control"
] |
8037
|
2
| null |
7975
|
22
| null |
Based upon the comments that you've offered to the responses, you need to be aware of spurious causation. Any variable with a time trend is going to be correlated with another variable that also has a time trend. For example, my weight from birth to age 27 is going to be highly correlated with your weight from birth to age 27. Obviously, my weight isn't caused by your weight. If it was, I'd ask that you go to the gym more frequently, please.
As you are familiar with cross-section data, I'll give you an omitted variables explanation. Let my weight be $x_t$ and your weight be $y_t$, where
$$\begin{align*}x_t &= \alpha_0 + \alpha_1 t + \epsilon_t \text{ and} \\ y_t &= \beta_0 + \beta_1 t + \eta_t.\end{align*}$$
Then the regression
$$\begin{equation*}y_t = \gamma_0 + \gamma_1 x_t + \nu_t\end{equation*}$$
has an omitted variable---the time trend---that is correlated with the included variable, $x_t$. Hence, the coefficient $\gamma_1$ will be biased (in this case, it will be positive, as our weights grow over time).
When you are performing time series analysis, you need to be sure that your variables are stationary or you'll get these spurious causation results. An exception would be integrated series, but I'd refer you to time series texts to hear more about that.
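A quick simulation sketch of this spurious correlation (the trend coefficients and noise scales are arbitrary):
```
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)

x = 3.5 + 0.6 * t + rng.normal(0, 2, t.size)   # "my weight": trend plus noise
y = 3.2 + 0.5 * t + rng.normal(0, 2, t.size)   # "your weight": its own trend plus noise

print(np.corrcoef(x, y)[0, 1])                    # close to 1, despite no causal link
print(np.corrcoef(np.diff(x), np.diff(y))[0, 1])  # near 0 once the trend is differenced out
```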
| null |
CC BY-SA 2.5
| null |
2011-03-08T22:02:19.000
|
2011-03-08T22:02:19.000
| null | null |
401
| null |
8040
|
1
|
8048
| null |
11
|
108357
|
Is there an easy way to apply the trend line formula from a chart to any given X value in Excel?
For example, I want to get the Y value for a given X = $2,006.00. I've already taken the formula and retyped it out be:
```
=-0.000000000008*X^3 - 0.00000001*X^2 + 0.0003*X - 0.0029
```
I am continually making adjustments to the trend line by adding more data, and don't want to retype out the formula every time.
[screenshot of the Excel chart with the fitted trend line]
|
Use a trendline formula to get values for any given X with Excel
|
CC BY-SA 2.5
| null |
2011-03-08T22:32:45.597
|
2011-03-11T00:04:11.817
|
2011-03-11T00:04:11.817
| null |
3626
|
[
"regression",
"excel"
] |
8041
|
1
| null | null |
3
|
1918
|
I have a bunch of experiments in which I am calculating precision and recall. I want to present a mean precision and recall for these experiments. Should these values be weighted by anything?
|
Should the mean precision and recall be weighted?
|
CC BY-SA 2.5
| null |
2011-03-08T22:40:06.737
|
2011-09-08T19:46:24.037
| null | null |
3604
|
[
"precision-recall"
] |
8042
|
2
| null |
7996
|
3
| null |
EDIT: After some reflection, I modified my answer substantially.
The best thing to do would be to try to find a reasonable model for your data (for example, by using multiple linear regression). If you cannot get enough data to do this, I would try the following "non-parametric" approach. Suppose that in your data set, the covariate $A$ takes on the values $A=a_1, ..., a_{n_A}$, and likewise for $B$, $C$, etc. Then what you can do is perform a linear regression on your dependent variables against the indicator variables $I(A= a_1), I(A=a_2), ..., I(A = a_{n_A}), I(B = b_1),...$ etc. If you have enough data you can also include interaction terms such as $I(A=a_1, B=b_1)$. Then you can use model selection techniques to eliminate the covariates that have the least effect.
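A sketch of that indicator-variable regression with pandas/statsmodels; the covariate names, levels and data below are made up:
```
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up data with two categorical covariates and one response
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "A": rng.choice(["a1", "a2", "a3"], 200),
    "B": rng.choice(["b1", "b2"], 200),
})
df["y"] = 1.0 * (df["A"] == "a1") - 0.5 * (df["B"] == "b2") + rng.normal(0, 0.3, 200)

# Indicator variables I(A = a_k), I(B = b_k); drop_first avoids collinearity with the intercept
X = pd.get_dummies(df[["A", "B"]], drop_first=True).astype(float)
fit = sm.OLS(df["y"], sm.add_constant(X)).fit()
print(fit.params)
```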
| null |
CC BY-SA 2.5
| null |
2011-03-08T22:49:23.450
|
2011-03-08T23:40:46.820
|
2011-03-08T23:40:46.820
|
3567
|
3567
| null |
8043
|
2
| null |
7996
|
5
| null |
A few comments:
- Why did you go with your particular experimental design set-up? For example, fix A+B and vary C. What would you fix A + B at? If you are interested in determining the effect of A and B, it seems a bit strange that you can fix them at "optimal values". There are standard statistical techniques for sampling from multi-dimensional space. For example, latin hypercubes.
- Once you have your data, why not start with something simple, say multiple linear regression. You have 3 inputs A, B, C and one response variable. I suspect from your description, you may have to include interaction terms for the covariates.
Update
A few comments on your regression:
- Does the data fit your model? You need to check the residuals. Try googling "R and regression".
- Just because one of your covariates has a smaller p-value, it doesn't mean that it has the strongest effect. For that, look at the estimates of the $\beta_i$ terms: 0.8, -0.23, -0.31.
So a one unit change in $A$ results in $T$ increasing by 0.8, whereas a one unit change in $S$ results in $T$ decreasing by 0.23. However, are the units of the covariates comparable? For example, it may be physically impossible for $A$ to change by 1 unit. Only you can make that decision.
BTW, try not to update your question so that it changes your original meaning. If you have a new question, then just ask a new question.
| null |
CC BY-SA 2.5
| null |
2011-03-08T23:15:32.683
|
2011-03-09T12:46:36.880
|
2020-06-11T14:32:37.003
|
-1
|
8
| null |
8044
|
1
|
8050
| null |
18
|
2425
|
I use R. Every day. I think in terms of data.frames, the apply() family of functions, object-oriented programming, vectorization, and ggplot2 geoms/aesthetics. I just started working for an organization that primarily uses SAS. I know there's a book about [learning R for SAS users](http://rads.stackoverflow.com/amzn/click/B001Q3LXNI), but what are some good resources for R users who've never used SAS?
|
Resources for an R user who must learn SAS
|
CC BY-SA 2.5
| null |
2011-03-08T23:27:13.440
|
2011-03-09T14:29:35.183
| null | null |
36
|
[
"r",
"sas"
] |
8045
|
5
| null | null |
0
| null |
Overview
A [distribution](http://en.wikipedia.org/wiki/Distribution) is a mathematical description of probabilities or frequencies. It can be applied to observed frequencies, estimated probabilities or frequencies, and theoretically hypothesized probabilities or frequencies. Distributions can be univariate, describing outcomes written with a single number, or multivariate, describing outcomes requiring ordered tuples of numbers.
Two devices are in common use to present univariate distributions. The cumulative form, or "cumulative distribution function" (CDF), gives--for every real number $x$--the chance (or frequency) of a value less than or equal to $x$. The "density" form, or "probability density function" (PDF), is the derivative (rate of change) of the CDF. The PDF might not exist (in this restricted sense), but a CDF always will exist. The CDF for a set of observations is called the "empirical distribution function" (EDF). Thus, its value at any number $x$ is the proportion of observations in the dataset less than or equal to $x$.
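In symbols, the empirical distribution function of observations $x_1, \ldots, x_n$ is
$$\hat{F}_n(x) = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{x_{i}\le x\},$$
i.e. the proportion of observations less than or equal to $x$.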
References
The following questions contain references to resources about probability distributions:
- Book recommendations for beginners about probability distributions
- Reference with distributions with various properties
| null |
CC BY-SA 3.0
| null |
2011-03-08T23:41:16.543
|
2013-09-01T23:15:41.880
|
2013-09-01T23:15:41.880
|
27581
|
919
| null |
8046
|
4
| null | null |
0
| null |
A distribution is a mathematical description of probabilities or frequencies.
| null |
CC BY-SA 3.0
| null |
2011-03-08T23:41:16.543
|
2014-05-22T03:02:41.767
|
2014-05-22T03:02:41.767
|
7290
|
919
| null |
8048
|
2
| null |
8040
|
16
| null |
Use `LINEST`, as shown:
[screenshot of the LINEST setup]
The method is to create new columns (C:E here) containing the variables in the fit. Perversely, `LINEST` returns the coefficients in the reverse order, as you can see from the calculation in the "Fit" column. An example prediction is shown outlined in blue: all the formulas are exactly the same as for the data.
Note that `LINEST` is an array formula: the answer will occupy p+1 cells in a row, where p is the number of variables (the last one is for the constant term). Such formulas are created by selecting all the output cells, pasting (or typing) the formula in the formula textbox, and pressing Ctrl-Shift-Enter (instead of the usual Enter).
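Outside Excel, the same kind of cubic fit and prediction can be sketched in Python; the x/y values below are placeholders standing in for the charted series:
```
import numpy as np

x = np.array([100, 300, 500, 800, 1200, 1500, 2000, 2500], dtype=float)  # placeholder data
y = np.array([0.02, 0.07, 0.11, 0.16, 0.20, 0.22, 0.23, 0.21])

coeffs = np.polyfit(x, y, deg=3)      # cubic trend line, highest power first
print(np.polyval(coeffs, 2006.0))     # predicted Y at X = 2006, no retyping of coefficients
```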
| null |
CC BY-SA 2.5
| null |
2011-03-09T00:12:52.290
|
2011-03-09T00:12:52.290
| null | null |
919
| null |
8049
|
2
| null |
8040
|
8
| null |
Try trend(known_y's, known_x's, new_x's, const).
Column A below is X. Column B is X^2 (the cell to the left squared). Column C is X^3 (two cells to the left cubed). The trend() formula is in Cell E24 where the cell references are shown in red.
The "known_y's" are in E3:E22
The "known_x's" are in A3:C22
The "new_x's" are in A24:C24
The "const" is left blank.
Cell A24 contains the new X, and is the cell to change to update the formula in E24
Cell B24 contains the X^2 formula (A24*A24) for the new X
Cell C24 contains the X^3 formula (A24*A24*A24) for the new X
If you change the values in E3:E22, the trend() function will update Cell E24 for your new input at Cell A24.
[screenshot of the worksheet with the trend() formula]
Edit ====================================
Kirk,
There's not much to the spreadsheet. I posted a "formula view" below.
The "known_x" values are in green in A3:C22
The "known_y" values are in green in E3:E22
The "new_x" values are in cells A24:C24, where B24 and C24 are the formulas as shown. And, cell E24 has the trend() formula. You enter your new "X" in cell A24.
That's all there is to it.
[screenshot of the formula view]
| null |
CC BY-SA 2.5
| null |
2011-03-09T00:46:15.910
|
2011-03-09T01:42:27.743
|
2011-03-09T01:42:27.743
|
2775
|
2775
| null |
8050
|
2
| null |
8044
|
20
| null |
15 months ago, I started my current job as someone who had been using R exclusively for about 3 years; I had used SAS in my first-ever stats class, loathed it, and never touched it again until I started here. Here's what has been helpful for me, and what hasn't:
Helpful:
---
- Colleagues' code. This is the single most useful source, for me. Some of it was very good code, some of it was very bad code, but all of it showed me how to think in SAS.
- SUGI. Though they are often almost unbearably corny, there is a vast wealth of these little how-to papers all over the Internet. You don't need to look for them; just Google, and they'll present themselves to you.
- The O'Reilly SQL Pocket Guide, by Gennick. I dodge a lot of SAS coding by using PROC SQL for data manipulation and summarization. This is cheating, and I don't care.
- This paper explaining formats and informats (PDF). This is without a doubt the least-intuitive part of SAS for me.
- UCLA's Academic Technology Services' Statistical Computing site. UCLA has heaps of great introductory material here, and there's a lot of parallel material between its R and SAS sections (like these analysis examples).
Not helpful:
---
- Anything I've ever read that is intended for people transitioning between R and SAS. I have the "R and SAS" book from Kleinman and Horton, which I've opened twice only to not find the answers I needed. I've read a few other guides here and there. Maybe it's just my learning style, but none of this stuff has ever stuck with me, and I inevitably end up googling for it once I actually need it.
You'll be okay, though. Just read your colleagues' code, ask questions here and on StackOverflow, and - whatever you do - don't try to plot anything.
| null |
CC BY-SA 2.5
| null |
2011-03-09T00:47:43.870
|
2011-03-09T01:07:51.007
|
2011-03-09T01:07:51.007
|
71
|
71
| null |
8051
|
2
| null |
7990
|
3
| null |
EDIT 2: We have to simplify the problem by removing the restriction that $x \ne y$ always in case two. Otherwise correlation issues vastly complicate the answer.
Let $|v|$, the 1-norm, denote the sum of the coordinates of the vector.
Let $v * w$ denote the coordinate-wise product of the vectors $v$ and $w$.
(So the dot-product of $v$ and $w$ is $|v * w|$.)
Note that $|a_i| = |b_i|$ always.
In case one:
$$E[|a_{i+1}|] = E[|a_{i}| + a_{i+1}] = E[|a_{i}|] + E[a_{i+1}]$$
$$= E[|a_{i}|] + \frac{1}{i}E[|a_{i}|]= \frac{i+1}{i} E[|a_i|],$$
$$E[|a_{i+1}|^2] = E[(|a_{i}| + a_{i+1})^2] = E[|a_i|^2 + 2|a_i|a_{i+1} + a_{i+1}^2]$$
$$= E[|a_i|^2] + 2E[|a_i|a_{i+1}] + E[a_{i+1}^2]$$
$$= E[|a_i|^2] + 2E[E[|a_i|a_{i+1}]:|a_i|=k] + E[a_{i+1}^2]$$
$$= E[|a_i|^2] + 2E[kE[a_{i+1}]:|a_i|=k] + E[a_{i+1}^2]$$
$$= E[|a_i|^2] + 2E[k\frac{1}{i} k:|a_i|=k] + E[a_{i+1}^2]$$
$$= E[|a_i|^2] + \frac{2}{i}E[|a_i|^2] + E[a_{i+1}^2]$$
$$= E[|a_i|^2] + \frac{2}{i}E[|a_i|^2] + E[E[a_{i+1}^2] : |a_i| = k]$$
$$= E[|a_i|^2] + \frac{2}{i}E[|a_i|^2] + E[E[\frac{k}{i}^2] : |a_i| = k]$$
$$= E[|a_i|^2] + \frac{2}{i}E[|a_i|^2] + E[\frac{1}{i^2}|a_i|^2]$$
$$= \frac{(i+1)^2}{i^2} E[|a_i|^2],$$
$$E[|a_{i+1}*b_{i+1}|] = \frac{i+1}{i} E[|a_i * b_i|].$$
In case two,
$$E[|a_{i+1}|] = \frac{i+1}{i} E[|a_i|] + 1,$$
$$E[|a_{i+1}|^2] = \frac{(i+1)^2}{i^2} E[|a_i|^2] + 2E[|a_i|] + 1,$$
$$E[|a_{i+1}*b_{i+1}|]= E|a_i * b_i| + E[a_{i+1} b_{i+1}]$$
$$=E|a_i * b_i| + E[kE(a_{i+1}): b_{i+1} = k]$$
$$=E|a_i * b_i| + E[k(\frac{1}{i}|a_i| + 1): b_{i+1} = k]$$
$$=E|a_i * b_i| + E[(\frac{1}{i}k|a_i| + k): b_{i+1} = k]$$
$$=E|a_i * b_i| + \frac{1}{i} E[(\frac{1}{i}|b_i|+1)|a_i|] + E[b_{i+1}]$$
$$=E|a_i * b_i| + \frac{1}{i^2} E[|a_i||b_i|]+ \frac{1}{i} E[|a_i|] + \frac{1}{i}E[|b_i|] + 1$$
$$=E|a_i * b_i| + \frac{1}{i^2} E[|a_i|^2]+ \frac{2}{i} E[|a_i|] + 1.$$
[The last equation would be much more complicated if we require $x \ne y$ in case 2.]
So overall,
$$E[|a_{i+1}|] = \frac{i+1}{i} E[|a_i|] + (1-p),$$
$$E[|b_{i+1}|] = \frac{i+1}{i} E[|b_i|] + (1-p),$$
$$E[|a_{i+1}|^2|]= \frac{(i+1)^2}{i^2} E[|a_i|^2] + (1-p) [E[|a_i|^2] + 2E[|a_i|] + 1]$$
$$E[|a_{i+1}*b_{i+1}|] = \frac{p+i}{i} E[|a_i * b_i|] + (1-p)\left[\frac{1}{i^2} E[|a_i|^2]+ \frac{1}{i} \left[2E[|a_i|] + 1\right]\right].$$
Which allows us to efficiently tabulate the answer for $i = 1, 2, 3, ...$
DISCLAIMER: I have yet to numerically verify this answer.
| null |
CC BY-SA 2.5
| null |
2011-03-09T00:50:01.100
|
2011-03-09T10:28:37.683
|
2011-03-09T10:28:37.683
|
3567
|
3567
| null |
8052
|
1
|
8053
| null |
5
|
5722
|
Suppose I'm doing binary classification, and I want to test whether using feature X is significant or not. (For example, I could be building a decision tree, and I want to see whether I should prune feature X or not.)
I believe the standard method is to use a chi-square test on the 2x2 table
```
X = 0 X = 1
Outcome = 0 A B
Outcome = 1 C D
```
A "simpler" (IMO) test, though, would be to calculate a statistic on the probability that X gives the correct outcome: take p = [(x = 0 and Outcome = 0) + (x = 1 and Outcome = 1)] / [Total number of observations], and calculate the significance that p is far from 0.5 (say, by using a normal approximation or a Wilson score).
What are the disadvantages/advantages of this approach, compared to the chi-square method? Is it totally misguided? Are they equivalent?
|
2x2 chi-square test vs. binomial proportion statistic
|
CC BY-SA 2.5
| null |
2011-03-09T00:54:14.237
|
2011-06-26T01:43:59.280
| null | null |
1106
|
[
"statistical-significance",
"chi-squared-test"
] |
8053
|
2
| null |
8052
|
3
| null |
Suppose you have
Case 1:
>
A=200, B=100
C=100, D=200
versus
Case 2:
>
A=200, B=0
C=200, D=200
The B=0 in case 2 means that case 2 provides much stronger evidence than case 1 of a relationship between X and Outcome; but in your test, both cases would be scored the same.
The Chi-Square test, informally speaking, not only takes into account the "X XOR Outcome" relationship (which is what you test) but also "X implies Outcome", "not X implies Outcome" and so on.
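A quick numerical check of this point (the two tables are exactly the ones above; `chi2_contingency` is scipy's contingency-table test):
```
from scipy.stats import chi2_contingency

case1 = [[200, 100], [100, 200]]
case2 = [[200, 0], [200, 200]]

for name, table in [("case 1", case1), ("case 2", case2)]:
    (a, b), (c, d) = table
    p_correct = (a + d) / (a + b + c + d)        # the proposed "proportion correct" statistic
    chi2, p, dof, _ = chi2_contingency(table)
    print(name, "proportion correct:", round(p_correct, 3), "chi-square:", round(chi2, 1))
# Both cases give the same proportion correct, but case 2 has the larger chi-square statistic.
```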
| null |
CC BY-SA 2.5
| null |
2011-03-09T01:14:39.337
|
2011-03-09T02:04:47.480
|
2020-06-11T14:32:37.003
|
-1
|
3567
| null |
8054
|
2
| null |
8005
|
2
| null |
I'm not sure I understand your question. But if you are working with nontraditional distributions you might want to look at nonparametric methods--in particular, you can use the nonparametric bootstrap to get confidence intervals for "4yrMean - 1yrMean" or whatever statistic you need.
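A minimal sketch of such a bootstrap percentile interval; the data and the "4yrMean - 1yrMean" statistic below are placeholders:
```
import numpy as np

rng = np.random.default_rng(4)
four_yr = rng.normal(10.0, 2.0, 60)   # placeholder data for the "4 yr" group
one_yr = rng.normal(9.0, 2.0, 40)     # placeholder data for the "1 yr" group

def stat(a, b):
    return a.mean() - b.mean()        # "4yrMean - 1yrMean"

boots = np.array([
    stat(rng.choice(four_yr, four_yr.size, replace=True),
         rng.choice(one_yr, one_yr.size, replace=True))
    for _ in range(5000)
])
print(np.percentile(boots, [2.5, 97.5]))   # 95% percentile bootstrap interval
```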
| null |
CC BY-SA 2.5
| null |
2011-03-09T02:52:53.240
|
2011-03-09T02:52:53.240
| null | null |
3567
| null |
8055
|
1
| null | null |
21
|
24903
|
Could someone walk me through an example of how to use DLM Kalman filtering in R on a time series? Say I have these values (quarterly values with yearly seasonality); how would you use DLM to predict the next values? And BTW, do I have enough historical data (what is the minimum)?
```
89 2009Q1
82 2009Q2
89 2009Q3
131 2009Q4
97 2010Q1
94 2010Q2
101 2010Q3
151 2010Q4
100 2011Q1
? 2011Q2
```
I'm looking for a R code cookbook-style how-to step-by-step type of answer. Accuracy of the prediction is not my main goal, I just want to learn the sequence of code that gives me a number for 2011Q2, even if I don't have enough data.
|
How to use DLM with Kalman filtering for forecasting
|
CC BY-SA 4.0
| null |
2011-03-08T18:50:07.840
|
2018-08-28T16:14:47.997
|
2018-08-28T16:14:47.997
|
128677
|
3645
|
[
"r",
"time-series",
"forecasting"
] |
8056
|
2
| null |
8055
|
17
| null |
The paper at [JSS 39-02](http://www.jstatsoft.org/v39/i02) compares 5 different Kalman filtering R packages and gives sample code.
| null |
CC BY-SA 2.5
| null |
2011-03-08T21:47:06.360
|
2011-03-08T21:47:06.360
| null | null |
4704
| null |
8057
|
2
| null |
8055
|
8
| null |
I suggest you read the dlm vignette [http://cran.r-project.org/web/packages/dlm/vignettes/dlm.pdf](http://cran.r-project.org/web/packages/dlm/vignettes/dlm.pdf) especially the chapter 3.3
| null |
CC BY-SA 2.5
| null |
2011-03-09T05:56:56.280
|
2011-03-09T05:56:56.280
| null | null |
1709
| null |
8058
|
1
|
8060
| null |
6
|
11039
|
I'm building a logistic regression, and two of my variables are categorical with three levels each. (Say one variable is male, female, or unknown, and the other is single, married, or unknown.)
How many dummy variables am I supposed to make? Do I make 4 in total (2 for each of the categorical variables, e.g., a male variable, a female variable, a single variable, and a married variable) or 5 in total (2 for one of the categorical variables, 3 for the other)?
I know most textbooks say that when you're dummy encoding a categorical variable with k levels, you should only make k-1 dummy variables, since otherwise you'll get a collinearity with the constant. But what do you do when you're dummy encoding several categorical variables? By the collinearity argument, it sounds like I'd only make k-1 dummy variables for one of the categorical variables, and for the rest of the categorical variables I'd build all k dummy variables.
|
How to choose number of dummy variables when encoding several categorical variables?
|
CC BY-SA 2.5
| null |
2011-03-09T06:04:31.097
|
2017-07-12T11:28:18.607
|
2017-07-12T11:28:18.607
|
3277
| null |
[
"logistic",
"categorical-data",
"categorical-encoding"
] |
8059
|
1
|
8076
| null |
6
|
950
|
I'm doing some work on modelling transition matrices, and for this I need a measure of discrepancy or lack of fit: that is, if I have a matrix $T$ and a target matrix $T_0$, I want to be able to calculate how far $T$ is from $T_0$. Would anyone be able to provide pointers on what measure I should be using?
I've seen some references to using an elementwise squared-error measure, ie sum up the squared differences of the elements of $T$ and $T_0$, but this seems rather ad-hoc.
|
Discrepancy measures for transition matrices
|
CC BY-SA 2.5
| null |
2011-03-09T06:14:27.973
|
2011-03-09T16:06:28.627
|
2011-03-09T16:06:28.627
|
223
|
1569
|
[
"estimation",
"markov-process",
"distance-functions"
] |
8060
|
2
| null |
8058
|
9
| null |
You would make k-1 dummy variables for each of your categorical variables. The textbook argument holds; if you were to make k dummies for any of your variables, you would have a collinearity. You can think of the k-1 dummies as being contrasts between the effects of their corresponding levels, and the level whose dummy is left out.
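A small pandas illustration of the k-1 encoding applied to both three-level variables (the values are placeholders):
```
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "female", "unknown", "female"],
    "marital": ["single", "married", "unknown", "single"],
})

# drop_first=True keeps k-1 dummies per variable: 2 + 2 = 4 columns here, not 6
dummies = pd.get_dummies(df, drop_first=True)
print(dummies.columns.tolist())
```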
| null |
CC BY-SA 2.5
| null |
2011-03-09T06:43:14.660
|
2011-03-09T06:43:14.660
| null | null |
1569
| null |
8063
|
1
| null | null |
3
|
3213
|
How can I measure linear correlation of non-normally distributed variables?
Pearson coefficient is not valid for non-normally distributed data, and Spearman's rho does not capture linear correlation.
Thank you
|
Measuring linear correlation of non-normally distributed variables
|
CC BY-SA 2.5
| null |
2011-03-09T09:43:14.043
|
2012-11-05T17:50:45.337
|
2011-03-09T09:59:04.997
|
930
|
3465
|
[
"correlation",
"measurement"
] |
8064
|
2
| null |
8063
|
3
| null |
Why do you require normality for computing a correlation? How about a simple scatterplot?
As long as the data are continuous, ordinary (Pearson) correlation should be fine. All that it is measuring is the strength of the linear relationship between two variables (if indeed there is such a relationship).
| null |
CC BY-SA 2.5
| null |
2011-03-09T09:49:24.237
|
2011-03-09T09:49:24.237
| null | null |
1945
| null |
8065
|
1
|
8069
| null |
1
|
137
|
I have the following data of 2 diseases from 5 areas. I want to see if there is any relationship between the 2 diseases.
Incidence Rates of 2 diseases (cases per million per year)
```
Areas Disease 1 Disease 2
1 4.653 0.751
2 6.910 1.121
3 4.957 0.745
4 2.870 0.848
5 2.819 1.166
Actual number of cases are
Areas Disease 1 Disease 2
1 1152 186
2 2601 422
3 1051 158
4 403 119
5 290 120
```
I am a beginner in Biostatistics. Kindly advise me in simple terms
|
Establishing relationship between 2 diseases
|
CC BY-SA 2.5
| null |
2011-03-09T10:11:34.210
|
2011-03-09T12:36:20.820
|
2011-03-09T12:36:20.820
|
2956
|
2956
|
[
"correlation"
] |
8066
|
2
| null |
8063
|
2
| null |
What @galit said is absolutely right. You can find the linear correlation between any two continuously distributed variables.
But perhaps you are thinking of the meaning of such a correlation? Indeed, [Anscombe's quartet](http://en.wikipedia.org/wiki/Anscombe%27s_quartet) shows that while the correlation is defined for any pair of continuous variables, and its mathematical and statistical meaning is the same, its substantive meaningfulness may vary.
| null |
CC BY-SA 2.5
| null |
2011-03-09T10:11:52.887
|
2011-03-09T10:11:52.887
| null | null |
686
| null |
8067
|
2
| null |
8044
|
6
| null |
A couple things to add to what @matt said:
In addition to SUGI (which is now renamed SAS Global Forum, and will be held this year in Las Vegas) there are numerous local and regional SAS user groups. These are smaller, more intimate, and (usually) a lot cheaper. Some local groups are even free. See [here](http://support.sas.com/usergroups/)
SAS-L. This is a mailing list for SAS questions. It is quite friendly, and some of the participants are among the best SAS programmers there are.
The book [SAS and R: Data Management, Statistical Analysis and Graphics](http://rads.stackoverflow.com/amzn/click/1420070576) by Kleinman and Horton. Look up what you want to do in the R index, and you'll find how to do it in SAS as well. Sort of like an inter-language dictionary.
| null |
CC BY-SA 2.5
| null |
2011-03-09T10:18:53.657
|
2011-03-09T10:18:53.657
| null | null |
686
| null |
8068
|
2
| null |
8059
|
5
| null |
Why does one want the measure of discrepancy to be a true metric? There is a huge literature on axiomatic characterizations of I-divergence as a measure of distance. It is neither symmetric nor satisfies the triangle inequality.
I hope by 'transition matrix' you mean 'probability transition matrix'. Never mind: as long as the entries are non-negative, I-divergence is considered to be the "best" measure of discrimination. See for example [http://www.mdpi.com/1099-4300/10/3/261/](http://www.mdpi.com/1099-4300/10/3/261/). In fact, certain axioms which anyone would feel are desirable lead to measures which are nonsymmetric in general.
| null |
CC BY-SA 2.5
| null |
2011-03-09T10:34:57.833
|
2011-03-09T10:34:57.833
| null | null |
3485
| null |
8069
|
2
| null |
8065
|
3
| null |
There is no simple answer for how to "establish a relationship between two variables"; indeed, your question is one of the central issues in statistics, and research on how to do this is still ongoing. But some basics: first you will want to plot your data, and then you will want to carry out a linear regression to test some specific type of relationship between the variables in your data. You will need to obtain the p-value of the regression to get an idea of how well your purported relationship is supported by the data. Generally, if you can get a very low p-value (e.g. p < 0.01), then it will be safe to say that there is a relationship between the variables.
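For instance, with the incidence rates from the question, the correlation and its p-value come out of one function call (with only five areas, expect the estimate to be very imprecise):
```
from scipy.stats import pearsonr

disease1 = [4.653, 6.910, 4.957, 2.870, 2.819]   # incidence rates from the question
disease2 = [0.751, 1.121, 0.745, 0.848, 1.166]

r, p = pearsonr(disease1, disease2)
print(r, p)
```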
| null |
CC BY-SA 2.5
| null |
2011-03-09T10:51:36.343
|
2011-03-09T10:56:46.197
|
2011-03-09T10:56:46.197
|
3567
|
3567
| null |
8070
|
2
| null |
7994
|
2
| null |
See the papers by Sebastian Mika and co-authors (I think this was the subject of Mika's PhD thesis). The original paper is [here](http://dx.doi.org/10.1109/NNSP.1999.788121) and for free [here](http://ml.cs.tu-berlin.de/publications/MikRaeWesSchMue99.pdf).
| null |
CC BY-SA 2.5
| null |
2011-03-09T11:16:18.363
|
2011-03-09T11:16:18.363
| null | null |
887
| null |
8071
|
1
| null | null |
156
|
591713
|
How do I know when to choose between Spearman's $\rho$ and Pearson's $r$? My variable includes satisfaction and the scores were interpreted using the sum of the scores. However, these scores could also be ranked.
|
How to choose between Pearson and Spearman correlation?
|
CC BY-SA 2.5
| null |
2011-03-09T11:28:52.587
|
2021-01-30T01:07:10.827
|
2017-03-02T22:43:24.067
|
28666
| null |
[
"correlation",
"pearson-r",
"spearman-rho"
] |
8072
|
1
|
8102
| null |
4
|
1731
|
If I want to achieve a margin-of-error of <= 5 % for a representative population sample, how large a sample do I need when: The interviewees are picked from X regions and from Y age groups? That is, how many samples from each age group & region?
I know how many people live in each region and how they are distributed in the Y age groups in each region.
|
Choosing sample size to achieve pre-specified margin-of-error
|
CC BY-SA 2.5
| null |
2011-03-09T11:34:14.167
|
2011-03-09T22:35:21.137
|
2011-03-09T11:52:29.903
|
3401
|
3401
|
[
"statistical-significance",
"sampling"
] |
8073
|
2
| null |
8071
|
45
| null |
This happens often in statistics: there are a variety of methods which could be applied in your situation, and you don't know which one to choose. You should base your decision on the pros and cons of the methods under consideration and the specifics of your problem, but even then the decision is usually subjective, with no agreed-upon "correct" answer. Usually it is a good idea to try out as many methods as seem reasonable and as your patience will allow, and see which ones give you the best results in the end.
The difference between the Pearson correlation and the Spearman correlation is that the Pearson is most appropriate for measurements taken from an interval scale, while the Spearman is more appropriate for measurements taken from ordinal scales. Examples of interval scales include "temperature in Fahrenheit" and "length in inches", in which the individual units (1 deg F, 1 in) are meaningful. Things like "satisfaction scores" tend to be of the ordinal type since while it is clear that "5 happiness" is happier than "3 happiness", it is not clear whether you could give a meaningful interpretation of "1 unit of happiness". But when you add up many measurements of the ordinal type, which is what you have in your case, you end up with a measurement which is really neither ordinal nor interval, and is difficult to interpret.
I would recommend that you convert your satisfaction scores to quantile scores and then work with the sums of those, as this will give you data which is a little more amenable to interpretation. But even in this case it is not clear whether Pearson or Spearman would be more appropriate.
| null |
CC BY-SA 4.0
| null |
2011-03-09T11:34:52.310
|
2021-01-30T01:04:45.773
|
2021-01-30T01:04:45.773
|
22047
|
3567
| null |
8074
|
2
| null |
8071
|
6
| null |
While agreeing with Charles' answer, I would suggest (on a strictly practical level) that you compute both of the coefficients and look at the differences. In many cases, they will be exactly the same, so you don't need to worry.
If, however, they are different, then you need to look at whether or not you met the assumptions of Pearson's (constant variance and linearity), and if these are not met, you are probably better off using Spearman's.
| null |
CC BY-SA 4.0
| null |
2011-03-09T11:54:45.967
|
2021-01-30T01:07:10.827
|
2021-01-30T01:07:10.827
|
22047
|
656
| null |
8075
|
2
| null |
8072
|
3
| null |
If you weight your measurements (proportion of subpopulation/proportion of subpopulation in sample), your estimates will be unbiased. I assume this is what you meant by "poll results being skewed".
If I interpret your question correctly, your goal is the simultaneous estimation of multiple population proportions, where your proportions are
>
P_1 = proportion of population voting yes on poll question 1
P_2 = proportion of population voting yes on poll question 2
etc. (Let's work with one region at a time for now.) These can be represented in a proportion vector $P= (P_1, P_2, ...)$. We will denote a point estimate of $P$ by $\hat{P}$.
But what you want is probably not a point estimate, but a 95% confidence interval. This is an interval $(P_1 \pm t, P_2 \pm t, ...)$ where $t$ is your tolerance. (What 95% confidence means is a tricky issue which is hard to explain and easy to misunderstand, so I'll skip it for now.)
The thing is, it is always possible to construct a 95% confidence set no matter how small your sample size is. For your problem to be properly defined you need to specify $t$, which is how accurate you require your estimates to be. The more accuracy you require, the more samples you will need. In the problem as I have set it up it is possible to find the minimum number of samples given $t$, but you can get better results if you can estimate the variability of your respective subpopulations ahead of time (which does not seem to be the case in your problem).
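For a single proportion, the standard worst-case sample-size calculation (a 95% interval with $p=0.5$ and tolerance $t=0.05$) gives the familiar rule of thumb
$$n\;\ge\;\frac{z_{0.975}^{2}\,p(1-p)}{t^{2}}=\frac{1.96^{2}\times0.25}{0.05^{2}}\approx 385,$$
per region/age group if the $\pm t$ margin is required within each subgroup separately, or for the whole (weighted) sample if only the overall estimates matter.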
Please give further clarification to your problem, though.
| null |
CC BY-SA 2.5
| null |
2011-03-09T12:13:57.863
|
2011-03-09T12:28:18.687
|
2020-06-11T14:32:37.003
|
-1
|
3567
| null |
8076
|
2
| null |
8059
|
4
| null |
As long as your matrices represent conditional probabilities, I think that using a general matrix norm is a bit artificial. Using some sort of geodesic distance on the set of transition matrices might be more relevant, but I clearly prefer to come back to probabilities.
I assume you want to compare $Q=(Q_{ij})$ and $P=(P_{ij})$ with $P_{ij}=P(X^P_{t}=j|X^P_{t-1}=i)$, and that for $P$ (resp. $Q$) there exists a unique stationary measure $\pi_{P}$ (resp. $\pi_{Q}$).
Under these assumptions, I guess it is meaningful to compare $\pi_{P}$ and $\pi_{Q}$, for example with an $L_{1}$ distance: $\sum_{j}|\pi_{P}[j]-\pi_{Q}[j]|$, or the Hellinger distance: $\sum_{j}|\pi^{1/2}_{P}[j]-\pi^{1/2}_{Q}[j]|^2$, or the Kullback divergence: $\sum_{j}\pi_{P}[j] \log(\frac{\pi_{P}[j]}{\pi_{Q}[j]})$.
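A NumPy sketch of this comparison; the two transition matrices are made-up examples and `stationary()` is a helper defined here, not a library function:
```
import numpy as np

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a probability vector."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # placeholder transition matrices (rows sum to 1)
Q = np.array([[0.7, 0.3], [0.4, 0.6]])

pi_p, pi_q = stationary(P), stationary(Q)
l1 = np.abs(pi_p - pi_q).sum()
hellinger = ((np.sqrt(pi_p) - np.sqrt(pi_q)) ** 2).sum()
kl = (pi_p * np.log(pi_p / pi_q)).sum()
print(l1, hellinger, kl)
```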
| null |
CC BY-SA 2.5
| null |
2011-03-09T13:09:16.753
|
2011-03-09T13:09:16.753
| null | null |
223
| null |
8077
|
2
| null |
8071
|
59
| null |
Shortest and mostly correct answer is:
Pearson benchmarks a linear relationship, Spearman benchmarks a monotonic relationship (a vastly more general class, but at the cost of some power).
So if you assume/think that the relation is linear (or, as a special case, that these are two measures of the same thing, so the relation is $y=1\cdot x+0$) and the situation is not too weird (check the other answers for details), go with Pearson. Otherwise use Spearman.
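A short numerical illustration of the difference (an exponential curve is used as an arbitrary monotonic but non-linear relation):
```
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(0, 5, 50)
y = np.exp(x)                      # monotonic but strongly non-linear

print(pearsonr(x, y)[0])           # noticeably below 1: linear association only
print(spearmanr(x, y)[0])          # exactly 1: the relation is perfectly monotonic
```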
| null |
CC BY-SA 2.5
| null |
2011-03-09T13:16:03.530
|
2011-03-09T13:16:03.530
| null | null | null | null |
8078
|
2
| null |
8000
|
10
| null |
You may also consider simply using a number of transforms of the time series for the input data. Just for one example, the inputs could be:
- the most recent interval value (7)
- the next most recent interval value (6)
- the delta between the most recent and next most recent (7-6=1)
- the third most recent interval value (5)
- the delta between the second and third most recent (6-5=1)
- the average of the last three intervals ((7+6+5)/3=6)
So, if your inputs to a conventional neural network were these six pieces of transformed data, it would not be a difficult task for an ordinary backpropagation algorithm to learn the pattern. You would have to code for the transforms that take the raw data and turn it into the above 6 inputs to your neural network, however.
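A sketch of that transform step in Python (the helper below is hypothetical, mirroring the six bullets above):
```
def window_features(series):
    """Map the last three observations of a series to the six inputs listed above."""
    third, second, recent = series[-3], series[-2], series[-1]
    return [
        recent,                            # most recent value
        second,                            # next most recent value
        recent - second,                   # delta between the two most recent
        third,                             # third most recent value
        second - third,                    # delta between second and third most recent
        (recent + second + third) / 3.0,   # average of the last three
    ]

print(window_features([1, 2, 3, 5, 6, 7]))   # -> [7, 6, 1, 5, 1, 6.0]
```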
| null |
CC BY-SA 2.5
| null |
2011-03-09T13:57:54.917
|
2011-03-09T13:57:54.917
| null | null |
2917
| null |
8079
|
1
| null | null |
0
|
146
|
Let's say we have only one sample, and we are interested in only one statistic c of this sample. This is usually true in practice, when we typically have only one data set. So, most probably, the usual estimating methods like MLE can't be applied.
And let's say we want to estimate 2 parameters (i.e. a & b) of a certain distribution which depend on c. Will the bootstrap technique work for this? Is there any theoretical proof for such a practice?
Any help will be appreciated.
tsukiko
|
Bootstrap for estimating parameters with only one sample
|
CC BY-SA 2.5
|
0
|
2011-03-09T14:15:07.337
|
2011-03-09T14:15:07.337
| null | null | null |
[
"estimation",
"bootstrap"
] |
8080
|
2
| null |
8044
|
4
| null |
In addition to Matt Parker's excellent advice (particularly about reading colleagues' code), the actual SAS documentation can be surprisingly helpful (once you've figured out the name of what you want):
[http://support.sas.com/documentation/](http://support.sas.com/documentation/)
And the Global Forum/SUGI proceedings are available here:
[http://support.sas.com/events/sasglobalforum/previous/online.html](http://support.sas.com/events/sasglobalforum/previous/online.html)
| null |
CC BY-SA 2.5
| null |
2011-03-09T14:29:35.183
|
2011-03-09T14:29:35.183
| null | null |
495
| null |
8082
|
1
|
8084
| null |
3
|
1496
|
I am running `ivprobit` in Stata to look at the determinants of enrolment in health insurance (cbhi). I have several exogenous regressors and one endogenous regressor (consumption). I am using wealthindex as an instrumental variable for consumption. However, when I run the `ivprobit` model all my exogenous regressors appear in the "instruments" list. Could someone please tell me how to prevent this from happening? I am copying the Stata code and results below.
Thank you in advance for your help.
```
. # delimit;
delimiter now ;
. ivprobit cbhi age_hhead age2 edu_hh2 edu_hh3 edu_hh4 edu_hh5 mar_hh hhmem_cat2 hhmem_cat3
> hhhealth_poor hh_chronic hh_diffic risk pregnancy wra any_oldmem j_hh2 j_hh3 j_hh4
> hlthseek_mod campaign qual_percep urban door_to_door chief_mem
> pharm_in_vil drugsel_in_vil hlthcent_in_vil privclin_in_vil young edu_chief num_camp_vill home_collection
> qual2 qual3 size_ln name_hosp1 name_hosp3 name_hosp4 name_hosp5 name_hosp6 (cons_pcm=wealthindex) ;
Fitting exogenous probit model
Iteration 0: log likelihood = -1909.5425
Iteration 1: log likelihood = -1687.946
Iteration 2: log likelihood = -1681.6413
Iteration 3: log likelihood = -1681.6142
Iteration 4: log likelihood = -1681.6142
Fitting full model
Iteration 0: log likelihood = -42769.776
Iteration 1: log likelihood = -42766.676
Iteration 2: log likelihood = -42736.028
Iteration 3: log likelihood = -42734.629
Iteration 4: log likelihood = -42734.269
Iteration 5: log likelihood = -42734.266
Iteration 6: log likelihood = -42734.266
Probit model with endogenous regressors Number of obs = 3000
Wald chi2(42) = 1113.82
Log likelihood = -42734.266 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
cons_pcm | 3.59e-06 2.15e-07 16.68 0.000 3.17e-06 4.01e-06
age_hhead | .0320042 .0129973 2.46 0.014 .00653 .0574784
age2 | -.0001774 .0001228 -1.45 0.148 -.0004181 .0000632
edu_hh2 | .051426 .0879334 0.58 0.559 -.1209203 .2237723
edu_hh3 | -.0075595 .1023748 -0.07 0.941 -.2082106 .1930915
edu_hh4 | -.000683 .1285689 -0.01 0.996 -.2526735 .2513075
edu_hh5 | -.2986249 .1713749 -1.74 0.081 -.6345136 .0372638
mar_hh | -.0539035 .067056 -0.80 0.421 -.1853308 .0775237
hhmem_cat2 | .4222092 .0675468 6.25 0.000 .2898198 .5545985
hhmem_cat3 | .7099401 .0822271 8.63 0.000 .5487778 .8711023
hhhealth_p~r | .1550617 .0699122 2.22 0.027 .0180362 .2920872
hh_chronic | .159681 .0718297 2.22 0.026 .0188973 .3004647
hh_diffic | .2613713 .0741209 3.53 0.000 .116097 .4066456
risk | -.0482252 .0467953 -1.03 0.303 -.1399423 .0434919
pregnancy | .1200344 .1214774 0.99 0.323 -.1180569 .3581257
wra | .1033809 .0289192 3.57 0.000 .0467004 .1600614
any_oldmem | .0792428 .0680778 1.16 0.244 -.0541872 .2126729
j_hh2 | .0512404 .0815692 0.63 0.530 -.1086323 .211113
j_hh3 | -.349126 .0806268 -4.33 0.000 -.5071516 -.1911004
j_hh4 | .1928496 .0843208 2.29 0.022 .0275838 .3581154
hlthseek_mod | -.0359996 .0969011 -0.37 0.710 -.2259223 .153923
campaign | .5068671 .0781293 6.49 0.000 .3537365 .6599977
qual_percep | .1840314 .0514758 3.58 0.000 .0831408 .2849221
urban | -.0999875 .0590944 -1.69 0.091 -.2158103 .0158353
door_to_door | .0564977 .0719369 0.79 0.432 -.084496 .1974914
chief_mem | .0811248 .0599964 1.35 0.176 -.036466 .1987156
pharm_in_vil | .0269211 .0582651 0.46 0.644 -.0872763 .1411185
drugsel_in~l | .0693485 .0586169 1.18 0.237 -.0455385 .1842355
hlthcent_i~l | .0089279 .0823509 0.11 0.914 -.1524769 .1703327
privclin_i~l | -.0547559 .0857815 -0.64 0.523 -.2228846 .1133728
young | .0899074 .0715818 1.26 0.209 -.0503903 .2302051
edu_chief | -.0888131 .0528856 -1.68 0.093 -.1924671 .0148408
num_camp_v~l | .0655667 .0548026 1.20 0.232 -.0418444 .1729779
home_colle~n | .1944581 .0606159 3.21 0.001 .0756531 .3132631
qual2 | -.1778623 .0896638 -1.98 0.047 -.3536001 -.0021245
qual3 | -.2049839 .1081036 -1.90 0.058 -.416863 .0068952
size_ln | -.0059746 .042326 -0.14 0.888 -.0889321 .0769829
name_hosp1 | .1476783 .1168295 1.26 0.206 -.0813032 .3766598
name_hosp3 | .536909 .1350044 3.98 0.000 .2723053 .8015128
name_hosp4 | .4347936 .1069197 4.07 0.000 .2252348 .6443523
name_hosp5 | .228802 .1274495 1.80 0.073 -.0209945 .4785985
name_hosp6 | .9189969 .1301651 7.06 0.000 .6638779 1.174116
_cons | -4.82205 .4438201 -10.86 0.000 -5.691922 -3.952179
-------------+----------------------------------------------------------------
/athrho | -.8556093 .0998439 -8.57 0.000 -1.0513 -.6599188
/lnsigma | 12.27712 .01291 950.98 0.000 12.25181 12.30242
-------------+----------------------------------------------------------------
rho | -.6939885 .0517571 -.7823111 -.5783094
sigma | 214725.5 2772.097 209360.5 220228.1
------------------------------------------------------------------------------
Instrumented: cons_pcm
Instruments: age_hhead age2 edu_hh2 edu_hh3 edu_hh4 edu_hh5 mar_hh
hhmem_cat2 hhmem_cat3 hhhealth_poor hh_chronic hh_diffic risk
pregnancy wra any_oldmem j_hh2 j_hh3 j_hh4 hlthseek_mod
campaign qual_percep urban door_to_door chief_mem
pharm_in_vil drugsel_in_vil hlthcent_in_vil privclin_in_vil
young edu_chief num_camp_vill home_collection qual2 qual3
size_ln name_hosp1 name_hosp3 name_hosp4 name_hosp5
name_hosp6 wealthindex
------------------------------------------------------------------------------
Wald test of exogeneity (/athrho = 0): chi2(1) = 73.44 Prob > chi2 = 0.0000
```
|
Why is Stata automatically converting regressors to instrumental variables in ivprobit model?
|
CC BY-SA 2.5
| null |
2011-03-09T15:10:25.447
|
2011-03-09T16:32:43.060
|
2011-03-09T16:32:43.060
|
930
|
834
|
[
"stata",
"instrumental-variables"
] |
8083
|
1
| null | null |
1
|
1723
|
Revised Version
Situation:
We have a sample of size $r$. Then, we count how many out of these $r$ are "successes".
Let $X=$ the number of "successes", and say $X$ follows a Poisson distribution.
Goal: To test whether $X$ is really Poisson, using the dispersion test with test statistic given by
$D=\frac{\sum_{i=1}^{n}(X_{i}-\overline{X})^2}{\overline{X}}$ --- as stated [here](http://www.stats.uwo.ca/faculty/aim/2004/04-259/notes/DispersionTests.pdf).
Problem:
- I have only one sample, so my $n=1$. So, $D=0$ if I use $\overline{X}=X_1$. To overcome this, I use the bootstrap to generate "n" bootstrap samples and obtain $\overline{X}$. Is this valid?
- Should I choose the generated bootstrap samples used for hypothesis testing in such a way that the resulting $\overline{X}=Var(X)$? -- I think this rule should be used if I want to estimate the Poisson parameter. I'm confused between using the bootstrap for estimation and using it for hypothesis testing.
3. Illustration:
a. The data: 0.42172863 0.28830514 0.66452743 0.01578868 0.02810549. So, $r=5$.
b. Define "success": an observation which is larger than 0.4. Here, $X=2$.
c. Obtain, let's say, 3 bootstrap samples (from the data in Step a):
i) 0.28830514 0.01578868 0.66452743 0.02810549 0.02810549. So, $X=1$.
ii) 0.42172863 0.66452743 0.02810549 0.66452743 0.66452743. So, $X=4$.
iii) 0.02810549 0.66452743 0.01578868 0.66452743 0.42172863. So, $X=3$.
d. So, $\overline{X}= 2.666667$ and $Var(X)= 2.333333$. Note that here, I didn't control how the 3 bootstrap samples (in Step c) are generated. That's why $\overline{X}>Var(X)$.
So, my questions:
1. Is the bootstrap a valid technique to obtain $\overline{X}$ in this case?
2. Should I control how the bootstrap samples are generated so that the resulting $\overline{X}=Var(X)$ is used for the hypothesis testing?
|
One-sample-dispersion-test for Poisson parameter
|
CC BY-SA 2.5
| null |
2011-03-09T15:12:20.300
|
2011-03-10T13:31:19.273
|
2011-03-10T03:59:32.407
| null | null |
[
"hypothesis-testing",
"poisson-distribution",
"bootstrap"
] |
8084
|
2
| null |
8082
|
6
| null |
In general, all exogenous variables are always included as instruments. Usually instruments are picked for the variables which are endogenous, but we can think (it follows from the mathematical derivation of instrumental variable estimation) that we need to choose instruments for all the variables. The instruments for the exogenous variables are then, naturally, the variables themselves.
| null |
CC BY-SA 2.5
| null |
2011-03-09T15:27:12.967
|
2011-03-09T15:27:12.967
| null | null |
2116
| null |
8086
|
2
| null |
8083
|
4
| null |
First of all, I don't think you can estimate (over)dispersion from a sample of size 1. Overdispersion means that the sample variance is more than predicted based on other properties of the sample (mean) and properties of the assumed model (for Poisson models, variance = mean). But for a sample of size one, the sample variance estimate is undefined (it involves dividing by $n-1 = 0$).
In any case, your ideas about the bootstrap procedure are not headed in the right direction. When you bootstrap, you are sampling data values from the sample, which in this case won't help. (i.e., say that $X_1 = 1$. Then the possible bootstrap samples are $[1,1], [1,1,1] ...$) Certainly, drawing extra data from a process that constrains the mean to be equal to the variance will invalidate your estimates of dispersion, as that is exactly the hypothesis under test.
Finally, I'm not familiar with the test formula you are using. Perhaps you are trying for the [variance to mean ratio](https://secure.wikimedia.org/wikipedia/en/wiki/Variance-to-mean_ratio) aka Index of dispersion, in which case the formula you give for $D$ is not correct?
| null |
CC BY-SA 2.5
| null |
2011-03-09T16:40:43.410
|
2011-03-09T16:40:43.410
| null | null |
2975
| null |
8087
|
1
|
44407
| null |
7
|
527
|
I am trying to study hidden conditional random fields but I still have some fundamental questions about those methods. I would be immensely grateful if someone could provide some clarification over the notation used on most papers about the topic.
In several papers the most common form of the the HCRF model is given as:
$p(w|o;\theta) = \frac{1}{z(o; \theta)} \sum_{s} \exp{ \Psi(w, s, o; \theta) } $
In which $\theta$ is the parameter vector, $w$ is the class label, $o$ is the observation sequence, $s$ is the hidden state sequence and $\Psi$ is the potential function. However, I still couldn't figure out what $s$ means. Is it just a sequence of integer numbers, or is it actually a sequence of nodes in a graph? How does one actually compute this summation?
[Most papers](http://people.csail.mit.edu/sybor/cvpr06_wang.pdf) I [have read](http://www.stanford.edu/~jurafsky/asru09.pdf) mention only that each $s_i \in S$ captures certain underlying structure of each class ($S$ being the set of hidden states in the model). But I still couldn't figure out what this actually means.
|
Hidden states in hidden conditional random fields
|
CC BY-SA 2.5
| null |
2011-03-09T16:46:25.777
|
2014-09-02T13:56:03.697
|
2011-03-09T16:53:24.623
| null |
1538
|
[
"machine-learning",
"classification",
"image-processing"
] |
8088
|
1
| null | null |
8
|
1128
|
I'm working on a project where we observe behaviour on a task (eg. response time) and model this behaviour as a function of several experimentally manipulated variables as well as several observed variables (participant sex, participant IQ, responses on a follow-up questionnaire). I don't have concerns about multicollinearity amongst the experimental variables because they were specifically manipulated to be independent, but I am concerned about the observed variables. However, I'm unsure how to assess independence amongst the observed variables, partially because I seem to get somewhat different results depending on how I set up the assessment, and also because I'm not very familiar with correlation in the context where one or both variables are dichotomous.
For example, here are two different approaches to determining if sex is independent of IQ. I'm not a fan of null hypothesis significance testing, so in both approaches I build two models, one with a relationship and one without, then compute an AIC-corrected log likelihood ratio:
```
m1 = lm(IQ ~ 1)
m2 = lm(IQ ~ sex)
LLR1 = AIC(m1)-AIC(m2)
m3 = glm(sex~1,family='binomial')
m4 = glm(sex~IQ,family='binomial')
LLR2 = AIC(m3)-AIC(m4)
```
However, these approaches yield somewhat different answers; LLR1 is about 7, suggesting strong evidence in favor of a relationship, while LLR2 is about 0.3, suggesting very weak evidence in favor of a relationship.
Furthermore, if I attempt to assess the independence between sex and another dichotomous observed variable, "yn", the resultant LLR similarly depends on whether I set up the models to predict sex from yn, or to predict yn from sex.
Any suggestions on why these differences are arising and how to most reasonably proceed?
|
Assessing multicollinearity of dichotomous predictor variables
|
CC BY-SA 2.5
| null |
2011-03-09T17:14:51.320
|
2011-03-11T17:47:46.867
|
2011-03-09T18:54:53.200
|
364
|
364
|
[
"regression",
"binomial-distribution",
"independence",
"multicollinearity",
"likelihood-ratio"
] |
8089
|
1
|
8129
| null |
11
|
1246
|
Why doesn't recall take into account true negatives? In experiments where true negatives are just as important as true positives, is there a comparable metric that does take them into account?
|
Why doesn't recall take into account true negatives?
|
CC BY-SA 2.5
| null |
2011-03-09T17:28:58.713
|
2011-03-10T14:51:01.483
| null | null |
3604
|
[
"precision-recall"
] |
8090
|
1
|
8092
| null |
8
|
1158
|
I have a dataset that's nominally 16-dimensional. I have about 100 samples in one case and about 20,000 in another. Based on various exploratory analyses I've conducted using PCA and heat maps, I'm convinced that the true dimensionality (i.e. the number of dimensions needed to capture most of the "signal") is around 4. I want to create a slide to that effect for a presentation. The "conventional wisdom" about this data, which I'm looking to disprove, is that the true dimensionality is one or two.
What's a good, simple visualization for showing the true dimensionality of a dataset? Preferably it should be understandable to people who have some background in statistics but are not "real" statisticians.
|
How to visualize the true dimensionality of the data?
|
CC BY-SA 2.5
| null |
2011-03-09T17:49:53.070
|
2011-03-10T04:47:44.177
|
2011-03-09T17:57:03.650
| null |
1347
|
[
"data-visualization",
"pca",
"dimensionality-reduction"
] |
8091
|
2
| null |
8090
|
4
| null |
One way to visualize this would be as follows:
- Perform a PCA on the data.
- Let $V$ be the vector space spanned by the first two principal component vectors, and let $V^\perp$ be its orthogonal complement.
- Decompose each vector $x_i$ in your data set as the sum of an element in $V$ plus a remainder term (which is in $V^\perp$). Write this as $x_i = v_i + c_i$. (This should be easy using the results of the PCA.)
- Create a scatter plot of $||c_i||$ versus $||v_i||$.
If the data is truly $\le 2$ dimensional, the plot should look like a flat line.
In Matlab (ducking from all the shoes being thrown):
```
lat_d = 2; %the latent dimension of the generating process
vis_d = 16; %manifest dimension
n = 10000; %number of samples
x = randn(n,lat_d) * randn(lat_d,vis_d) + 0.1 * randn(n,vis_d); %add some noise
xmu = mean(x,1);
xc = bsxfun(@minus,x,xmu); %Matlab syntax for element recycling: ugly, weird.
[U,S,V] = svd(xc); %this will be slow;
prev = U(:,1:2) * S(1:2,1:2);
prec = U(:,3:end) * S(3:end,3:end);
normv = sqrt(sum(prev .^2,2));
normc = sqrt(sum(prec .^2,2));
scatter(normv,normc);
axis equal; %to illustrate the differences in scaling, use equal axis scaling
```
This generates the following scatter plot:

If you change `lat_d` to 4, the line is less flat.
| null |
CC BY-SA 2.5
| null |
2011-03-09T18:03:12.767
|
2011-03-09T18:19:03.830
|
2011-03-09T18:19:03.830
|
795
|
795
| null |
8092
|
2
| null |
8090
|
6
| null |
A standard approach would be to do PCA and then show a scree plot, which you ought to be able to get out of any software you might choose. A little tinkering and you could make it more interpretable for your particular audience if necessary. Sometimes they can be convincing, but often they're ambiguous and there's always room to quibble about how to read them, so a scree plot may (edit: not!) be ideal. Worth a look though.
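For example, a minimal R sketch (assuming the data sit in a numeric matrix `x` with the 16 variables as columns):
```
pca <- prcomp(x, scale. = TRUE)   # PCA on standardised variables
screeplot(pca, type = "lines")    # scree plot of the component variances
```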
| null |
CC BY-SA 2.5
| null |
2011-03-09T18:11:31.550
|
2011-03-10T04:47:44.177
|
2011-03-10T04:47:44.177
|
26
|
26
| null |
8094
|
2
| null |
7973
|
2
| null |
You just asked for a mathematical shortcut. But I want to take a step back and point out that your approach could give you misleading results.
Are you planning this approach? Add additional observations only if you aren't happy with the initial results. Once the results make you happy, presumably because the P value is low enough, then you'll stop adding more samples. If the null hypothesis were true (all population means equal), this approach is far more likely than 5% to give you a P value less than 0.05. If you go long enough, you are certain to get a P value less than 0.05. But that may take longer than the age of the universe! With time constraints, you aren't certain to get a P value less than 0.05, but the chance is way more than 5% (depending, of course, on how often you add more data and retest). You really need to choose your sample size in advance (or use specialized techniques that account for sequential testing.)
| null |
CC BY-SA 2.5
| null |
2011-03-09T18:50:07.717
|
2011-03-09T18:50:07.717
| null | null |
25
| null |
8096
|
1
| null | null |
1
|
6174
|
How do I establish the +/- (i.e. a confidence interval) for the mean of a set of ages? My mean age is 27.2. The ages are 20 23 24 43 22 26 18 32 18 41 22 20 26 46 21 27 19 19 39 40 19 39 18 38 24 24 23 30.
Thank you.
|
How to calculate confidence interval of the mean age of a sample?
|
CC BY-SA 2.5
| null |
2011-03-09T19:54:06.577
|
2014-08-25T17:05:57.230
|
2011-03-10T03:29:13.523
|
183
| null |
[
"mean"
] |
8097
|
2
| null |
8090
|
0
| null |
I've done something similar using PROC Varclus in SAS. The basic idea is to generate a 4-cluster solution, pick the variable most highly correlated with each cluster, and then demonstrate that this 4-cluster solution explains more of the variation than the two-cluster solution. For the 2-cluster solution you could use either Varclus or the first 2 principal components, but I like Varclus since everything is explained via variables and not the components. There is a varclus in R, but I'm not sure if it does the same thing.
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-09T20:11:09.467
|
2011-03-09T20:18:42.687
|
2011-03-09T20:18:42.687
|
3489
|
3489
| null |
8099
|
1
|
8101
| null |
4
|
181
|
I have a linear string of unit length, and I randomly sample two locations a and b from Uniform(0, 1). Then I cut the string at these two locations to get a sub-string. What is the distribution for the size of the sub-string (i.e. |a-b|)? Otherwise, I want to at least know the mean and variance.
|
Distribution of a random segment on a string
|
CC BY-SA 2.5
| null |
2011-03-09T20:39:48.563
|
2011-03-10T01:29:29.180
| null | null |
578
|
[
"distributions",
"variance",
"mean"
] |
8100
|
2
| null |
8099
|
7
| null |
Here's a snippet of Maple code that shows that the PDF of $\lvert a - b \rvert$ is $2 - 2 t$:
```
with(Statistics):
a := RandomVariable(Uniform(0,1));
b := RandomVariable(Uniform(0,1));
PDF(abs(a - b), t) assuming 0 < t, t < 1;
```
returns $2 - 2 t$. You can see that it's true by computing the probability that $\lvert a - b \rvert < t$ by hand: it's the area of the subset of the unit square that has horizontal or vertical distance to the diagonal of at most $t$; which means that it's one minus the area of two right triangles with legs of length $1 - t$; which means that it's $1 - 2 \cdot \frac{(1 - t)^2}{2} = 1 - (1 - t)^2$. The derivative of that is $2 - 2t$.
(Disclaimer: I maintain Maple's Statistics package.)
| null |
CC BY-SA 2.5
| null |
2011-03-09T21:12:21.203
|
2011-03-09T21:12:21.203
| null | null |
2898
| null |
8101
|
2
| null |
8099
|
8
| null |
$(a,b)$ constitute one draw from the uniform distribution on the unit square. The region where $|a - b| \gt x$ is a pair of isosceles right triangles, one at $(0,1)$ and the other at $(1,0)$, with legs of length $1-x$; they fit together into a square with area $(1-x)^2$. Thus the CDF of $x = |a-b|$ equals $1 - (1-x)^2$, $0 \le x \le 1$.

A similar analysis appears in the solution to problem 43 of Fred Mosteller's Fifty Challenging Problems in Probability (1965). It asks for the expected values of the smallest, middle, and largest pieces.
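A quick Monte Carlo check of the result (purely illustrative):
```
set.seed(1)
a <- runif(1e5); b <- runif(1e5)
d <- abs(a - b)
mean(d)              # close to 1/3, the theoretical mean
mean(d <= 0.25)      # empirical CDF at 0.25
1 - (1 - 0.25)^2     # theoretical CDF: 1 - (1 - x)^2 = 0.4375
```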
| null |
CC BY-SA 2.5
| null |
2011-03-09T21:13:30.397
|
2011-03-10T01:29:29.180
|
2011-03-10T01:29:29.180
|
919
|
919
| null |
8102
|
2
| null |
8072
|
6
| null |
Your problem can be solved using the info in the [Wikipedia article for 'Margin of error'](http://en.wikipedia.org/wiki/Margin_of_error#Different_confidence_levels). As it says, the margin of error is largest when the proportion is 0.5. The maximum margin of error at 95% confidence is $m = 0.98 / \sqrt{n}.$ Rearranging gives $n = (0.98/m)^2.$ So if you want a margin of error of 5% = 0.05, $n = (0.98/0.05)^2 = 384$. So you need around 400 subjects for each group in which you want to estimate the proportion of 'yes' responses.
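For reference, the same calculation in R for a few margins of error:
```
m <- c(0.10, 0.05, 0.03)            # desired margins of error
n <- (0.98 / m)^2                   # worst case (p = 0.5) at 95% confidence
round(setNames(n, paste0(m * 100, "% margin")))   # about 96, 384, 1067
```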
| null |
CC BY-SA 2.5
| null |
2011-03-09T22:35:21.137
|
2011-03-09T22:35:21.137
| null | null |
449
| null |
8103
|
1
|
15249
| null |
2
|
1473
|
Asking this for a colleague:
They have two groups of people with various characteristics and to address the issue of comparability of outcomes they would like to reference a paper or book that discusses the concept and problems of common support.
|
What is a good reference that discusses the problem of common support?
|
CC BY-SA 2.5
|
0
|
2011-03-10T01:12:57.133
|
2011-09-06T13:55:24.433
| null | null |
1144
|
[
"distributions",
"propensity-scores"
] |
8104
|
1
|
8108
| null |
23
|
13186
|
## Background
One of the most commonly used weak priors on variance is the inverse-gamma with parameters $\alpha =0.001, \beta=0.001$ [(Gelman 2006)](http://www.stat.columbia.edu/~gelman/research/published/taumain.pdf).
However, this distribution has a 90% CI of approximately $[3\times10^{19},\infty]$.
```
library(pscl)
sapply(c(0.05, 0.95), function(x) qigamma(x, 0.001, 0.001))
[1] 3.362941e+19 Inf
```
From this, I interpret that the $IG(0.001, 0.001)$ gives a high probability that the variance will be very high, and a very low probability that the variance will be less than 1: $P(\sigma<1|\alpha=0.001, \beta=0.001)=0.006$.
```
pigamma(1, 0.001, 0.001)
[1] 0.006312353
```
## Question
Am I missing something or is this actually an informative prior?
Update: to clarify, the reason that I was considering this 'informative' is that it claims very strongly that the variance is enormous and well beyond the scale of almost any variance ever measured.
Follow-up: would a meta-analysis of a large number of variance estimates provide a more reasonable prior?
---
## Reference
Gelman 2006. [Prior distributions for variance parameters in
hierarchical models](http://www.stat.columbia.edu/~gelman/research/published/taumain.pdf). Bayesian Analysis 1(3):515–533
|
Why is a $p(\sigma^2)\sim\text{IG(0.001, 0.001)}$ prior on variance considered weak?
|
CC BY-SA 2.5
| null |
2011-03-10T03:00:48.210
|
2013-10-23T16:17:08.657
|
2011-04-29T00:40:19.050
|
3911
|
1381
|
[
"bayesian",
"multilevel-analysis",
"prior"
] |
8105
|
2
| null |
8104
|
10
| null |
It's pretty close to flat. Its median is 1.9 E298, almost the largest number one can represent in double precision floating arithmetic. As you point out, the probability it assigns to any interval that isn't really huge is really small. It's hard to get less informative than that!
| null |
CC BY-SA 2.5
| null |
2011-03-10T03:15:51.573
|
2011-03-10T03:15:51.573
| null | null |
919
| null |
8106
|
1
|
8131
| null |
44
|
57218
|
I am currently reading a [paper](http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VG2-4Y648C8-1&_user=10&_coverDate=11/30/2009&_rdoc=1&_fmt=high&_orig=gateway&_origin=gateway&_sort=d&_docanchor=&view=c&_searchStrId=1672491428&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=da95476f2c8535361fb031914a475b32&searchtype=a) concerning voting location and voting preference in the 2000 and 2004 elections. In it, there is a chart which displays logistic regression coefficients. From courses years back and a little [reading up](http://en.wikipedia.org/wiki/Logistic_regression), I understand logistic regression to be a way of describing the relationship between multiple independent variables and a binary response variable. What I'm confused about is this: given the table below, because the South has a logistic regression coefficient of .903, does that mean that 90.3% of Southerners vote Republican? I assume that, because of the logistic nature of the metric, this direct correspondence does not exist. Instead, I assume you can only say that the South, with .903, votes Republican more than the Mountains/Plains, with a coefficient of .506. Given that this is the case, how do I know what is significant and what is not, and is it possible to extrapolate a percentage of Republican votes from a given logistic regression coefficient?

As a side note, please edit my post if anything is stated incorrectly
|
What is the significance of logistic regression coefficients?
|
CC BY-SA 2.5
| null |
2011-03-10T03:45:41.767
|
2018-10-30T16:01:00.207
|
2020-06-11T14:32:37.003
|
-1
|
3652
|
[
"regression",
"logistic",
"interpretation"
] |
8107
|
1
|
8162
| null |
11
|
23526
|
In `R` I can perform a Breusch–Pagan test for heteroscedasticity using the `ncvTest` function of the `car` package. A Breusch–Pagan test is a type of chi squared test.
How do I interpret these results:
```
> require(car)
> set.seed(100)
> x1 = runif(100, -1, 1)
> x2 = runif(100, -1, 1)
> ncvTest(lm(x1 ~ x2))
Non-constant Variance Score Test
Variance formula: ~ fitted.values
Chisquare = 0.2343406 Df = 1 p = 0.6283239
> y1 = cumsum(runif(100, -1, 1))
> y2 = runif(100, -1, 1)
> ncvTest(lm(y1 ~ y2))
Non-constant Variance Score Test
Variance formula: ~ fitted.values
Chisquare = 1.191635 Df = 1 p = 0.2750001
```
|
How do I interpret the results of a Breusch–Pagan test?
|
CC BY-SA 2.5
| null |
2011-03-10T04:09:10.763
|
2017-11-12T17:21:11.933
|
2017-11-12T17:21:11.933
|
11887
|
179
|
[
"interpretation",
"heteroscedasticity"
] |
8108
|
2
| null |
8104
|
42
| null |
Using the inverse gamma distribution, we get:
$$p(\sigma^2|\alpha,\beta) \propto (\sigma^2)^{-\alpha-1} \exp(-\frac{\beta}{\sigma^2})$$
You can easily see that if $\beta \rightarrow 0$ and $\alpha \rightarrow 0$ then the inverse gamma will approach the Jeffreys prior. This distribution is called "uninformative" because it is a proper approximation to the Jeffreys prior
$$p(\sigma^2) \propto \frac{1}{\sigma^2}$$
which is uninformative for scale parameters [see page 18 here for example](http://bayes.wustl.edu/etj/articles/prior.pdf), because this prior is the only one which remains invariant under a change of scale (note that the approximation is not invariant). It has an indefinite integral of $\log(\sigma^2)$, which shows that it is improper if the range of $\sigma^2$ includes either $0$ or $\infty$. But these cases are only problems in the maths, not in the real world: you never actually observe an infinite value for a variance, and if the observed variance is zero, you have perfect data! For you can set a lower limit $L>0$ and an upper limit $U<\infty$, and your distribution is proper.
While it may seem strange that this prior is "uninformative" in that it prefers small variances to large ones, this is only on one scale. You can show that $\log(\sigma^2)$ has an improper uniform distribution, so this prior does not favor any one scale over any other.
Although not directly related to your question, I would suggest a "better" non-informative distribution by choosing the upper and lower limits $L$ and $U$ in the Jeffreys prior rather than $\alpha$ and $\beta$. Usually the limits can be set fairly easily with a bit of thought about what $\sigma^2$ actually means in the real world. If it is the error in some kind of physical quantity, $L$ cannot be smaller than the size of an atom, or the smallest size you can observe in your experiment, and $U$ cannot be bigger than the earth (or the sun if you want to be really conservative). This way you keep your invariance properties, and it's an easier prior to sample from: take $q_{(b)} \sim \mathrm{Uniform}(\log(L),\log(U))$, and then take the simulated value as $\sigma^{2}_{(b)}=\exp(q_{(b)})$.
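To illustrate the sampling step (the limits `L` and `U` below are purely illustrative):
```
L <- 1e-6; U <- 1e6                   # problem-specific limits, chosen by you
q <- runif(10000, log(L), log(U))     # q ~ Uniform(log L, log U)
sigma2 <- exp(q)                      # draws from the bounded Jeffreys prior
quantile(sigma2, c(0.05, 0.5, 0.95))
```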
| null |
CC BY-SA 3.0
| null |
2011-03-10T04:27:49.310
|
2013-10-23T16:17:08.657
|
2013-10-23T16:17:08.657
|
17230
|
2392
| null |
8109
|
2
| null |
8106
|
22
| null |
The idea here is that in logistic regression, we predict not the actual probability that, say, a southerner votes Republican, but a transformed version of it, the "log odds". Instead of the probability $p$, we deal with $\log p/(1-p)$ and find linear regression coefficients for the log odds.
So for example, let's assume that an urban Northeasterner has probability 0.3 of voting for a Republican. (This would of course be part of the regression; I don't see it reported in this table, although I assume it's in the original paper.) Now, $x = 1/(1+e^{-z})$ gives $z = \log {x \over 1-x}$; that is, $f^{-1}(x) = \log {x \over 1-x}$, the "log odds" corresponding to $x$. These "log odds" are what behave linearly; the log odds corresponding to $0.3$ are $\log 0.3/0.7 \approx -0.85$. So the log odds for an urban Southerner voting Republican are this (what Wikipedia calls the intercept, $\beta_0$) plus the logistic regression coefficient for the South, $0.903$ -- that is, $-0.85 + 0.903 \approx 0.05$. But you want an actual probability, so we need to invert the function $p \to \log p/(1-p)$. That gives $f(0.05) \approx 1/(1+e^{-0.05}) \approx 0.51$. The actual odds have gone from $0.43$ to $1$, to $1.05$ to $1$; the ratio $1.05/0.43$ is $e^{0.903}$, the exponential of the logistic regression coefficient.
Furthermore, the effects for, say, region of the country and urban/suburban/rural don't interact. So the log odds of a rural Midwesterner voting Republican, say, are $-0.85 + 0.37 + 0.68 = +0.20$ according to this model; the probability is $f(0.20) = 1/(1+e^{-0.20}) = 0.55$.
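For readers who want to check the arithmetic, here is the same calculation in R, using the 0.3 baseline probability assumed above and the coefficients from the worked example:
```
b0 <- qlogis(0.3)            # intercept on the log-odds scale, about -0.85
plogis(b0 + 0.903)           # urban Southerner: about 0.51
plogis(b0 + 0.37 + 0.68)     # rural Midwesterner: about 0.55
```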
| null |
CC BY-SA 2.5
| null |
2011-03-10T04:30:00.697
|
2011-03-10T05:05:51.337
|
2011-03-10T05:05:51.337
|
98
|
98
| null |
8110
|
2
| null |
8106
|
6
| null |
The coefficients in the logistic regression represent the tendency for a given region/demographic to vote Republican, compared to a reference category. A positive coefficient means that region is more likely to vote Republican, and vice-versa for a negative coefficient; a larger absolute value means a stronger tendency than a smaller value.
The reference categories are "Northeast" and "urban voter", so all the coefficients represent contrasts with this particular voter type.
In general, there's also no restriction on the coefficients in a logistic regression to be in [0, 1], even in absolute value. Notice that the Wikipedia article itself has an example of a logistic regression with coefficients of -5 and 2.
| null |
CC BY-SA 2.5
| null |
2011-03-10T05:42:36.100
|
2011-03-10T05:42:36.100
| null | null |
1569
| null |
8111
|
2
| null |
8088
|
3
| null |
I think you are trying to interpret P(A|B) and P(B|A) as if they should be the same thing. There is no reason for them to be equal, because of the product rule:
$$P(AB)=P(A|B)P(B)=P(B|A)P(A)$$
unless $P(B)=P(A)$, $P(A|B) \neq P(B|A)$ in general. This explains the difference in the "yn" case. Unless you have a "balanced" table (row totals equal to column totals), the conditional probabilities (row and column) will not be equal.
A test for "logical/statistical independence" (but not causal independence) between categorical variables can be given as:
$$T=\sum_{ij} O_{ij} log\Big(\frac{O_{ij}}{E_{ij}}\Big)$$
Where $ij$ indexes the cells of the table (so in your example, $ij=11,12,21,22$). $O_{ij}$ is the observed value in the table, and $E_{ij}$ is what is "expected" under independence, which is simply the product of the marginals
$$E_{ij}=O_{\bullet \bullet}\frac{O_{i \bullet}}{O_{\bullet \bullet}}\frac{O_{\bullet j}}{O_{\bullet \bullet}}
=\frac{O_{i \bullet}O_{\bullet j}}{O_{\bullet \bullet}}$$
Where a "$\bullet$" indicates that you sum over that index. You can show that if you had a prior log-odds value for independence of $L_{I}$ then the posterior log-odds is $L_{I}-T$. The alternative hypothesis is $E_{ij}=O_{ij}$ (i.e. no simplification, no independence), for which $T=0$. Thus T says "how strongly" the data support non-independence, within the class of multinomial distributions. The good thing about this test is that it works for all $E_{ij}>0$, so you don't have to worry about a "sparse" table. This test will still give sensible results.
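To make the formula concrete, here is a minimal R sketch for a 2 x 2 table; the counts are made up for illustration:
```
O <- matrix(c(30, 10,
              20, 40), nrow = 2, byrow = TRUE)   # observed counts
E <- outer(rowSums(O), colSums(O)) / sum(O)      # expected under independence
Tstat <- sum(O * log(O / E))                     # the statistic T above
Tstat
```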
For the regressions, this is telling you that the average IQ value is different between the two values of sex, although I don't know the scale of the AIC difference (is this "big"?).
I'm not sure how appropriate the AIC is to a binomial GLM. It may be a better idea to look at the ANOVA and deviance tables for the LM and GLM respectively.
Also, have you plotted the data? always plot the data!!! this will be able to tell you things that the test does not. How different do the IQs look when plotted by sex? how different do the sexes look when plotted by IQ?
| null |
CC BY-SA 2.5
| null |
2011-03-10T05:44:26.280
|
2011-03-10T05:51:25.883
|
2011-03-10T05:51:25.883
|
2392
|
2392
| null |
8112
|
2
| null |
8063
|
3
| null |
In addition to Anscombe's quartet as mentioned by Peter Flom, [here](http://ferrari.dmat.fct.unl.pt/personal/mle/GestaoRisco/Embrechts/risk.pdf) is a very nice paper in the risk-management context illustrating the problems of using linear correlation with non-normally distributed variables. In a nutshell, much of our intuition about how correlation behaves -- all values of $\rho \in [-1, 1]$ are possible; an exact monotonic relationship implies $|\rho | = 1$; $\rho = 0$ implies independence; etc, doesn't necessarily apply in the case of non-normality.
| null |
CC BY-SA 2.5
| null |
2011-03-10T06:01:42.227
|
2011-03-10T06:07:40.783
|
2011-03-10T06:07:40.783
|
1569
|
1569
| null |
8113
|
2
| null |
8107
|
5
| null |
The first application of `ncvTest` reports that there is no heteroscedasticity, as it should. The second is not meaningful, since your dependent variable is a random walk. The [Breusch–Pagan test](http://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test) is asymptotic, so I suspect that it cannot be readily applied to a random walk. I do not think there are tests for heteroscedasticity for random walks, because non-stationarity poses many more problems than heteroscedasticity, hence testing for the latter in the presence of the former is not practical.
| null |
CC BY-SA 2.5
| null |
2011-03-10T07:53:02.863
|
2011-03-10T07:53:02.863
| null | null |
2116
| null |
8114
|
1
|
8120
| null |
9
|
8154
|
According to [this document](http://www.stat.psu.edu/online/courses/stat509/09_interim/09_interim_print.htm): The estimated logarithm of the hazard ratio is approximately normally distributed with variance (1/d1) + (1/d2), where d1 and d2 are the numbers of events in the two treatment groups.
Do you have a reference for this statement? Or at least can you tell me which estimator is used?
|
The estimated logarithm of the hazard ratio is approximately normally distributed
|
CC BY-SA 4.0
| null |
2011-03-10T08:07:51.637
|
2022-07-12T00:52:14.443
|
2022-07-12T00:52:14.443
|
11887
|
3019
|
[
"survival",
"logarithm",
"hazard"
] |
8115
|
2
| null |
7258
|
1
| null |
If I understand, you have a classification problem where you observe $(X_1,Y_1),\dots,(X_n,Y_n)$ ($Y_i=1$ corresponds to default and $Y_i=0$ to no default), and you have many more $Y_i=0$ than $Y_i=1$. You want to take a subsample $S$ of the $(X_i,Y_i)$ that have $Y_i=0$.
I am not sure I understand all the points in your question but I will address the question of the effect of subsampling, and the other question of assigning equal/non equal importance to classes in classification.
Forecasting probability of default.
For predictive purposes, if you use the Bayes principle, you might construct, for $i=0,1$, an estimate $\hat{f}_i(x)$ of the density $f_i$ of $(X|Y=i)$, and estimate the probability that $Y=i$ by $\hat{\pi}_i$. This will permit you to estimate the probability of default given $X=x$:
$$\hat{P}(x)=\frac{\hat{\pi}_0\hat{f}_0(x)}{\hat{\pi}_1\hat{f}_1(x)+\hat{\pi}_0\hat{f}_0(x)}$$
If you have a correct subsample and if you know the proportions, you only need to adjust $\hat{\pi}_0$ (resp. $\hat{\pi}_{1}$), the estimated proportion of no default (resp. of default). The only consequence of subsampling will then be that $\hat{f}_0$ will not be as good as it should be. This will affect the probability $\hat{P}$ the most when $\hat{\pi}_1\hat{f}_1(x)>>\hat{\pi}_0\hat{f}_0(x)$, that is when you are certain that there is no default.
Forecasting default
If you just want a classification rule (i.e. to forecast default or no default) then you can set a threshold on the preceding probability, which will not necessarily be $1/2$. If you think a false default alarm costs more than a missed default alarm, then you will take a threshold larger than $1/2$, and if it is the contrary you will use a threshold lower than $1/2$. In this case the effect of subsampling can be deduced from its effect in the probability-forecast case.
Hypothesis testing approach
I think it is clear that there are cases when you don't want these errors to be treated with equal importance, and setting the threshold in the preceding paragraph may be subjective. I find it interesting to use a hypothesis testing approach (i.e. not Bayesian) in which you want to construct a decision rule $\psi(x)$ that minimizes
$$ \alpha P_1(\psi(X)= 0) + (1-\alpha) P_0(\psi(X)=1) \;\; [1]$$
where $\alpha$ is a parameter you need to choose to set relative importance of your errors. For example if you don't want to give more importance to one error than to the other you can take $\alpha=1/2$. The rule that minimizes equation [1] above is given by
$$ \psi^*(x)= \left \{ \begin{array}{ccc} 0 & if & \alpha f_0(x)/(\alpha f_0(x)+(1-\alpha)f_1(x))>1/2 \\ 1 & else & \end{array} \right .$$
If you replace, for $i=0,1$, $f_i$ by $\hat{f}_i$ then you get the Bayesian classification rule but with the weight $\hat{\pi}_0$ (frequency of default) replaced by the real importance $\alpha$ you give to a default.
| null |
CC BY-SA 2.5
| null |
2011-03-10T08:41:57.213
|
2011-03-10T08:41:57.213
| null | null |
223
| null |
8117
|
1
|
8118
| null |
37
|
12598
|
Our small team was having a discussion and got stuck. Does anyone know whether Cox regression has an underlying Poisson distribution? We had a debate about whether Cox regression with constant time at risk has similarities with Poisson regression with a robust variance. Any ideas?
|
Does Cox Regression have an underlying Poisson distribution?
|
CC BY-SA 2.5
| null |
2011-03-10T09:17:00.893
|
2013-10-25T22:07:58.210
|
2013-10-25T22:07:58.210
|
7515
|
2961
|
[
"regression",
"poisson-distribution",
"cox-model"
] |
8118
|
2
| null |
8117
|
37
| null |
Yes, there is a link between these two regression models. Here is an illustration:
Suppose the baseline hazard is constant over time: $h_{0}(t) = \lambda$. In that case, the survival function is
$S(t) = \exp\left(-\int_{0}^{t} \lambda du\right) = \exp(-\lambda t)$
and the density function is
$f(t) = h(t) S(t) = \lambda \exp(-\lambda t)$
This is the pdf of an exponential random variable with expectation $\lambda^{-1}$.
Such a configuration yields the following parametric Cox model (with obvious notations):
$h_{i}(t) = \lambda \exp(x'_{i} \beta)$
In the parametric setting the parameters are estimated using the classical likelihood method. The log-likelihood is given by
$l = \sum_{i} \left\{ d_{i}\log(h_{i}(t_{i})) - t_{i} h_{i}(t_{i}) \right\}$
where $d_{i}$ is the event indicator.
Up to an additive constant, this is nothing but the same expression as the log-likelihood of the $d_{i}$'s seen as realizations of a Poisson variable with mean $\mu_{i} = t_{i}h_{i}(t)$.
As a consequence, one can obtain estimates using the following Poisson model:
$\log(\mu_{i}) = \log(t_{i}) + \beta_0 + x_{i}'\beta$
where $\beta_0 = \log(\lambda)$.
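As a quick illustration of this equivalence (not an analysis of your data), the constant-hazard model can be fitted as a Poisson GLM with a log-time offset; a data frame `mydata` with columns `d` (event indicator), `t` (follow-up time) and `x` (a covariate) is assumed here:
```
fit <- glm(d ~ x + offset(log(t)), family = poisson, data = mydata)
exp(coef(fit))   # exp(intercept) estimates lambda; the rest are hazard ratios
```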
| null |
CC BY-SA 2.5
| null |
2011-03-10T09:34:42.957
|
2011-03-10T09:34:42.957
| null | null |
3019
| null |
8119
|
1
|
13846
| null |
5
|
1756
|
I am trying to sample from a Gamma distribution in JAGS
```
gd[i] ~ dgamma(k,r)
```
where k and r have priors that constrain them to be positive. However I get the following error when I compile the model:
```
Error in node gd[9]
Unobserved node inconsistent with unobserved parents at initialization
```
Any ideas?
EDIT: BACKGROUND
What I am trying to do is to estimate the parameters of the CIR process with Bayesian techniques. In order to assess the estimator, I have simulated some data with given parameters on a quarterly basis (timeStep=90):
```
jdata <- list(timeSerie=c(35000, 47123.3999346341, 55373.904244638, 71755.7487795017, 91246.3370619453, 76969.2358694695, 83390.030308266, 80657.1222156083, 76784.466470129, 83962.4642428193, 85335.5796326587, 87018.8279086056, 74059.9912730579, 69683.5404626939, 90356.0122717258, 87240.7793078735, 74276.2931903815, 39899.5948334073, 37911.738710607, 73811.7536157058, 69334.5390010963, 51326.297331755, 42711.7358571988, 32409.9939465308, 16663.7353817099, 22421.1721803643, 38604.0551895535, 50181.3269231991, 44346.3517857709, 52127.2156791002, 54740.1181745155, 41869.456839522, 50665.5626313423, 51121.4339441224, 39059.432945341, 44350.0205383802, 68629.3291332604, 49933.2126408239, 64358.5131164748, 102975.634845413, 84658.5787221384, 52243.061086781, 56988.7184089767, 43548.6558054764, 38536.145535317, 43115.4155553003, 50328.6208374212, 47191.4284861612, 42603.6038389972, 43749.2977769786, 43583.8022320709, 36672.3782472362, 26377.9033658583, 41746.6472558272, 26086.7868276075, 47228.5390765096, 75765.4757503756, 70939.107521347, 63949.8857556708, 55153.6701237129),timeStep=90)
```
Then I wrote a model with JAGS. The problem is that the CIR process has a scaled noncentral chi-square distribution and this is not directly supported in JAGS. So, inspired by the `R` implementation, I came up with
```
model {
for (i in 2:length(timeSerie)){
ncp[i] <- 2*c*et*timeSerie[i-1]
pd[i] ~ dpois(ncp[i]/2)
cd[i] ~ dgamma(pd[i]+1E-10,c)
gd[i] ~ dgamma(df/2,4*c)
timeSerie[i] ~ dsum(cd[i],gd[i])
}
#parameters
et <- exp(-kappa*timeStep)
c <- 2*kappa/(sigma^2*(1-et))
df <- 4*kappa*mu/sigma^2
#CIR parameter priors
lkappa ~ dunif(-10,-2)
kappa <- exp(lkappa)
mu ~ dunif(30000,100000)
lsigma ~ dunif(0,5)
sigma <- exp(lsigma)
}
```
Trying to run the model (`jags.model` in `R`), I get the following error
```
Error in node gd[18]
Unobserved node inconsistent with unobserved parents at initialization
```
(Now it is 18 instead of 8, because I use other simulated data).
I tried to debug JAGS (gdb and valgrind) to no avail: I don't have enough knowledge to do it...
|
Jags error with dgamma
|
CC BY-SA 2.5
| null |
2011-03-10T10:09:33.283
|
2011-11-02T14:22:11.550
|
2011-03-11T08:03:16.257
|
1443
|
1443
|
[
"bayesian",
"markov-chain-montecarlo",
"jags"
] |
8120
|
2
| null |
8114
|
10
| null |
The fact that this is approximately normally distributed relies on the central limit theorem (CLT), so will be a better approximation in large samples. The CLT works better for the log of any ratio (risk ratio, odds ratio, hazard ratio..) than for the ratio itself.
In suitably large samples, I think this is a good approximation to the variance in two situations:
- The hazard in each group is constant over time (regardless of the hazard ratio)
- The proportional hazards assumption holds and the hazard ratio is close to 1
I think it may become a fairly crude assumption in situations far from these, i.e. if the hazards vary considerably over time and the hazard ratio is far from 1. Whether you can do better depends on what information is available. If you have access to the full data you can fit a proportional hazards model and get the variance of the log hazard ratio from that. If you only have the info in a published paper, various other approximations have been developed by meta-analysts. These two references are taken from the [Cochrane Handbook](http://cochrane-handbook.org):
- M. K. B. Parmar, V. Torri, and L. Stewart (1998). "Extracting summary statistics to perform meta-analyses of the published literature for survival endpoints." Statistics in Medicine 17 (24):2815-2834.
- Paula R. Williamson, Catrin Tudur Smith, Jane L. Hutton, and Anthony G. Marson. "Aggregate data meta-analysis with time-to-event outcomes". Statistics in Medicine 21 (22):3337-3351, 2002.
In Parmar et al, the expression you give would follow from using observed numbers in place of expected in their equation (5), or combining equations (6) and (12). Equations (5) and (6) are based on [logrank](http://en.wikipedia.org/wiki/Logrank_test) methods. They reference [Kalbfleisch & Prentice](http://books.google.com/books?id=pivvAAAAMAAJ) for equation (12) but I don't have that to hand, so maybe someone who does might like to check it and add to this.
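As a rough numerical illustration of the approximation (the event counts and hazard ratio below are made up):
```
d1 <- 40; d2 <- 60                  # events in the two treatment groups
logHR <- log(0.75)                  # an estimated log hazard ratio
se <- sqrt(1 / d1 + 1 / d2)         # approximate SE of log(HR)
exp(logHR + c(-1.96, 1.96) * se)    # approximate 95% CI for the hazard ratio
```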
| null |
CC BY-SA 2.5
| null |
2011-03-10T10:38:25.433
|
2011-03-10T10:44:15.927
|
2011-03-10T10:44:15.927
|
449
|
449
| null |
8121
|
2
| null |
8103
|
1
| null |
If I understand, you have a classification problem where you observe $(X_1,Y_1),\dots,(X_n,Y_n)$. Denoting by $\mathcal{X}$ the space where $X$ takes values, you assume the existence of $\mathcal{X}_{10}$, the smallest closed subset of $\mathcal{X}$ such that:
$\mathcal{X}\setminus \mathcal{X}_{10}$ can be partitioned into $\mathcal{X}_{1\setminus 0}\cup \mathcal{X}_{0\setminus 1}$ with $P_{0}(X\in \mathcal{X}_{1\setminus 0})=0$, $P_{1}(X\in \mathcal{X}_{0\setminus 1})=0$ (for $i=0,1$, $P_i$ is the distribution of $(X|Y=i)$).
I think there are two problems that may arise from your question:
- How do you justify a decomposition of the distribution to be able to solve the problem only on $\mathcal{X}_{10}$ (Theoretical problem)
- How do you estimate $\mathcal{X}_{1\setminus 0}$, $\mathcal{X}_{0\setminus 1}$ and$\mathcal{X}_{0 1}$ from data (practical estimation problem)
Theoretical problem
In this case (and under suitables conditions) you can decompose $P_{1}$ and $P_0$ using the Radon-Nikodym theorem (This [version](http://fr.wikipedia.org/wiki/Th%C3%A9or%C3%A8me_de_Radon-Nikodym-Lebesgue#Th.C3.A9or.C3.A8me_de_Radon-Nikodym)) into:
$$P_{1}=P_{1,\mathcal{X}_{1\setminus 0}}+P_{1,\mathcal{X}_{0 1}}$$
$$P_{0}=P_{0,\mathcal{X}_{0\setminus 1}}+P_{0,\mathcal{X}_{0 1}}$$
with $P_{0,\mathcal{X}_{0 1}}\sim P_{1,\mathcal{X}_{0 1}}$ (mutually absolutely continuous)
$P_{1,\mathcal{X}_{0 1}} \bot P_{1,\mathcal{X}_{1\setminus 0}} \bot P_{0,\mathcal{X}_{0\setminus 1}}$ (mutually singular)
Note that the distributions in the decomposition are not probabilities, and you need to renormalize (i.e. compute $P_{i}(X\in A)$ when $A$ is one of the three above-mentioned sets). This renormalization is also required from a practical point of view.
Once you have done the renormalization you can construct a classification rule on $\mathcal{X}_{0 1}$ using the renormalized versions of $P_1$'s and $P_0$'s restrictions. You can extend this classification rule in the obvious way to the whole of $\mathcal{X}$.
Estimation problem
If you want to "learn" the above-mentioned rule from the data, you need to estimate the corresponding sets. I guess there is a large literature on "support" estimation, but in view of the classification problem at the end, it is not a big deal, since the places where you might be wrong in terms of support estimation are those where you don't have much "mass", and hence those that are not really important for the classification errors at the end... at least from a Bayesian perspective of which error is important... (see my answer [here](https://stats.stackexchange.com/questions/7258/information-content-of-examples-and-undersampling/8115#8115))
| null |
CC BY-SA 2.5
| null |
2011-03-10T10:39:31.160
|
2011-03-10T10:39:31.160
|
2017-04-13T12:44:24.677
|
-1
|
223
| null |
8123
|
2
| null |
8106
|
6
| null |
You also asked "how do I know what is significant and what is not." (I assume you mean statistically significant, since practical or substantive significance is another matter.) The asterisks in the table refer to the footnote: some effects are noted as having small p-values. These are obtained using a Wald test of the significance of each coefficient. Assuming random sampling, p<.05 means that, if there were no such effect in the larger population, the probability of seeing a connection as strong as the one observed, or stronger, in a sample of this size would be less than .05. You'll see many threads on this site discussing the subtle but important related point that p<.05 does not mean that there is a .05 probability of there being no connection in the larger population.
| null |
CC BY-SA 2.5
| null |
2011-03-10T13:08:08.867
|
2011-03-10T13:08:08.867
| null | null |
2669
| null |
8124
|
2
| null |
8083
|
3
| null |
Your question contains a "contradiction". The way you have described it, the variable defined as $X$ cannot have a Poisson distribution. I'll explain why.
So, you choose a cut-off point, $y_{0}$, and define the original variables $Y_{i}\:(i=1,\dotsc,r)$. You then take a new variable $X_{i}$ and define it as:
$$
X_{i} \equiv \begin{cases}
1 & Y_{i} > y_{0} \\
0 & Y_{i} \leq y_{0}
\end{cases}
$$
Then $X=\sum_{i} X_{i} \sim Poisson(\lambda)$. [Raikov's theorem](http://en.wikipedia.org/wiki/Raikov%27s_theorem) says that this implies the individual terms are also Poisson, $X_{i}\sim Poisson(\frac{\lambda}{r})$. But this is impossible, for as $X_{i}$ has been defined, it can only take two values $(0,1)$, whereas a Poisson has support $(0,1,2,\dots)$. Therefore $X$ cannot be Poisson.
From the way it has been defined in the question $X$ is binomially distributed
$$X \sim Bin(r,\theta)$$
where
$$\theta \equiv Pr(Y_{i}>y_{0})=1-F_{Y}(y_{0})$$
What your bootstrap procedure is doing is attempting to estimate $E(X)=r\theta$, but the estimate of $\theta$ is just given by $\hat{\theta}=\frac{X_{obs}}{r}$ and has variance $\frac{\hat{\theta}(1-\hat{\theta})}{r+I_{B}}$, where $I_{B}$ is what I call the "Bayesian indicator". If you are Bayesian with a Haldane prior, $I_{B}=1$; if you are frequentist, then $I_{B}=0$.
You could do a bootstrap if you wanted, but because the maths is there, I'm sure you would get the same answers for big enough bootstrap samples (the simulation will only demonstrate the maths). Also note that the binomial is "under-dispersed", with the mean being greater than the variance, as your bootstrap shows. Also, your 3 iterations are "approximately" in line with the theoretical mean $r\hat{\theta}=2$ and theoretical variance $r\hat{\theta}(1-\hat{\theta})=1.2$. From looking at the bootstrap samples, this is to be expected (your mean-to-variance ratio being below the theoretical one), as you sampled the "high" number disproportionately in 2 of the simulations.
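For what it's worth, here is a small R sketch, using the five observations and the 0.4 cut-off from the question, showing that a large bootstrap simply reproduces the binomial mean and variance:
```
y <- c(0.42172863, 0.28830514, 0.66452743, 0.01578868, 0.02810549)
r <- length(y); theta_hat <- mean(y > 0.4)               # 2/5
X <- replicate(10000, sum(sample(y, r, replace = TRUE) > 0.4))
c(boot_mean = mean(X), boot_var = var(X))                # close to 2 and 1.2
c(theory_mean = r * theta_hat,
  theory_var  = r * theta_hat * (1 - theta_hat))         # exactly 2 and 1.2
```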
| null |
CC BY-SA 2.5
| null |
2011-03-10T13:31:19.273
|
2011-03-10T13:31:19.273
| null | null |
2392
| null |
8125
|
2
| null |
8012
|
1
| null |
First let us introduce notation $y_{ki}=t_{k,i,2}-t_{k,i,1}$. Then $\bar\tau=\frac{1}{K}\sum_k\frac{1}{n_k}\sum_iy_{ki}$. The variance then is
\begin{align}
Var(\bar\tau)&=E(\bar\tau-E\bar\tau)^2=E\left(\frac{1}{K}\sum_k\frac{1}{n_k}\sum_i(y_{ki}-Ey_{ki})\right)^2\\
&=E\frac{1}{K^2}\sum_k\sum_l\frac{1}{n_kn_l}\sum_i\sum_jcov(y_{ki},y_{lj})
\end{align}
Now the question is what can you say about $cov(y_{ki},y_{lj})$ which converted to original notation is $cov(t_{k,i,2}-t_{k,i,1},t_{l,j,2}-t_{l,j,1})$. If they are not correlated when $k\neq l$ or $i\neq j$ then the variance is simply
\begin{align}
Var(\bar\tau)=\frac{1}{K^2}\sum_k\frac{1}{n_k^2}\sum_i\sigma_{ki}^2.
\end{align}
I would be careful about employing stationarity. Your process has 2 indexes, whereas stationarity is usually defined for singly indexed processes. When you have a single index there is a natural order of the observations $t_1<t_2<...$ and stationarity means that if you fix times $t_1<...<t_k$ then the vectors $(X_{t_1},...,X_{t_k})$ and $(X_{t_1+h},...,X_{t_k+h})$ behave similarly for every shift $h$. If the index is two-dimensional there is no natural order; also, shifting is now two-dimensional, so there is no single clear way to define stationarity.
So what process exactly do you observe, and how is $t_{k,i}$ chosen from it?
| null |
CC BY-SA 2.5
| null |
2011-03-10T13:55:34.027
|
2011-03-10T13:55:34.027
| null | null |
2116
| null |
8126
|
1
|
8128
| null |
9
|
5693
|
I want to calculate the hat matrix directly in R for a logit model. According to Long (1997) the hat matrix for logit models is defined as:
$$H = VX(X'VX)^{-1} X'V$$
X is the matrix of independent variables, and V is a diagonal matrix with $\sqrt{\pi(1-\pi)}$ on the diagonal.
I use the `optim` function to maximize the likelihood and derive the Hessian. So I guess my question is: how do I calculate $V$ in R?
Note: My likelihood function looks like this:
```
loglik <- function(theta,x,y){
y <- y
x <- as.matrix(x)
beta <- theta[1:ncol(x)]
loglik <- sum(-y*log(1 + exp(-(x%*%beta))) - (1-y)*log(1 + exp(x%*%beta)))
return(-loglik)
}
```
And i feed this to the optim function as follows:
```
logit <- optim(c(1,1),loglik, y = y, x = x, hessian = T)
```
Where x is a matrix of independent variables, and y is a vector with the dependent variable.
Note: I know that there are canned procedures for doing this, but I need to do it from scratch
|
How to calculate the hat matrix for logistic regression in R?
|
CC BY-SA 2.5
| null |
2011-03-10T14:04:22.007
|
2011-03-10T14:59:47.350
|
2011-03-10T14:55:03.557
|
2704
|
2704
|
[
"r",
"logistic",
"deviance"
] |
8127
|
1
| null | null |
1
|
835
|
I have two variables and 1000 cases. How can I statistically find representative cases from total of 1000, based on statistical properties of both variables and correlation between them. Perhaps something based on T-test and 95% (or 99%) interval but for both variables? I would like to know which statistical method can find cases that have both values (simultaneously) statistically the most significant. I know that this deals with sample distribution and estimating the proportions.
|
How to make representative sample in dataset with two variables?
|
CC BY-SA 3.0
| null |
2011-03-10T14:12:47.033
|
2013-12-22T08:58:57.147
|
2013-12-22T08:58:57.147
|
930
| null |
[
"distributions",
"sample"
] |
8128
|
2
| null |
8126
|
14
| null |
For logistic regression $\pi$ is calculated using formula
$$\pi=\frac{1}{1+\exp(-X\beta)}$$
So diagonal values of $V$ can be calculated in the following manner:
```
pi <- 1/(1+exp(-X%*%beta))
v <- sqrt(pi*(1-pi))
```
Now, multiplying by a diagonal matrix from the left means that each row is multiplied by the corresponding element of the diagonal, which in R can be achieved using simple elementwise multiplication:
```
VX <- X*v
```
Then `H` can be calculated in the following way:
```
H <- VX%*%solve(crossprod(VX,VX),t(VX))
```
Note: since $V$ contains standard deviations, I suspect that the correct formula for $H$ is
$$H=VX(X'V^2X)^{-1}X'V$$
The example code works for this formula.
| null |
CC BY-SA 2.5
| null |
2011-03-10T14:20:46.650
|
2011-03-10T14:59:47.350
|
2011-03-10T14:59:47.350
|
2116
|
2116
| null |