Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
614428
|
1
| null | null |
0
|
14
|
Consider the probability $p$ of an event in the full population, and the probability $p_0$ of the same event for a subset of the population. In other words, $p_0$ is a conditional probability (conditioned on belonging to that subset).
As an example, $p$ may be the probability of having a disease, and $p_0$ may be the probability of having that disease for people of a certain ethnicity.
To clarify, $p_0/p$ is akin to relative risk, but not the same. Relative risk would be $p_0/p_1$, where $p_1$ is the probability of having a disease for people that are not of the considered ethnicity. At least that's how I've always seen relative risk defined.
Does the ratio $p_0/p$ have a name? Is there an application field or a specific setting where $p_0/p$ is typically used?
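To make the distinction concrete, here is a small illustration in R with made-up counts (the numbers are hypothetical, chosen only to show the two ratios):
```
# Hypothetical population of 1000 people, 100 of whom have the disease
n_total <- 1000; n_disease <- 100
# Hypothetical subset (e.g. one ethnicity) of 200 people, 40 with the disease
n_sub <- 200; n_sub_disease <- 40

p  <- n_disease / n_total                              # P(disease) in full population = 0.10
p0 <- n_sub_disease / n_sub                            # P(disease | subset)            = 0.20
p1 <- (n_disease - n_sub_disease) / (n_total - n_sub)  # P(disease | not in subset)     = 0.075

p0 / p   # the ratio asked about here: 2.0
p0 / p1  # relative risk as usually defined: 2.67
```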
|
Relative risk but using the full population and a subset thereof
|
CC BY-SA 4.0
| null |
2023-04-28T18:45:13.617
|
2023-04-28T19:15:23.140
|
2023-04-28T19:15:23.140
|
28285
|
28285
|
[
"probability",
"terminology",
"relative-risk"
] |
614429
|
1
| null | null |
1
|
31
|
Consider a generalization of [inverse-variance weighting](https://en.wikipedia.org/wiki/Inverse-variance_weighting), where we choose normalized weights
$w_i = \dfrac{\sigma_i^{-p}}{W_p}$ for some $p\geq 1$, where $W_p \equiv \sum\sigma_i^{-p}$.
Then $p=2$ corresponds to inverse-variance weighting. If we have $n$ data points and denote $r_i \equiv \sigma_i^{-1} \in \mathbb{R}$, then we can write
$w_i = \sigma_i^{-p}/W_p = \left(\dfrac{r_i}{||\mathbf{r}||_p}\right)^p$, where $||\cdot||_p$ is the $p$-norm.
Thus, this generalization just reflects a potential choice of norm for $\mathbb{R}^n$, your "inverse error space".
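As a concrete sketch of the weighting scheme above (the variable names `x` and `sigma` are my own placeholders for the data points and their errors):
```
# Generalized inverse-error weighting: w_i = sigma_i^(-p) / sum_j sigma_j^(-p)
weighted_mean_p <- function(x, sigma, p = 2) {
  w <- sigma^(-p)
  w <- w / sum(w)            # normalize so the weights sum to 1
  sum(w * x)                 # weighted average of the data points
}

# Example with made-up measurements
x     <- c(1.02, 0.95, 1.10)
sigma <- c(0.05, 0.10, 0.20)
weighted_mean_p(x, sigma, p = 1)  # inverse-error weighting
weighted_mean_p(x, sigma, p = 2)  # inverse-variance weighting
```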
Question (see below for background): Does anyone know how using $p=1$ might be theoretically justified, or if there are other more general methods that would be related to choosing $p=1$? The responses to [this question](https://math.stackexchange.com/questions/2101139/weighted-average-from-errors) seem to indicate "no".
Background: I have some data about a system and its subcomponents (full context at bottom of page). The system AB has subcomponents A and B, both of which have subcomponents A+, A-, B+ and B-. Thus I have 7 data points $x_i \pm \sigma_i$ (scalars) for (AB, A, B, A+, A-, B+, B-). Qualitatively, each component seems to be a weighted average of its subcomponents, with the lower-error subcomponents carrying more weight (i.e. weight $w_i$ is inversely related to $\sigma_i$ in some way).
My naive thinking (being unaware of inverse-variance weighting at the time) was to try that generalized inverse-error weighting above. I tried both $p=1$ and $p=2$ (the latter corresponding to inverse-variance weighting), and found that $p=1$ gave very good agreement in all cases: the weighted averages of both (A, B) and (A+, A-, B+, B-) both agree well with $x_{AB}$, while the weighted averages of (A+, A-) and (B+, B-) respectively agree well with $x_A$ and $x_B$. For $p=2$, I only found agreement with (A, B) adding up to AB; in every other case, the subcomponents underestimate their combined component, due to the lower-error A- and B- components being overweighted. Looking into inverse-variance weighting, the logic of its use doesn't seem to apply to this scenario, since the $x_i$'s are presumably not independent.
When I talked to a professor of mine about the above, they essentially said I shouldn't have even bothered with $p=1$ and that my result isn't worth a mention -- I should just mention the qualitative "weighted-averageness" of the data -- seemingly because I have no theoretical justification for $p=1$. However, the approach seems perfectly reasonable to me given the qualitative look of the data, and the empirical result of all the components being the (weighted) "sum of their parts" feels too good to dismiss like that. My logic was essentially to start with inverse errors, then choose a norm on my inverse-error space. In that sense, it seemed logical to try both $p=1$ (inverse errors weighted equally to determine $w_i$) and $p=2$ (inverse errors weighted by magnitude to determine $w_i$) and see what came of it.
Context: Signal processing in astronomy. I have two very similar high-resolution spectra of a particular galaxy, taken at different times. I am cross-correlating them to look for a small shift, which would correspond to bulk acceleration of gas. I cut up the spectra into different velocity components, to look for a differential shift as a function of velocity.
The full spectrum AB shows two broadened transition lines A and B, each of which have a high (+) and low (-) velocity component (A+, A-, B+, B-). I cross correlated AB and all of its subcomponents separately, for a total of 7 cross-correlations (AB, A, B, A+, A-, B+, B-). The data point $x_i$ is the lag at which the cross-correlation function peaks for component $i$.
Errors $\sigma_i$ are estimated by creating 100 "randomized" versions of both spectra, then correspondingly doing 10000 cross-correlations and generating a distribution of maximizing shifts for each component. We then take the standard deviation of the distribution to be the error in the shift.
To randomize a spectrum, we take the flux at a given wavelength, $f_\lambda \pm \sigma_\lambda$, and assign a new flux $f_{sim} \sim N(f_\lambda,\sigma_\lambda^2)$, where $N(\mu,\sigma^2)$ is the usual normal distribution.
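In code, that randomization step amounts to something like the following sketch (the flux vector `f` and error vector `sigma_f` are placeholders):
```
# One "randomized" realization of a spectrum:
# each flux is redrawn from N(observed flux, observed error^2)
randomize_spectrum <- function(f, sigma_f) {
  rnorm(length(f), mean = f, sd = sigma_f)
}

# e.g. 100 randomized versions stored as rows of a matrix
# sims <- t(replicate(100, randomize_spectrum(f, sigma_f)))
```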
|
Generalizing Inverse Variance Weighting?
|
CC BY-SA 4.0
| null |
2023-04-28T18:50:30.413
|
2023-04-28T18:50:30.413
| null | null |
386260
|
[
"cross-correlation",
"metric",
"weighted-mean"
] |
614430
|
1
|
614443
| null |
2
|
24
|
Using this [study](https://doi.org/10.1136/bmj-2021-069775) as an example, I am wondering if the size of different study groups run in the same models impacts the conclusions we can draw from statistical tests.
In this study, the researchers were looking at pulse oximetry discrepancies between Black and White patients. They found that Black patients were more likely to have something called Occult (or Hidden) Hypoxemia, which is when the pulse oximeter incorrectly measures the peripheral oxygen levels as higher than they truly are. These mistakes are illuminated by running a more invasive test of arterial blood oxygen saturation.
In this study, the study population is 73% White (N = 21,918), 21.6% Black (N = 6,498), and 5.4% Hispanic (N = 1,623). They found that Black patients had a significantly higher probability of being given a mistakenly high reading of peripheral blood oxygen saturation than White and Hispanic patients, which could potentially lead to worse health outcomes for those patients.
My question is, given the large difference in sample sizes of White v. Black and Hispanic patients, how is this accounted for in running statistical tests and drawing conclusions? For example, in this study the researchers used a multivariable logistic regression model to predict
the odds of occult hypoxemia, and adjusted for patient characteristics like age, race, sex, etc. Is this adjusting enough to account for the differences in sample size, or are there other methods one should use to bolster statistical models run on groups of different sizes?
|
How to account for difference in group sizes for logistic regression
|
CC BY-SA 4.0
| null |
2023-04-28T18:59:29.410
|
2023-04-28T22:51:31.110
| null | null |
386807
|
[
"logistic",
"statistical-power",
"sample",
"effect-size"
] |
614431
|
1
| null | null |
0
|
11
|
So I have been trying to reproduce a logistic SEM with a binary outcome (not relevant to this question). The sample size after removing missing values is around 820. One of my predictor latent factors called FR is as follows:
```
cfa_FR = 'FR =~ WRKHSW+WRKFRM+WRKPAY+ABSWRK'
```
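(For reference, a minimal sketch of how such a one-factor CFA would typically be fit and summarized in lavaan; the data-frame name `dat` and the summary options are assumptions rather than a record of the exact call used here.)
```
library(lavaan)

cfa_FR <- 'FR =~ WRKHSW + WRKFRM + WRKPAY + ABSWRK'

fit_FR <- cfa(cfa_FR, data = dat)
# If the indicators are treated as binary/categorical, lavaan's `ordered`
# argument switches to a categorical estimator, e.g.:
# fit_FR <- cfa(cfa_FR, data = dat,
#               ordered = c("WRKHSW", "WRKFRM", "WRKPAY", "ABSWRK"))

summary(fit_FR, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)
```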
Maybe it is relevant to note that all four observed variables are binary. When I perform cfa on this factor, I get the following output:
```
Latent Variables:
Estimate Std.Err z-value P(>|z|) ci.lower ci.upper Std.lv Std.all
Fam_Resp =~
WRKHSW 1.000 1.000 1.000 3.972 0.648
WRKFRM 1.586 0.111 14.225 0.000 1.367 1.804 6.299 0.678
WRKPAY 1.186 0.083 14.351 0.000 1.024 1.348 4.711 0.719
ABSWRK 1.005 0.097 10.371 0.000 0.815 1.194 3.990 0.432
Intercepts:
Estimate Std.Err z-value P(>|z|) ci.lower ci.upper Std.lv Std.all
.WRKHSW 1.696 0.203 8.369 0.000 1.299 2.093 1.696 0.277
.WRKFRM 2.581 0.307 8.406 0.000 1.980 3.183 2.581 0.278
.WRKPAY 2.472 0.217 11.416 0.000 2.048 2.897 2.472 0.377
.ABSWRK 2.899 0.306 9.485 0.000 2.300 3.499 2.899 0.314
FR 0.000 0.000 0.000 0.000 0.000
Variances:
Estimate Std.Err z-value P(>|z|) ci.lower ci.upper Std.lv Std.all
.WRKHSW 21.812 1.397 15.609 0.000 19.073 24.551 21.812 0.580
.WRKFRM 46.620 3.213 14.508 0.000 40.322 52.919 46.620 0.540
.WRKPAY 20.717 1.620 12.789 0.000 17.542 23.892 20.717 0.483
.ABSWRK 69.581 3.541 19.652 0.000 62.642 76.521 69.581 0.814
FR 15.777 1.721 9.168 0.000 12.404 19.149 1.000 1.000
R-Square:
Estimate
WRKHSW 0.420
WRKFRM 0.460
WRKPAY 0.517
ABSWRK 0.186
```
Fit measures are super good as well. However, once I fit this into my SEM model with other latent predictors only (without logistic regression), two of the variables, ABSWRK and WRKPAY load poorly on FR:
```
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
SE =~
SE1 1.000 0.626 0.733
SE2 0.641 0.054 11.868 0.000 0.401 0.481
SE3 0.839 0.054 15.592 0.000 0.525 0.665
SE4 0.732 0.057 12.882 0.000 0.458 0.527
SE5 0.644 0.048 13.354 0.000 0.403 0.549
SA =~
SA1 1.000 0.555 0.500
SA2 1.369 0.134 10.196 0.000 0.760 0.569
SA3 1.465 0.135 10.850 0.000 0.813 0.672
SA4 0.576 0.066 8.666 0.000 0.320 0.431
FR =~
ABSWRK 1.000 0.050 0.116
WRKFRM 6.803 2.476 2.748 0.006 0.343 0.691
WRKPAY 1.279 0.520 2.459 0.014 0.064 0.213
WRKHSW 5.698 2.076 2.745 0.006 0.287 0.712
GS =~
GS1 1.000 0.268 0.406
GS2 1.611 0.287 5.620 0.000 0.432 0.391
GS3 1.764 0.317 5.557 0.000 0.473 0.380
GS4. 1.277 0.217 5.898 0.000 0.342 0.547
Covariances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
SE ~~
SA. 0.210 0.024 8.570 0.000 0.604 0.604
FR -0.007 0.003 -2.398 0.016 -0.233 -0.233
GS 0.032 0.010 3.136 0.002 0.193 0.193
SA ~~
FR 0.008 0.003 2.448 0.014 0.289 0.289
GS 0.034 0.010 3.357 0.001 0.232 0.232
FR ~~
GS -0.002 0.001 -1.932 0.053 -0.170 -0.170
Intercepts:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.SE1 3.286 0.030 111.230 0.000 3.286 3.849
.SE2 3.096 0.029 107.305 0.000 3.096 3.713
.SE3 2.903 0.027 106.183 0.000 2.903 3.675
.SE4 3.399 0.030 112.872 0.000 3.399 3.906
.SE5 3.416 0.025 134.326 0.000 3.416 4.649
.SA1 3.327 0.038 86.654 0.000 3.327 2.999
.SA2 2.781 0.046 60.177 0.000 2.781 2.083
.SA3 2.510 0.042 59.943 0.000 2.510 2.074
.SA4 3.181 0.026 123.878 0.000 3.181 4.287
.ABSWRK 0.254 0.015 16.856 0.000 0.254 0.583
.WRKFRM 0.568 0.017 33.112 0.000 0.568 1.146
.WRKPAY 0.102 0.010 9.728 0.000 0.102 0.337
.WRKHSW 0.796 0.014 57.152 0.000 0.796 1.978
.GS1 2.868 0.023 125.617 0.000 2.868 4.347
.GS2 3.489 0.038 91.181 0.000 3.489 3.155
.GS3 3.434 0.043 79.853 0.000 3.434 2.763
.GS4 2.945 0.022 135.886 0.000 2.945 4.703
SE 0.000 0.000 0.000
SA 0.000 0.000 0.000
FR 0.000 0.000 0.000
GS 0.000 0.000 0.000
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.SE1 0.337 0.025 13.507 0.000 0.337 0.462
.SE2 0.534 0.029 18.671 0.000 0.534 0.768
.SE3 0.348 0.022 15.681 0.000 0.348 0.558
.SE4 0.547 0.030 18.178 0.000 0.547 0.722
.SE5 0.377 0.021 17.899 0.000 0.377 0.699
.SA1 0.923 0.052 17.664 0.000 0.923 0.750
.SA2 1.206 0.073 16.415 0.000 1.206 0.676
.SA3 0.803 0.060 13.467 0.000 0.803 0.549
.SA4 0.448 0.024 18.557 0.000 0.448 0.814
.ABSWRK 0.187 0.009 20.308 0.000 0.187 0.987
.WRKFRM 0.128 0.013 9.629 0.000 0.128 0.522
.WRKPAY 0.087 0.004 19.993 0.000 0.087 0.955
.WRKHSW 0.080 0.009 8.773 0.000 0.080 0.492
.GS1 0.363 0.022 16.354 0.000 0.363 0.835
.GS2 1.036 0.062 16.738 0.000 1.036 0.847
.GS3 1.320 0.078 16.972 0.000 1.320 0.855
.GS4 0.275 0.023 11.749 0.000 0.275 0.701
SE 0.392 0.037 10.664 0.000 1.000 1.000
SA 0.308 0.048 6.450 0.000 1.000 1.000
FR 0.003 0.002 1.392 0.164 1.000 1.000
GS 0.072 0.018 4.049 0.000 1.000 1.000
R-Square:
Estimate
SE1 0.538
SE2 0.232
SE3 0.442
SE4 0.278
SE5 0.301
SA1 0.250
SA2 0.324
SA3 0.451
SA4 0.186
ABSWRK 0.013
WRKFRM 0.478
WRKPAY 0.045
WRKHSW 0.508
GS1 0.165
GS2 0.153
GS3 0.145
GS4 0.299
```
The loadings for ABSWRK and WRKPAY drop considerably. Looking at both outputs, is there something signalling why this happens? Should I try to transform these 4 binary variables? I have tried dropping just one of the two, and it still does not work. Conceptually, they should be linked to each other, so I am not sure what other issue it could be.
Sorry if I made any mistakes or missed something basic or major, I am quite new to this! Thanks in advance!
|
Good factor loadings in a one factor model, but poor loadings once model is added to full SEM (lavaan - RStudio)
|
CC BY-SA 4.0
| null |
2023-04-28T19:07:26.367
|
2023-04-28T19:07:26.367
| null | null |
386805
|
[
"r",
"binary-data",
"structural-equation-modeling",
"confirmatory-factor",
"lavaan"
] |
614432
|
1
|
614438
| null |
3
|
91
|
Suppose that $X_{1}, ..., X_{n}$ are independent, identically-distributed random variables with marginal density function $f(x)$. Calculate $P(X_{1} < X_{2} < \cdots < X_{n})$.
I know with 2 independent random variables $X$ and $Y$, $$P(X < Y) = \int_{-\infty}^{\infty}\int_{-\infty}^yf(x)f(y)dxdy.$$
So by the same reasoning, I did $$P(X_{1} < \cdots < X_{n}) = \int_{x_{n}=-\infty}^\infty\int_{x_{n-1}=-\infty}^{x_{n}}\cdots\int_{x{_1}=-\infty}^{x_{2}}f_{X_1}f_{X_2}\cdots\ f_{X_n} dx_1dx_2\cdots dx_n.$$
Is what I have so far correct? If so, what should I do from there?
|
Calculate the probability that $X_1 < X_2 < \cdots < X_n$ where $X_1, \ldots, X_n$ are IID random variables
|
CC BY-SA 4.0
| null |
2023-04-28T19:23:14.640
|
2023-04-29T01:38:23.033
|
2023-04-28T19:45:17.177
|
20519
|
385461
|
[
"probability",
"self-study"
] |
614433
|
1
| null | null |
5
|
102
|
If $X_1,...,X_n$ are independent random variables with noncentral chi distributions (same $df$ but different $\lambda$),
what is the distribution of $\sum_{i=1}^{n}{X_i}$?
I am just wondering whether it can be represented by any well-known distribution (one with its own Wikipedia page). To clarify, I am asking about the chi distribution, not the chi-square distribution.
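For what it's worth, a quick Monte Carlo sketch of the sum is easy to set up, using the fact that a noncentral chi variable is the square root of a noncentral chi-square (the df and noncentrality values below are arbitrary):
```
set.seed(1)
n_sim  <- 1e5
df     <- 3
lambda <- c(0.5, 2, 4)   # noncentrality parameters of the chi distributions

# A noncentral chi(df, lambda) variable is the square root of a noncentral
# chi-square with noncentrality parameter lambda^2
X <- sapply(lambda, function(l) sqrt(rchisq(n_sim, df = df, ncp = l^2)))
S <- rowSums(X)          # the sum of the independent noncentral chi variables

hist(S, breaks = 100, freq = FALSE, main = "Sum of noncentral chi variables")
```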
|
sum of noncentral Chi random variables
|
CC BY-SA 4.0
| null |
2023-04-28T19:33:33.053
|
2023-04-30T06:46:40.597
| null | null |
386677
|
[
"distributions",
"random-variable",
"chi-squared-distribution",
"non-central"
] |
614435
|
1
|
614442
| null |
3
|
51
|
Let $L_{\boldsymbol{\Delta}}\in \mathbb{R}^{M \times M}$ be the generalized centering matrix given by:
$L_{\boldsymbol{\Delta}} = \boldsymbol{\Delta} - \frac{1}{\text{tr}(\boldsymbol{\Delta})} \boldsymbol{\Delta} \boldsymbol{1}_M \boldsymbol{1}_M^\top \boldsymbol{\Delta} \in \mathbb{R}^{M \times M}$, where $\boldsymbol{\Delta} \in \mathbb{R}^{M \times M}$ is a diagonal matrix of strictly positive entries and $\boldsymbol{1}_M = [1, \ldots, 1]^\top \in \mathbb{R}^{M}$.
For $\boldsymbol{\Delta} = I_{M}$, we recover the classical centering matrix ([https://en.wikipedia.org/wiki/Centering_matrix](https://en.wikipedia.org/wiki/Centering_matrix)).
I have four questions:
- Can we show that $L_{\boldsymbol{\Delta}}$ is semi-positive definite?
- Can we get its eigenvectors and eigenvalues?
- Any hope to get a simple expression for the precision matrix $(\boldsymbol{X} \boldsymbol{L}_{\boldsymbol{\Delta}}\boldsymbol{X}^\top)^{-1}$?
- How do we show that $L_{\boldsymbol{\Delta}}$ removes a weighted average of the samples?
Thank you for your help,
|
Properties of the generalized centering matrix
|
CC BY-SA 4.0
| null |
2023-04-28T19:42:24.373
|
2023-04-28T22:03:46.467
|
2023-04-28T20:09:56.917
|
386806
|
386806
|
[
"mathematical-statistics",
"covariance-matrix",
"matrix",
"matrix-decomposition"
] |
614436
|
1
| null | null |
0
|
14
|
Thank you for all the knowledge that has been shared in this forum so far! I have been going through numerous posts, but couldn't find a solution for my problem, so I'm opening a new question.
I am examining a large set of companies, for which over some years measurements have been taken (unbalanced panel). I am trying to find significant relationships between the "activity" of the companies in a certain field and other company characteristics, provided in a continuous distribution.
The dataset I have been given, however, doesn't measure this "activity" in absolute numbers, but rather assigns the companies to buckets of 0-4.9%, 5-14.9%, 15-24.9%, 25-49.9% and 50-100%. In a way, this is an ordinal scale, but since the buckets are not of the same size, I am not sure how to handle them.
The dependent variable I am currently considering is a continuous distribution of values between 0 and 1.
I would be incredibly thankful if you could point me in the right direction on how to process and handle my data, and which regression model would be best suited.
Also, if I overlooked posts where this has been discussed before, or if you know good resources for this kind of problem, it would be fantastic if you could post them.
Thank you very much!
|
Ordinal scale with unequal distribution
|
CC BY-SA 4.0
| null |
2023-04-28T19:43:16.870
|
2023-04-28T19:43:16.870
| null | null |
386809
|
[
"regression",
"panel-data",
"nonlinear-regression",
"ordinal-data",
"scales"
] |
614437
|
1
| null | null |
0
|
7
|
I ran a series of OLS and 2SLS regressions to study the impact of several geographic variables on the dependent variable which is a share of workers of a particular type. The issue is that my unit of analysis is a small geographic area (district) but some variables are only available for larger areas (counties, and there are from 5 to 10 districts in a county). What consequences could I expect? Could this introduce bias?
|
Impact of variables available only for large geographic units in OLS and 2SLS
|
CC BY-SA 4.0
| null |
2023-04-28T19:50:36.587
|
2023-04-28T19:50:36.587
| null | null |
333840
|
[
"regression",
"multiple-regression",
"regression-coefficients",
"bias"
] |
614438
|
2
| null |
614432
|
2
| null |
From the symmetry perspective, the answer is clearly $1/n!$, as $X_1 < \cdots < X_n$ is just one permutation of all $n!$ permutations with equal probabilities by the i.i.d. and absolutely continuous assumption.
This intuitive argument can be made rigorous, and you have made a good start. Finish the $n = 2$ case calculation first (where $F$ is the CDF of $X_i$):
\begin{align}
P(X < Y) = \int_{-\infty}^\infty f(y)\int_{-\infty}^yf(x)dxdy = \int_{-\infty}^\infty f(y)F(y)dy \overset{u = F(y)}{=} \int_0^1 udu = \frac{1}{2}.
\end{align}
Let's add in one more term:
\begin{align}
& P[X_1 < X_2 < X_3] \\
=& \int_{-\infty}^\infty\int_{-\infty}^{x_3}\int_{-\infty}^{x_2}f(x_1)dx_1f(x_2)dx_2f(x_3)dx_3 \\
=& \int_{-\infty}^\infty\int_{-\infty}^{x_3}F(x_2)f(x_2)dx_2f(x_3)dx_3 \\
=& \frac{1}{2}\int_{-\infty}^\infty F(x_3)^2f(x_3)dx_3\\
=& \frac{1}{2}\int_0^1u^2du = \frac{1}{6}.
\end{align}
Can you generalize this argument to the case of $n$ random variables? As you may have already noticed, the key recursive relation in the calculation is that
\begin{align}
\int_{-\infty}^{x_{k + 1}}F(x_{k})^{k - 1}f(x_k)dx_k = \int_0^{F(x_{k + 1})}u^{k - 1}du =
\frac{1}{k}F(x_{k + 1})^k.
\end{align}
| null |
CC BY-SA 4.0
| null |
2023-04-28T20:13:58.513
|
2023-04-29T01:38:23.033
|
2023-04-29T01:38:23.033
|
20519
|
20519
| null |
614439
|
1
| null | null |
0
|
21
|
I have read that:
If we have two feature vectors ${x = (x_1,x_2,…,x_D)}$ and $y=(y_1,y_2,…,y_D)$ and we do a degree-$d$ polynomial basis expansion to get $f(x)$ and $f(y)$, then calculating the inner product $f(x)^Tf(y)$ requires time roughly $D^d$.
However with the kernel trick we have a polynomial kernel to where $f(x)^Tf(y) = k(x,y) = (1+x^Ty)^d$.
It then says that this kernel trick only takes time $O(D\log d)$, and I don't understand where this comes from.
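As a rough illustration of why the kernel side is cheap (a sketch with made-up vectors, not a statement of the textbook's exact complexity claim): evaluating $(1+x^Ty)^d$ needs one length-$D$ dot product plus an exponentiation, whereas an explicit degree-$d$ expansion with a bias term has on the order of $\binom{D+d}{d}$ features.
```
D <- 1000
d <- 3
x <- rnorm(D)
y <- rnorm(D)

# Kernel evaluation: one length-D dot product plus one exponentiation
k_xy <- (1 + sum(x * y))^d

# Number of monomial features in the explicit degree-d expansion (with bias term)
choose(D + d, d)   # ~1.7e8 features, versus D = 1000 multiplications above
```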
|
Time Complexity of using the kernel trick on polynomial basis expansion
|
CC BY-SA 4.0
| null |
2023-04-28T21:13:23.370
|
2023-05-02T17:49:47.613
|
2023-04-28T21:31:28.610
|
284660
|
386765
|
[
"machine-learning",
"svm",
"kernel-trick",
"time-complexity"
] |
614441
|
1
| null | null |
1
|
9
|
I'm working with proteomics data in python, and I want to implement an abundance cutoff as described by K. Podwojski, et al (short section attached below). My goal is to quantify the difference in abundance of each phosphopeptide between control and experimental group, then filter out the ones that may not have been biologically relevant differences. We could simply use average log fold change, but my mentor thinks CV would be a better way to compare.
Unfortunately, it's been a few years since I took statistics, and I'm confused about which columns to use for the coefficient of variation. Since I have three groups - a control (WT) and two experimental groups, I had to consider if I should include the control or not. I'm comfortable with the rest of the procedure.
My mentor suggested to calculate the CV using experimental groups only, so I would be calculating the mean and SD for columns O, P, Q, and R or for columns S, T, U, and V. If this is right, I don't understand how I can use CV to compare the differences in abundance, since it has nothing to do with the controls.
It seems like including the control would allow for that comparison. However, for proteins that increased, that would decrease the mean; for proteins that decreased, that would increase the mean. SD would go up in either case. Then the CV would be high for proteins that decrease and low for proteins that increase, which is not my intention.
Can anyone point me in the right direction? Surely I'm misunderstanding something (or a whole lot). Thanks in advance!
---
I've attached an example of my data and the section from Podwojski for reference.
[](https://i.stack.imgur.com/0ZqSz.png)
[](https://i.stack.imgur.com/tQvhy.png)
|
Am I misapplying coefficient of variation?
|
CC BY-SA 4.0
| null |
2023-04-28T22:01:52.167
|
2023-04-28T22:01:52.167
| null | null |
386813
|
[
"statistical-significance",
"mean",
"standard-deviation",
"coefficient-of-variation"
] |
614442
|
2
| null |
614435
|
1
| null |
Just a hint for the first question; the remaining three are tractable but require more context, and answers to them would probably be more verbose. As I commented, it is better to ask them in separate posts and show some of your own work.
You can write $\Delta$ more explicitly as $\Delta = \operatorname{diag}(a_1, \ldots, a_M)$ with $a_i > 0, i = 1, \ldots, M$. Denote the trace of $\Delta$ by $T = a_1 + \cdots + a_M$. It can be shown that $L_\Delta = \Delta^{1/2}C\Delta^{1/2}$, where
\begin{align}
C = I_{(M)} - \frac{1}{T}vv', \quad v = \begin{bmatrix}
\sqrt{a_1} \\
\sqrt{a_2} \\
\vdots \\
\sqrt{a_M}
\end{bmatrix}.
\end{align}
Therefore to show $L_\Delta$ is PSD, it suffices to show $C$ is PSD. But $C$ is symmetric and idempotent, hence PSD. This completes the proof.
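A quick numerical check of this factorization and of positive semi-definiteness (the size $M$ and diagonal entries below are arbitrary):
```
set.seed(1)
M     <- 5
a     <- runif(M, 0.5, 2)          # strictly positive diagonal entries
Delta <- diag(a)
one_M <- rep(1, M)

# L_Delta as defined in the question
L <- Delta - (Delta %*% one_M %*% t(one_M) %*% Delta) / sum(a)

# Factorization L = Delta^{1/2} C Delta^{1/2} with C = I - v v' / tr(Delta)
v <- sqrt(a)
C <- diag(M) - (v %*% t(v)) / sum(a)
max(abs(L - diag(v) %*% C %*% diag(v)))   # ~0 (numerical error only)

eigen(L, symmetric = TRUE)$values          # all >= 0 up to rounding, so L is PSD
```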
| null |
CC BY-SA 4.0
| null |
2023-04-28T22:03:46.467
|
2023-04-28T22:03:46.467
| null | null |
20519
| null |
614443
|
2
| null |
614430
|
2
| null |
Try this experiment. Double the size of your data set by exactly duplicating every case. Then rerun your logistic regression. You should find that for every predictor term the estimates of coefficient and odds ratio turn out the same as before. What will change are the p-values and the standard errors on which they depend. These standard errors will all be smaller than before by a factor of the square root of 2.
So the procedure adjusts standard errors in light of sample size; it does not change estimates of the relationships of interest. And when I say "sample size" that includes the sample sizes for the subgroups defined by your predictor variables. Each comparison of one subgroup to another will have its standard error affected by those subgroups' sizes. If your study's group of Hispanic patients had numbered 160 instead of 1,623, you would have seen dramatically larger standard errors and p-values for the corresponding comparisons.
There are some statistical procedures for which differently sized subgroups typically require special design considerations. Logistic regression is not one of them.
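Here is a small simulation of that duplication experiment (the data are made up; only the behaviour of the estimates and standard errors matters):
```
set.seed(1)
n <- 500
x <- rnorm(n)
g <- rbinom(n, 1, 0.2)                       # an unevenly sized binary group
y <- rbinom(n, 1, plogis(-1 + 0.8 * x + 0.5 * g))
dat <- data.frame(y, x, g)

fit1 <- glm(y ~ x + g, family = binomial, data = dat)
fit2 <- glm(y ~ x + g, family = binomial, data = rbind(dat, dat))  # duplicated data

cbind(coef(fit1), coef(fit2))                          # estimates unchanged
summary(fit1)$coefficients[, "Std. Error"] /
  summary(fit2)$coefficients[, "Std. Error"]           # ratios ~ sqrt(2) = 1.414
```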
| null |
CC BY-SA 4.0
| null |
2023-04-28T22:51:31.110
|
2023-04-28T22:51:31.110
| null | null |
2669
| null |
614444
|
2
| null |
614420
|
0
| null |
- You get it by using the percentile method: the 95% CI is given by the 2.5th and 97.5th percentiles of your bootstrap distribution (see the sketch below). In general this needs many more bootstrap replications than the standard-error method; in Efron's bootstrap book it is said that for SE estimation you need 20-200 replications, while for confidence intervals you need thousands.
- You just calculate a CI for each variable in your vector separately. You can also get something like a confidence region by taking the covariance of these predictions into account; I don't know what it would be useful for, but it should not be difficult to do.
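A minimal sketch of the percentile method for a single statistic (made-up data; here the statistic is simply the sample mean):
```
set.seed(1)
x <- rexp(50)                      # some observed sample
B <- 5000                          # thousands of replications for a CI

boot_means <- replicate(B, mean(sample(x, replace = TRUE)))

# 95% percentile bootstrap CI: 2.5th and 97.5th percentiles of the bootstrap distribution
quantile(boot_means, c(0.025, 0.975))
```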
| null |
CC BY-SA 4.0
| null |
2023-04-28T22:53:39.977
|
2023-04-28T22:53:39.977
| null | null |
53084
| null |
614446
|
1
| null | null |
1
|
508
|
I'm using Microsoft® Excel® for Microsoft 365 MSO (Version 2302 Build 16.0.16130.20374) 32-bit
I have 31 numbers from 6 different centers.
I need an average for each center and an average for all.
If all 31 numbers are averaged together in col A, using =AVERAGE(A1:A31) I get 104.
Center 1 average was 156
Center 2 average was 194
Center 3 average was 100
Center 4 average was 59
Center 5 average was -70
Center 6 average was 125
The individual averages looked wonky, so to double check I tried averaging those 6 averages and got 94 instead of 104. I know there is some rounding, but I didn't expect a difference of 10.
I tried it with numbers from different departments within the same 6 centers and the averages equaled each other.
[These averages were fine](https://i.stack.imgur.com/Nz98M.png)
[Why are these averages different](https://i.stack.imgur.com/afF6V.png)
|
Why is the mean of the means not equal to the grand mean (average)?
|
CC BY-SA 4.0
| null |
2023-04-28T23:34:49.817
|
2023-04-29T04:34:26.783
|
2023-04-29T04:34:26.783
|
362671
| null |
[
"mean",
"excel"
] |
614449
|
2
| null |
614432
|
0
| null |
It's a good exercise to fill the missing details in the following argument. If the probability of ties between the $X_i$'s is equal to zero (for instance, if the $X_i$'s are "continuous" random variables), then the sure event $\Omega$ can be written as the union of $n!$ disjoint and (by the symmetry implied by the IID condition) equiprobable events of the form $\{X_{\pi(1)}<X_{\pi(2)}<\dots<X_{\pi(n)}\}$, in which $\pi:\{1,\dots,n\} \stackrel{\cong}{\longrightarrow} \{1,\dots,n\}$ is a permutation. Hence, $P\{X_1<X_2<\dots<X_n\}=1/n!$.
| null |
CC BY-SA 4.0
| null |
2023-04-29T00:36:57.157
|
2023-04-29T00:36:57.157
| null | null |
9394
| null |
614450
|
2
| null |
614446
|
18
| null |
The average of averages is not the same as the grand average if group sizes are different (unless you have a very specific situation), which it very much seems is the case here.
Suppose group 1 has two entries, 100 and 200. The average is 150. Next, suppose group 2 has a single entry, 250. Its average thus is also 250. The average of averages is 200. However, the overall average is 183.
If you want to average averages in a way to obtain the overall average, you need to weight each "sub-average" by the number of entries in the group.
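A small numerical illustration of the weighting, using the two groups from the example above:
```
group1 <- c(100, 200)
group2 <- c(250)

means <- c(mean(group1), mean(group2))    # 150 and 250
sizes <- c(length(group1), length(group2))

mean(means)                               # unweighted average of averages: 200
weighted.mean(means, sizes)               # weighted by group size: 183.33
mean(c(group1, group2))                   # grand mean: 183.33 (matches)
```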
| null |
CC BY-SA 4.0
| null |
2023-04-29T00:55:28.117
|
2023-04-29T01:22:54.660
|
2023-04-29T01:22:54.660
|
22311
|
1352
| null |
614453
|
2
| null |
613922
|
3
| null |
Provided the cards are well-shuffled, here's a fair version of the game:
>
One-king game
Only one winning card in the stack ("the king").
Number of cards in the stack is a multiple of the number of players.
In a [Spanish-suited deck](https://en.wikipedia.org/wiki/Spanish-suited_playing_cards) of [40 cards](https://en.wikipedia.org/wiki/Stripped_deck), you can achieve (1) by declaring only one card, e.g. the king of swords, is "the king".
[](https://i.stack.imgur.com/f91Zt.jpg)
Standard gameplay has an "early stopping" rule: players immediately declare if they drew a king, and the game ends as soon as someone does. Here's a rule variation that picks the same winner but makes (un)fairness more obvious:
>
"Simultaneous reveal" game
Players take turns to draw cards, each placing their cards face-down in a row in the order they were received.
Once the whole stack is dealt, all cards are revealed, and the winner identified.
The rows of players' cards make a grid, a rearrangement of the deck photographed above. In this one-king, simultaneous reveal game we discover, once the cards are turned over, player 2 actually won on their 7th draw:
[](https://i.stack.imgur.com/T00X6.png)
With early stopping, this game would end after the 26th card in the stack is drawn. But the king was equally likely to be in any of the 40 positions in the stack, so equally likely to be allocated to any position in the grid. Since all rows have the same number of cards, the king is equally likely to end up in any row — hence any player is equally likely to win, and the one-king game is fair. (We needed the "number of cards is multiple of number of players" rule, or else the grid isn't a rectangle and the first player unfairly draws one more card than the last player.)
What if there are multiple kings in the stack? Although the king of swords, king of cups, king of coins, and king of clubs are each distributed uniformly through the grid, just like in the one-king game, the distribution of the first-drawn king is not uniform. It is more likely to appear in an earlier draw (left of grid more likely than right) and drawn by an earlier player (top of grid more likely than bottom). This is clear from the way we determine the simultaneous reveal game's winner:
>
Simultaneous reveal game: rules for multiple kings
The "winning draw" is the leftmost column of the grid which contains a king. If only one player received a king in the winning draw, they are the winner.
When there are multiple kings in the winning draw, the players who received them go into a tie-breaker to decide who wins:
- player 1 beats all 3 other players,
- player 2 beats 2 other players (players 3 and 4),
- player 3 beats 1 other player (player 4),
- player 4 always loses.
That's obviously unfair — but these rules pick the same winner as early stopping does, so standard rules are biased in favour of early players when there are multiple kings in the stack! In the grid below, the first-drawn king was the 10th card in the stack. The 3rd draw is the winning draw; player 2 wins by rule (3) as no other player got a king in that draw. Under early stopping, we'd stop after drawing 10 cards and player 2 still wins on their 3rd draw.
[](https://i.stack.imgur.com/zX9qV.png)
In the arrangement of the stack below, the earliest kings are the 17th and 20th cards. Both appear in the winning (5th) draw. Under simultaneous reveal rule (4), player 1 beats player 4 by the tie-breaker "player 1 beats all other players". Under early stopping, player 1 beats player 4 because the game ends once the 17th card is drawn — player 4 never even gets the chance to see if they would get a king on their 5th draw, as player 1 has won already! Clearly early stopping and simultaneous reveal rules are equivalent.
[](https://i.stack.imgur.com/BYAT6.png)
Rule (3) is actually fair. Imagine all arrangements of the stack with 1 king in the winning draw. We group similar-looking grids together if they agree in all columns except the winning draw. The arrangement we saw with 1 king in the 3rd draw is part of this set of 4:
[](https://i.stack.imgur.com/6wkUb.png)
There are 4 ways to allocate 1 king between 4 players in the winning draw, so all such sets contain 4 arrangements. Each player wins exactly one arrangement in each set, as similar arrangements only differ in who gets the winning king. So each player wins in one quarter of the arrangements which have one king in the winning draw, as is fair.
The simultaneous reveal game's unfairness is entirely due to the rule (4) tiebreaker applying when multiple kings are in the winning draw. Our example with two kings in the winning draw is part of a set of six similar arrangements of the stack because there are ${4 \choose 2}=6$ ("[4 choose 2](https://en.wikipedia.org/wiki/Combination)") ways to allocate 2 kings between 4 players in the winning draw.
[](https://i.stack.imgur.com/slqjc.png)
In each such set of 6 equally likely arrangements, player 1 wins 3 grids, player 2 wins 2 grids, player 3 wins 1 grid, and player 4 wins 0 grids. Each set contains all possible pairings of which two players get kings in the winning draw, and the tiebreak (or early stopping) rule means player $i$ wins the $4-i$ of their match-ups that come against later players. Given that the winning draw contains 2 kings, player $i$ wins with [conditional probability](https://en.wikipedia.org/wiki/Conditional_probability)$\frac{4-i}{6}$, and these probabilities decline linearly in the order players take their turns: $\frac 3 6=\frac 1 2$, $\frac 2 6=\frac 1 3$, $\frac 1 6$, and $\frac 0 6=0$ for players 1, 2, 3, and 4 respectively. This also biases players' overall probabilities of winning, in a linear way: i.e. player 1's advantage over player 2 equals player 2's advantage over player 3, and so on.
Similarly, three kings in the winning draw can happen in ${4 \choose 3}=4$ ways, or all four kings can appear but only in ${4 \choose 4}=1$ way. Possible arrangements of the winning draw are:
[](https://i.stack.imgur.com/ozQEt.png)
Given that there are 3 kings in the winning draw, the conditional probabilities of winning are $\frac{3}{4}$ for player 1, $\frac{1}{4}$ for player 2, and 0 for the others. If instead we are given that there are 4 kings in the winning draw, player 1 is certain to win. The possibility of three or four kings being allocated to the winning draw biases the players' overall winning probabilities in a non-linear way: player 1 gets a particular boost, but these cases are equally bad for players 3 and 4.
Let's write the event "player $i$ wins" as $W_i$, and the number of kings in the winning draw as the [discrete random variable](https://www.statlect.com/glossary/discrete-random-variable) $K$. In summary:
Table 1: Probability of each player winning, given number of kings in winning draw
\begin{array}{|c|c|c|c|c|}
\hline
k&\Pr(W_1|K=k)&\Pr(W_2|K=k)&\Pr(W_3|K=k)&\Pr(W_4|K=k)\\ \hline
1&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}\\ \hline
2&\frac{1}{2}&\frac{1}{3}&\frac{1}{6}&0\\ \hline
3&\frac{3}{4}&\frac{1}{4}&0&0\\ \hline
4&1&0&0&0 \\ \hline
\end{array}
To find how likely each player is to win overall, we need the probability distribution of $K$, i.e. the probabilities of $K=1,2,3,4$. Let's start by finding the probability of the first king appearing in the third draw (which I'll denote by $D=3$), and this being the only king in that draw ($K=1$). In our grid, we want the probability of no kings in the first two columns (which I've coloured black), one king in the third column (red) and the remaining three kings in the seven remaining columns (blue).
[](https://i.stack.imgur.com/O2jP4.png)
Overall there are ${40 \choose 4}=91\,390$ ways to arrange the four kings in the stack of forty cards. Out of these, the number of arrangements we want is ${8 \choose 0}{4 \choose 1}{28 \choose 3}$. There's only ${8 \choose 0}=1$ way to arrange the black section of the grid: all eight cards must be non-kings. But there are ${4 \choose 1}=4$ ways to ways to arrange one king (and three non-kings) in the red section of the grid, and ${28 \choose 3}=3\,276$ ways to arrange three kings in the blue section. The three coloured sections can be arranged independently of each other: if we wrote separate lists of the possible arrangements of each section, then we create a valid arrangement of the whole grid by picking one arrangement from the black list, one from the red list, and one from the blue list. The number of ways we can "mix and match" from the three lists is found by multiplying together the number of choices in each list: $1 \times 4 \times 3\,276 =13\,104$.
$$\Pr(D=3, K=1) =\frac{{8 \choose 0}{4 \choose 1}{28 \choose 3}}{{40 \choose 4}}=\frac{13104}{91390}\approx 0.143385 $$
(We can see this as an [urn problem](https://en.wikipedia.org/wiki/Urn_problem): imagine a jar containing balls labelled 1 to 40, representing positions in the stack of cards. We pull out, without replacement, four balls to tell us where to place the kings in the stack. Paint each ball the colour of the grid position it represents. We must select none of the 8 black balls, 1 of the 4 red balls, and 3 of the 28 blue ones. The probability can be found by the [multivariate hypergeometric distribution](https://en.wikipedia.org/wiki/Hypergeometric_distribution#Multivariate_hypergeometric_distribution) formula.)
If the earliest kings are allocated to the fifth draw, in which two kings appear:
[](https://i.stack.imgur.com/ZIsjH.png)
$$\Pr(D=5, K=2) =\frac{{16 \choose 0}{4 \choose 2}{20 \choose 2}}{{40 \choose 4}}=\frac{1140}{91390}\approx 0.012474 $$
To write a general formula for the probability that $D=d$ and $K=k$, we want none of the first $4(d-1)$ cards to be kings, $k$ of the $4$ cards in draw $d$ to be kings, and the remaining $4-k$ kings all to be in the remaining $4(10-d)$ cards:
[](https://i.stack.imgur.com/iN8px.png)
$$\Pr(D=d, K=k) =\frac{{{4(d-1)}\choose 0}{4 \choose k}{{4(10-d)}\choose {4-k}}}{{40 \choose 4}}=\frac{{4 \choose k}{{4(10-d)}\choose {4-k}}}{{40 \choose 4}}$$
Note "anything choose zero makes one", so ${{4(d-1)}\choose 0}=1$. (There's only one way to arrange the black section so it has no kings: make each black card a non-king.)
Using Excel's `COMBIN()` function, and conditional formatting to add a simple [heat map](https://en.wikipedia.org/wiki/Heat_map), I get this table for the [joint probability distribution](https://en.wikipedia.org/wiki/Joint_probability_distribution) of $D$ and $K$ (click image to zoom).
Table 2: $\Pr(D=d, K=k)$
[](https://i.stack.imgur.com/Qksa5.png)
The total row and column, which I [plotted at the side](https://web.archive.org/web/20230326022435/https://a4accounting.com.au/horizontal-or-vertical-progress-bar-in-excel/), are the [marginal distributions](https://en.wikipedia.org/wiki/Marginal_distribution) of $D$ and $K$.
The distribution of $D$ shows why the game is useful to quickly select one person from a group of four: over a third of the time, the process is completed in the players' first draws, and over 60% of the time in the first or second draws. The [expected value](https://en.wikipedia.org/wiki/Expected_value) $\mathbb{E}(D)=\sum_{d=1}^{10}d\Pr(D=d)$ shows the winning player makes, on average, 2.46 draws.
The distribution of $K$ shows the winning king is the only king in that draw about 85% of the time. When this happens we know each player has an equal chance of winning. As this is the case most of the time, our game should be "mostly fair". If, by coincidence, multiple kings were allocated to the winning draw, it's usually only two of them: triple coincidences are very rare, quadruple coincidences even rarer (about a one in nine thousand chance). While our game is biased in favour of earlier players, it's usually by two kings in the winning draw, so the bias should be "mostly linear".
By the [law of total probability](https://en.wikipedia.org/wiki/Law_of_total_probability):
$$\Pr(W_i)=\sum_{k=1}^4 \Pr(W_i | K=k)\Pr(K=k)$$
The first player's probability of winning is:
\begin{align}
\Pr(W_1) &=\frac{1}{4}\times 0.848233\dots+\frac{1}{2}\times 0.143779\dots \\
& \quad+\frac{3}{4}\times 0.007878\dots+1 \times 0.000109\dots \\
\implies \Pr(W_1) &\approx 0.290 \\
\end{align}
Similarly $\Pr(W_2)\approx 0.262$, $\Pr(W_3)\approx 0.236$ and $\Pr(W_4)\approx 0.212$. Although these are all quite near a fair 0.25, the game is biased in favour of earlier players: the first player is 1.37 times more likely to win than the fourth. Each player's winning probability is about 0.025 higher than the next player, so the bias is indeed nearly linear:
[](https://i.stack.imgur.com/RWRBq.png)
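For readers who prefer code to a spreadsheet, here is a short R sketch (my own, not the original spreadsheet) that reproduces Table 2's marginals and the winning probabilities just computed:
```
n_p <- 4; n_c <- 10; n_k <- 4               # players, cards per player, kings
kmax <- min(n_p, n_k)

# Joint distribution Pr(D = d, K = k): rows d = 1..n_c, columns k = 1..kmax
joint <- outer(1:n_c, 1:kmax, function(d, k)
  choose(n_p, k) * choose(n_p * (n_c - d), n_k - k) / choose(n_p * n_c, n_k))

pK <- colSums(joint)                        # marginal of K: 0.848, 0.144, 0.0079, 0.0001
pD <- rowSums(joint)                        # marginal of D
sum((1:n_c) * pD)                           # E(D) = 2.46

# Pr(player i wins | K = k) as in Table 1, then the law of total probability
pW_given_K <- outer(1:n_p, 1:kmax, function(i, k)
  choose(n_p - i, k - 1) / choose(n_p, k))
pW_given_K %*% pK                           # ~0.290, 0.262, 0.236, 0.212
```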
## General formula
Suppose we have positive integer values of
- $n_p$ players (number of grid rows when laid out like the simultaneous reveal game),
- $n_c$ cards in the stack per player (number of grid columns; the stack has $n_p n_c$ cards overall),
- $n_k$ "kings" in the stack (number of cards that win the game; not necessarily the 4 cards in a standard deck with a rank of king, as you might restrict only certain suits of king to win, or expand the number of winning cards by allowing other ranks to win).
As before, define
- $W_i$ as the event that player $i$ wins, for $1 \le i \le n_p$,
- $D$ as the number of cards drawn by the winning player, up to and including their first king, for $1 \le D \le n_c$,
- $K$ as the number of kings in the $D$-th grid column, for $1 \le K \le \min(n_p, n_k)$ as there are only $n_p$ cards in the winning draw, and only $n_k$ kings in the stack.
To find the conditional probability $\Pr(W_i | K=k)$, consider the $n_p \choose k$ ways that $k$ kings in the winning draw can be allocated among the $n_p$ players. Out of these, the ways that lead to player $i$ winning require no kings for the first $i-1$ players, one king for player $i$, and $k-1$ kings for the remaining $n_p-i$ players:
$$\Pr(W_i | K=k) =\frac{{{i-1}\choose 0}{1 \choose 1}{{n_p-i}\choose {k-1}}}{n_p \choose k}=\frac{{n_p-i}\choose {k-1}}{n_p \choose k}$$
For our table of the joint distribution of $D$ and $K$:
[](https://i.stack.imgur.com/RvnuZ.png)
$$\Pr(D=d, K=k) =\frac{{{n_p(d-1)}\choose 0}{n_p \choose k}{{n_p(n_c-d)}\choose {n_k-k}}}{{{n_p n_c}\choose n_k}}=\frac{{n_p \choose k}{{n_p(n_c-d)}\choose {n_k-k}}}{{{n_p n_c}\choose n_k}}$$
As before, find the marginal distribution of $K$:
$$\Pr(K=k) =\sum_{d=1}^{n_c}\Pr(D=d, K=k) =\sum_{d=1}^{n_c}\frac{{n_p \choose k}{{n_p(n_c-d)}\choose {n_k-k}}}{{{n_p n_c}\choose n_k}}$$
Then by the law of total probability:
\begin{align}
\Pr(W_i) &=\sum_{k=1}^{\min(n_p, n_c)}\Pr(W_i | K=k)\Pr(K=k)\\
&=\sum_{k=1}^{\min(n_p, n_c)}\left(\frac{{n_p-i}\choose {k-1}}{n_p \choose k}\sum_{d=1}^{n_c}\frac{{n_p \choose k}{{n_p(n_c-d)}\choose {n_k-k}}}{{{n_p n_c}\choose n_k}}\right)\\
\implies \Pr(W_i) &=\sum_{k=1}^{\min(n_p, n_c)}\left(\frac{{n_p-i}\choose {k-1}}{{{n_p n_c}\choose n_k}}\sum_{d=1}^{n_c}{{n_p(n_c-d)}\choose {n_k-k}}\right)\\
\end{align}
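This formula translates directly into a short R function (a sketch; it relies on `choose()` returning 0 whenever the lower argument exceeds the upper one, which handles the boundary cases):
```
# Pr(player i wins) for n_p players, n_c cards per player, n_k kings in the stack
win_prob <- function(i, n_p, n_c, n_k) {
  kmax <- min(n_p, n_k)
  sum(sapply(1:kmax, function(k)
    choose(n_p - i, k - 1) / choose(n_p * n_c, n_k) *
      sum(choose(n_p * (n_c - (1:n_c)), n_k - k))))
}

sapply(1:4, win_prob, n_p = 4, n_c = 10, n_k = 4)   # ~0.290, 0.262, 0.236, 0.212
```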
## Fairer versions of the game
All the game's unfairness is due to the tiebreaker (equivalent to the early stopping rule) used when multiple kings are allocated to the winning draw. We can address that in several ways.
### Ensure there's only one king in the winning draw
We already saw the game's fair if we only use one "king" in the stack.
Alternatively, replace early stopping with an equal draws rule: the game can only stop once every player has drawn the same number of cards. If only one player got a king in the most recent draw, they win. If zero or several kings appeared, players just continue drawing from the stack (or the stack is reshuffled and play starts again, if all kings have already been found).
These rules are fair, but slow down play. It takes longer to find the only king than the first of four, and the potential need to reshuffle is inconvenient.
### Make multiple kings in the winning draw rarer
Try a larger deck: a 48-card Spanish deck or [52-card French deck](https://en.wikipedia.org/wiki/French-suited_playing_cards) spreads the 4 kings across more columns of the grid ($n_c$ is 12 and 13 respectively, instead of 10), so it's less likely two or more are in the same column. This makes the game fairer, but $\Pr(K=2)$ and the bias in winning probabilities fall only slowly as $n_c$ rises:
Table 3: effect of increasing $n_c$
[](https://i.stack.imgur.com/0d4Sg.png)
Similarly we can reduce bias by using fewer "kings", e.g. only count kings of swords and coins as winning cards. This slows play but not as much as using only one king: on average the number of draws made by the winning player, $\mathbb{E}(D)$, is 2.46, 2.96, 3.81, 5.5 with 4, 3, 2, 1 kings in a 40-card stack.
### Reverse direction of play after every round of drawing
Players 1, 2, 3, 4 draw their first cards in that order; their second draws are made in order 4, 3, 2, 1; the direction of play continues to switch after each draw is completed. This reduces the biasing effect of early stopping when multiple kings are in the same draw, by favouring early players if the winning draw $D$ is odd and late players if $D$ is even. Our table of $\Pr(D=d,K=k)$ shows when $K\ge2$, it's more likely $D$ is odd than even, so early players still have a slight advantage: the players' winning probabilities are 0.2571, 0.2510, 0.2470, 0.2449. The bias was slashed by about 80%, far more effective than reducing the number of kings to 2 or 3:
[](https://i.stack.imgur.com/Q55kT.png)
This lets us keep the fast gameplay of early stopping (winning player still only needs a mean of 2.46 draws) while removing most unfairness. Alternatively, the following variant is totally fair:
>
Two-king game with early stopping and direction-switching
Two winning cards in the stack ("the kings": e.g. use kings of swords and coins; kings of cups and clubs don't count).
Number of cards in the stack is double a multiple of the number of players (so grid has even number of columns, $n_c$).
Players take turns to draw cards, switching direction of play after all have drawn a card (first player to draw a card is last to draw their 2nd card, but first to draw their 3rd, etc).
First player to draw a king wins.
When $K=1$ the game is fair. When $K=2$ the winning draw is equally likely to be in an odd or even column: group "similar" grids based on which rows the kings are in, and each set will have $n_c$ grids with different winning draws, half even and half odd.
[](https://i.stack.imgur.com/Mk8EU.png)
[](https://i.stack.imgur.com/BA40b.png)
In all $n_p \choose 2$ pairings of which 2 players got kings in the winning draw, each player has an equal number of ways to win. So the game is fair when $K=2$ too.
## Approximate formula
Writing combinations in terms of [factorials](https://en.wikipedia.org/wiki/Factorial),
$${n \choose r}=\frac{n!}{r!(n-r)!}=\frac{n(n-1)\dots(n-(r-1))}{r!}$$
This lets us write polynomial approximations for combinations. Consider:
$$\Pr(K=k)=\frac{n_p \choose k}{n_p n_c \choose n_k}\sum_{d =1}^{n_c}{n_p(n_c-d)\choose n_k-k}$$
This depends on $n_c$ only in the sum and denominator. Treating $n_p$, $n_k$ and $k$ as constants, the denominator is polynomial in $n_c$ with degree $n_k$:
$${n_p n_c \choose n_k}=\frac{1}{{n_k}!}(n_p n_c)(n_p n_c-1)\dots (n_p n_c-(n_k-1))$$
In [big O notation](https://en.wikipedia.org/wiki/Big_O_notation),
$${n_p n_c \choose n_k}=\frac{n_p^k}{{n_k}!}n_c^k+O(n_c^{k-1})$$
In $\sum_{d =1}^{n_c}{n_p(n_c-d)\choose n_k-k}$ we sum over the index $d =1, 2, \dots, n_c$. But the [summand](https://en.wiktionary.org/wiki/summand) only depends on $d$ via $n_c-d$, which will run down through the values $n_c-1,n_c-2,\dots,0$. It's convenient to take this sum in reverse order, and run through the values $m-1$ over $m=1,2,\dots,n_c$ where $m-1=n_c-d$ so $m=n_c+1-d$.
\begin{align}
\sum_{d =1}^{n_c}{n_p(n_c-d)\choose n_k-k}&=\sum_{m=1}^{n_c}{n_p(m-1)\choose n_k-k}\\
&=\sum_{m =1}^{n_c}\frac{(n_p(m-1))(n_p(m-1)-1)\dots(n_p(m-1)-(n_k-k-1))}{(n_k-k)!}\\
&=\sum_{m=1}^{n_c}\left(\frac{n_p^{n_k-k}}{(n_k-k)!}m^{n_k-k}+O(m^{n_k-k-1})\right)\\
\end{align}
A consequence of [Faulhaber's formula](https://en.wikipedia.org/wiki/Faulhaber%27s_formula) is that when we sum a polynomial of degree $d$, we increase the power on the leading term by one, divide by the new power, and substitute in the upper limit (quite like integration in calculus), followed by lower order terms:
$$\sum_{m=1}^n \left(a_d m^d+a_{d-1}m^{d-1}+\dots+a_1 m+a_0 \right) =\frac{a_d}{d+1}n^{d+1}+O(n^d)$$
Well-known examples are [triangular](https://en.wikipedia.org/wiki/Triangular_number) and [square pyramidal numbers](https://en.wikipedia.org/wiki/Square_pyramidal_number):
\begin{align}
\sum_{m=1}^n m &=1+2+\dots+n=\frac{n(n+1)}{2}=\frac{1}{2}n^2+O(n)\\
\sum_{m=1}^n m^2 &=1^2+2^2+\dots+n^2 =\frac{n(n+1)(2n+1)}{6}=\frac{1}{3}n^3+O(n^2)\\
\end{align}
We get:
\begin{align}
&=\sum_{m=1}^{n_c}\left(\frac{n_p^{n_k-k}}{(n_k-k)!}m^{n_k-k}+O(m^{n_k-k-1})\right)\\
&=\frac{n_p^{n_k-k}}{(n_k-k)!(n_k-k+1)}n_c^{n_k-k+1}+O(n_c^{n_k-k})\\
&=\frac{n_p^{n_k-k}}{(n_k-k+1)!}n_c^{n_k-k+1}+O(n_c^{n_k-k})
\end{align}
To approximate the probability distribution of the number of kings in the winning draw,
\begin{align}
\Pr(K=k) &=\frac{n_p \choose k}{n_p n_c \choose n_k}\sum_{d =1}^{n_c}{n_p(n_c-d)\choose n_k-k}\\
&={n_p \choose k}\left(\frac{n_p^{n_k-k}}{(n_k-k+1)!}n_c^{n_k-k+1}+O(n_c^{n_k-k})\right)\bigg{/}\left(\frac{n_p^{n_k}}{n_k!}n_c^{n_k}+O(n_c^{n_k-1}) \right)\\
\Pr(K=k) &={n_p \choose k}\frac{{n_k}!}{(n_k-k+1)! n_p^k n_c^{k-1}}+O\left(\frac{1}{n_c^k}\right)\\
\end{align}
So $\Pr(K=k)$ is itself of order $O(1/n_c^{k-1})$, and the first term above gives an estimate with error of order $O(1/n_c^k)$. It makes sense $\Pr(K=1)$ is $O(1)$, since as we increase the stack size (without adding any king cards) $n_c \to \infty$, our kings are stretched across more columns, the chance of two or more kings coincidentally allocated to the winning column falls to zero, and $\Pr(K=1)\to 1$. As expected, $\Pr(K=2)\to0$ as $n_c \to \infty$ since it is $O(1/n_c)$, but this decay is rather slow — hence, in Table 3, expanding the stack from $n_c=10$ to $n_c=13$ only slightly reduced $\Pr(K=1)$ and did little to make the game fairer. Since $\Pr(K=3)$ is $O(1/n_c^2)$ and $\Pr(K=4)$ is $O(1/n_c^3)$, the chances of a triple or quadruple coincidence of kings in the winning draw drop off to zero much faster, again seen in Table 3.
With the standard $n_p=4$, $n_k=4$, $n_c=10$, the first term of this approximation estimates the probabilities of 1, 2, 3, 4 kings in the winning draw as 1, 0.15, 0.0075, 0.0000938 respectively; the correct values are 0.848 , 0.144, 0.00788, 0.000109. Obviously these approximations are not very accurate — their errors were of order $O(1/n_c)$, $O(1/n_c^2)$, $O(1/n_c^3)$ and $O(1/n_c^4)$ respectively. But they do explain why bias in players' winning probabilities, i.e. deviation from a "fair" $1/n_p$ for each player, was mostly driven by the case $K=2$:
\begin{align}
\Pr(W_i)-\frac{1}{n_p}&=\sum_{k=1}^{\min(n_p, n_k)}\Pr(W_i | K=k)\Pr(K=k)-\frac{1}{n_p}\\
&=\sum_{k=1}^{\min(n_p, n_k)}\Pr(W_i | K=k)\Pr(K=k)-\frac{1}{n_p}\left(\sum_{k=1}^{\min(n_p, n_k)}\Pr(K=k)\right)\\
&=\sum_{k=1}^{\min(n_p, n_k)}\left(\Pr(W_i | K=k)-\frac{1}{n_p}\right)\Pr(K=k)\\
&=\sum_{k=\color{red}{2}}^{\min(n_p, n_k)}\left(\frac{{n_p-i}\choose {k-1}}{n_p \choose k}-\frac{1}{n_p}\right)\Pr(K=k)\\
\end{align}
where we used $\sum_{k=1}^{\min(n_p, n_k)}\Pr(K=k) =1$ and the summation can start at $k=2$, since $\Pr(W_i | K =1) =1/n_p$ so the summand is zero for $k=1$. Note that if $\min(n_p,n_k)=1$, this means the entire sum vanishes so the bias is zero.
If $\min(n_p,n_k)\ge 2$, the summand depends on $n_c$ only via $\Pr(K=k)$, which is of order $O(1/n_c^{k-1})$. Hence only the $k=2$ term can make a contribution of order $O(1/n_c)$; other terms are $O(1/n_c^2)$ or smaller, so can be neglected provided the number of cards per player in the stack, $n_c$, is reasonably large:
\begin{align}
\Pr(W_i)-\frac{1}{n_p}&=\left(\frac{{n_p-i}\choose {2-1}}{n_p \choose 2}-\frac{1}{n_p}\right)\Pr(K=2)+O\left(\frac{1}{n_c^2}\right)\\
&=\left(\frac{{n_p-i}\choose 1}{n_p \choose 2}-\frac{1}{n_p}\right)\left({n_p \choose 2}\frac{{n_k}!}{(n_k-2+1)! n_p^2 n_c^{2-1}}\right)+O\left(\frac{1}{n_c^2}\right)\\
&=\left(n_p-i-\frac{1}{n_p}{n_p \choose 2}\right)\left(\frac{{n_k}!}{(n_k-1)! n_p^2 n_c}\right)+O\left(\frac{1}{n_c^2}\right)\\
&=\left(n_p-i-\frac{1}{n_p}\cdot \frac{n_p(n_p-1)}{2}\right)\left(\frac{{n_k}}{n_p^2 n_c}\right)+O\left(\frac{1}{n_c^2}\right)\\
&=\frac{1}{2}\left(2n_p-2i-(n_p-1)\right)\left(\frac{{n_k}}{n_p^2 n_c}\right)+O\left(\frac{1}{n_c^2}\right)\\
\Pr(W_i)-\frac{1}{n_p}&=\frac{{n_k}}{2 n_p^2 n_c}(n_p+1-2i)+O\left(\frac{1}{n_c^2}\right)\\
\end{align}
We see that in general the bias is an approximately linear and decreasing function of $i$, i.e. the bias favours early players with relatively even steps between consecutive players.
Whether $i$ is "early" or "late" depends on $n_p$. The third player was relatively late if there are 4 players, which decreased their probability of winning, $\Pr(W_3)$. But player 3 would be in the middle if $n_p=5$ and gets a slight advantage from being an early player if $n_p=8$. We can define the relative positional advantage of player $i$ so that $a(1) =+1$ for the first player, $a(n_p) =-1$ for the last player, and $a((n_p+1)/2) =0$ if there's a "middle" player (when $n_p$ is odd). Other players should be assigned their advantage in a linear manner, in the range $-1 \le a(i)\le 1$. We can do this by finding the ratio between how far player $i$ is from the middle position, compared to how far player 1 is:
$$a(i)=\frac{(n_p+1)/2-i}{(n_p+1)/2-1}=\frac{n_p+1-2i}{n_p+1-2}=\frac{n_p+1-2i}{n_p-1}$$
Writing bias in terms of $a(i)$:
\begin{align}
\Pr(W_i) &=\frac{1}{n_p}+\frac{n_k}{2 n_p^2 n_c}(n_p-1) a(i)+O\left(\frac{1}{n_c^2}\right)\\
\Pr(W_i) &=\frac{1}{n_p}+\frac{1}{2}\left(\frac{n_k}{n_p n_c}\right)\left(1-\frac{1}{n_p}\right) a(i)+O\left(\frac{1}{n_c^2}\right)\\
\end{align}
Here $n_k/(n_p n_c)$ is the fraction of cards in the stack which are kings, and $(1-1/n_p)$ is a proportionate reduction by the reciprocal of the number of players: 25% reduction for 4 players, about 33% for 3 players, 50% for 2 players. Half their product is, approximately, the maximum bias for or against a player; their product itself is the approximate difference in winning probability between first and last players:
$$\Pr(W_1)-\Pr(W_{n_p})\approx \left(\frac{n_k}{n_p n_c}\right)\left(1-\frac{1}{n_p}\right)$$
Under standard rules, 10% of cards are kings and there are 4 players so a 25% reduction, hence the gap between first and last players' winning probabilities should be about 0.075; this is reasonably close to the true value of about 0.0779. Estimated winning probabilities of players 1, 2, 3, 4 are 0.2875, 0.2625, 0.2375, 0.2125 respectively; compared to true values 0.2900, 0.2620, 0.2360, 0.2121 we see these are accurate to 2 decimal places and not far wrong in the 3rd digit.
The error is $O(1/n_c^2)$ so is small when $n_c$ is reasonably large — even for 10 cards per player it's quite good, but the graph below shows it's even better when $n_c$ is 15 or 20. We could improve the accuracy of our approximations by calculating, rather than neglecting, the $O(1/n_c^2)$ terms. This would help our approximation capture the extra boost the first player gets from the case $K=3$, which is the most notable error in the approximation in the graphs below. Reducing the error from $O(1/n_c^2)$ to $O(1/n_c^3)$ would also improve the fit for lower values of $n_c$, for which the players' winning probabilities are less linear in $i$.
[](https://i.stack.imgur.com/VmUDF.png)
A 40-card deck can be equally shared between 2, 4, 5, 8 or 10 players. Our approximation works quite well for varying $n_p$, but when many players share the same deck, the number of cards per player drops, so the $O(1/n_c^2)$ error becomes larger:
[](https://i.stack.imgur.com/b4Tha.png)
Increasing the number of players also makes $(1-1/n_p)\to1$, so the linear bias we approximated will rise towards half the proportion of kings in the stack, 0.05 for a 40-card stack with 10 kings. And with more players drawing in each round, $K\ge3$ becomes more likely, increasing the nonlinear bias in favour of early players (which drove the error we saw in the graph above):
[](https://i.stack.imgur.com/IAFNU.png)
So while it's straightforward to expand the game from 4 to 5, 8 or 10 players sharing the same deck, the game gets less fair. You can switch direction of play after each round of draws to counteract this.
| null |
CC BY-SA 4.0
| null |
2023-04-29T03:13:35.307
|
2023-05-09T14:21:08.230
|
2023-05-09T14:21:08.230
|
22228
|
22228
| null |
614454
|
1
| null | null |
0
|
15
|
I'm trying to account for a time correction between two streams of data, let's say Stream A and Stream B. Stream A and Stream B each have different message arrival/latency characteristics. Each stream message has an attribute (Z), and a message type (M1, M2). The attributes are observed and generated independently of each other, and so what I've done is to go through several days of stream data, and computed the M1-M1, M1-M2, M2-M1, and M2-M2 time delta for each attribute (there are roughly 60 attributes). I've computed the Mean and Std for each attribute and inter-message differences.
In order to "correct" for the time-delay differences in stream processing, I've regressed the Mean of each attribute and inter-message delay from Stream A against Stream B. The R-squared is very high (0.8+ - sometimes 0.95 for M1-M1 and M2-M2 messages). I've also regressed the Std of each attribute and inter-message delay from Stream A against Stream B, with similar results.
Now normally, I should just be able to use a linear transformation, but it turns out that the scaling factors for the Mean and the Std are different.
Mean(B) = C*Mean(A) + D
Std(B) = E*Std(A) + F
So I am stuck because I can't seem to express these two equations as a single equation that would properly adjust for both Mean and Std.
If I perform a scaling with C, then because C <> E, the Std comes out incorrect, and vice versa.
So it looks like A and B are not just scaled by a simple coefficient, but have both different Means and Stds.
|
How to transform one distribution into another using a linear transform of Mean and linear transform of Std?
|
CC BY-SA 4.0
| null |
2023-04-29T03:49:13.953
|
2023-04-29T04:35:05.810
|
2023-04-29T04:35:05.810
|
362671
|
386827
|
[
"distributions",
"data-transformation",
"linear",
"feature-scaling"
] |
614455
|
1
| null | null |
4
|
115
|
I would like to perform a power analysis on my pilot data. My test statistic is a single-sample mean with only 14 observations. The data are non-normal (it's percent vegetation cover, which I think follows a beta distribution?) so I assume that I will be bootstrapping, but that's about as far as I've made it.
My thoughts at the moment are that I'll bootstrap the data to create a sampling distribution of means.
- Do I then perform a t-test on each bootstrapped mean to find if it is significantly different from the sample mean?
- Is this t-test appropriate because the sampling distribution is normally distributed even though the original sample isn't?
- Would $\mu$ simply be my original sample mean of 0.18?
- Where do I define how comfortable I am with the mean being off? I don't know the term for that; what I mean to say is: I'm comfortable with an error margin of +/- 0.05.
- I will, of course, be extending this to a sample size calculation (using R). Any clarity on the process is much appreciated.
edit to add: the end result I'm looking for is the sample size required to be 90% confident in my estimation of cover and to have that estimation of cover to be +/- 5% of the true cover.
For extra fun, here are my 14 observations in my pilot data: (0, 0, 0.01, 0.01, 0.01, 0.03, 0.09, 0.20, 0.22, 0.23, 0.37, 0.42, 0.47, 0.57)
---
Okay, here's what I came to. I understand that 'power analysis' is the wrong term for what I was looking for. I am really asking, "How much can I trust these 14 observations to tell me the mean?" and secondly, "How many samples do I need to feel confident in my result?".
I first used bootstrap resampling to construct my confidence intervals of the original data with 14 observations. Then I resampled the data 1000 times while simulating capturing 10 through 20 observations, watched my confidence intervals get tighter and my margin of error shrink. With this I can estimate the number of samples I need to determine my mean within the margin of error I'm looking for.
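For concreteness, a minimal sketch of that resampling procedure (written in Python here, though the same logic ports directly to R; the 90% level and ±0.05 margin are the targets stated above):

```python
import numpy as np

rng = np.random.default_rng(0)
pilot = np.array([0, 0, 0.01, 0.01, 0.01, 0.03, 0.09, 0.20,
                  0.22, 0.23, 0.37, 0.42, 0.47, 0.57])

def half_width(sample, n, n_boot=1000, level=0.90):
    """Half-width of the percentile interval for the mean of n resampled points."""
    means = [rng.choice(sample, size=n, replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return (hi - lo) / 2

for n in range(10, 21):                     # simulated sample sizes 10..20
    print(n, round(half_width(pilot, n), 3))
```

The printed half-widths shrink as $n$ grows; extending the range of $n$ until the half-width drops below the target margin gives the suggested sample size under this procedure.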
|
Performing a power analysis on finding the mean of a single sample, non-normal dataset
|
CC BY-SA 4.0
| null |
2023-04-29T04:47:58.907
|
2023-05-22T23:23:19.087
|
2023-05-22T23:23:19.087
|
11887
|
216335
|
[
"bootstrap",
"statistical-power",
"beta-distribution"
] |
614456
|
1
| null | null |
0
|
9
|
In the literature, for a binary classification problem, I have come across examples where a One-Class SVM model is trained using the data for only one of the two training labels, and sometimes using both of the training labels.
Which is the correct way to do this?
Should I train the model using both classes, or train using only one class and test using the other class?
I have found these links to be related but don't answer my question:
(1) [Best way to train one-class SVM](https://stats.stackexchange.com/questions/256725/best-way-to-train-one-class-svm)
(2) [what would be a recommended division of train and test data for one class SVM?](https://stats.stackexchange.com/questions/478963/what-would-be-a-recommended-division-of-train-and-test-data-for-one-class-svm)
|
Inconsistencies with OneClassSVM model training
|
CC BY-SA 4.0
| null |
2023-04-29T05:16:56.023
|
2023-04-29T05:16:56.023
| null | null |
376580
|
[
"svm",
"scikit-learn",
"one-class"
] |
614457
|
2
| null |
614455
|
5
| null |
You don't have a hypothesis anywhere here. Don't test anything unless you do have a suitable hypothesis; this must NOT be based on things you find in the sample.
What are you collecting data for? What were you trying to find out?
>
Is this t-test appropriate because the sampling distribution is normally distributed even though the original sample isn't?
(i) The sampling distribution of the mean is not normal; the average of 14 non-normally distributed quantities (even if some other typically assumed conditions were reasonable) is not normally distributed.
(ii) the derivation of the ratio in a t-statistic being t-distributed relies on more than the distribution of the numerator.
(iii) nevertheless, it's pretty likely that a suitable t-statistic would have a distribution close enough to the t-distribution that the significance level of the test would be very nearly what you selected $\alpha$ to be (quite possibly even at $n=14$). The power would be somewhat reduced compared to a better choice of distributional model, but in many situations probably not enough to worry you.
>
Would μ simply be my original sample mean of 0.18?
No. If you didn't have a $\mu$ before you saw any data, you didn't have a hypothesis to test.
>
Where do I define how comfortable I am with the mean being off? I don't know the term for that; what I mean to say is: I'm comfortable with an error margin of +/- 0.05.
It's unclear what you mean. Are you here trying to specify a margin of error for the estimate of the mean, as if you were creating a confidence interval for the mean?
>
Do I then perform a t-test on each bootstrapped mean to find if it is significantly different from the sample mean?
No. You do not compare means of simple bootstrap resamples to the sample mean. That doesn't test anything.
If you were to have a suitable hypothesis, and you specifically wanted a non-parametric test of the mean (not simply that you didn't just want to use an ordinary one-sample t-test), I'd be suggesting a permutation test.
But you don't have a hypothesis. There's literally nothing to test.
>
I will, of course, be extending this to a sample size calculation (using R).
You're going to have to be a lot clearer about your aim. You don't have a test, so no sample size calculation on that basis.
You could specify a margin of error for a CI and get a sample size that way but you haven't really been very clear that this is actually what you want.
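(For what it's worth, a minimal sketch of that margin-of-error route, assuming the ±0.05 margin and 90% confidence mentioned in the question and using the pilot standard deviation as a planning value; these assumptions are mine, not something stated by the asker:)

```python
import numpy as np
from scipy import stats

pilot = np.array([0, 0, 0.01, 0.01, 0.01, 0.03, 0.09, 0.20,
                  0.22, 0.23, 0.37, 0.42, 0.47, 0.57])
margin = 0.05                      # desired half-width of the CI
z = stats.norm.ppf(0.95)           # two-sided 90% confidence
n = int(np.ceil((z * pilot.std(ddof=1) / margin) ** 2))
print(n)                           # suggested sample size under these assumptions
```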
>
"(it's percent vegetation cover, which I think follows a beta distribution?)"
No, the beta distribution is one potentially plausible model for a continuous proportion like vegetation cover, but don't confuse the model for the real thing: -
>
Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.
$-$ George Box
If you believe the beta model would be a pretty good model (and it might, as long as none of your proportions are actually 0 or 1), why would you not directly use the beta model as the basis for a test, confidence interval, or whatever else?
...
Ideally you should have dealt with these issues in detail before gathering even the pilot sample, but at least you're doing it before you collect the main sample, which is important.
| null |
CC BY-SA 4.0
| null |
2023-04-29T05:17:33.480
|
2023-04-29T11:18:27.157
|
2023-04-29T11:18:27.157
|
805
|
805
| null |
614460
|
1
| null | null |
0
|
17
|
For context: For one of my student's projects, we are looking at how simulated eye-height in VR and real-world posture (Sitting, Lying, Standing) interact in the perception of size. Our participants judge the size of a virtual object at different simulated distances in front of them while they are sitting, lying, or standing in the real world, and while we simulate different eye-heights in the VR environment.
Now as for my stats question – for the analysis I have two LMMs:
M1: Perceived Size ~ Posture*EyeHeight + (Distance + Posture+ EyeHeight| Participant)
M2: Perceived Size ~ Posture*EyeHeight + Distance + (Distance + Posture+ EyeHeight| Participant)
Posture is a categorical variable with four levels, and EyeHeight and Distance are continuous.
My problem: M1 and M2 yield vastly different regression coefficients for the main effect of Posture. Anyone got any ideas why? The estimate for EyeHeight and the interaction terms between EyeHeight and Posture are nearly identical between M1 and M2. Judging from the plots and from the previous research, the result from M2 is more likely to be true.
|
Linear Mixed Modelling: Vastly different parameter estimates when adding fixed effect that is already present as random effect
|
CC BY-SA 4.0
| null |
2023-04-29T07:07:09.307
|
2023-04-29T07:21:00.340
| null | null |
322704
|
[
"mixed-model"
] |
614461
|
2
| null |
614460
|
0
| null |
Both these models estimate a separate `Size~Distance` slope for each participant, but M1 forces the mean of these slopes to be zero. That's typically not a sensible assumption; if you think perceived size varies with distance for each participant, it usually makes sense to allow for the slope to be non-zero on average as well.
| null |
CC BY-SA 4.0
| null |
2023-04-29T07:21:00.340
|
2023-04-29T07:21:00.340
| null | null |
249135
| null |
614462
|
1
|
614474
| null |
5
|
131
|
>
Let $X$ be non-negative random variable and $F$ be its distribution function. Prove the following implications:
$$
\mathbb{E}(X) < \infty \Longrightarrow \mathbb{E}(\sqrt{X}) < \infty \Longrightarrow \int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty
$$
For the first implication, I prove it as follows:
$$
\mathbb{E}(\sqrt{X}) = \int_0^\infty \mathbb{P}(\sqrt{X} > t)dt = \int_0^1 \mathbb{P}(X > t^2)dt + \int_1^\infty \mathbb{P}(X > t^2)dt
$$
For $t \ge 1, \mathbb{P}(X > t^2) \le \mathbb{P}(X > t)$, thus
$$
\int_1^\infty \mathbb{P}(X > t^2)dt \le \int_1^\infty \mathbb{P}(X > t)dt \le \mathbb{E}(X) < \infty
$$
Now, the second implication is where I'm stuck. My first attempt is to do partial integration: fix $M > 0$,
$$
\int_0^M [1 - F(x)]^2 dx = (1 - F(M))\left[M - \int_0^M F(t)dt\right] + \int_0^M \left[x - \int_0^x F(t)dt \right]dF
$$
But this seems like a dead end. I try to use the tail probabilities to express $\mathbb{E}(\sqrt{X})$:
$$
\mathbb{E}(\sqrt{X}) = \int_0^\infty \mathbb{P}(\sqrt{X} > t)dt = \int_0^\infty \dfrac{\mathbb{P}(X > u)}{2\sqrt{u}}du
$$
But I still don't have any idea to use this. So, if you have any idea how to show this, I'd love to hear it.
The result has been suggested in [Stochastic Dynamical Systems with weak contractivity properties](https://arxiv.org/abs/1005.2265) - Theorem 5.5.
|
How to deduce $ \mathbb{E}(\sqrt{X}) < \infty \implies\int_{\mathbb{R}^+} (1 - F(x))^2 dx < \infty,~X$ being a non-negative integrable rv?
|
CC BY-SA 4.0
| null |
2023-04-29T08:07:25.820
|
2023-04-29T14:34:37.983
|
2023-04-29T14:09:57.473
|
362671
|
350550
|
[
"probability",
"random-variable",
"expected-value",
"integral"
] |
614463
|
2
| null |
19311
|
0
| null |
[Mathematics for Machine Learning](https://mml-book.github.io) is one of my favorites. The first part gives mathematical foundations such as linear algebra, analytic geometry, matrix decomposition, vector calculus, probability and statistics, and continuous optimization, while the second part concerns central machine learning problems. Also, the PDF of the book is freely available.
| null |
CC BY-SA 4.0
| null |
2023-04-29T09:20:45.383
|
2023-04-29T09:20:45.383
| null | null |
368299
| null |
614464
|
1
| null | null |
0
|
10
|
I apologize for the poorly worded title, but I have a Prevalence/incidence question that got me confused
Hello all, I just came out of my epidemiology exam, and there’s this question that’s bothering me nonstop. All of my peers are having a different answer and we can’t agree on a certain concept. Please let me know what you think. I will quote as much as I can remember from that question, and your assistance is highly appreciated.
“In a fictitious population of 1000 healthy men, 100 are diagnosed with prostate cancer in a given year. A researcher started a study and the prevalence at the starting point is zero, predict the prevalence and incidence two years later
A) P= 300 I= 1 (or “.1” I can’t remember)
B) P= 200 I=1 (or “.1” I can’t remember)
C) P=300 I=100
D) P=100 I=300
E) P=300 I= unknown”
I answered it as P=200 and I=1 (or 0.1), I don’t have any logic regarding incidence, but I assumed that if starting point = 0, and we have 100 cases in a year, then after two years we’d have 200. Other colleagues insist that it is E because not enough info is available to calculate incidence.
What do you think?
|
Calculating prevalence and incidence from a given rate
|
CC BY-SA 4.0
| null |
2023-04-29T09:34:20.483
|
2023-04-29T09:34:20.483
| null | null |
386839
|
[
"incidence-rate-ratio",
"prevalence"
] |
614467
|
1
| null | null |
0
|
6
|
I am working on a model (non-linear) of the form:
[](https://i.stack.imgur.com/hBTwb.png)
[](https://i.stack.imgur.com/Lj5yk.png)
Should I have the intercept in the equation, as having one gives a different outcome (level of significance) to my interaction term 1, compared to not having the intercept?
|
Interaction term's significance changes by including the intercept. How to decide if intercept is to be kept or not?
|
CC BY-SA 4.0
| null |
2023-04-29T10:43:20.803
|
2023-04-29T10:43:20.803
| null | null |
369873
|
[
"intercept"
] |
614468
|
1
| null | null |
0
|
11
|
I am trying to normalised an unnormalised posterior density $p_{unnormalised}(\theta | x)$ but unsure the best way to proceed without being caught in an algebraic mess.
The posterior density is given by $p(\theta, \sigma^{2} | x) = \frac{L(\theta, \sigma^{2}| x)\pi(\theta)}{Z}$ and prior $\pi(\theta) = 1$
The data generating model $p(x | \theta, \sigma^{2})$, with degrees of freedom $v = 2$ and $\sigma^{2} = 40^{2}$, is given as a t-distribution:
$p(x | \theta) = t_{v}(x | \theta, 40^{2}) = t_{v}(x | \theta) \propto (1 + \frac{1}{v}(\frac{x- \theta}{\sigma})^{2} ) ^{-(v + 1) / 2}$
But taking the log of $p(x | \theta, 40^{2})$ and letting $v = 2$, the log-likelihood $l(\theta | x) = log L(\theta | x)$ gives $l(\theta |x) = \frac{-3}{2}ln[\frac{2 + (x - \theta)^{2}}{40^{2}}]$
The un-normalised posterior density is
$p_{unnormalised}(\theta | x) \propto l(\theta | x) = \frac{-3}{2}ln[\frac{2 + (x - \theta)^{2}}{40^{2}}]$
To normalise the unnormalised posterior $p_{unnormalised}(\theta | x)$, note that the evidence
$Z = \int_{\theta} p(\theta, x).d\theta$. But $p_{unnormalised}(\theta , x) = p_{unnormalised}(x | \theta) \pi(\theta)$ so that $Z = \int_{\theta} p_{unnormalised}(x | \theta) \pi(\theta).d\theta$.
Does this leave me to integrate $p_{unnormalised}(x | \theta) \pi(\theta)$ with respect to $\theta$, where $p_{unnormalised} = (1 + \frac{1}{v}(\frac{x- \theta}{\sigma})^{2} ) ^{-(v + 1) / 2}$? Is this the best method?
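For concreteness, if the integral turns out to have no convenient closed form, the numerical fallback I have in mind looks like this (a made-up observed value and my own variable names, using scipy quadrature):

```python
import numpy as np
from scipy import integrate

x_obs = 100.0            # hypothetical observed value of x
nu, sigma = 2, 40.0

def unnorm_posterior(theta):
    # flat prior, so this is just the t likelihood kernel viewed as a function of theta
    return (1 + ((x_obs - theta) / sigma) ** 2 / nu) ** (-(nu + 1) / 2)

Z, _ = integrate.quad(unnorm_posterior, -np.inf, np.inf)
posterior = lambda theta: unnorm_posterior(theta) / Z
print(Z, integrate.quad(posterior, -np.inf, np.inf)[0])   # the second number should be ~1
```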
|
Steps to normalise this posterior (if possible) with a t -data distribution likelihood
|
CC BY-SA 4.0
| null |
2023-04-29T11:31:49.167
|
2023-04-29T11:31:49.167
| null | null |
109101
|
[
"self-study",
"bayesian",
"mathematical-statistics",
"approximate-bayesian-computation"
] |
614469
|
1
| null | null |
0
|
19
|
I have a neural network (a feed-forward network) with Softmax as the output. The network is maximizing an objective called VAMP-2 (there are no true labels!); the optimization proceeds by minimizing the corresponding loss (call it $L$) as much as possible. The loss is bounded by $N$, where $N$ is the number of Softmax outputs, so $max\{L\} = N$.
The problem is that the Softmax often optimizes the objective using only one or two of its outputs (depending on the random seed). Say the number of outputs is $N=4$: then the Softmax optimizes only 2 of the outputs, while the other 2 always stay at 0 or 1. How can I prevent such a scenario? Are there some tricks with Softmax to prevent it? It looks like the Softmax can't get out of some local minimum, or gets stuck there. Could I use some specific regularization for the Softmax, such as entropy?
For reference, the loss definition is
$$L = -||C_{00}^{-1/2}C_{01}C_{11}^{-1/2}||^2_F$$
where
$$C_{00}=Cov(\chi_1, \chi_1); \quad C_{01}=Cov(\chi_1, \chi_2); \quad C_{11}=Cov(\chi_2, \chi_2)$$
and $\chi_1$ and $\chi_2$ are outputs of the neural network shifted by time $\tau$. The input and output of the neural network is an ordered time-series. So one can shift it.
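Regarding the entropy idea, the kind of regularizer I have in mind (a rough sketch in PyTorch; `vamp2_loss` and `lam` are placeholder names, not existing code) would reward a high entropy of the batch-averaged Softmax output, so that all output states get used:

```python
import torch

def batch_entropy(p, eps=1e-12):
    """Entropy of the average Softmax output over the batch; it is largest when
    all output states are used roughly equally often."""
    p_mean = p.mean(dim=0)
    return -(p_mean * (p_mean + eps).log()).sum()

# hypothetical training step: chi1, chi2 are the Softmax outputs at t and t + tau
# loss = vamp2_loss(chi1, chi2) - lam * (batch_entropy(chi1) + batch_entropy(chi2))
```

Would something along these lines be a reasonable way to keep the Softmax from collapsing onto one or two outputs?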
|
How to prevent Softmax to be overconfidence/getting stuck in one solution
|
CC BY-SA 4.0
| null |
2023-04-29T11:38:21.297
|
2023-04-29T11:38:21.297
| null | null |
283959
|
[
"neural-networks",
"softmax"
] |
614470
|
1
| null | null |
0
|
25
|
I am testing the association between percentage of impervious surface areas (ISA) and number of eggs per breeding event. This is my model structure:
```
fit=lmer(CS~ISA+Mass_Fem*LayDate+Year+(1|Site),data=DataGT_noNA)
```
Where CS is clutch size, ISA is imperviousness (as %), mass female is the female body mass in grams, Lay date is an ordinal number, and Year is a 5-levels factor. The random effect "Site" has 8 levels.
After obtaining the estimates of loss by a certain %ISA using ggplot and ggeffects as follows:
```
effects_ISA <- effects::effect(term= "ISA", mod= fit, xlevels=list(ISA=c(0,10,20,30,40,50,60,70)))
```
I made a dataframe containing all these values as follows
```
x_ISA <- as.data.frame(effects_ISA)
```
and obtained this table as a result:
[](https://i.stack.imgur.com/8qLTI.jpg)
I would like to obtain the unit decrease (and respective confidence intervals) in terms of number of eggs (including 95% CI) by a given percentage of ISA. See example below:
An increase of 30% in ISA corresponds to a loss of 0.54 eggs (95% CI?). I would like to get the same information as a percentage (% loss and its 95% CI).
|
Extrapolate 95% CI for unit decrease and for percentage decrease from lmer outputs
|
CC BY-SA 4.0
| null |
2023-04-29T12:38:50.583
|
2023-04-29T12:38:50.583
| null | null |
380708
|
[
"r",
"confidence-interval",
"percentage",
"ggplot2",
"effects"
] |
614471
|
1
| null | null |
0
|
37
|
I'm learning about Time Series forecasting with Machine Learning models like Neural Networks, XGBoost and others. I'm not gonna use ARIMA, SARIMA, ... So, I have the following dataset:
```
date store item sales
0 2013-01 1 1 13
1 2013-02 1 1 11
2 2013-03 1 1 14
3 2013-04 1 1 13
4 2013-05 1 1 10
... ... ... ... ...
40 2017-01 1 1 18
41 2017-02 1 1 41
42 2017-03 1 1 23
43 2017-04 1 1 67
44 2017-05 1 1 56
... ... ... ... ...
912995 2013-01 2 1 63
912996 2013-02 2 1 59
912997 2013-03 2 1 74
912998 2013-04 2 1 62
912999 2013-05 2 1 82
```
As you can see, the dates repeat like a cycle. This happens because, for each store in [1, 10], there are records of sales over the same date interval [2013, 2017]. The task is to predict sales for the next three months. I would like to know what the best approach is, considering these repeated dates. I'm considering the following strategies:
- Train one model for each store. But even if I do that, I notice one more thing: this strategy won't work either, because if the problem is the date interval [2013, 2017] repeating for each store, I have the same problem with months and days.
- Add lag features, one for each of the three months, and encode the dates with a function very similar to the one below. After that I can treat this dataset as a "regular" one and make the traditional train test split and everything else.
.
```
# not exactly this function; I would change the new feature names
# (here they are derived from feat_cyclic_names instead of being hard-coded)
import math

import numpy as np
import pandas as pd


def encode_cyclical_atts(dataset, feat_cyclic_names):
    cyclic_aux = []
    for name in feat_cyclic_names:
        column = dataset.loc[:, name]
        max_value = column.max()
        # map each cyclical value onto the unit circle
        sin_values = [math.sin((2 * math.pi * x) / max_value) for x in list(column)]
        cos_values = [math.cos((2 * math.pi * x) / max_value) for x in list(column)]
        cyclic_aux.append(sin_values)
        cyclic_aux.append(cos_values)
    feat_cyclic_encoded = np.array(cyclic_aux).transpose()
    cols_names = [f'{trig}_{name}' for name in feat_cyclic_names
                  for trig in ('sin', 'cos')]
    return pd.DataFrame(data=feat_cyclic_encoded, columns=cols_names)
```
So what do you think about these approaches?
|
Time Series using Machine Learning models: How to deal with repeated dates and cycles
|
CC BY-SA 4.0
| null |
2023-04-29T13:14:51.680
|
2023-04-30T18:52:54.587
|
2023-04-30T18:52:54.587
|
369891
|
369891
|
[
"machine-learning",
"time-series",
"python"
] |
614472
|
1
| null | null |
1
|
22
|
Suppose that Walmart has 1,000 stores. It has a 20% coupon for cereal, and it hypothesizes that the coupon will increase the sales of cereal by 3%.
Walmart wants to put the coupon in $n$ stores on 2023-06-01. Suppose that it has data from 2021-06-01 onward, and that it can install the coupon in those $n$ stores for as long as necessary.
At a significance level of 5%, I want to use [difference-in-difference](https://www.publichealth.columbia.edu/research/population-health-methods/difference-difference-estimation) to estimate the change in sales in the $n$ test stores after 2021-06-01. What should $n$ be?
|
Minimal sample size for difference-in-difference estimation of causal effect with time-series data
|
CC BY-SA 4.0
| null |
2023-04-29T13:34:42.887
|
2023-04-29T13:34:42.887
| null | null |
269172
|
[
"time-series",
"sample-size",
"causality",
"statistical-power",
"difference-in-difference"
] |
614473
|
1
| null | null |
0
|
10
|
Assume that I have a model $f$ whose task is to do entity resolution. Given a name, say Samsung!!, it would (hopefully) return Samsung along with a confidence score in $[0, 1]$.
Now, assume that it has generated 100 results stored in a vector $\vec{x}$ = [("Samsung", 0.94), .., (name_100, score_100)], each of which has been manually verified to be either correct or incorrect, recorded in another vector $\vec{v}$ = [("Samsung", True), .., (name_100, boolean_100)].
How could one use Bayes' rule to compute the probability $P(\text{is true}\mid \text{model score})$, given previous records of model scores plus manual verification?
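For concreteness, the sort of thing I imagine computing (with synthetic stand-ins for the 100 verified pairs, since I can't share the real ones) is a binned, empirical version of that conditional probability; is this what applying Bayes' rule amounts to here?

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for the 100 (score, verified) pairs described above
scores = rng.uniform(size=100)
verified = rng.uniform(size=100) < scores        # higher score -> more likely correct

bins = np.linspace(0, 1, 6)                      # 5 score bins
idx = np.digitize(scores, bins[1:-1])
for b in range(5):
    mask = idx == b
    if mask.any():
        # P(true | score in bin) = P(score in bin | true) P(true) / P(score in bin),
        # which on a finite sample reduces to the fraction of verified-true cases in the bin
        print(f"bin {b}: P(true | score) ~= {verified[mask].mean():.2f} (n={mask.sum()})")
```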
|
How do I evaluate model performance using Bayes Theorem?
|
CC BY-SA 4.0
| null |
2023-04-29T14:03:26.773
|
2023-04-29T14:03:26.773
| null | null |
386849
|
[
"model-evaluation",
"bayes-rule"
] |
614474
|
2
| null |
614462
|
10
| null |
Interesting problem. The proof goes as follows: let $X, X_1, X_2$ i.i.d. $\sim F$, then
\begin{align}
\int_0^\infty (1 - F(x))^2dx = E[\min(X_1, X_2)] \leq E[\sqrt{X_1}\sqrt{X_2}]
= (E[\sqrt{X}])^2 < \infty.
\end{align}
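To spell out the first equality and the inequality: for independent non-negative $X_1, X_2$ with common distribution $F$, the tail-integral formula gives
\begin{align}
E[\min(X_1, X_2)] = \int_0^\infty \Pr(\min(X_1,X_2) > x)\,dx = \int_0^\infty \Pr(X_1 > x)\Pr(X_2 > x)\,dx = \int_0^\infty (1-F(x))^2\,dx,
\end{align}
and $\min(X_1, X_2) \leq \sqrt{X_1}\sqrt{X_2}$ because $\min(X_1, X_2)^2 \leq X_1 X_2$.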
By the way, the proof to the first implication is a straightforward application of Cauchy-Schwarz inequality:
\begin{align}
E[\sqrt{X}] = E[1 \cdot \sqrt{X}] \leq \sqrt{E[1]E[X]} = \sqrt{E[X]} < \infty.
\end{align}
| null |
CC BY-SA 4.0
| null |
2023-04-29T14:34:37.983
|
2023-04-29T14:34:37.983
| null | null |
20519
| null |
614476
|
1
| null | null |
0
|
13
|
I have odds ratio per unit change in variable. I want to convert it into odds ratio per unit standard deviation change in variable.
|
Compute Odds ratio per standard deviation
|
CC BY-SA 4.0
| null |
2023-04-29T15:09:54.543
|
2023-04-29T15:09:54.543
| null | null |
386853
|
[
"logistic",
"standard-deviation",
"odds-ratio",
"epidemiology"
] |
614477
|
1
| null | null |
2
|
30
|
I have a question regarding mediation analysis. I'm wondering why, oftentimes, the fraction of the total effect which corresponds to the indirect effect, i.e. 1 - NDE/TE, is different from the proportion corresponding to the natural indirect effect, i.e. NIE/TE.
In Pearl's "Primer" book (p121), NIE/TE is considered as the "portion of the effect that can be explained by mediation alone". But isn't that also what 1 - NDE/TE is meant to capture?
In the Training/Homework/Success example ("Primer" p123-124), NIE/TE=7% whereas 1-NDE/TE=30.4%. According to the formulas, I would have considered NIE as the intrinsic effect of the mediator M on the outcome Y independently of treatment T (Y0M1 - Y0M0), whereas 1 - NDE/TE would be the effect of treatment T transmitted to Y through M fixed ((1 - (Y1M0 - Y0M0))?
Another way to phrase it would be: does the effect of M on Y originate only from T (M acting as an intermediate transmitter of T to Y), or is M allowed to have its own intrinsic effect on Y? I'm still confused about it.
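For reference, the decomposition I keep coming back to (if I read the Primer correctly) is $\mathrm{TE} = \mathrm{NDE} - \mathrm{NIE}_r$, where $\mathrm{NIE}_r$ denotes the natural indirect effect of the reverse ($T=1 \to T=0$) transition, so $\mathrm{TE}-\mathrm{NDE} = -\mathrm{NIE}_r$ rather than $\mathrm{NIE}$. I suspect this is where the two ratios diverge, but I would appreciate confirmation.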
Thank you for your response.
|
Difference between NIE and TE - NDE in causal inference?
|
CC BY-SA 4.0
| null |
2023-04-29T15:11:17.303
|
2023-05-02T14:54:02.290
|
2023-05-02T14:54:02.290
|
386854
|
386854
|
[
"causality"
] |
614478
|
1
| null | null |
0
|
15
|
Suppose an input of shape (width x height x channel_num) = (10 x 10 x 15) is obtained from previous convolutional layers, and this input is about to be inserted into a fully connected (fc) layer of K=100.
How can this fc layer be replaced completely?
Method 1:
One way is to replace this fc layer with a convolution whose kernel size equals the spatial size of the input, and whose number of output channels equals the K of the fc layer. In this case, replace it with a convolution of kernel size f=10, stride s=1, padding p=0, and output channel number = 100.
This would give an output of shape (1 x 1 x 100), which is somewhat like the expected 1d vector of an fc layer of length 100.
Number of learnable parameters= #weights + #biases = 10x10x15x100 + 100 = 150100
Method 2:
However, how can the fc be replaced by 1x1 convolutions, instead of the fxf convolution?
If we directly convolve the input of shape (10 x 10 x 15) with some (1 x 1 x 15) filter and we use 100 such filters with s=1, p=0, then the output is of shape (10 x 10 x 100), instead of a shape that can be flattened to a 1d length 100 vector like (1 x 1 x 100).
A workaround is reshaping the input like this: (10 x 10 x 15) --> (1 x 1 x 1500), then convolving with 100 (1 x 1 x 1500) filters.
This gives an output of (1 x 1 x 100).
Number of learnable parameters = 1x1x1500x100 + 100 = 150100
It appears both methods give the same number of learnable parameters.
The number of learnable parameters in the normal fc = 10x10x15x100 + 100 = 150100 which matches the number in the two conversions.
Are these calculations correct? The 10x10x15 part stems from thinking of flattening the multi-dimensional input into a single vector of length 1500. Is this intuition appropriate?
Are both methods correct? Is there any difference? Which one do large CNNs use?
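For what it's worth, a small PyTorch check (purely illustrative, not required by the question) of the parameter counts above:

```python
import torch.nn as nn

count = lambda m: sum(p.numel() for p in m.parameters())

fc     = nn.Linear(10 * 10 * 15, 100)                  # plain fully connected layer
conv_f = nn.Conv2d(15, 100, kernel_size=10)            # method 1: f x f convolution
conv_1 = nn.Conv2d(10 * 10 * 15, 100, kernel_size=1)   # method 2: 1 x 1 conv on reshaped input

print(count(fc), count(conv_f), count(conv_1))         # 150100 150100 150100
```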
|
Replacing a fully connected layer with a 1x1 convolution vs with fxf convolution
|
CC BY-SA 4.0
| null |
2023-04-29T15:16:12.220
|
2023-04-29T15:16:12.220
| null | null |
373321
|
[
"machine-learning",
"neural-networks",
"conv-neural-network",
"convolution"
] |
614479
|
1
| null | null |
0
|
19
|
I apologize for the poorly worded title, but I have a Prevalence/incidence question that got me confused
Hello all, I just came out of my epidemiology exam, and there’s this question that’s bothering me nonstop. All of my peers are having a different answer and we can’t agree on a certain concept. Please let me know what you think. I will quote as much as I can remember from that question, and your assistance is highly appreciated.
“In a fictitious population of 1000 healthy men, 100 are diagnosed with prostate cancer in a given year. A researcher started a study and the prevalence at the starting point is zero, predict the prevalence and incidence two years later A) P= 300 I= 1 (or “.1” I can’t remember) B) P= 200 I=1 (or “.1” I can’t remember) C) P=300 I=100 D) P=100 I=300 E) P=300 I= unknown”
I answered it as P=200 and I=1 (or 0.1), I don’t have any logic regarding incidence, but I assumed that if starting point = 0, and we have 100 cases in a year, then after two years we’d have 200. Other colleagues insist that it is E because not enough info is available to calculate incidence.
What do you think?
|
Calculating prevalence and incidence based on a given rate
|
CC BY-SA 4.0
| null |
2023-04-29T10:32:54.730
|
2023-05-15T16:00:36.013
|
2023-05-15T16:00:36.013
|
11887
| null |
[
"epidemiology",
"prevalence"
] |
614480
|
2
| null |
108265
|
0
| null |
I am answering this old question so it has an answer.
Given that this is a 'fill in the blank' exercise, presumably aimed at a beginning student, and we have the `t-test` tag (suggesting the asker has been studying t-tests), then as the comment says a paired t-test seems like the simplest plausible answer.
Presumably, you would measure the blood pressures of the mice with the existing method, and again with the new method (one issue to consider is whether the act of measuring blood pressure causes a change in blood pressure, e.g. by stressing out the subject).
## How could we formulate the hypothesis?
Our null hypothesis could be that the population mean of the differences is $0$. Our alternate hypothesis would then be that the population mean of the differences is not $0$.
It might be interesting though to think about how the results of the test would be interpreted in practice and what the reason is for doing this test in the first place. Perhaps if you had strong reason to believe you would have a difference in one direction you might have a one sided hypothesis.
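As an illustration only (the numbers below are made up, not real blood pressures), such a paired t-test could be run like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
existing = rng.normal(120, 10, size=20)        # hypothetical readings, existing method
new = existing + rng.normal(0, 5, size=20)     # hypothetical readings, new method

# paired t-test of H0: the population mean of the differences is 0
print(stats.ttest_rel(existing, new))
```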
## What is an advantage of paired t-tests?
To flesh out this answer and give a bit more context: in a paired t-test we could think of each subject as having their own little control group - it could be themselves (as in this case), or maybe someone else like an identical twin. The advantage to this is that for each subject, all factors will be constant (except anything that changed between the two measurements) and we (hopefully) have less variation in our sample than if we compare different people between different groups.
## Anything to be mindful of?
One thing to be mindful of with any repeated measures design is 'order effects'. This is best illustrated by an example. I ask my participants to do a task and measure their performance. I then apply some treatment, and ask them to do the task again. It might be the case that they perform better the second time they do the task - and this might be unrelated to the treatment - it could simply be because they have now seen and practiced the task once before.
| null |
CC BY-SA 4.0
| null |
2023-04-29T16:55:12.870
|
2023-04-29T16:55:12.870
| null | null |
358991
| null |
614481
|
1
| null | null |
1
|
57
|
The Variance Inflation Factor (VIF) is defined to be:
$$VIF_p = \frac{1}{1-R_p^2}$$
where $R_p^2$ is the $R^2$ calculated when $X_p$ is the dependent variable, and all the other variables are independent.
Similarly, in Factor Analysis, the Squared Multiple Correlation (SMC) is the $R^2$ of each variable regressed on all the other variables.
In both cases there seems to be a trick where instead of calculating the actual regressions, one can use the inverse of the correlation matrix. The VIFs are the [diagonal of that inverted matrix](https://github.com/statsmodels/statsmodels/issues/2376#issuecomment-97267871), and the SMCs are $1-1/diag$.
How can one show this?
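For concreteness, here is a quick numerical check (simulated data, my own variable names) showing that the diagonal of the inverse correlation matrix does match $1/(1-R_p^2)$ computed from the explicit regressions, which is the identity I would like to prove:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))   # correlated columns

R = np.corrcoef(X, rowvar=False)
vif_inverse = np.diag(np.linalg.inv(R))                  # the claimed shortcut

Z = (X - X.mean(0)) / X.std(0)                           # standardized columns
for j in range(p):
    y, A = Z[:, j], np.delete(Z, j, axis=1)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    print(round(1 / (1 - r2), 6), round(vif_inverse[j], 6))   # the two columns agree
```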
|
VIF, SMC and the inverse of the correlation matrix
|
CC BY-SA 4.0
| null |
2023-04-29T16:56:36.267
|
2023-04-30T21:20:53.603
|
2023-04-30T21:20:53.603
|
117705
|
117705
|
[
"correlation",
"factor-analysis",
"variance-inflation-factor"
] |
614483
|
1
| null | null |
0
|
17
|
I have a paired data with n=50. I am using a paired t-test to test the difference in quiz scores before and after an interval. My data for quiz scores after the interval is showing as not normally distributed. Shapiro-Wilk normality test, p-value = 5.833e-06. However, I need a normal distribution for the paired t-test.
I have tried to square transform the data but it is still not normally distributed, Shapiro-Wilk normality test showing p-value = 0.0003286.
I have tried to subtract each score from the max score and +1, creating positively skewed data, and then performing a log transformation, but it is still not normally distributed, Shapiro-Wilk normality test showing p-value = 0.01641.
I have read some sources online claiming that for some tests which assume normality, such as the paired t-test, it is still okay to proceed with non-normal data because the assumption relates to the residuals (here, the paired differences) and not the raw data. I do not understand this on a deep enough level to make an educated decision.
I am looking for some advice on whether I should go ahead with the paired t-test regardless of the non-normal data, or perform a non-parametric test such as Wilcoxon instead?
|
Paired t-test of non-normal data - help please
|
CC BY-SA 4.0
| null |
2023-04-29T17:25:41.093
|
2023-04-29T17:26:25.150
|
2023-04-29T17:26:25.150
|
386861
|
386861
|
[
"normal-distribution",
"t-test",
"data-transformation",
"wilcoxon-mann-whitney-test",
"logarithm"
] |
614484
|
2
| null |
76517
|
0
| null |
As the comments suggest, one might use the (Welch) t-test.
[Welch](https://en.wikipedia.org/wiki/Welch%27s_t-test)'s t-test is recommended when we don't have the assumption of equal variance.
In Python you could do this as follows:
```
from scipy import stats
Permafresh = [29.9, 30.7, 30.0, 29.5, 27.6]
Hylite = [28.8, 23.9, 27.0, 22.1, 24.2]
print(stats.ttest_ind(Permafresh,Hylite, equal_var=False, alternative="two-sided"))
```
For the second part of the question, it may be interesting to consider the point raised in the comments: if the point is not a data entry error, what reasons justify removing it as an 'outlier'? In a real analysis you would consider this.
For 'fun' (boredom), I plotted the boxplot.
```
import matplotlib.pyplot as plt
plt.boxplot([Permafresh,Hylite])
plt.show()
```
[](https://i.stack.imgur.com/Wy1HL.png)
In the case above, using matplotlib to plot the box plot, outliers are considered as any points beyond the whiskers. That is, $Q3 + 1.5 IQR$. Of course, what is considered an outlier is often subjective and data specific. See the answer [here](https://stackoverflow.com/questions/51622490/are-points-beyond-box-plots-whiskers-outliers).
---
For the t-test, you might like to think about the normality assumption and whether it (roughly) holds. Perhaps the full wording of the question mentions something about this though, for brevity in the answer I will assume it does.
| null |
CC BY-SA 4.0
| null |
2023-04-29T17:45:56.060
|
2023-04-29T17:45:56.060
| null | null |
358991
| null |
614485
|
1
| null | null |
0
|
19
|
I am studying DoE and I have a problem with the coefficients of my 2^4 full factorial design.
I have three questions:
- When I calculated the coefficients using coded variables (+1, -1) and using real-valued variables (60°C, 30°C), the signs of 8 coefficients were different (example: b0 from the coded fit +5.78E+05 vs b0 from the real-valued fit -2.94E+07). I accept the different magnitudes of the coefficients, but I do not understand why they differ in sign! I do not know which type of coefficient I should use to continue my response study: coded or real?
1.2) Why are the signs different?
- I tested both models with a "retro-calculation" of my responses y. The y obtained from the coded model agrees, but not the one from the real-valued model!
Example:
experimental results y is 123,1230000....;
coded y 123,12300000....;
real y 123,1230000x.
The last differs a little bit from the others. Why?
I checked all the interactions between variables (ab, ac, ..., abcd), as well as the inverse matrix and the matrix product; of course, maybe I am wrong about something.
|
Design of Experiment problem: signs and "retro calculation" problems
|
CC BY-SA 4.0
| null |
2023-04-29T18:20:51.193
|
2023-04-29T18:39:41.480
|
2023-04-29T18:39:41.480
|
22311
|
386864
|
[
"experiment-design"
] |
614486
|
1
| null | null |
0
|
52
|
A few weeks ago, I had an interview for a data science job. Of all the questions they asked me, I was unable to solve the following one. I couldn't even attempt it because I didn't know anything about it. I have researched, but I can't find any book, paper, blog, etc. that shows the question.
So here's what I remember from the question:
The following table has the "feature contribution" of feature i with observation j. Additionally, each feature is divided into different categories. Category 1 has features 1, 2, and 3, Category 2 has features 4 and 5, and Category 3 has only feature 6.
|Observation |F1/C1 |F2/C1 |F3/C1 |F4/C2 |F5/C2 |F6/C3 |
|-----------|-----|------|-----|-----|-----|-----|
|1 |-4 |4 |1 |2 |-2 |0 |
|2 |2 |1 |-1 |0 |4 |2 |
|3 |1 |4 |5 |3 |3 |1 |
|4 |0 |2 |1 |1 |4 |-3 |
|5 |-3 |-2 |0 |-2 |2 |1 |
If I recall correctly, they asked me to obtain the feature importance of each category. So the expected result was three numbers, one for each category.
During the interview, I attempted to calculate the means and sums and then means again, but the answer was never correct. Do any of you have any references on how to solve this problem?
|
Feature contribution interview question I can't answer
|
CC BY-SA 4.0
| null |
2023-04-29T18:21:10.040
|
2023-04-29T19:28:04.047
| null | null |
289728
|
[
"interaction",
"feature-engineering",
"importance"
] |
614487
|
2
| null |
614110
|
3
| null |
I think you've identified the most likely culprit of misleading conclusions from looking at bivariate correlations: [confounding](/questions/tagged/confounding)
However, performing a regression would only adjust for known and measured confounders in the data. If a variable is omitted and that variable is correlated with drop out and faculty, then the analysis remains biased.
| null |
CC BY-SA 4.0
| null |
2023-04-29T18:40:44.887
|
2023-04-29T18:59:17.867
|
2023-04-29T18:59:17.867
|
22311
|
111259
| null |
614489
|
1
| null | null |
0
|
14
|
Hi all – I’m tasked with a problem to understand the productivity rates of two groups; let’s call them A and B. The ask is to understand whether the difference in productivity rates between the two groups is significant and, if so, what factors contribute to that difference.
I’m thinking of conducting a regression analysis predicting productivity rate (Y), with group (A/B) as a categorical variable alongside other explanatory variables.
Are there any pitfalls here? Also, is there a better approach to do this?
Business context: Hypothesis is that group A is doing better than B. The ask is to understand why and if we can get B up to speed with A
|
How to Identify differences and factors that contribute to difference between two groups
|
CC BY-SA 4.0
| null |
2023-04-29T18:55:32.280
|
2023-04-29T19:02:25.313
|
2023-04-29T19:02:25.313
|
386866
|
386866
|
[
"regression",
"machine-learning",
"hypothesis-testing",
"mathematical-statistics"
] |
614491
|
1
| null | null |
0
|
19
|
I have a numeric integer feature (e.g., age) with a substantial number of invalid values (NA, zeroes, ...), though not enough to discard the variable. I can't easily impute them with estimated values.
What would be a way to manage them? Set the values to an 'impossible' value like 999999? Are some algorithms (Random Forest, Gradient Boosting, SVM, ...) better than others at dealing with them?
In the worst case, the only solution I can imagine is to create a model with the rest of the variables to 'predict' the missing one, but I think that, in the particular case I'm studying, it will not be accurate.
|
NA values in numeric values
|
CC BY-SA 4.0
| null |
2023-04-29T19:12:41.657
|
2023-04-29T19:12:41.657
| null | null |
381118
|
[
"r",
"missing-data"
] |
614493
|
2
| null |
614471
|
1
| null |
A few things I can think of here. If you're just trying to predict sales for a specific store, you can sum up each store's item sales and predict total sales. That's one way to do it.
If you're trying to predict specific item sales, then you could group monthly sales by store and item on a monthly basis.
In either case, you can add time series features using a package like [tsfresh](https://tsfresh.readthedocs.io/en/latest/). This gives you a bunch of time series features for each store or store/item combo. Then you use dates for three months out or total of next three month sales as your target variable and train.
This is also the classic top-down vs bottom-up question in time series forecasting: predict in aggregate and then disaggregate to each item, or predict each item and then aggregate. Here are a few links to read up on those two strategies. Hope this helps.
[https://towardsdatascience.com/introduction-to-hierarchical-time-series-forecasting-part-i-88a116f2e2](https://towardsdatascience.com/introduction-to-hierarchical-time-series-forecasting-part-i-88a116f2e2)
[https://quickbooks.intuit.com/r/growing-a-business/top-down-vs-bottom-up-which-financial-forecasting-model-works-for-you/](https://quickbooks.intuit.com/r/growing-a-business/top-down-vs-bottom-up-which-financial-forecasting-model-works-for-you/)
| null |
CC BY-SA 4.0
| null |
2023-04-29T19:39:39.210
|
2023-04-29T19:39:39.210
| null | null |
385104
| null |
614494
|
1
| null | null |
0
|
25
|
Given the model $y = f(x) + \epsilon, f(x) = Wx$, I want to find an estimate of $Var(Y)$. Note here I don't account for the randomness in the input $x$, but rather I consider it a deterministic value.
- Since $\epsilon\sim \mathcal{N}(0,\sigma)$ then $Y|X \sim N(f(x),\sigma)$. First, we know that $\mathbb{E}[Y|X] = f(X)$, then $\mathbb{E}[Y] = \mathbb{E}_X\mathbb{E}_Y[Y|X] = \mathbb{E}[f(X)] = f(x)$.
- Finding $\mathbb{E}[Y^2]$: $\mathbb{E}[Y^2] = \mathbb{E}_X[\mathbb{E}[Y^2|X]]$, note that $y^2 = f^2(x) + 2f(x)\epsilon + \epsilon^2$, which means $Y^2|X \sim \mathcal{N}(f^2(X),\sigma (\sigma + 4f^2(X)))$. That is, $\mathbb{E}[Y^2] = \mathbb{E}[f^2(X)] = f^2(x)$
- $\mathbb{E}[(Y - \mathbb{E}[Y])^2] = \mathbb{E}[Y^2] - \mathbb{E}[Y]^2 =f(x)^2 - f(X)^2 $ = 0.
Obviously, the result is wrong, but I cannot capture the error that I made.
Another attempt is to use the law of total variance:
$Var(Y) = \mathbb{E}[Var(Y|X)] + Var(\mathbb{E}(Y|X)) = \mathbb{E}[\sigma] + Var(f(x))$. Note that if we consider x as a random variable, then I guess this refers to the error of the input data (The variance of the input distribution) (please correct me if I am wrong), but since we consider it as a deterministic value, we have $Var(Y) = \sigma$.
I am pretty sure I have made plenty of mistakes that I am not aware of. Can you please help me spot the mistakes I have made?
|
The variance of the predicted variable in a linear regression problem
|
CC BY-SA 4.0
| null |
2023-04-29T20:22:03.307
|
2023-04-29T20:22:03.307
| null | null |
296047
|
[
"linear"
] |
614495
|
1
| null | null |
1
|
47
|
I want to check if my data's distribution is normal. I thought of visual and formal tests.
One thing I'm not sure about: my observations are respiratory measurements of patients. Each patient measures himself for 5 minutes (inhale, exhale), so the observations are not independent. Is it a problem to perform a regular normality test (as mentioned earlier)?
If so, what would be the solution?
---
Thanks all for the answers. I want to check normality because I want to present a summary value for every measurement (which includes around 150 breaths) on a plot over time, so I'm trying to decide whether the average would be a good fit or maybe the median.
|
Normality test for non-independent observations
|
CC BY-SA 4.0
| null |
2023-04-29T20:43:49.037
|
2023-05-01T12:16:46.633
|
2023-05-01T12:16:46.633
|
17230
|
386875
|
[
"hypothesis-testing",
"normality-assumption",
"non-independent"
] |
614498
|
1
| null | null |
0
|
8
|
Suppose I have a Gaussian random variable X with zero mean and unit variance, and N samples of the variable. On each sample there is also noise W, a Gaussian random variable with zero mean and unit variance. The result is Y = X + W, so the y vector consists of N observations y1 = x + w1, y2 = x + w2, ..., yn = x + wn. I have to plot (scatterplot) the estimate of X for each sample by applying a simple LMMSE, estimating the covariance matrices myself instead of using cov(). The estimate of X equals the sum of all samples, each multiplied by a weight.
\begin{align}
\ x_{estimation} = \frac{cov(x,y_1)}{\sigma_{y_{1}}^2}y_1 + \frac{cov(x,y_2)}{\sigma_{y_{2}}^2}y_2+...+\frac{cov(x,y_n)}{\sigma_{y_{n}}^2}y_n\\
\end{align}
How do I estimate each covariance, and what does it mean to plot the estimate for each sample?
|
Scalar LMMSE based on N Observations
|
CC BY-SA 4.0
| null |
2023-04-29T22:25:53.513
|
2023-04-29T22:25:53.513
| null | null |
386877
|
[
"normal-distribution",
"estimation",
"covariance"
] |
614499
|
2
| null |
561940
|
1
| null |
It seems you are using your fit outside of the expected range. You don't want to extrapolate. Ever. Period. End of story.
Anything extrapolated can not be trusted. The extrapolation results will depend on the coefficients you get when you fit and the functional form of the fit (your model). If your extrapolated data do not behave this way, you will get nonsense (or something unexplainable-- as you have seen)
Please see:
[https://www.sciencedirect.com/science/article/pii/S2772415821000110](https://www.sciencedirect.com/science/article/pii/S2772415821000110)
Section I shows how one can be tricked into thinking their training data cover their desired outcomes.
Section II gives an easy algorithm for people to use to quickly determine if their data point is extrapolated and how much of the possible outcome space is an extrapolation (based on training data).
Section III shows a great example of a great neural network fit, but what happens when you use it as a prediction and the inputs are out of range.
Section IV shows how much data you actually need to fit a deep neural network. I would focus on sections III and IV. Do not trust if out of range!
For those interested, the full citation:
Siegel, Adam. "A parallel algorithm for understanding design spaces and performing convex hull computations." Journal of Computational Mathematics and Data Science 2 (2022): 100021.
| null |
CC BY-SA 4.0
| null |
2023-04-29T22:26:06.153
|
2023-04-30T23:36:30.910
|
2023-04-30T23:36:30.910
|
386879
|
386879
| null |
614500
|
2
| null |
609341
|
1
| null |
For simplicity, I will generalize to $u_t$ ~ N(0,1) and $e_t$ ~ N(0,1).
Cov[$Y_t$,$Y_{t-k}$] =
E[$Y_t$ $Y_{t-k}$] =
E[($e_t$ + $u_t$ + $\theta$$u_{t-1}$) ($e_{t-k}$ + $u_{t-k}$ + $\theta$$u_{t-k-1}$)]
(the covariance equals the expectation of the product because every term has zero mean)
After expanding, you will have a total of 9 terms inside E[]. Let's call this S.
For k=0, S = E[$e_{t}^2$ +$u_{t}^2$+$\theta^2$$u_{t-1}^2$], this is obviously Variance of $Y_t$, where Var[$Y_t$]=2+$\theta^2$
For k=1, S = E[$\theta$ $u_{t-1}^2$], this is so because $e_t$ and $u_t$ are i.i.d. and are mutually independent. This gives you S=$\theta$
For k=2, S = 0, since there are no terms like $u_{t-k}^2$ or $u_{t}^2$ or $e_{t-k}^2$ or $e_{t}^2$.
The autocovariance (or autocorrelation, if you like) collapses to zero for k ≥ 2, thus $Y_t$ is an MA(1) process.
To find expression for $\phi$, use
$\phi$/(1+$\phi^2$) = $\theta$/(2+$\theta^2$)
And solve for $\phi$ algebraically
| null |
CC BY-SA 4.0
| null |
2023-04-29T23:15:05.200
|
2023-04-29T23:15:05.200
| null | null |
386880
| null |
614501
|
1
| null | null |
19
|
2290
|
When we fail to reject the null hypothesis in hypothesis testing, which of the below is the best interpretation?
- We have no evidence at our significance level $\alpha$ to reject $H_0$.
- We have insufficient evidence at our significance level $\alpha$ to reject $H_0$.
I have seen no evidence used frequently, but insufficient evidence seems much better to me. Say we get a $p$-value of $p$ in a hypothesis test. If we'd happened to have chosen an $\alpha$ greater than $p$ then we'd have evidence to reject $H_0$ at that $\alpha$, but if we'd happened to have chosen an $\alpha$ less than $p$ we'd fail to reject at that $\alpha$. In both cases, we have the same amount of evidence against $H_0$ (since we have the same $p$-value); we just used a different threshold between the two cases. So to me it makes much more sense to say we have sufficient or insufficient evidence at our $\alpha$, rather than no evidence, since our interpretations are $\alpha$ dependent and not entirely $p$-value dependent.
|
Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?
|
CC BY-SA 4.0
| null |
2023-04-29T23:52:14.277
|
2023-05-02T20:38:12.033
|
2023-04-30T06:20:20.620
|
365319
|
365319
|
[
"hypothesis-testing",
"interpretation",
"biostatistics"
] |
614502
|
1
| null | null |
0
|
24
|
In many places it's claimed that in order for maximum likelihood to be equivalent to least squares, one of the requirements is a linear model (in addition to homoscedastic, iid Gaussian noise). I do not understand why linearity is required. For example, here it does not appear that linearity is required:
[https://ccrma.stanford.edu/~jos/SpecAnal/Maximum_Likelihood_Sinusoidal_Parameter.html](https://ccrma.stanford.edu/%7Ejos/SpecAnal/Maximum_Likelihood_Sinusoidal_Parameter.html)
Perhaps I am misunderstanding what is meant by linearity? I would think that linearity means the estimated parameters are linearly related to the measured variable (x(n)), but perhaps this is incorrect? Any help would be greatly appreciated!
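For what it's worth, here is the short derivation as I understand it; nothing in it seems to use linearity of the model in the parameters. With observations $x(n) = f(n;\theta) + e(n)$ and $e(n)$ i.i.d. zero-mean Gaussian with variance $\sigma^2$,
$$\log L(\theta) = -\frac{N}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=1}^{N}\bigl(x(n)-f(n;\theta)\bigr)^2,$$
so maximizing the likelihood over $\theta$ is the same as minimizing the sum of squared residuals whether or not $f$ is linear in $\theta$; linearity seems to matter only for the closed-form solution and some exact finite-sample results. Is that reading correct, or am I missing something?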
|
Is linearity required for MLE equivalence to least squares with Gaussian noise?
|
CC BY-SA 4.0
| null |
2023-04-29T23:59:09.697
|
2023-04-29T23:59:09.697
| null | null |
60403
|
[
"maximum-likelihood",
"least-squares"
] |
614503
|
1
| null | null |
1
|
11
|
I'm interested in applying active learning to a computer vision bounding box prediction project, where I don't have a large corpus of unlabeled images available, and instead have to take all pictures myself with a camera.
Usually the active learning / uncertainty sampling approach would be to take a bunch of unlabeled data points, run model inference on them, and select the items that the model is least certain about. However, in my case I don't have access to any unlabeled images unless I take them myself with a camera, which is costly (time consuming).
What are some good approaches to take here? I was thinking I could first roughly map out a feature space of images (e.g. background setting, number of objects, distance from camera, lighting conditions) and collect an initial random dataset that I label and train a model on. I could then just look at what images the model incorrectly classifies, and simply try to take more similar images. Is there anything wrong with this approach?
I'm hesitant because normally active learning relies on the uncertainty scores, but in my case getting unlabeled images itself comes at a high cost, so I don't have the luxury to get model predictions for a large dataset and run inference on them.
I'm also concerned that just generating more and more of the images that the model has a hard time with will eventually get me "stuck" because some configurations are just always going to be inherently hard for the model, no matter how much data I generate. I can see a scenario where my approach would tell me to keep collecting darker and darker images, until it's just a black image with no visibility, because those will always be the images that give me 0 accuracy.
|
Active learning for Computer Vision where I have to generate my own images
|
CC BY-SA 4.0
| null |
2023-04-30T01:23:03.993
|
2023-04-30T01:23:03.993
| null | null |
334501
|
[
"machine-learning",
"inference",
"modeling",
"computer-vision",
"active-learning"
] |
614504
|
1
| null | null |
1
|
9
|
I am trying to use sklearn.cluster.OPTICS to identify outliers, but found an issue:
I use 2 examples with exactly the same data but different orders. They give different results:
- 1st example

        from sklearn.cluster import OPTICS
        import pandas as pd
        import numpy as np

        X = np.array([[1], [2], [3], [1], [8], [8], [7],
                      [100]])
        clust = OPTICS(min_samples=3, metric='euclidean').fit(X)
        clust.labels_

  output: array([0, 0, 0, 0, 1, 1, 1, 1])
- 2nd example

        from sklearn.cluster import OPTICS
        import pandas as pd
        import numpy as np

        X = np.array([[1], [2], [3], [8], [8], [7],
                      [100], [1]])
        clust = OPTICS(min_samples=3, metric='euclidean').fit(X)
        clust.labels_

  output: array([ 0, 0, 0, 1, 1, 1, -1, 0])
We can see X has the same data but different orders. The 2nd output is supposed to be correct as [100] should be an outlier. But oddly, if we change the order of the data, the model gave wrong results.
Can anyone help?
thanks
Ya
|
shuffling data change OPTICS outlier results
|
CC BY-SA 4.0
| null |
2023-04-30T01:33:34.823
|
2023-04-30T01:33:34.823
| null | null |
386883
|
[
"clustering",
"scikit-learn",
"outliers"
] |
614507
|
2
| null |
614501
|
14
| null |
In my experience, insufficient evidence is the least ambiguous and most often used way to describe the inability to reject $H_0$. The reasoning in my mind being that in statistics, we hardly ever deal with absolutes. That said, this is more an interpretation of language. We can think of a test that fails to reject $H_0$ as having no evidence at its current state (given the current data, specific test, and set thresholds). That said, the problem with this is that at first glance (to someone not too familiar with hypothesis testing, for instance) it glosses over the fact that our test is only as precise or correct as our data/test/threshold allows it to be.
That is why I agree with you that "insufficient" is a better way of communicating a failure to reject. That said, this may be a difference in language between different fields.
One thing to note: I do feel that your reasoning of switching $\alpha$ in regards to evidence is not entirely correct. A significance level is set before the test occurs and stays set, otherwise the test's conclusions become muddled. One way to gain more evidence is to find more data related to what is being tested.
| null |
CC BY-SA 4.0
| null |
2023-04-30T02:44:14.153
|
2023-04-30T02:52:22.543
|
2023-04-30T02:52:22.543
|
319471
|
319471
| null |
614509
|
1
| null | null |
0
|
24
|
I fit a model having 12 parameters on different samples by MLE. For each sample, I retrieve the standard error of each parameter with the hessian inverse.
I’m writing a thesis and I’m wondering: how many decimal places should I keep when I report the estimates of these parameters? Intuitively, I thought that I could use the standard error of each parameter and round the estimate to the decimal place of that standard error, plus 1. For example, if an estimate is 3.456 and the standard error is 0.1, then I round to 2 decimal places: 3.46.
However, I have found no one using this rule, so I’m wondering if it’s a good / common practice in research papers. If not, why? Is there any standard rule for choosing how many decimal places one should keep for a parameter estimate?
|
Number of decimal places reported of a parameter estimate depending on standard error : common practice?
|
CC BY-SA 4.0
| null |
2023-04-30T03:49:11.743
|
2023-04-30T15:07:37.233
| null | null |
372184
|
[
"standard-error",
"rounding"
] |
614510
|
1
| null | null |
1
|
28
|
I have 2 object types, A and B
For type A, I observe delivery to demographic groups x1, x2, x3, x4.... xn.
For type B, I observe delivery to the same demographic groups x1, x2, x3, x4.... xn.
x1(a) + x2(a) + x3(a) + .. xn(a) = 100, where a is a sample of type A.
x1(b) + x2(b) + x3(b) + .. xn(b) = 100 where b is a sample of type B.
Each delivery, xi(a) or xi(b) is a percentage value between 0 - 100.
The raw counts are not available.
I have 5000 samples for delivery of type A
I have 13000 samples for delivery of type B
I want to compare the delivery to the same audience group for different object types, i.e compare the distribution of A's delivery to x1 and B's delivery to x1, i.e 5000 samples of x1 from A with 13000 samples of x1 of B. Is such a comparison statistically reasonable? Can I use a 2 sample Z test for proportions here?
I want to compare the delivery to different audience groups for the same object type, i.e compare the distribution of A's delivery to x1 and A's delivery to x2, i.e 5000 samples of x1 with 5000 samples of x2 in A. Is such a comparison statistically reasonable? Can I use a 2 sample Z test for proportions here?
Are there any other tests that you'd think are reasonable given this information and that I am dealing with proportions and not raw counts? Suggestions of both parametric and non-parametric tests are appreciated.
|
Comparing groups based on data containing proportions without raw counts
|
CC BY-SA 4.0
| null |
2023-04-30T03:57:34.200
|
2023-05-02T22:04:38.910
|
2023-05-02T22:04:38.910
|
386889
|
386889
|
[
"statistical-significance",
"t-test",
"proportion",
"z-test"
] |
614511
|
1
| null | null |
1
|
38
|
We have a time series gridded panel dataset (spatio-temporal). The dataset is in 3D, where each (x,y,t) coordinate has a numeric value (such as the sea temperature at that location and at that specific point in time). So we can think of it as a matrix with a temporal component. The dataset is similar to this but with just one channel:
[](https://i.stack.imgur.com/tP1Lz.png)
I am trying to predict/forecast the nth time step values for the whole region (i.e., all x,y coordinates in the dataset) given the values for the n-1 time steps and the uncertainty.
Can you all suggest any model/architecture/approach for the same? I was initially thinking of the [Log-Gaussian Cox Process](https://www.jstor.org/stable/4616515) but it seems it's mostly applicable to point processes, while my dataset is gridded with each grid having a numeric value.
|
Uncertainty in spatio-temporal forecasting
|
CC BY-SA 4.0
| null |
2023-04-30T04:26:24.033
|
2023-04-30T11:21:31.170
|
2023-04-30T09:47:02.850
|
351811
|
351811
|
[
"regression",
"machine-learning",
"time-series",
"stochastic-processes",
"gaussian-process"
] |
614512
|
2
| null |
5845
|
0
| null |
Aside from the excellent and cleverly written Good's book mentioned in the response above, you might find Basso, Pesarin, Salmaso, and Solari "Permutation Tests for Stochastic Ordering and ANOVA Theory and Applications with R", useful.
To be honest, I haven't read it all the way through, but it employs some new models based on likelihood and sufficient statistics (if that makes sense in a nonparametric context). It is simple to read (for someone who knows statistics), but it contains all of the theoretical details as well as some R.
| null |
CC BY-SA 4.0
| null |
2023-04-30T05:14:16.497
|
2023-04-30T05:14:16.497
| null | null |
89490
| null |
614515
|
2
| null |
531074
|
1
| null |
You correct for testing multiple models by adopting an objective, algorithmic model selection path. Such a path may be governed by forward / backward stepwise selection as well as an objective model selection criterion, e.g.
- statistical significance of predictors,
- Akaike Information Criterion (AIC),
- Bayesian Information Criterion (BIC),
- cross-validated predictive performance.
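For concreteness, here is a minimal sketch of a forward stepwise search driven by AIC. It uses `statsmodels` on a made-up data frame; the variable names and data are purely illustrative:
```
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data: y depends on x1 and x2 only
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["x1", "x2", "x3", "x4"])
df["y"] = 1.5 * df["x1"] - 2.0 * df["x2"] + rng.normal(size=200)

def aic_of(predictors):
    # design matrix: intercept plus the chosen predictors
    X = pd.DataFrame({"const": np.ones(len(df))})
    if predictors:
        X = pd.concat([X, df[list(predictors)]], axis=1)
    return sm.OLS(df["y"], X).fit().aic

selected, remaining = [], ["x1", "x2", "x3", "x4"]
best_aic = aic_of(selected)
while remaining:
    # AIC of every candidate model that adds one more predictor
    scores = {v: aic_of(selected + [v]) for v in remaining}
    best_var = min(scores, key=scores.get)
    if scores[best_var] >= best_aic:   # no AIC improvement -> stop
        break
    best_aic = scores[best_var]
    selected.append(best_var)
    remaining.remove(best_var)

print("Selected predictors:", selected, "AIC:", round(best_aic, 2))
```
The same loop structure works with BIC or cross-validated error in place of AIC; the key point is that the selection rule is fixed in advance rather than chosen after inspecting many fitted models.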
| null |
CC BY-SA 4.0
| null |
2023-04-30T07:20:01.970
|
2023-04-30T07:20:01.970
| null | null |
191472
| null |
614517
|
1
| null | null |
0
|
20
|
I have the following scenario.
A group of people will be given a coupon. Create a model to estimate how long it will take them (in days, i.e. a discrete count, not continuous) to use the coupon, based on a variety of personal characteristics.
I was thinking I could use Poisson regression, because we have the count of how many days from receiving the coupon to using it, but there is censoring because some people do not ever use the coupon.
I have thought perhaps I could use the Cox proportional hazards model, but from what I am reading you need a baseline group and treatment groups. Is that correct?
In my scenario there is no baseline group to compare to. We are not comparing people who didn't receive the coupon to people who did. We are only looking at people who did receive the coupon.
Can I still use the Cox proportional hazard model if there is no baseline group, if not can someone make a recommendation please. Thank you for your help!
|
Can you use Cox proportional hazards model when there is no baseline group?
|
CC BY-SA 4.0
| null |
2023-04-30T09:07:57.457
|
2023-04-30T22:08:15.960
|
2023-04-30T16:58:35.093
|
210907
|
210907
|
[
"multiple-regression",
"survival",
"econometrics",
"poisson-regression"
] |
614518
|
2
| null |
574283
|
1
| null |
You can change the null hypothesis, that you have a fair coin with $p=0.5$ for flipping heads, into an equivalent null hypothesis that you have a fair coin with $p=0.5^5=0.03125$ for flipping five heads in a series of five flips.
If you are making 3 of such series then the distribution under the null hypothesis is
[](https://i.stack.imgur.com/bYeIr.png)
A problem here is to define what observation is considered as a rare event. With the likelihood ratio test one would consider the observations with the lowest [likelihood ratio](https://en.m.wikipedia.org/wiki/Likelihood-ratio_test), which are
$$LH = \begin{cases}
\frac{0.90915}{1} & \approx & 0.90915 & \quad \text{if $X=0$}\\
\frac{0.08798}{0.4444} & \approx & 0.19769 & \quad \text{if $X=1$}\\
\frac{0.00284}{0.4444} & \approx & 0.00639& \quad \text{if $X=2$}\\
\frac{0.00003}{1} & \approx & 0.00003 & \quad \text{if $X=3$}
\end{cases}$$
these denominators are the probabilities if the alternative hypothesis equals the maximum likelihood estimate.
The p-value is the probability of the observed likelihood ratio or higher and is the sum of the probabilities for $X=1, X=2,X=3$ and as you calculated $\text{p-value} \approx 0.09085$.
---
Note, the power of your test can be very bad for specific values of the alternative hypothesis. One could say that your observation $X=1$ is rare given the null hypothesis; however, it will be much more rare when the null hypothesis is wrong and $p<0.00335$. So the power of your experiment is very bad, and when $p<0.00335$ you will be unlikely to reject the null hypothesis.
A likelihood function gives a better view of the result:
[](https://i.stack.imgur.com/f4p4Y.png)
Here you see that the fair coin at $p=0.03125$ is not very well supported. Although the same would be true for other values of $p$, the experiment is not very strong.
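For reference, the probabilities and the p-value above can be checked with a few lines of Python (a quick sketch using `scipy.stats.binom`):
```
from scipy.stats import binom

p0 = 0.5 ** 5        # probability of five heads in a series of five flips
n = 3                # three series

null_probs = [binom.pmf(k, n, p0) for k in range(n + 1)]   # compare with the table above
mle_probs  = [binom.pmf(k, n, k / n) for k in range(n + 1)]
lik_ratios = [null_probs[k] / mle_probs[k] for k in range(n + 1)]

# p-value for the observation X = 1: probability of X >= 1 under the null
p_value = 1 - binom.cdf(0, n, p0)

print([round(p, 5) for p in null_probs])
print([round(r, 5) for r in lik_ratios])
print(round(p_value, 5))
```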
| null |
CC BY-SA 4.0
| null |
2023-04-30T09:20:54.407
|
2023-04-30T09:33:06.960
|
2023-04-30T09:33:06.960
|
164061
|
164061
| null |
614519
|
1
| null | null |
0
|
7
|
I have a categorical variable with A LOT of categories (about 2000) in a set with 100000 observations. A few categories are well represented (with maybe 1000 to 2000 observations each), but the rest have few observations (fewer than 100). If I group the rare ones, in the end I have a VERY imbalanced variable: a huge category called 'Others' with 70000 observations, and 20 categories with counts between 100 and 2000.
Could this be a problem? What would be a good way to handle it? Any advice for working with such variables in algorithms like Random Forest, Gradient Boosting or SVM?
Thanks!
|
Disbalanced categorical variable after grouping
|
CC BY-SA 4.0
| null |
2023-04-30T09:42:09.623
|
2023-04-30T09:42:09.623
| null | null |
381118
|
[
"categorical-data",
"many-categories"
] |
614520
|
2
| null |
571009
|
0
| null |
I think what you are asking is, if you have a a set of four numbers, you want to generate some data which has (approximately) these four numbers for the mean, variance, skew and kurtosis.
A Python solution is given in [this](https://stackoverflow.com/questions/60061686/moment-matching-simulating-a-discrete-distribution-with-specified-moments-mea) great answer on stack overflow.
The solution linked to above may be helpful in achieving this.
For example if the set of four numbers we are trying to achieve are `(0,1,-1,3)` then it works quite well.
I ran the code linked above, generated a sample, and for this sample achieved `[-0.012865049352413248 0.9854758243633666 -1.0212929276152714 2.8670673702318163]`
respectively for mean, variance, skew and kurtosis - so quite close to `(0,1,-1,3)`.
It is worth noting that this won't necessarily work with any set of four numbers, chosen arbitrarily.
For example running it with `(50,4,-1,12)` I got `[49.827147444660625 5.961898303494179 -0.29452022237194114 3.1324664040689862]`. Notice that the PDF (in blue) is taking values below 0 (and notice the CDF in orange is decreasing).
[](https://i.stack.imgur.com/ZzEhs.png)
The following note is from the statsmodel [documentation](https://www.statsmodels.org/dev/generated/statsmodels.sandbox.distributions.extras.pdf_mvsk.html) for `pdf_mvsk`
>
In the Gram-Charlier distribution it is possible that the density
becomes negative. This is the case when the deviation from the normal
distribution is too large.
Code: I took my code directly from the answer above. To produce the plots I just plotted `(x,y)` and `(x,yy)`.
---
One suggestion in the comments was to look at the Pearson distributions. You may have some success adapting the following.
We can specify the mean, skew and standard deviation(hence variance). Suppose we want to achieve `[50,4,-1]` for the mean, variance and skew.
```
import scipy.stats as ss
import numpy as np
#Note scale is standard deviation in scipy stats
sample = ss.pearson3.rvs(loc=50, scale=2, skew=-1, size=1000)
print(np.mean(sample),
np.var(sample),
ss.skew(sample),
ss.kurtosis(sample))
```
Output: `49.96935451394343 4.268526621056802 -1.005394332195325 1.3883750132508892`
Close to what we were hoping for.
If you use R, I think more options are available to you in the [PearsonDS](https://cran.r-project.org/web/packages/PearsonDS/index.html) package. If you must use Python, it is possible to use R within Python to generate your data samples, using the [rpy2 module](https://rpy2.github.io/).
| null |
CC BY-SA 4.0
| null |
2023-04-30T11:00:42.703
|
2023-04-30T11:00:42.703
| null | null |
358991
| null |
614521
|
2
| null |
614511
|
1
| null |
Markov random fields address "neighborhoods", in your case neighbors in time and space. In computer vision, four-neighbor (N-E-S-W) and eight-neighbor models are prevalent (N-NE-E-SE-S-SW-W-NW).
If necessary (start first with a simple model), you could model larger neighborhoods with "filters", using techniques similar to Zhu, Wu, and Mumford. "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling." International Journal of Computer Vision 27 (1998): 107-126. I define several hundred element "neighborhoods" with exchange traded funds (ETFs) in my paper Keane "Portfolio Variance Constraints." ICPRAM. 2019.
A nice framework for n-th time step values is West and Harrison. Bayesian forecasting and dynamic models. Springer Science & Business Media, 2006. The framework uses discrete timesteps as you do, time-of-day or time-of-year cyclical components are addressed, as are matrix variate models.
| null |
CC BY-SA 4.0
| null |
2023-04-30T11:21:31.170
|
2023-04-30T11:21:31.170
| null | null |
43149
| null |
614523
|
1
| null | null |
1
|
9
|
Trying to figure out the most appropriate family for a data type I'm not used to.
For each measurement, I have a bunch of attempts, with the outcomes falling into bins: failure, small, medium or large.
Each attempt must result in one of the above (categories are exclusive and exhaustive). In some ways it seems a simple multinomial problem.
However, details of the system make me worry about treating the categories as independent. The data result from animal foraging attempts and indicate prey size:
- prey size distribution declines with size: small prey are commoner than medium, than large.
- catching large prey is harder than catching small ones.
Therefore, intuitively it seems that failure < small < medium < large.
Does it make more sense to treat each attempt as some kind of ordinal regression problem, e.g. adjacent categories? I'm struggling to wrap my head around how each category's probability is related to the others'.
Thanks very much in advance!
(Will be coding this as part of a SEM in brms in R, if anyone has specific suggestions).
|
Counts in ordered categories as response
|
CC BY-SA 4.0
| null |
2023-04-30T12:43:11.860
|
2023-04-30T12:43:11.860
| null | null |
386905
|
[
"regression",
"probability",
"distributions",
"categorical-data",
"ordinal-data"
] |
614524
|
2
| null |
614501
|
8
| null |
It might be helpful to distinguish between the "objective" and "subjective" parts of statistical testing. You assume a null hypothesis $H_0$, observe data, compute a statistic, and obtain a $p$-value. You might not have used the "optimal" statistic, obtained the sharpest probabilistic bounds, etc. but there is a fixed process that transforms the data into a $p$-value based on $H_0$. At this point, the $p$-value is your "evidence," and its strength is inversely proportional to its magnitude.
Now, "rejecting" the null hypothesis based on a pre-chosen value of $\alpha$ is somewhat objective, as it is based on your intuition on "how much evidence is enough evidence". Picking $\alpha$ after seeing the $p$-value is problematic because you willingly influence the outcome by varying $\alpha$, i.e. you are able to "move the goalposts".
Ultimately, I'd agree with 392781's answer, that there is "insufficient evidence," provided you have defined in advance what "sufficient evidence" would look like, in the form of picking $\alpha$. Still, it's helpful to remember that "evidence" is not a perfect word here, because it is often used to refer to discrete, objective reasoning, rather than probabilistic heuristics.
| null |
CC BY-SA 4.0
| null |
2023-04-30T13:07:01.553
|
2023-04-30T13:07:01.553
| null | null |
366672
| null |
614527
|
2
| null |
614501
|
14
| null |
The sentence "... evidence to reject $H_0$" does not make much sense to me because you either reject $H_0$ when $p\leq\alpha$ or you don't. It's your decision to reject or not reject. "Rejection" is not an inherent propery of the $p$-value because it requires an additional criterion set by the researcher.
What makes more sense is to talk about the evidence against the null hypothesis provided by the $p$-value. If we adopt the view$^{[1,2]}$ that the $p$-value is a continuous measure of compatibility between our data and the model (including the null hypothesis), it makes sense to talk about various degrees of evidence against $H_0$. Personally, I like the approach of Rafi & Greenland$^{[1]}$ to transform the $p$-value into (Shannon) surprise as $s=-\log_2(p)$ (aka Shannon information). For an extensive discussion on the distinction of $p$-values for decision and $p$-values as compatibility measures, see the recent paper by Greenland$^{[2]}$. This provides an absolute scale on which to view the information that a specific $p$-value provides. If a single coin toss provides $1$ bit of information, a $p$-value of, say, $0.05$ provides $s=-\log_2(0.05)=4.32$ bits of information against the null hypothesis. In other words: A $p$-value of $0.05$ is roughly as surprising as seeing all heads in four tosses of a fair coin.
This approach makes it very clear that the evidence provided by a $p$-value is nonlinear. For example: a $p$-value of $0.10$ provides $3.32$ bits of information whereas a $p$-value of $0.15$ provides $2.74$ bits. The first $p$-value thus provides roughly $21$% more evidence against $H_0$ than the second. In a second example, a $p$-value of $0.001$ provides roughly $132$% more evidence than a $p$-value of $0.051$, despite the absolute difference between them being the same as in the first example ($0.05$). Here is an illustration from paper $[1]$:
[](https://i.stack.imgur.com/e3izT.png)
To answer the question: As long as the $p$-value is smaller than $1$, it provides some evidence against the null hypothesis because it shows some incompatibility between the data and the model. To say "no evidence" would therefore not be entirely accurate.
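A small sketch of the $s$-value computation described above (plain Python, just to reproduce the numbers quoted):
```
import math

def s_value(p):
    """Shannon surprisal (bits) of a p-value: s = -log2(p)."""
    return -math.log2(p)

for p in [0.05, 0.10, 0.15, 0.051, 0.001]:
    print(f"p = {p:<6} -> s = {s_value(p):.2f} bits")

# e.g. s(0.10) / s(0.15) is about 1.21, i.e. roughly 21% more evidence against H0
```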
References
$[1]$: Rafi, Z., Greenland, S. Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise. BMC Med Res Methodol 20, 244 (2020). [https://doi.org/10.1186/s12874-020-01105-9](https://doi.org/10.1186/s12874-020-01105-9)
$[2]$: Greenland, S. (2023). Divergence versus decision P-values: A distinction worth making in theory and keeping in practice: Or, how divergence P-values measure evidence even when decision P-values do not. Scand J Statist, 50( 1), 54– 88. [https://doi.org/10.1111/sjos.12625](https://doi.org/10.1111/sjos.12625)
| null |
CC BY-SA 4.0
| null |
2023-04-30T13:53:32.663
|
2023-05-02T19:24:59.463
|
2023-05-02T19:24:59.463
|
21054
|
21054
| null |
614528
|
2
| null |
614355
|
1
| null |
One option would be to scale and then shift the scores for section 2 such that the standard deviation and mean are the same as for section 1.
Edit:
So for the numbers in the question, you would multiply each section 2 score by $17.51/26.24$. This would make the standard deviations equal and would make the new section 2 mean $49.58$. So you would then add $36.08$ to each new section 2 score to make the mean $86.66$.
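A minimal sketch of this scale-then-shift adjustment, assuming two hypothetical score arrays `section1` and `section2` (the numbers are made up, not the ones from the question):
```
import numpy as np

# Hypothetical score arrays for the two sections
section1 = np.array([80.0, 85.0, 90.0, 95.0, 83.0])
section2 = np.array([40.0, 55.0, 60.0, 30.0, 65.0])

# Scale section 2 so its standard deviation matches section 1
scaled = section2 * (section1.std(ddof=1) / section2.std(ddof=1))
# Shift so its mean matches section 1
adjusted = scaled + (section1.mean() - scaled.mean())

print(adjusted.mean(), adjusted.std(ddof=1))   # now equal to section 1's mean and SD
print(section1.mean(), section1.std(ddof=1))
```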
| null |
CC BY-SA 4.0
| null |
2023-04-30T14:03:19.237
|
2023-05-02T12:10:53.513
|
2023-05-02T12:10:53.513
|
319175
|
319175
| null |
614529
|
2
| null |
614501
|
4
| null |
The two sentences have nearly the same meaning.
The phrase with 'insufficient' is just placing more stress on the idea that there is a gradual range of evidence, and that there is a 'boundary for the amount of evidence' that has not been passed.
The other phrase can be regarded as a shortened/abbreviated sentence saying more or less the same thing: "We have no evidence (that is sufficient)".
It has the same meaning but is just stated in a different way.
| null |
CC BY-SA 4.0
| null |
2023-04-30T14:09:33.493
|
2023-05-01T14:32:02.257
|
2023-05-01T14:32:02.257
|
164061
|
164061
| null |
614530
|
1
| null | null |
0
|
20
|
I’m running an analysis on Y coordinates of mouse clicks on a scale. These Y coordinates range from -200 to +200. I have created a new data frame on R and I have deleted data points that exceed these limits as they mean that participants didn’t click on the scale.
The scale was either presented upright or upside down (Scale 1 and Scale 2 respectively). I thus transformed all Scale 2 scores so that they are comparable to Scale 1 results.
I have since computed a generalised linear model on my Y variable with lmer and set various fixed and random effects. The R squared values show the model has substantial explanatory power and my variables of interest are significant although the part related to the fixed effects isn’t that great.
However, the problem I encounter is that when running my normality checks on my residuals these are not normal.
I have read that this is not really a problem in large samples and I do have data from 115 participants. Would it be ok to use this model and justify it by saying that my sample is large enough?
Are there any alternatives?
Thank you!!!!
[](https://i.stack.imgur.com/ucyo3.jpg)[](https://i.stack.imgur.com/FaxjN.jpg)
[](https://i.stack.imgur.com/xhCPb.jpg)
|
Absence of normality of residuals (lmer)
|
CC BY-SA 4.0
| null |
2023-04-30T14:45:10.903
|
2023-04-30T14:45:10.903
| null | null |
386904
|
[
"generalized-linear-model",
"lme4-nlme"
] |
614531
|
1
|
614597
| null |
1
|
56
|
When I'm using linear regression as an estimation method to infer the average treatment effect (between treatment and outcome), am I assuming there is a linear relation only between treatment and outcome? Or am I assuming a linear relation between treatment, outcome and confounding variable(s)?
|
Linear regression assumptions for causal Inference
|
CC BY-SA 4.0
| null |
2023-04-30T14:51:51.057
|
2023-05-01T14:32:19.157
|
2023-04-30T22:29:26.760
|
53690
|
386911
|
[
"regression",
"inference",
"causality",
"assumptions",
"treatment-effect"
] |
614532
|
1
| null | null |
1
|
35
|
Regarding recommendation systems on bipartite graphs with PyTorch Geometric, most of the tutorials I found about link prediction using GNNs suggest using negative sampling in an (I guess) random way.
So, when I'm splitting my dataset, I can include some random negative samples in my training data using the following code:
```
transform = T.RandomLinkSplit(
num_val=0.1,
num_test=0.1,
disjoint_train_ratio=0.3,
neg_sampling_ratio=2.0,
add_negative_train_samples=True,
edge_types=("user", "rates", "movie"),
rev_edge_types=("movie", "rev_rates", "user"),
)
train_data, val_data, test_data = transform(data)
```
This will introduce 2:1 negative samples in my training data.
If, instead of including negative samples in my training data, I want to include them in the validation dataset, I can use the loader with code similar to the following (changing the line of the previous code to `add_negative_train_samples=False`):
```
val_loader = LinkNeighborLoader(
data=val_data,
num_neighbors=[20, 10],
neg_sampling_ratio=2.0,
edge_label_index=(("user", "rates", "movie"), edge_label_index),
edge_label=edge_label,
batch_size=3 * 128,
shuffle=False,
)
```
My question is: How can I be explicit about my negative samples? I know that if I know the connected AND the not connected nodes in my original dataset, maybe I don't need to use a GNN to predict, but what should I do when I know some of them?
|
How to be explicit about negative sampling for Link prediction using GNN?
|
CC BY-SA 4.0
| null |
2023-04-30T14:54:50.590
|
2023-05-01T07:27:09.673
|
2023-05-01T07:27:09.673
|
103715
|
103715
|
[
"python",
"sampling",
"graph-neural-network"
] |
614533
|
1
|
614534
| null |
0
|
29
|
Imagine I have a large dataset of many variables and many observations. I would like to create a regression model to predict the values of new data.
For the sake of ease, say I find a ridge regression model with some value $\lambda$ serves my needs best.
For my model, I decided to center and scale (standardize) my data before fitting the model. I then divide my dataset into training values and testing values, the model is built on the training values. I then test my model using the testing values and I am satisfied with the outcome.
Now I observe a new observation whose true outcome I don't know, and I decide to run it through the model I made. Before I feed the values into the model, I should 'standardize' this observation, correct? I.e. shift it by the mean of the data the model was built on and then divide it by the standard deviation (not sure if this is correct; it is part of my question). Then I feed these standardized values into the model and get an output. However, I don't want the output in standardized terms; I want to be able to interpret it as it is 'naturally', so I assume I multiply it by the standard deviation and then add the mean to it.
I am unsure if this method is correct, can anyone offer some assistance with how we introduce new test values into standardized data?
|
Machine learning regression using centered and scaled data
|
CC BY-SA 4.0
| null |
2023-04-30T14:58:06.443
|
2023-04-30T15:11:28.773
| null | null |
357899
|
[
"regression",
"machine-learning",
"standardization",
"ridge-regression"
] |
614534
|
2
| null |
614533
|
2
| null |
>
Imagine I have a large dataset of many variables and many observations. I would like to create a regression model to predict the values of new data. For the sake of ease, say I find a ridge regression model with some value $\lambda$ serves my needs best.
For my model, I decided to center and scale (standardize) my data before fitting the model. I then divide my dataset into training values and testing values, the model is built on the training values. I then test my model using the testing values and I am satisfied with the outcome.
Technically, there is leakage here. The best approach would be to split, then standardize. Everything you do to the raw data is part of The Model (I write more about the distinction between a model and The Model [here](https://stats.stackexchange.com/questions/570172/bootstrap-optimism-corrected-results-interpretation/570269#570269)).
>
I should 'standardize' this observation, correct?
Yes, new predictions should be standardized using the mean and standard deviation from the training data.
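One convenient way to do this in practice is a pipeline that learns the scaling parameters on the training data only and applies them automatically to new observations. A minimal sketch, assuming a hypothetical feature matrix `X` and response `y`:
```
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                     # hypothetical predictors
y = X @ rng.normal(size=10) + rng.normal(size=500) # hypothetical response

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Split first, then standardize: the scaler's mean/SD are learned on X_train only
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_train, y_train)

# New observations are automatically standardized with the *training* mean/SD
x_new = rng.normal(size=(1, 10))
print(model.predict(x_new))
```
Because only the predictors are standardized here, the predictions are already on the original scale of the response. If you also standardize the response, you would back-transform predictions by multiplying by the training-set standard deviation of y and adding back the training-set mean.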
| null |
CC BY-SA 4.0
| null |
2023-04-30T15:11:28.773
|
2023-04-30T15:11:28.773
| null | null |
111259
| null |
614535
|
2
| null |
613194
|
1
| null |
(We're both assuming whether one machine is broken is independent of whether another is broken, which seems reasonable to me.)
The total number of machines is fixed, so the number of broken machines will follow a binomial distribution, not a Poisson.
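A quick sketch of the difference, with a made-up number of machines and failure probability (these values are assumptions for illustration only):
```
from scipy.stats import binom, poisson

n, p = 20, 0.1          # hypothetical: 20 machines, each broken with probability 0.1
lam = n * p             # a Poisson approximation would use the same mean

for k in range(6):
    print(k, round(binom.pmf(k, n, p), 4), round(poisson.pmf(k, lam), 4))
# The Poisson is only an approximation here; it even puts positive
# probability on k > 20, which is impossible with 20 machines.
```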
| null |
CC BY-SA 4.0
| null |
2023-04-30T16:04:37.213
|
2023-04-30T16:04:37.213
| null | null |
319175
| null |
614536
|
1
|
614551
| null |
0
|
76
|
An ordinary deck of cards is randomly shuffled and then the cards are exposed one at a time.
At some time before all the cards have been exposed you must say “next”, and if the next card
exposed is a spade then you win and if not then you lose.
For any strategy, show that at the moment you call “next” the conditional probability that you win is equal to the conditional probability that the last card is a spade.
Conclude from this that the probability of winning is 1/4 for all strategies.
Hint: one approach is to show that the proportion of spades remaining is a martingale.
My attempt to answer this question:
Let $X_n$ indicate whether the $n$th card is a spade and let $Z_n$ be the proportion of spades in the remaining cards after the $n$th card. Thus $E|Z_n| < \infty$. But how do I prove that $\{Z_n\}$ is a martingale by showing $\mathbb{E}[Z_n| Z_{n-1},\dots,Z_1] = Z_{n-1}$?
I don't know how to answer this question.
Note: This question is taken from the book titled "Stochastic Processes" written by Professor S. Ross(University of California)
|
How to show that this particular card game is a martingale?
|
CC BY-SA 4.0
| null |
2023-04-30T16:33:36.050
|
2023-05-01T13:59:22.507
|
2023-05-01T06:53:46.477
|
72126
|
72126
|
[
"probability",
"self-study",
"conditional-expectation",
"martingale"
] |
614539
|
1
| null | null |
0
|
19
|
This is a problem discussed here before.
We know the mean times of the runners (μ(i)) and the standard deviations (σ(i)), and we want to find the probabilities p(1), p(2) ... p(N).
Here is a solution:
[https://www.untruth.org/~josh/math/normal-min.pdf](https://www.untruth.org/%7Ejosh/math/normal-min.pdf)
This is a complicated looking integral but with either Simpson's rule or Monte Carlo can be evaluated easily.
My problem is to -perhaps- find a faster way.
So I try to take the runners two at a time and from the pairwise probabilities compute the
final probabilities.
The integral in the link assumes that the normal distribution is applicable, i.e. for each runner the times from start to finish are distributed normally about the mean.
But I'm not going to use the normal distribution. I use the Laplace distribution, which enables me to derive some analytical formulas. Although this changes the physics of the problem, it doesn't change the mathematics.
Then for two Laplace distributed variables t(i), t(j) with μ(i), σ(i) and μ(j), σ(j) respectively the following formula holds:
a) If σ(i) = σ(j) = σ
Q(i,j) = 1 - Exp(-z / σ) * (2 + (z / σ)) / 4
b) If σ(i) <> σ(j)
Q(i,j) = 1 - 0.5 * (Exp(-z / σ(i)) / σ(j) ^ 2 - Exp(-z / σ(j)) / σ(i) ^ 2) / (1 / σ(j) ^ 2 - 1 / σ(i) ^ 2)
where z = μ(j) - μ(i)
Now let's do an example with three participants:
It's a five furlong race (1000 meters) and:
μ(1) = 57.00 secs, σ(1) = 0.50 secs
μ(2) = 57.30 secs, σ(2) = 0.60 secs
μ(3) = 57.70 secs, σ(3) = 1.00 sec
Using the above formula:
Q(1,2) = 0.6311446
Q(2,3) = 0.6207114
Q(1,3) = 0.7100428
(and Q(2,1) = 1 - Q(1,2), Q(3,2) = 1 - Q(2,3), Q(3,1) = 1 - Q(1,3))
From those I mean to compute P(1), P(2) and P(3), the final probabilities of horse 1, horse 2, horse 3 to win the race.
But before I do that, I use Monte Carlo to compute P(1), P(2), P(3) and what Monte Carlo finds is the undisputed true result.
If x stands for the machine generated random number (in (0,1)) the Laplace distributed t variables are:
t(i) = μ(i) + σ(i) . f(x)
where:
f(x) = - ln(2.(1 - x)) for x >= 1/2 , ln(2.x) for x < 1/2
and the shortest t value of the three wins the trial.
I do 1000000 trials and find:
P(1) = 0.49
P(2) = 0.29
P(3) = 0.22
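For reference, the Monte Carlo described above can be written in a few lines of Python; `numpy`'s Laplace sampler is equivalent to the inverse-CDF formula f(x) given earlier (this is just a sketch of the simulation, with σ used as the Laplace scale exactly as in the formulas above):
```
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([57.00, 57.30, 57.70])
sigma = np.array([0.50, 0.60, 1.00])

n_trials = 1_000_000
# one Laplace-distributed time per runner per trial
t = rng.laplace(loc=mu, scale=sigma, size=(n_trials, 3))
winners = t.argmin(axis=1)
print(np.bincount(winners, minlength=3) / n_trials)  # estimates of P(1), P(2), P(3)
```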
Now back to the pair method.
I reckon the race of the three runners is equivalent to the following type of race:
- No 3 stays out, no 1 and no 2 run against each other and the day after tomorrow the winner of 1 v 2 runs against the 3.
or
- No 2 stays out, no 1 and no 3 run against each other and the day after tomorrow the winner of 1 v 3 runs against the 2.
or
- No 1 stays out, no 2 and no 3 run against each other and the day after tomorrow the winner of 2 v 3 runs against the 1.
As these three cases are equivalent I get:
P(1) = ( 2 . Q(1,2) . Q(1,3) + Q(2,3) . Q(1,2) + Q(3,2) . Q(1,3) ) / 3 = 0.5191164
P(2) = ( 2 . Q(2,1) . Q(2,3) + Q(1,3) . Q(2,1) + Q(3,1) . Q(2,3) ) / 3 = 0.2999295
P(3) = ( 2 . Q(3,1) . Q(3,2) + Q(1,2) . Q(3,1) + Q(2,1) . Q(3,2) ) / 3 = 0.1809541
Those add to 1 but as you can see there is a discrepancy and it's a mystery.
The question is: why?
It's something I don't understand.
For suppose it was not horses but football teams, and the Q values were known from ELO rating considerations.
So with three football teams X-Y-Z it's a three-cornered fight, but we'd have to draw them, X v. Y maybe, then the winner against Z, and the Q values give the probabilities.
It doesn't look like a different problem to find the probabilities of X-Y-Z winning the trophy.
|
Win probability of runners in a race
|
CC BY-SA 4.0
| null |
2023-04-30T18:23:40.063
|
2023-04-30T18:23:40.063
| null | null |
253133
|
[
"probability",
"distributions"
] |
614541
|
1
| null | null |
0
|
73
|
I'm trying to solve Homework 4 from professor Ryan Tibshirani's class on "Advanced Topics in Statistical Learning" [[pdf](https://www.stat.berkeley.edu/%7Eryantibs/statlearn-s23/homeworks/homework4.pdf)] at UC Berkeley. It deals with basic facts about CDFs and quantiles.
My question is regarding Exercise 1(f) where the author mentions an auxiliary randomization method which extends the well known result of random variable generation via the inverse CDF to cases where it might fail (I suppose non-continuous pdfs?)
I'll rewrite the method here below: Let $X$ be distributed according to $F$. Define,
$$
F^\star(x; v) = \lim_{y \to x^{-}} F(y) + v \cdot \Big(F(x) - \lim_{y \to x^{-}} F(y)\Big)
$$
where $y \to x^-$ means that $y$ approaches $x$ from the left.
It is possible to show that for $V \sim U(0, 1)$, independent of $X$, and for any $t$, we have
$$
\mathbb{P}(F^\star(X; V) \leq t) = t
$$
Question: Does anyone know of a reference (article? standard book?) where I can read more about this result? I admit that I'm having some trouble understanding the intuition and motivation for creating the auxiliary variable $v$ here. Any help would be greatly appreciated!
|
Auxiliary randomization and the generalized CDF inverse
|
CC BY-SA 4.0
| null |
2023-04-30T18:42:58.707
|
2023-05-01T12:36:07.683
|
2023-05-01T12:36:07.683
|
20519
|
139157
|
[
"probability",
"self-study",
"simulation",
"quantiles",
"cumulative-distribution-function"
] |
614542
|
2
| null |
614501
|
4
| null |
Unless the experiment or study result showed a parameter exactly equal to the null-hypothesis value, you do have some evidence against the null. If you have established a threshold for what p-value counts as "sufficient" evidence and the observed p-value is greater than your threshold, then you have "insufficient" evidence. The p-value is really a feature of the data.
It was Neyman and Pearson who formulated hypothesis testing as an accept-reject paradigm. Up to that point (the 1940s or so), the Fisherian formalism was to report the p-value and let it speak for itself. Fisher attacked the N-P formalism vociferously. And that didn't end the argument, because the Bayesians were yet to be heard.
I think it’s interesting that this is generating some discrepant responses: so far 6 upvotes and 7 downvotes. I thought it was a well-established principle that the null is almost never “true”.
| null |
CC BY-SA 4.0
| null |
2023-04-30T18:53:22.893
|
2023-05-02T20:38:12.033
|
2023-05-02T20:38:12.033
|
2129
|
2129
| null |
614543
|
1
|
614545
| null |
1
|
48
|
I have a regression where the dependent variable is the difference in income between towns i and j. The independent variable is a dummy variable which takes value 0 if both towns have the same ruling party and value 1 if they are ruled by different parties. There are two political parties, A and B, in all. How do I interpret the coefficient of the dummy variable? Say it is -10 and significant. On average, the income gap between the two towns is smaller when they are ruled by different parties. Is this interpretation right? But this cannot be attributed to any one party, say that income is lower when party A is ruling. I would like to use the difference in income in place of income itself to overcome an endogeneity problem. So I would like some help with interpreting the dummy variable.
|
Interpretation of coefficient of dummy variable in regression
|
CC BY-SA 4.0
| null |
2023-04-30T18:53:34.280
|
2023-04-30T22:04:25.747
|
2023-04-30T22:04:25.747
|
53580
|
294894
|
[
"econometrics",
"regression-coefficients",
"categorical-encoding"
] |
614544
|
1
| null | null |
0
|
14
|
My sample size is 8 banks over 10 years. There is heteroskedasticity but no serial autocorrelation. I use stata 13.
|
Should I use OLS robust standard errors with FGLS or PSCE?
|
CC BY-SA 4.0
| null |
2023-04-30T19:06:44.927
|
2023-04-30T19:09:25.933
|
2023-04-30T19:09:25.933
|
56940
|
386361
|
[
"multiple-regression",
"panel-data",
"autocorrelation",
"heteroscedasticity",
"robust-standard-error"
] |
614545
|
2
| null |
614543
|
1
| null |
Yes, your interpretation is mostly correct. If the coefficient of the dummy variable is -10 and significant, it suggests that, on average, the income gap between two towns is less when they are ruled by different parties compared to when they are ruled by the same party. In other words, there is a smaller income gap when towns have different ruling parties.
However, you are right in noting that this interpretation does not attribute the effect to any specific party. The current model cannot differentiate whether the income is lower when Party A or Party B is ruling. If you want to explore the effect of each party on the income gap more directly, you could include additional independent variables to capture the specific party effects.
One possible approach is to create two new dummy variables:
- A dummy variable for Town i being ruled by Party A (1 if ruled by Party A, 0 otherwise).
- A dummy variable for Town j being ruled by Party A (1 if ruled by Party A, 0 otherwise).
By including these two dummy variables in your regression, you can capture the separate effects of Party A ruling Town i and Party A ruling Town j on the income gap. The coefficients for these variables will indicate the average change in income gap when Party A is ruling one town compared to when Party B is ruling that town. By comparing the coefficients, you can explore the relative impacts of each party on the income gap.
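As a rough sketch of the two specifications (the data and variable names below are made up purely for illustration):
```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "party_i_A": rng.integers(0, 2, n),   # 1 if town i is ruled by party A
    "party_j_A": rng.integers(0, 2, n),   # 1 if town j is ruled by party A
})
df["diff_party"] = (df["party_i_A"] != df["party_j_A"]).astype(int)
# simulated income gap, just to have something to fit
df["income_gap"] = 30 - 10 * df["diff_party"] + 5 * df["party_i_A"] + rng.normal(0, 5, n)

# Original specification: only the 'different ruling party' dummy
print(smf.ols("income_gap ~ diff_party", data=df).fit().params)

# Extended specification: separate party dummies for each town
print(smf.ols("income_gap ~ party_i_A + party_j_A", data=df).fit().params)
```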
| null |
CC BY-SA 4.0
| null |
2023-04-30T19:22:45.067
|
2023-04-30T19:22:45.067
| null | null |
53580
| null |
614546
|
2
| null |
414503
|
2
| null |
This is an interesting question. I wanted to contribute an answer which shows how we can do this practically in Python, and call out a few interesting things. I hope the interested reader will take the code, modify it and experiment themselves. I give a few suggestions for things to play around with at the end.
## Python Implementation - using Pytorch
The code below creates a neural network using Pytorch. I have used the ReLU function between layers (see comment below).
I have tried to find a balance between a network which is simple and easy to train, but which also does a reasonable job (at least on the interval [0,10], see comments and graph below).
The model is trained on random data from the range [0,10].
### Graphs
This graph shows the predicted (blue) and actual (red) values, for unseen random input data from the range [0,10].
[](https://i.stack.imgur.com/uOoVP.png)
- It is interesting to note how poorly the model performs outside the region on which it is trained.
### Things to experiment with
- Try other activation functions or combinations (like tanh). If I keep everything in the code below identical but change the activation functions to tanh we get.
[](https://i.stack.imgur.com/bgg7s.png)
We can improve the performance with more epochs...
[](https://i.stack.imgur.com/ln0Vs.png)
I also note here that the function $x^2$ is non-linear, so you could use that as your activation function - but I do not think that is in the spirit of this question :D
- See what happens if you use less training data or over a bigger range.
- See what happens if you change the architecture, for example using fewer layers.
## Code
```
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
# Create training data
X = torch.distributions.uniform.Uniform(0,10).sample([1000,1])
y = X**2
model = nn.Sequential(
nn.Linear(1, 16),
nn.ReLU(),
nn.Linear(16, 16),
nn.ReLU(),
nn.Linear(16, 1),
)
loss_fn = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
n_epochs = 150
batch_size = 50
for epoch in range(n_epochs):
for i in range(0, len(X), batch_size):
Xbatch = X[i:i+batch_size]
y_pred = model(Xbatch)
ybatch = y[i:i+batch_size]
loss = loss_fn(y_pred, ybatch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Finished epoch {epoch}, latest loss {loss}')
#Example, can we square 3 - looks ok
print(model(torch.tensor([3], dtype=torch.float)))
# For all intents and purposes we can assume the data below is all unseen - potentially could be some overlap by random chance with training X
unseenX = torch.distributions.uniform.Uniform(-5,15).sample([1000,1])
predictions_on_unseenX = model(unseenX)
# Plotting
fig, ax = plt.subplots()
plt.scatter(unseenX, unseenX**2, c="red", label="Actual values", s=1)
plt.scatter(unseenX, predictions_on_unseenX.detach(), c="blue", s=1, label="Predictions")
plt.text(0, 100, "Training data was in this range")
plt.title("Using ReLU ")
plt.legend()
ax.axvspan(0, 10, alpha=0.5, color='grey')
```
### Further Reading
[Interesting post](https://stats.stackexchange.com/questions/299915/how-does-the-rectified-linear-unit-relu-activation-function-produce-non-linear?noredirect=1&lq=1) on why ReLU works with the top answer focussing on this specific problem.
[Similar](https://stackoverflow.com/questions/55170460/neural-network-for-square-x2-approximation) post to this one on stack exchange.
| null |
CC BY-SA 4.0
| null |
2023-04-30T19:36:11.797
|
2023-04-30T19:40:04.583
|
2023-04-30T19:40:04.583
|
358991
|
358991
| null |
614548
|
1
|
614578
| null |
1
|
56
|
Suppose I have a model with parameters $\theta = (\theta_1,\theta_2)$ and observed data $x=(x_1,x_2,\ldots,x_k)$. I want to estimate the posterior mean $\hat{\theta}_1 = \mathbb{E}[\theta_1|x]$.
The issue is that I can't sample from $p(\theta_1|x)$ to estimate $\hat{\theta}_1$ by the sample mean because I do not count on an expression for the likelihood $p(x|\theta_1)$. I do, however, count on an expression for $p(x|\theta_1,\theta_2) = p(x|\theta)$ as well as on a prior $\pi(\theta_1)$. I can also find the maximum likelihood estimator for $\theta_2$ (let's call it $\hat{\theta}_2$).
Let $\hat{p}(x|\theta_1)=p\left(x|\theta_1,\hat{\theta}_2\right)$. Then, I could sample from an approximate posterior distribution
$$\hat{p}(\theta_1|x) = \frac{1}{C}\frac{\hat{p}(x|\theta_1)\pi(\theta_1)}{p(x)}$$
where $C$ is some normalization constant.
If I compute the mean of $N$ samples from $\hat{p}(\theta_1|x)$, would that be a "reasonable estimator" (in any sense) for $\mathbb{E}[\theta_1|x]$? Is there any literature I could resort to that does something similar to this?
|
Sampling from an approximate distribution to estimate posterior mean
|
CC BY-SA 4.0
| null |
2023-04-30T20:11:11.077
|
2023-05-02T14:03:33.173
|
2023-05-02T14:03:33.173
|
122077
|
122077
|
[
"bayesian",
"estimation",
"conditional-probability",
"markov-chain-montecarlo",
"posterior"
] |
614549
|
1
| null | null |
0
|
12
|
I am sorry if my question sounds a bit naive, but I am new to IV and have a problem. One of the variables in my regression is endogenous, and I use 2SLS to remove the bias. In stage 2 I focused on the coefficient for this particular variable, but noticed that the coefficients and error terms for some other variables have also changed. Are the estimates for these (exogenous) variables more reliable than those from OLS?
|
Impact of IV on coefficients for exogenous regressors
|
CC BY-SA 4.0
| null |
2023-04-30T20:28:02.283
|
2023-04-30T20:28:02.283
| null | null |
333840
|
[
"regression",
"multiple-regression",
"regression-coefficients",
"standard-error",
"instrumental-variables"
] |
614550
|
1
| null | null |
0
|
16
|
I have a binary classifier problem with 18 samples to evaluate. To have less variance, I ran the same experiment 3 times, so I have 54 predictions.
Now I want to run a Fisher Exact Test, but is it valid to do it on the 54 predictions(3 runs) that I have?
|
Easy Fisher Exact Test on multiple runs
|
CC BY-SA 4.0
| null |
2023-04-30T20:50:38.330
|
2023-04-30T20:50:38.330
| null | null |
377097
|
[
"hypothesis-testing",
"p-value",
"fishers-exact-test"
] |
614551
|
2
| null |
614536
|
5
| null |
$Z_{n-1}$ is the proportion of spades left in the deck before we make the $n^{th}$ draw; therefore, it is equal to the probability that the next card is a spade. Let us assume there are $k$ spades left in the deck and $K$ total cards left in the deck, so $Z_{n-1} = k/K$.
$$\begin{eqnarray}
\mathbb{E}Z_n &=& {k-1\over K-1}p\left(Z_n = {k-1\over K-1}\right)+{k\over K-1}p\left(Z_n = {k\over K-1}\right) \\
&=& {k-1\over K-1}{k \over K}+{k\over K-1}{K-k\over K}\\
&=&{k^2 - 1\cdot k +kK - k^2 \over (K-1)K} \\
&=&{(K-1)k \over (K-1)K} \\
&=& {k \over K}
\end{eqnarray}$$
as desired.
Since $1/4$ of the cards in the deck are spades, before the game starts, the probability of winning is $1/4$. As there is nothing we can do to affect the probability of drawing a spade on the next card, and the probability process is a martingale, no strategy can affect our initial probability of winning. This is a consequence of the [Martingale Stopping Theorem](https://en.wikipedia.org/wiki/Optional_stopping_theorem).
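A quick simulation illustrates the conclusion: any stopping rule based only on the cards seen so far leaves the win probability at $1/4$. This is just a sketch with two example strategies:
```
import numpy as np

rng = np.random.default_rng(0)
deck = np.array([1] * 13 + [0] * 39)      # 1 = spade, 0 = any other suit

def play(strategy, n_trials=100_000):
    wins = 0
    for _ in range(n_trials):
        cards = rng.permutation(deck)
        for i in range(52):
            remaining = cards[i:]          # composition is known from the exposed cards
            if strategy(remaining) or len(remaining) == 1:
                wins += cards[i]           # we said "next": win iff the next card is a spade
                break
    return wins / n_trials

# Strategy 1: say "next" immediately
print(play(lambda remaining: True))
# Strategy 2: wait until the proportion of spades remaining exceeds 1/4
print(play(lambda remaining: remaining.mean() > 0.25))
```
Both estimates come out close to 0.25.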
| null |
CC BY-SA 4.0
| null |
2023-04-30T21:07:46.553
|
2023-05-01T13:59:22.507
|
2023-05-01T13:59:22.507
|
7555
|
7555
| null |
614552
|
2
| null |
614501
|
3
| null |
The accept/reject procedure of a hypothesis test is only designed to yield the long run error rate properties of the test. It deals with 'evidence' in the data only vaguely and only to the extent that it gives a decision that the evidence is strong enough (according to the pre-data specified level of alpha) to require the null hypothesis to be discarded, or not strong enough. It does not, by itself or by design, provide for any evidential assessment beyond that. However...
The hypothesis test method published by Neyman & Pearson did not depend on a p-value (and did not provide one), but modern usage of the hypothesis tests almost always involves comparing a p-value to a threshold rather than looking to see if the test statistic falls in a "critical region". It is the p-value that lets you make statements about the strength of the evidence in the data against the null hypothesis, according to the statistical model.
The p-value is best understood as a product of a (neo-) Fisherian significance test rather than a hypothesis test or the hybrid thing often called 'NHST'.
To some the distinction seems subtle and rather pointless, but if you want to know what the testing procedures allow you to know and the types of inferences that they support I think the distinction is essential. See here for my extended take on the topic: [https://link.springer.com/chapter/10.1007/164_2019_286](https://link.springer.com/chapter/10.1007/164_2019_286)
If you want to talk of evidence and to persist with the conventional accept/reject approach then you need to know that, depending on the alpha that you choose and the experimental design, you may be rejecting the null hypothesis with fairly weak evidence or with very strong evidence.
| null |
CC BY-SA 4.0
| null |
2023-04-30T21:44:58.777
|
2023-04-30T21:44:58.777
| null | null |
1679
| null |
614553
|
2
| null |
614517
|
0
| null |
As you have covariates to evaluate, a Cox model is certainly OK here. The software will choose some combination of covariate values to use as a baseline reference, and then evaluate the "hazards" of having the event for other values of the covariates, relative to that reference set.
If you only have a handful of possible event times, you might be better off using a discrete-time survival model. Cox models assume continuous time, and need to make some adjustments with lots of tied event times. A discrete-time survival model is a binomial regression on a carefully constructed data set, with one row for each individual at risk at each possible event time. [This page](https://stats.stackexchange.com/a/614296/28500) outlines the idea and provides some links for further reference.
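To make the discrete-time idea concrete, here is a minimal sketch of building a person-period data set and fitting the binomial (logistic) regression; the data are simulated and the usage process is a made-up assumption:
```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical data: 200 customers, day of coupon use, censored at day 14
n = 200
age = rng.integers(20, 65, n)
day_used = rng.geometric(0.12, n)             # made-up usage process
event = (day_used <= 14).astype(int)          # 0 = never used within the window
days = np.minimum(day_used, 14)

# Expand to a person-period data set: one row per customer per day at risk
rows = []
for i in range(n):
    for d in range(1, days[i] + 1):
        rows.append({"id": i, "day": d, "age": age[i],
                     "used": int(event[i] == 1 and d == days[i])})
pp = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression on the person-period data
fit = smf.logit("used ~ day + age", data=pp).fit(disp=0)
print(fit.params)
```
With real data you would typically let the baseline hazard vary freely by day (e.g. a factor for day) rather than enter day linearly; the point here is only the person-period construction.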
| null |
CC BY-SA 4.0
| null |
2023-04-30T22:08:15.960
|
2023-04-30T22:08:15.960
| null | null |
28500
| null |
614554
|
1
| null | null |
2
|
70
|
I have 3 probability distributions derived from three random variables, where the distributions and the variables are independent. How do I combine all 3 distributions, either via direct aggregation of the distributions or through a mixture of distributions with equal weights for all 3 distributions?
For reference, this is how my 3 distributions are in Python:
```
prob1, _ = np.histogram(valsA, bins=1024, density=True)
prob2, _ = np.histogram(valsB, bins=1024, density=True)
prob3, _ = np.histogram(valsC, bins=1024, density=True)
```
I'm a novice to statistics, so any advice would be valuable.
Also, there is no other way to deal with the 3 variables, I need to retrieve their distributions and then aggregate it.
|
How to combine independent probability distributions?
|
CC BY-SA 4.0
| null |
2023-04-30T22:09:07.503
|
2023-04-30T22:54:27.343
| null | null |
371557
|
[
"machine-learning",
"probability",
"distributions",
"random-variable"
] |
614555
|
2
| null |
614541
|
1
| null |
As I commented under your post, I will use "$f(x-)$" instead of "$f(x+)$" for the left limit of a function $f$ in the proof below.
#### Preparations
The key of the proof is a property of the quantile function of a random variable $X$. In general, if $X$ is a random variable with distribution function $F$, we can define its quantile function (see, e.g., Equation (14.5) in Probability and Measure by Patrick Billingsley) of $X$ as
\begin{align}
\varphi(u) = \inf[x: F(x) \geq u], \quad 0 < u < 1.
\end{align}
For fixed $u \in (0, 1)$, we will need the following property of $\varphi$ (the proof of it can also be found in the reference mentioned above):
\begin{align}
& x \geq \varphi(u) \iff F(x) \geq u, \tag{1} \\
& x < \varphi(u) \iff F(x) < u. \tag{2}
\end{align}
As a rigorous proof, some properties of measure-theoretic integral will also be used tacitly. If you have questions, please comment below the answer.
#### Proof
Denote $F^\star(X; V) = F(X-) + V(F(X) - F(X-))$ by $U$. To show $U \sim U(0, 1)$, it suffices to show that for any $u \in (0, 1)$ it holds that $P[U < u] = u$.
To this end, let's denote the set of discontinuities of $F$ by $J$, it then follows by the independence between $X$ and $V$ and $(2)$ that
\begin{align}
& P[U < u] = P[U < u, X \in J] + P[U < u, X \in J^c] \\
=& P[F(X-) + V(F(X) - F(X-)) < u, X \in J] + P[F(X) < u, X \in J^c] \\
=& \int_J P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x) +
P[X < \varphi(u), X \in J^c] \\
=& \int_J P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x) +
\int_{J^c \cap (-\infty, \varphi(u))} dF(x). \tag{3}
\end{align}
Let's analyze the integrand of the first term in the right-hand of $(3)$: If $x \in (-\infty, \varphi(u)) \cap J$, then $F(x-) < F(x) < u$ by the monotonicity of $F$ and $(2)$, whence $\frac{u - F(x-)}{F(x) - F(x-)} > 1$ and it then follows by $V \sim U(0, 1)$ that $P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right] = 1$; if $x \in (\varphi(u), +\infty) \cap J$, then for every $t \in (\varphi(u), x)$, by $(1)$ we have $F(t) \geq F(\varphi(u)) \geq u$, hence $F(x-) = \lim_{t \to x^-}F(t) \geq u$, whence $\frac{u - F(x-)}{F(x) - F(x-)} \leq 0$ and it then follows by $V \sim U(0, 1)$ that $P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right] = 0$. This implies that (decompose $J$ into $J = [J \cap (-\infty, \varphi(u))] \cup [J \cap \{\varphi(u)\}] \cup [J \cap (\varphi(u), +\infty)]$):
\begin{align}
& \int_J P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x) \\
=& \int_{J \cap (-\infty, \varphi(u))} dF(x) + \int_{J \cap \{\varphi(u)\}}
P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x). \tag{4}
\end{align}
$(3)$ and $(4)$ together then give
\begin{align}
P[U < u] = \int_{(-\infty, \varphi(u))} dF(x) + \int_{J \cap \{\varphi(u)\}}
P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x). \tag{5}
\end{align}
To simplify $(5)$, consider the following two cases:
Case 1: $\varphi(u) \in J^c$, i.e., $F$ is continuous at $\varphi(u)$. In this case the second term in the right-hand side of $(5)$ vanishes, and $F(\varphi(u)) = F(\varphi(u)-)$. $(5)$ thus reduces to
\begin{align}
P[U < u] = \int_{(-\infty, \varphi(u))} dF(x) = P[X < \varphi(u)] =
F(\varphi(u)-) = F(\varphi(u)) = u.
\end{align}
Case 2: $\varphi(u) \in J$, i.e., $F$ jumps at $\varphi(u)$. In this case the second term in the right-hand side of $(5)$ becomes
\begin{align}
& \int_{\{\varphi(u)\}}P\left[V < \frac{u - F(x-)}{F(x) - F(x-)}\right]dF(x) \\
=& P\left[V < \frac{u - F(\varphi(u)-)}{F(\varphi(u)) -F(\varphi(u)-)}\right] \times (F(\varphi(u)) -F(\varphi(u)-)) \\
=& u - F(\varphi(u)-).
\end{align}
Therefore $(5)$ reduces to
\begin{align}
P[U < u] = P[X < \varphi(u)] + u - F(\varphi(u)-) =
F(\varphi(u)-) + u - F(\varphi(u)-) = u.
\end{align}
This completes the proof.
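As a numerical sanity check of the result (a sketch only, using a binomial $X$ as an example of a purely discrete distribution):
```
import numpy as np
from scipy.stats import binom, kstest

rng = np.random.default_rng(0)
n, p, N = 5, 0.3, 100_000

x = rng.binomial(n, p, N)                 # X has a purely discrete distribution
v = rng.uniform(size=N)                   # V ~ U(0,1), independent of X

F = binom.cdf(x, n, p)                    # F(x)
F_minus = binom.cdf(x - 1, n, p)          # F(x-) = P(X <= x-1) on integer support
u = F_minus + v * (F - F_minus)           # F*(X; V)

print(kstest(u, "uniform"))               # should not reject uniformity
```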
| null |
CC BY-SA 4.0
| null |
2023-04-30T22:09:08.120
|
2023-05-01T12:34:49.150
|
2023-05-01T12:34:49.150
|
20519
|
20519
| null |
614556
|
1
| null | null |
0
|
28
|
In my team, we have an experiment where we record each individual in an arena that has 3 separate areas (let's say A, B and C). We have two conditions (let's call them condition 1 and condition 2). Individuals are split into 2 groups, one starting with condition 1 and finishing with condition 2, while the second group does the opposite.
My colleague organized the data as follows (example for two individuals):
[](https://i.stack.imgur.com/nuzLr.png)
My colleague proposes to analyze it with a GLMM. To see the effect of the condition on the proportion spent in each area, the model below was used in R:
```
glmer(cbind(Frames,TotalFrames-Frames)~Area*Condition+(1|Individual), family=binomial(link = "logit"))
```
R gives a warning for singularity when we run this model:
```
boundary (singular) fit: see help('isSingular')
```
My colleague mentioned that it may come from the random effect and ran a GLM instead, which did not give any singularity warning.
On the other hand, I was worried that the problem comes from the fact that the proportions of time spent in each area by each individual are not independent from each other, because one individual cannot be in two areas at the same time. Indeed, when I remove one of the 3 areas from the dataset (making the two areas left in the analysis less dependent on each other), the singularity warning disappears.
Thus, I am wondering if it is correct to compare non-independent proportions of time like my colleague suggested. If so, why do we have the singularity warning (is it really due to the random effect) and is it a problem?
---
To put my mind at ease, I guess I have to simplify my question: so let's imagine that we flip several coins multiple times.
Is it correct to use a GLM to compare proportions of heads/tails with proportions of tails/heads in order to conclude that we have significantly more heads than tails?
|
Is it correct to include 3 non-independent proportions of time (counts of video frames) in a glmm to compare them?
|
CC BY-SA 4.0
| null |
2023-04-30T22:16:40.870
|
2023-05-12T12:10:21.667
|
2023-05-12T12:10:21.667
|
386919
|
386919
|
[
"r",
"regression",
"binomial-distribution",
"proportion",
"glmm"
] |
614557
|
2
| null |
614554
|
2
| null |
If all you have to work from are those histograms as density estimates (presumably scaled to integrate to 1) you could estimate the joint density from their product.
$\hat{f}_{X,Y,Z}=\hat{f_X}\,\hat{f_{Y}}\,\hat{f_{Z}}$
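A small sketch of that product construction (with simulated data and far fewer bins than in the question, so the 3-D grid stays small):
```
import numpy as np

rng = np.random.default_rng(0)
valsA = rng.normal(size=1000)
valsB = rng.exponential(size=1000)
valsC = rng.uniform(size=1000)

bins = 32                                  # 1024 bins would give a 1024^3 array
fA, edgesA = np.histogram(valsA, bins=bins, density=True)
fB, edgesB = np.histogram(valsB, bins=bins, density=True)
fC, edgesC = np.histogram(valsC, bins=bins, density=True)

# Joint density estimate on the 3-D grid, assuming independence
joint = fA[:, None, None] * fB[None, :, None] * fC[None, None, :]

# Check it integrates to ~1 over the grid
widths = (np.diff(edgesA)[:, None, None]
          * np.diff(edgesB)[None, :, None]
          * np.diff(edgesC)[None, None, :])
print((joint * widths).sum())              # ~ 1.0
```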
| null |
CC BY-SA 4.0
| null |
2023-04-30T22:24:50.803
|
2023-04-30T22:54:27.343
|
2023-04-30T22:54:27.343
|
805
|
805
| null |
614558
|
1
| null | null |
0
|
18
|
Suppose I have a numerical discrete variable that does not apply to all my observations, e.g. 'years_married'. Not all the people in my dataframe are married, so they have an 'NA' recorded for this variable.
What would be a correct way to proceed in this case? 'years_married' is an important variable for my study (if they are married), so I don't want to discard it.
One idea is to split the dataframe in two, one with this variable (for those who are married) and another without it (for singles), and model them separately, but this would drastically reduce the number of observations (at least in one of the subsets) and the prediction accuracy.
Is there any technique or transformation, or can you recommend an algorithm (e.g. Random Forest), that can handle this situation?
Thanks :)
Edit: Maybe it was not a good example. The exact case is about the AGE of a device at the moment of the study. I have the date when the data from the device was collected, but I do not always have the construction date of the device.
|
Numeric variable that can't be applied to all my observations, causing NA's
|
CC BY-SA 4.0
| null |
2023-04-30T22:34:51.060
|
2023-05-03T07:42:37.367
|
2023-05-03T07:42:37.367
|
381118
|
381118
|
[
"missing-data",
"discrete-data"
] |