Id (string) | PostTypeId (7 classes) | AcceptedAnswerId (string, nullable) | ParentId (string, nullable) | Score (string) | ViewCount (string, nullable) | Body (string) | Title (string, nullable) | ContentLicense (3 classes) | FavoriteCount (3 classes) | CreationDate (string) | LastActivityDate (string) | LastEditDate (string, nullable) | LastEditorUserId (string, nullable) | OwnerUserId (string, nullable) | Tags (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
613313
|
1
| null | null |
0
|
10
|
Basically I am implementing a CNN on custom data. I cannot go into specifics, but I notice that no matter what type of model I use (for example, a custom model, or even transfer-learning models such as InceptionV3), SGD seems to perform better. From what I knew, RMSProp was the preferred choice for Inception, but it is performing particularly poorly.
So what exactly influences the choice of optimizer? I know SGD is performing better, but why?
|
Neural Networks - What influences the choice of Optimizer more? Specific Model Architecture or the Input Data Distribution (type, preprocessing etc.)?
|
CC BY-SA 4.0
| null |
2023-04-18T11:20:36.457
|
2023-04-18T11:20:36.457
| null | null |
385994
|
[
"neural-networks",
"stochastic-gradient-descent",
"transfer-learning"
] |
613314
|
2
| null |
613310
|
3
| null |
Here is an `R` solution applying the $t$-test for paired data.
```
x = scan()
1: 37.14 33.21
3: 34.69 31.25
5: 65.52 43.73
7: 40.56 38.26
9: 41.32 41.72
11: 39.34 38.14
13: 43.95 41.07
15: 44.26 39.49
17: 35.28 28.50
19: 37.12 32.40
21: 37.82 35.87
23: 34.71 33.79
25: 34.08 30.49
27: 33.08 36.29
29: 36.89 33.18
31: 41.38 37.70
33: 39.29 33.86
35: 41.62 39.21
37: 39.36 35.43
xmat <- matrix(x, ncol=2, byrow = TRUE)
t.test(xmat[,1], xmat[,2], paired = TRUE)
Paired t-test
data: xmat[, 1] and xmat[, 2]
t = 3.4757, df = 18, p-value = 0.002698
alternative hypothesis: true mean difference is not equal to 0
95 percent confidence interval:
1.536795 6.233732
sample estimates:
mean difference
3.885263
```
And here is another solution based on the Wilcoxon signed-rank test for two paired samples (not to be confused with the Mann-Whitney test, which is for independent samples):
```
wilcox.test(xmat[,1], xmat[,2], paired = TRUE, conf.int = TRUE, correct = TRUE)
Wilcoxon signed rank test with continuity correction
data: xmat[, 1] and xmat[, 2]
V = 181, p-value = 0.0005795
alternative hypothesis: true location shift is not equal to 0
95 percent confidence interval:
2.179984 4.350064
sample estimates:
(pseudo)median
3.314607
Warning messages:
1: In wilcox.test.default(xmat[, 1], xmat[, 2], paired = TRUE, conf.int = TRUE, :
cannot compute exact p-value with ties
2: In wilcox.test.default(xmat[, 1], xmat[, 2], paired = TRUE, conf.int = TRUE, :
cannot compute exact confidence interval with ties
```
The two warning messages are, as they state, due to the presence of ties, for which the exact sampling distribution doesn't hold, so we have to resort to asymptotic results.
Comment. The two confidence intervals are quite different, although I'm not sure whether this difference is relevant from a clinical point of view. Nevertheless, this difference is presumably due to a slight departure from normality, shown by a QQ-plot applied to the difference between the two variables.
```
qqnorm(xmat[,1]-xmat[,2])
qqline(xmat[,1]-xmat[,2])
```
[](https://i.stack.imgur.com/H6RFJ.png)
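If a more formal check of that normality assumption is wanted, a Shapiro-Wilk test on the paired differences is one option; a one-line sketch using the same `xmat` as above:
```
shapiro.test(xmat[,1] - xmat[,2])   # formal test of normality of the paired differences
```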
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:23:12.973
|
2023-04-18T11:41:51.010
|
2023-04-18T11:41:51.010
|
11852
|
56940
| null |
613315
|
1
| null | null |
0
|
18
|
Is it possible to use sample-size estimates based on the chi-square test to estimate the sample size for a logrank test? I remember having seen papers that do it that way.
The problem is that, when you have to estimate the sample size for time-to-event outcomes, you usually do not know the distributions of the times in the comparison groups. So estimates that assume one or another time distribution may be more wrong than simple estimates employing just the chi-square test.
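To make the comparison concrete, here is a rough simulation sketch of what I have in mind (exponential event times, uniform censoring, and all rates and effect sizes are placeholder assumptions, not from any real study):
```
library(survival)
set.seed(1)
# estimated power of the logrank test for a given per-arm sample size and hazard ratio
power_logrank <- function(n_arm, hr, base_rate = 0.10, fu = 5, B = 1000) {
  mean(replicate(B, {
    grp    <- rep(0:1, each = n_arm)
    rate   <- ifelse(grp == 1, base_rate * hr, base_rate)
    t_ev   <- rexp(2 * n_arm, rate)        # latent event times
    t_cens <- runif(2 * n_arm, 0, fu)      # censoring times
    time   <- pmin(t_ev, t_cens)
    status <- as.integer(t_ev <= t_cens)
    fit    <- survdiff(Surv(time, status) ~ grp)
    pchisq(fit$chisq, df = 1, lower.tail = FALSE) < 0.05
  }))
}
# "simple" chi-square-style estimate based on the proportion with an event by end of follow-up
p0 <- 1 - exp(-0.10 * 5)
p1 <- 1 - exp(-0.10 * 1.5 * 5)
power.prop.test(n = 100, p1 = p0, p2 = p1)$power
power_logrank(100, hr = 1.5)               # simulated logrank power for the same scenario
```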
|
Using estimates for chi-square test for sample size estimation for logrank test
|
CC BY-SA 4.0
| null |
2023-04-18T11:30:40.157
|
2023-04-18T11:30:40.157
| null | null |
80704
|
[
"chi-squared-test",
"sample-size",
"logrank-test"
] |
613316
|
1
| null | null |
0
|
15
|
I'm very unfamiliar with Bayesian statistics, so forgive me if this question is obvious. The idea of a posterior predictive distribution seems to be to incorporate uncertainty about the parameter $\theta$ (or, I guess, its estimate if you look at it in the classical way) when simulating and/or predicting the distribution of a new observation, and is based on the formula
$$
p(y_+ \mid y_1, \dots, y_n) = \int p(y_+ \mid \theta) p(\theta \mid y_1, \dots, y_n) \, d\theta \,.
$$
However, the sources I've found, (including [answers on this site](https://stats.stackexchange.com/a/335496/30492)) suggest that one should simulate realisations of $Y_+$ by sampling $\theta^{(i)}$ from $\Theta \mid Y_1= y_1, \dots, Y_n = y_n$ and $y_+^{(i)}$ from $Y_+ \mid \Theta = \theta$. Why is it not necessary to average over $\theta$ here? Intuitively, the correct procedure seems to me rather to be:
- Draw $\theta_1, \dots, \theta_M$ from $\Theta \mid Y_1= y_1, \dots, Y_n = y_n$
- For each $\theta_i$, draw $y_+^{(1, \theta_i)}, \dots, y_+^{(N, \theta_i)}$ from $Y_+ \mid \Theta = \theta_i$
- $y_+^{(j)} := \frac{1}{M}\sum_{i = 1}^M y_+^{(j, \theta_i)}$ for $j = 1, \dots, N$ is a sample from $Y_+ \mid Y_1 = y_1, \dots, Y_n = y_n$
What am I misunderstanding here?
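To make the comparison concrete, here is a minimal sketch (conjugate Beta-Bernoulli with simulated data, purely for illustration) of the two recipes I am contrasting:
```
set.seed(1)
y <- rbinom(50, 1, 0.3)                          # simulated observed data
M <- 10000
theta <- rbeta(M, 1 + sum(y), 1 + sum(1 - y))    # draws from the posterior of theta
# standard recipe: one y_+ per theta draw
y_plus <- rbinom(M, 1, theta)
var(y_plus)                                      # reflects parameter + sampling uncertainty
# proposed recipe: average y_+ over the M theta draws for each replicate j
N <- 200
mat <- matrix(rbinom(M * N, 1, theta), nrow = M) # row i uses theta[i]
y_avg <- colMeans(mat)
var(y_avg)                                       # collapses to almost no spread
```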
|
Posterior predictive distribution simulation: why not average over parameter values?
|
CC BY-SA 4.0
| null |
2023-04-18T11:31:04.910
|
2023-04-18T11:31:04.910
| null | null |
304924
|
[
"bayesian",
"predictive-models"
] |
613317
|
1
| null | null |
2
|
50
|
I am decently familiar with basic random-effect structures for mixed-effects models (i.e., random intercepts per id) but struggle to identify the most justifiable structures for more complex study designs (i.e., crossed and/or nested random effects). I have examined most of the other questions/topics here and on SO that discuss this concept, but it's still not clicking for me. Is there a series of questions / processes one can follow when looking at a naive dataset to understand what random effects may be the best place to start? I am aware that there is no black/white answer here and much of this depends on your research question / what seems to be the most appropriate for a given dataset (i.e., model comparisons).
Here is a theoretical study design I’m struggling with:
- 20 participants (i.e., id)
- response variable is continuous (i.e., response)
- Within each participant, their upper limbs (i.e., limb = dominant/nondominant) are randomized into one of two conditions (i.e., condition = A or B.)
- Each limb is measured on two occasions (i.e., time = pre / post) after receiving the condition
- Finally, each participant completes two “phases”, where after a washout period, they complete the same intervention again, re-randomizing their limbs to the two conditions
- In terms of the research question, I am most interested in comparing the change scores (after adjusting for phase) between the two conditions.
Here is my current thinking on the model that best approaches this study design and research question:
```
lme4::lmer(response ~ condition * time + phase + (1 | id / limb))
```
|
Best practices for determining complex random effects structures
|
CC BY-SA 4.0
| null |
2023-04-18T11:36:10.377
|
2023-04-18T12:47:42.693
|
2023-04-18T12:47:42.693
|
380447
|
380447
|
[
"mixed-model",
"lme4-nlme",
"crossed-random-effects"
] |
613318
|
2
| null |
183473
|
1
| null |
If we think that we should be using a gradient boosting implementation like XGBoost, the answer on when to use [gblinear](https://xgboost.readthedocs.io/en/stable/parameter.html#parameters-for-linear-booster-booster-gblinear) instead of `gbtree` is: "probably never". With `gblinear` we will get an elastic-net-equivalent fit and essentially create a single regularised linear model. Unless we are dealing with a task where we expect/know that a LASSO/ridge/elastic-net regression is already competitive, it is not worth our trouble, except perhaps when we already have a data pipeline serving an XGBoost model in place and we want to try a GLM quickly. That said, R, Python, MATLAB, Julia, etc. have better and more well-developed specialised routines for fitting elastic-net regression tasks. The CV.SE thread [Difference in regression coefficients of sklearn's LinearRegression and XGBRegressor](https://stats.stackexchange.com/questions/448879/difference-in-regression-coefficients-of-sklearns-linearregression-and-xgbregre) provides further details on comparing XGBoost's `gblinear` to a standard linear regression.
Note that if we believe some linear relation to be present at a low/local level, LightGBM's argument [linear_tree](https://lightgbm.readthedocs.io/en/latest/Parameters.html#linear_tree) emulates the methodology of [Cubist](https://www.rulequest.com/cubist-info.html) (or M5), where a tree is grown such that the terminal leaves contain linear regression models. This is structurally different from `gblinear`, though, as it is first and foremost a tree rather than a regularised linear model.
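As a rough illustrative sketch (simulated data, classic 1.x `xgboost` R interface assumed; the penalties are deliberately not calibrated to match, this only shows the two model classes side by side):
```
library(xgboost)
library(glmnet)
set.seed(1)
n <- 500; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- as.numeric(X %*% rnorm(p) + rnorm(n))
# XGBoost with the linear booster: a regularised linear model fitted by boosting
bst <- xgboost(data = X, label = y, nrounds = 100, verbose = 0,
               params = list(booster = "gblinear", objective = "reg:squarederror",
                             lambda = 0.1, alpha = 0.1))
# a dedicated elastic-net fit
enet <- cv.glmnet(X, y, alpha = 0.5)
cor(predict(bst, X), as.numeric(predict(enet, X, s = "lambda.min")))
```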
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:36:49.463
|
2023-04-18T11:36:49.463
| null | null |
11852
| null |
613319
|
2
| null |
613246
|
3
| null |
The table in the "Approximate significance of smooth terms" section of the output reports on Wald-like tests for individual smooths. Note that unlike in the gam package (and others), {mgcv} doesn't separate the linear component from the wiggly components of the spline. As such, these tests are for the function represented by the spline.
In your model you fitted something like
```
m <- gam(y ~ g + s(x, by = g))
```
which mathematically is
$$
\hat{y}_i = \beta_{g(i)} + f_g(x_i)
$$
where $g(i)$ is a function indexing the group for the $i$th observation. In your case you have 2 smooth functions, and the spline bases used include the linear function within their span. So the test is for the function, and that function could be estimated to be linear.
The null hypothesis in each case is $\text{H}_0 : f_g(x) = 0 \; \forall \; x$, i.e. the null hypothesis is a constant (flat) function with value 0 for all observed values of $x$.
The p value is the probability of observing the estimated function under the null hypothesis. It is based on Nychka's (1988) interpretation of confidence intervals for smooths, extended by Marra and Wood (2012); what is done is a slight variant of Wood (2013). These p values have approximately the correct distribution under the null and perform better than a strict frequentist approach.
Note that you have fixed the number of basis functions and hence the EDF of each of your smooths. As such, this model is really just a GLM (there is no smoothness selection, and hence no penalisation), but with specialist tests for the smooth functions. As you didn't do smoothness selection, the p values are less approximate than they would be with the defaults.
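For readers who want to see such a table, a small self-contained sketch of this model class (simulated data, using REML smoothness selection rather than the fixed-EDF setup described above):
```
library(mgcv)
set.seed(1)
n <- 400
g <- factor(rep(c("a", "b"), each = n / 2))
x <- runif(n)
y <- ifelse(g == "a", sin(2 * pi * x), 0.5 * x) + rnorm(n, sd = 0.3)
m <- gam(y ~ g + s(x, by = g, k = 10), method = "REML")
summary(m)  # the "Approximate significance of smooth terms" table tests each f_g(x) against 0
```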
### References
Marra, G and S.N. Wood (2012) Coverage Properties of Confidence Intervals for Generalized Additive Model Components. Scandinavian Journal of Statistics, 39(1), 53-74.
Nychka (1988) Bayesian Confidence Intervals for Smoothing Splines. Journal of the American Statistical Association 83:1134-1143.
Wood, S.N. (2013) On p-values for smooth components of an extended generalized additive model. Biometrika 100:221-228
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:37:37.240
|
2023-04-18T11:37:37.240
| null | null |
1390
| null |
613320
|
1
| null | null |
0
|
22
|
I have a survey asking customers to rate their satisfaction on 1-5 scale.
I'm interested in learning the margin of error for a derived quantity, namely the "top2box%", i.e. the percentage of respondents who gave a $4$ or a $5$.
How would I go about calculating this? If treated like a simple multiple choice, I'd have (let's say 95% CI)
$$MOE=z_{0.95} \sqrt{\frac{p(1-p)}{n}} * FPC$$
and take $p=0.5$ for the largest margin, so that
$$MOE = \frac{0.98}{\sqrt{n}}*FPC$$
But I think a correction has to be made for the fact that I'm grouping $4$ and $5$ together, so it can't quite be treated like a single percentage?
From what I found, the margin of error should increase for the T2B approach... but if I calculate the variance of %(4 or 5), it's the same formula as variance of %(any choice), namely $p(1-p)$?
So I'm getting the same answer as in [https://en.wikipedia.org/wiki/Margin_of_error](https://en.wikipedia.org/wiki/Margin_of_error), despite this being top-2 box, which is supposed to have a larger MOE.
Where's the mistake?
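For concreteness, here is a quick simulation sketch of the check I have in mind (made-up rating probabilities, ignoring the FPC):
```
set.seed(1)
n <- 400
ratings <- sample(1:5, n, replace = TRUE, prob = c(0.10, 0.15, 0.25, 0.30, 0.20))
top2 <- as.integer(ratings >= 4)                    # recode as one binary indicator
p <- mean(top2)
moe_formula <- 1.96 * sqrt(p * (1 - p) / n)         # standard single-proportion MOE
moe_boot <- 1.96 * sd(replicate(5000, mean(sample(top2, replace = TRUE))))
c(moe_formula, moe_boot)                            # the two essentially agree
```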
|
Top 2 box (T2B) margin of error calculation. Where's the mistake?
|
CC BY-SA 4.0
| null |
2023-04-18T11:41:15.543
|
2023-04-18T14:26:50.127
|
2023-04-18T14:26:50.127
|
183208
|
183208
|
[
"confidence-interval",
"error",
"frequentist"
] |
613321
|
2
| null |
613275
|
0
| null |
In the typical use of linear regression (ordinary least squares), normalization does not affect the results. Only the obtained regression coefficients will likely be larger, as your normalized topological indices will have smaller values after the transformation.
In case you want to estimate the parameters of your linear regression with some other method, it might be important that you only use normalized input variables.
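A minimal check of the first point (simulated data; `scale()` standing in for whatever normalization is used):
```
set.seed(1)
x <- rnorm(100, mean = 50, sd = 10)
y <- 2 + 0.3 * x + rnorm(100)
fit_raw    <- lm(y ~ x)
fit_scaled <- lm(y ~ scale(x))
all.equal(fitted(fit_raw), fitted(fit_scaled))  # TRUE: identical fitted values/predictions
coef(fit_raw); coef(fit_scaled)                 # only the scale of the coefficients changes
```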
| null |
CC BY-SA 4.0
| null |
2023-04-18T11:55:51.480
|
2023-04-18T11:55:51.480
| null | null |
220466
| null |
613322
|
2
| null |
587716
|
0
| null |
Maybe a little late to the party.
Based on your comments, it sounds to me that you are more stuck on the term "model" than on the QSAR part.
A model is not specified by the algorithm that is used to build it; rather, a scientific model aims to abstract/explain some natural process or system.
>
Scientific modelling is a scientific activity, the aim of which is to
make a particular part or feature of the world easier to understand,
define, quantify, visualize, or simulate by referencing it to existing
and usually commonly accepted knowledge.
Source: Wikipedia
A QSAR model aims to elucidate the relationship between the structure of a molecule and its biological activity (e.g., against a given protein). But the algorithm used to build that QSAR model can be anything.
| null |
CC BY-SA 4.0
| null |
2023-04-18T12:04:16.793
|
2023-04-18T12:04:16.793
| null | null |
220466
| null |
613323
|
1
| null | null |
2
|
41
|
I read everywhere that the ideal way of training a model would be, e.g., to run k-fold learning for hyperparameter optimization on 80-90% of the dataset, then test the best model on the rest. As far as I understand, in each iteration of the k-fold, we receive an estimate of the generalization capabilities of our model. Now, given that the dataset is sufficiently large, this performance estimate will be the same as if the same model were run on the test set, will it not? Focusing on a single fold, we have a split of, e.g., 60-20-20 into train/validation/test sets, where the 20-20 splits are both unused for training and are from the same distribution, given proper random sampling. So then, if we optimize our hyperparameters for the validation split of 20, it should largely be equivalent to optimizing them for the test split of 20. In the end, we want the best generalizing model, so why bother with the test set at all? Using the test set as the final evaluation of our model's performance will give the same biased estimate as the validation set either way. We would need a different dataset altogether, from a different source, to properly evaluate the true, unbiased generalization of our model, wouldn't we?
In short, why not just run hyperparameter optimization with k-folds on the entire dataset, then pick the best model and publish its k-fold-averaged performance, with disclaimers that it was not tested on a different set? It sounds more correct to me than pretending that the test split from the same dataset is a better estimator.
|
The train/validation/test split does not make sense to me in cases where they all originate from a single dataset
|
CC BY-SA 4.0
| null |
2023-04-18T12:05:01.620
|
2023-04-18T12:05:01.620
| null | null |
349552
|
[
"cross-validation",
"train-test-split"
] |
613324
|
2
| null |
611747
|
5
| null |
The empirical state transition matrices may be thought of as a collection of (row) histograms of a counting process given an initial state. You could compare the histograms using [Kullback-Leibler divergence](https://statproofbook.github.io/P/dir-kl.html) for the Dirichlet distribution. The row divergences could be combined using estimated state probabilities. This aggregate metric would allow you to rank the similarity of mating dances.
---
Edit: after further thought, just vectorize the transition count matrices, and do a single KL-divergence computation. Sample MATLAB code is available here for function [KL_dirichlet()](https://bariskurt.com/wp-content/uploads/2013/09/KL_dirichlet.m).
Zeros will remain "problematic" without a prior for each matrix. To resolve this, add pseudocounts to your observed counts, for instance add $\frac{1}{m \times n}$ to each element of a state transition matrix of size $m \times n$. Hopefully you are okay with a Bayesian framework.
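For reference, a small R sketch of the same computation as the linked `KL_dirichlet()` function (my own transcription of the standard Dirichlet KL formula, so treat it as an unverified sketch), applied to two made-up vectorized count matrices with the pseudocounts described above:
```
kl_dirichlet <- function(alpha, beta) {
  a0 <- sum(alpha); b0 <- sum(beta)
  lgamma(a0) - sum(lgamma(alpha)) - lgamma(b0) + sum(lgamma(beta)) +
    sum((alpha - beta) * (digamma(alpha) - digamma(a0)))
}
# two hypothetical 3x3 transition count matrices, zero diagonal, 1/(m*n) pseudocounts
A <- matrix(c(0, 5, 2,  3, 0, 4,  1, 6, 0), nrow = 3, byrow = TRUE) + 1/9
B <- matrix(c(0, 4, 3,  2, 0, 5,  2, 5, 0), nrow = 3, byrow = TRUE) + 1/9
off <- row(A) != col(A)          # drop the structurally-zero diagonal
kl_dirichlet(A[off], B[off])
```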
---
>
the diagonal is all zeroes
Given this constraint (apparently $m=n$), omit the diagonal elements from the vectorized representation of the observed state transition (count) matrices. No sense modeling and comparing the probability of something that can't happen.
| null |
CC BY-SA 4.0
| null |
2023-04-18T12:32:30.780
|
2023-04-19T02:05:14.083
|
2023-04-19T02:05:14.083
|
43149
|
43149
| null |
613327
|
1
| null | null |
1
|
27
|
I use sklearn by the way, and I deal with this data set
[](https://i.stack.imgur.com/6B3bW.png)
After adding dummy variables and doing the train-test split, I get really bad linear regression results with cross-validation.
[](https://i.stack.imgur.com/LrniF.png)
However, with lasso regression, I do not have the same issue
[](https://i.stack.imgur.com/63kPz.png)
What could be causing the linear regression to perform so badly with cross-validation while lasso does better?
|
What can negatively affect linear regression prediction accuracy with cross-validation but not regularization models like lasso regression?
|
CC BY-SA 4.0
| null |
2023-04-18T13:13:58.153
|
2023-04-19T06:04:37.167
|
2023-04-19T06:04:37.167
|
384910
|
384910
|
[
"regression"
] |
613328
|
2
| null |
472597
|
2
| null |
To my mind there are a few reasons for this:
- We don't know how to train kernel methods efficiently on datasets as large as we can for neural networks
- Similarly, neural networks allow more parallel computation
- Mathematically, many kernel methods (such as kernel ridge regression) satisfy a representer theorem, which says that the learned function lives in the span of a finite set of evaluations of the kernel. I would guess that this is quite restrictive, especially in high dimensions (e.g., for inner product kernels), and that neural networks don't have this drawback.
(It's worth saying that what I've called a drawback above for kernel machines is actually a great strength when it comes to theoretical analysis. It might just not help practically)
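To make the representer-theorem point concrete, here is a tiny kernel ridge regression sketch in base R (RBF kernel, simulated data), where the fitted function is literally a finite combination of kernel evaluations at the training points:
```
set.seed(1)
n <- 50
x <- sort(runif(n, -3, 3))
y <- sin(x) + rnorm(n, sd = 0.2)
k <- function(a, b, ell = 0.5) exp(-outer(a, b, "-")^2 / (2 * ell^2))   # RBF kernel
K <- k(x, x)
lambda <- 0.1
alpha <- solve(K + lambda * diag(n), y)                   # dual coefficients
f_hat <- function(xnew) as.numeric(k(xnew, x) %*% alpha)  # f(.) = sum_i alpha_i k(., x_i)
plot(x, y); lines(x, f_hat(x), col = "red")
```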
| null |
CC BY-SA 4.0
| null |
2023-04-18T13:25:26.527
|
2023-04-18T13:25:26.527
| null | null |
169058
| null |
613329
|
2
| null |
613128
|
0
| null |
We generally take the null hypothesis H0 to be an old, orthodox belief that we treat as true even though we do not have sufficient proof of its truth,
and the alternative hypothesis H1 to be a new, radical belief that challenges our old system of belief.
So we need a great level of effort to reject our old belief H0.
We will need a high degree of confidence in H1 to dismantle our old traditional beliefs.
Let's understand this with an example.
Aristotle's view of the solar system:
For nearly 1,000 years, Aristotle's view of a stationary Earth at the center of a revolving universe dominated natural philosophy.
In old times we believed that the Sun revolves around the Earth; consider this our old traditional orthodox belief => null hypothesis H0 (we will try very hard to stick to this belief).
“We revolve around the Sun like any other planet.” —Nicolaus Copernicus
Our new radical claim is that the Earth revolves around the Sun. => Alternative hypothesis H1.
So whichever drug company you work in, X or Y, you will assume your company's drug (H0) is better than the competitor's (H1).
[Reference NASA](https://www.earthobservatory.nasa.gov/features/OrbitsHistory)
[Reference Georgia Tech course EDX.org](https://www.edx.org/course/probability-and-statistics-iv-confidence-intervals-and-hypothesis-tests)
Note: please correct me if I am wrong; I am just a student trying to learn. Thanks.
| null |
CC BY-SA 4.0
| null |
2023-04-18T13:33:39.187
|
2023-04-18T13:33:39.187
| null | null |
385999
| null |
613331
|
2
| null |
613283
|
2
| null |
Standard deviation and probability density are exactly inversely correlated in one common and important case: scaled distributions, i.e. the distributions of $a\cdot X$ for different $a$ and same random variable $X$. Thomas' answer has a great example of this.
In the more general case, there does not have to be any relation. For example, take the mixture of two Gaussians at $+a$ and $-a$, with stdev $\sigma$ each. As long as $a \gg \sigma$ (so the Gaussians don't overlap), the maximum density is $1/(2\sqrt{2\pi}\, \sigma)$ and thus independent of $a$, while the variance $a^2 + \sigma^2$ depends strongly on $a$.
Edit: If we now increase $a$ while decreasing $\sigma$, variance and peak density both increase simultaneously, making it a true counterexample.
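A quick numerical check of the two-Gaussian example (base R):
```
peak_and_var <- function(a, s = 1) {
  f <- function(x) 0.5 * dnorm(x, -a, s) + 0.5 * dnorm(x, a, s)
  c(peak_density = optimize(f, c(0, a + 4 * s), maximum = TRUE)$objective,
    variance     = a^2 + s^2)
}
sapply(c(2, 5, 10), peak_and_var)  # peak density stays near 1/(2*sqrt(2*pi)), variance grows with a
```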
| null |
CC BY-SA 4.0
| null |
2023-04-18T13:42:09.510
|
2023-04-18T20:44:33.877
|
2023-04-18T20:44:33.877
|
335197
|
335197
| null |
613332
|
1
|
613345
| null |
0
|
40
|
I am doing some experimentation with multiple imputation (MI) for prediction, more specifically in the context of binary classification.
I'm doing this because there is not much to be found with regard to the application of MI for prediction.
I would like some pointers to actual research about this topic (multiple imputation in predictive modelling).
Until now, almost all research I've found is not about MI and prediction but rather about the imputation methods themselves.
I wonder why that is.
Just to show I did some effort: I did find [https://arxiv.org/abs/2003.07398](https://arxiv.org/abs/2003.07398) which is about feature selection after MI, which is in the good direction.
|
multiple imputation for prediction
|
CC BY-SA 4.0
| null |
2023-04-18T13:48:27.843
|
2023-04-18T14:48:51.790
|
2023-04-18T14:16:07.473
|
318278
|
318278
|
[
"classification",
"predictive-models",
"multiple-imputation"
] |
613336
|
1
| null | null |
0
|
58
|
Suppose that I have a list of numbers, say [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. These numbers are realisations of a random variable $X$ whose distribution I am interested in.
Suppose I want to estimate the 5th percentile of this distribution (to form a confidence interval for $X$). Intuitively, I would take the lowest number (i.e. 1) as my estimate, since 90% of the numbers are higher and 0% are lower (which in some sense 'averages out' to the desired 5%!). However, when I use numpy.percentile (with the default setting), it suggests an estimate of 1.45. Which estimate is better, and why?
Update: To clarify, my goal is to estimate the interval $I=[x,y]$ with $y > x$ such that $\mathbb{P}(X \in I)=0.90$, where $x$ and $y$ are the 5th and 95th percentiles respectively of the distribution of $X$.
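For reference, the two candidate estimates correspond to two different quantile definitions; in R:
```
x <- 1:10
quantile(x, 0.05, type = 7)  # 1.45, linear interpolation, the same convention as numpy's default
quantile(x, 0.05, type = 1)  # 1,    inverse empirical CDF, i.e. the "take the lowest number" logic
```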
|
Percentile of list of numbers
|
CC BY-SA 4.0
| null |
2023-04-18T13:57:24.990
|
2023-04-20T14:55:51.140
|
2023-04-20T14:31:01.120
|
290040
|
290040
|
[
"quantiles"
] |
613337
|
1
|
613429
| null |
1
|
93
|
I'd like to model the optimal quantile regression curves of probabilities as function of xa, using qgam ideally (mgcv-based R package), but not necessarily:
Data:
```
dat <-
structure(list(prob = c(0.043824528975438, 0.0743831343145038,
0.0444802301649798, 0.0184204002808217, 0.012747152819121, 0.109320069103749,
0.868637913750677, 0.389605665620339, 0.846536935687218, 0.104932383728924,
0.000796924809569913, 0.844673988202945, 0.00120791067227541,
0.91751061807481, 0.0140582427585067, 0.61360854266884, 0.55603090737844,
0.0121424615930165, 0.000392412410090414, 0.00731972612592678,
0.450730636411052, 0.0111896050578429, 0.0552971757296455, 0.949825608148576,
0.00216318997302124, 0.620876890784462, 0.00434032271743834,
0.809464444601336, 0.890796570916792, 0.0070834616944228, 0.0563350845256127,
0.913156468748195, 0.00605085671490011, 0.00585882020388307,
0.0139577135093548, 0.0151356267602558, 0.00357231467872644,
0.000268107682417655, 0.047883018897558, 0.137688264298974, 0.846219411361109,
0.455395192661041, 0.440089914302649, 0.312776912863294, 0.721283899836456,
0.945808616162847, 0.160122538485323, 0.274966581834218, 0.223500907500226,
0.957169102670141, 3.29173412975754e-05, 0.920710197397359, 0.752055893010363,
0.204573327883464, 0.824869881489217, 0.0336636091577387, 0.834235793851965,
0.00377210373002217, 0.611370672834389, 0.876156793482752, 0.04563653558985,
0.742493995255321, 0.42035122692417, 0.916359628728296, 0.182755925347698,
0.139504394672643, 0.415836463269909, 0.0143112277191436, 0.00611022961831899,
0.794529254262237, 0.000295836911230635, 0.88504245090271, 0.0320097205131667,
0.386424550101868, 0.724747784339428, 0.0374198694261709, 0.772894216412908,
0.243626917726206, 0.884082536765856, 0.649357153222083, 0.651665475576256,
0.248153637183556, 0.621116026311962, 0.254679380328883, 0.815492354289526,
0.00384382735772974, 0.00098493832845314, 0.0289740210412282,
0.919537164719931, 0.029914235716672, 0.791051705450356, 0.535062926433525,
0.930153425256182, 0.739648381556949, 0.962078822556967, 0.717404075711021,
0.00426200695619151, 0.0688025266083751, 0.30592683399928, 0.76857384388609,
0.817428136470741, 0.0101583095649087, 0.190150584186769, 0.949353043876038,
0.000942385744019884, 0.00752842476126574, 0.451811230189468,
0.878142444707428, 0.085390660867941, 0.705492062082986, 0.00776625091631656,
0.120499683875168, 0.871558791341612, 0.204175216963286, 0.88865934672351,
0.735067195665991, 0.111767657566763, 0.0718305257427526, 0.001998068594943,
0.726375812318976, 0.628064249939129, 0.0163105011142307, 0.585565544471761,
0.225632568540361, 0.914834452659588, 0.755043268549628, 0.44993311080756,
0.876058522964169, 0.876909380258345, 0.935545943209396, 0.856566304797687,
0.891579321327903, 0.67586664661773, 0.305274362445618, 0.0416387565225755,
0.244843991055886, 0.651782914419153, 0.615583040148267, 0.0164959661557421,
0.545479687527543, 0.0254178939123714, 0.00480000384583597, 0.0256296636591875,
0.776444262284288, 0.00686736233661002, 0.738267311816833, 0.00284628668554737,
0.0240371572079387, 0.00549270830047392, 0.91880163437759, 0.336534358175717,
0.276841848679916, 0.718008645244615, 0.0897424253787563, 0.0719730540202573,
0.00215797941000608, 0.0219160132143199, 0.797680147185277, 0.66612383359622,
0.946965411044528, 0.133399527090937, 0.343056247984854, 0.202570454449074,
0.00349712323805031, 0.919979740593237, 0.577123238372546, 0.759418264563034,
0.904569159000302, 0.0179587619909363, 0.785657258439329, 0.235867625712547,
0.959688292861383, 0.668060191654474, 0.0014774986557077, 0.00831528722028647,
0.669655207261098, 0.157824457113222, 0.110637023939517, 0.262525772704882,
0.112654002253028, 0.22606090266161, 0.157513622503487, 0.25688454756606,
0.00201570863346944, 0.70318409224183, 0.25568985167711, 0.810637054896326,
0.92708070974999, 0.608664352336801, 0.707490903842404, 0.00094520948858089,
0.106177223644193, 0.582785205597368, 0.0585327568963445, 0.377814739935042,
0.972447647118833, 0.0111118791692372, 0.58947840090326, 0.0111189166236961,
0.00317374095338712, 0.0664218007312096, 0.00227258301798719,
0.00198861129291917, 0.337443337988163, 0.750708293355867, 0.837530172974158,
0.627428065068903, 0.744110974625108, 0.00320417425932798, 0.871800026765784,
0.613647987816266, 0.808457030433619, 0.00486495461698562, 0.597950577021363,
0.000885253981642748, 0.0800527366346806, 0.00951706823839207,
0.125222576598629, 0.346018567766834, 0.0376933970313487, 0.157903106929268,
0.0371982251307384, 0.00407175432189843, 0.0946588147179984,
0.967274516618573, 0.169109953293894, 0.00124072042059317, 0.00259042255361196,
0.000400511359506596, 0.841289470209085, 0.807106898740506, 0.926962245924993,
0.814160745645036, 0.662558468801531, 0.000288068688170646, 0.698932091902567,
0.00242011818508616, 0.645573844423654, 0.517121859568318, 0.0931231998319089,
0.000877774529895907), xa = c(6.85, 7.65, 7.6, 6.65, 7.35, 8.6,
9.8, 8.25, 9.65, 7.6, 5.95, 11.75, 6.05, 10.75, 7, 8.25, 7.25,
7.55, 6.4, 7.45, 7.15, 7.1, 7.4, 8.85, 6.65, 7.75, 6.95, 7.25,
9.35, 6, 7.5, 9, 7.1, 7.75, 7.55, 6.95, 6.85, 5.8, 7.4, 7.45,
9.7, 8.1, 7.6, 8.1, 8.45, 9.45, 8, 7.25, 7.05, 9.5, 5.05, 10.15,
8.7, 7.7, 8.4, 7.5, 9.25, 6.85, 7.45, 11.85, 7.9, 7.6, 8.3, 10.35,
7.95, 7.9, 8.65, 7.05, 6.9, 9.6, 5.5, 12.2, 7.45, 7.5, 7.2, 7.05,
8.7, 7.25, 8.35, 8.45, 8.05, 8.05, 8.25, 7.7, 9, 6.95, 6.75,
6.55, 8.9, 7.4, 9.35, 8.45, 10.35, 8.65, 9.6, 8.75, 7.05, 7.8,
7.95, 8.4, 8.3, 7.6, 8.3, 8.7, 6.65, 7.1, 7.7, 10.1, 7.75, 9.05,
6.5, 6.3, 9.45, 7.7, 7.65, 8.15, 7.35, 7.6, 7.2, 8.35, 7.65,
6.8, 11.45, 7.35, 12.65, 9.15, 8.15, 10.6, 8.6, 11, 9.85, 9.2,
9, 7.8, 7.25, 7.65, 8.35, 8.4, 7.55, 7.55, 7.55, 6.95, 8.15,
8.65, 6.95, 8.5, 4.75, 7.3, 7.65, 9.15, 7.45, 8.2, 7.8, 7.3,
7.35, 6.1, 7.35, 7.25, 8.15, 9.55, 7.15, 7.15, 7.2, 7.2, 8.25,
8.7, 8.85, 10.35, 7.5, 7.45, 7.05, 15.15, 8.7, 6.15, 6.55, 16.05,
7.6, 6.55, 7.45, 7.6, 8.15, 6.05, 6.55, 6.65, 7.35, 7.3, 9.4,
10.05, 10.85, 8.5, 6.4, 7.15, 7.5, 6.25, 7, 9.55, 6.85, 8.2,
6.7, 7.2, 7.25, 7.05, 7.25, 6.9, 9.1, 9.4, 7.45, 7.8, 5.55, 7.8,
8.7, 7.65, 6.9, 8.25, 6.4, 7.5, 7.55, 7.95, 7.35, 7, 7.3, 6.65,
6.65, 6.9, 8.65, 8.25, 5.95, 6.55, 6.1, 7.7, 10.95, 11.15, 8.85,
7.35, 6, 7.75, 5.45, 7.55, 7.1, 7.35, 6.45)), row.names = c(NA,
-241L), class = "data.frame")
```
Code:
```
library(qgam)
library(mgcViz)
library(ggplot2)
# qgam
qg.5 <- qgamV(prob ~ s(xa, bs="tp", k=20), data = dat, qu = 0.5)
qg.25 <- qgamV(prob ~ s(xa, bs="tp", k=20), data = dat, qu = 0.25)
qg.75 <- qgamV(prob ~ s(xa, bs="tp", k=20), data = dat, qu = 0.75)
# add fitted values to dat
dat$fit_prob.5 <- qg.5[["fitted.values"]]
dat$fit_prob.25 <- qg.25[["fitted.values"]]
dat$fit_prob.75 <- qg.75[["fitted.values"]]
# predict xa at prob = 0.5
xa_at_prob.5 <- with(dat, approx(fit_prob.5, xa, xout=0.5)); xa_at_prob.5
# plot
ggplot(dat, aes(x = xa, y = prob)) +
geom_point() +
geom_line(aes(y = fit_prob.5), lwd = 1.2) +
geom_line(aes(y = fit_prob.25), lwd = 1.2, linetype = "longdash") +
geom_line(aes(y = fit_prob.75), lwd = 1.2, linetype = "longdash") +
geom_vline(xintercept = xa_at_prob.5[["y"]], linetype = "longdash", color = "black", linewidth = 0.6) +
scale_y_continuous(breaks = seq(0, 1, 0.1)) +
scale_x_continuous(breaks = seq(0, 16, 1)) +
geom_hline(yintercept=c(0, 0.5, 1), linetype="solid", color = "black", linewidth = 0.3) +
theme_classic(base_size = 20) +
coord_cartesian(xlim = c(4, 13))
```
However, looking at the graph below, I have two concerns:
Question 1: Some fit_prob.25 values are negative, as shown on the 0.25 quantile curve (blue area 1).
I think this is because qgam does not treat the prob values as probabilities, i.e., as values that must be constrained between 0 and 1.
Is that the problem? How can I solve it?
Question 2: Surprisingly, the curves don't seem to fit the data very well; there are more points in red zone 2 (about n=30) than in zone 3 (about n=10). Slightly more left-shifted and steeper quantile curves would seem to fit better (like the approximate red median curve segment). Is there an explanation for this inappropriate fit?
[](https://i.stack.imgur.com/2dLln.png)
Any help, advice or reference would be greatly appreciated
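Regarding Question 1, here is a sketch of the kind of workaround I am considering (fitting on the logit scale and back-transforming, which keeps the fitted quantile curves inside (0, 1) because quantiles are equivariant under monotone transformations); does this make sense?
```
dat$lprob <- qlogis(dat$prob)   # logit transform; requires 0 < prob < 1, which holds here
qg.25l <- qgamV(lprob ~ s(xa, bs = "tp", k = 20), data = dat, qu = 0.25)
dat$fit_prob.25_logit <- plogis(qg.25l[["fitted.values"]])  # back to the probability scale
```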
|
Why don't quantile regression curves (qgam / mgcv) fit the probabilities optimally?
|
CC BY-SA 4.0
| null |
2023-04-18T14:07:57.250
|
2023-04-20T11:34:30.987
|
2023-04-20T11:34:30.987
|
307344
|
307344
|
[
"r",
"probability",
"mgcv",
"quantile-regression"
] |
613338
|
1
| null | null |
0
|
22
|
I feel like this question has been asked a billion times but none of the answers I read seem to fit my situation, so sorry if it's redundant and I missed it !
Here's my situation :
I have 2 non-paired groups, A and B
- 13 individuals each
- Continuous variable
- Normal distribution (based on shapiro test)
I want to compare those two groups and see if there is a difference, so I went for a t-test, which yielded this result:
```
Welch Two Sample t-test
p-value = 0.001857
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
2.845288 10.861379
sample estimates:
mean of A mean of B
13.05333 6.20000
```
So that's great: there's a statistically significant difference between my two groups, and I can use the given means to get that the difference is 6.8533, CI95 (2.845, 10.86).
Now, I've been talking with my boss, who asked me for a ratio rather than a difference. Again, easy to do: I just divide mean A by mean B, and we get
ratio A/B = 2.1, meaning that group A has 2.1 times more XYZ than group B; easy to understand and to visualize.
But from there, I have no idea how I'm supposed to get an SD, SEM or CI for this ratio.
I've seen someone post this link, but note that I'm neither a classically trained statistician nor a mathematician, so I don't understand a thing of it =/
[https://en.wikipedia.org/wiki/Ratio_distribution#Means_and_variances_of_random_ratios](https://en.wikipedia.org/wiki/Ratio_distribution#Means_and_variances_of_random_ratios)
If any of you have a (hopefully easy-to-understand) answer, I would be so grateful.
Thank you very much!
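For concreteness, here is a sketch of the kind of simulation-based interval I would also understand (placeholder data standing in for my real groups A and B):
```
set.seed(1)
A <- rnorm(13, mean = 13.05, sd = 4)   # placeholder values, not my real data
B <- rnorm(13, mean = 6.20,  sd = 4)
ratios <- replicate(10000, mean(sample(A, replace = TRUE)) / mean(sample(B, replace = TRUE)))
sd(ratios)                             # a bootstrap standard error for the ratio
quantile(ratios, c(0.025, 0.975))      # a percentile bootstrap 95% interval
```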
|
How to calculate the SD, SEM, CI95 of a ratio?
|
CC BY-SA 4.0
| null |
2023-04-18T14:14:00.493
|
2023-04-18T14:14:00.493
| null | null |
286654
|
[
"standard-deviation",
"descriptive-statistics",
"ratio"
] |
613339
|
1
|
613342
| null |
3
|
37
|
The way I see it is that, since samples and features are chosen randomly, there is a chance, small as it may seem, that at the end of it all some samples and some features might end up not being used at all, while some might be used a lot more than others on average, which can lead to skewed and biased final results.
Does that make sense? If yes, is there a way to make sure that, at the end, throughout the whole forest, all samples and all features end up being used in trees the same number of times on average, ensuring that no sample/feature is left out?
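For concreteness, a quick sketch of the check I have in mind for the sample side (plain bootstrap resampling, ignoring feature subsampling):
```
set.seed(1)
n <- 100; B <- 500                       # observations and trees
used <- replicate(B, tabulate(sample(n, replace = TRUE), nbins = n) > 0)
range(rowMeans(used))                    # fraction of trees each observation lands in (~0.63 each)
```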
|
Can the randomness in random forests lead to some samples/features being used more/less than others, resulting in skewed and biased results?
|
CC BY-SA 4.0
| null |
2023-04-18T14:27:27.687
|
2023-04-18T14:43:19.763
| null | null |
386006
|
[
"python",
"random-forest"
] |
613340
|
2
| null |
604797
|
0
| null |
Take a look at this link ([https://github.com/mrborges23/delta_statistic](https://github.com/mrborges23/delta_statistic)) it might be helpful.
It is from this paper: Borges, R., Machado, J. P., Gomes, C., Rocha, A. P., & Antunes, A. (2019). Measuring phylogenetic signal between categorical traits and phylogenies. Bioinformatics, 35(11), 1862-1869.
| null |
CC BY-SA 4.0
| null |
2023-04-18T14:29:43.607
|
2023-04-18T14:29:43.607
| null | null |
386008
| null |
613341
|
1
|
613851
| null |
2
|
49
|
I'm studying simulation and in particular uncertainty in survival curve estimates, by drawing from a distribution of parameters (for the Weibull distribution), using the variance-covariance matrix from the MASS package. In response to post [How to generate simulation paths for Weibull distribution in R?](https://stats.stackexchange.com/questions/613198/how-to-generate-simulation-paths-for-weibull-distribution-in-r), I've come up with the following example. My question is, is this an adequate example for simulating the Weibull distribution and in deriving survival and hazard curve estimates?
I show the output plots below and the code at the bottom of this post.
[](https://i.stack.imgur.com/7J3rg.png)
Code:
```
library(survival)
library(MASS)
# Simulate survival data
n <- 100
t <- rweibull(n, shape = 2, scale = 3) # observed survival times for each individual in the study
cens <- rbinom(n, size = 1, prob = 0.2) # 1 = survival time right-censored (still alive/lost); 0 = event-censored
x1 <- rnorm(n) # x1 & x2 are hypothetical covariates (predictors) for individuals in the study
x2 <- rnorm(n) # see above
data <- data.frame(t, cens, x1, x2)
# Fit Cox PH model
model <- coxph(Surv(t, cens) ~ x1 + x2, data = data)
# Generate random samples from multivariate normal distribution
beta <- coef(model)
vcov <- vcov(model)
samples <- mvrnorm(1000, mu = beta, Sigma = vcov)
# Generate new survival times; new_time = simulated new survival times
data$new_time <- NA
for (i in 1:nrow(data)) {
shape <- exp(samples[i, "x1"])
scale <- exp(samples[i, "x2"])
data$new_time[i] <- qweibull(runif(1), shape = shape, scale = scale)
}
# Generate survival curves
fit <- survfit(Surv(new_time, cens) ~ 1, data = data)
sfit <- summary(fit)
tseq <- seq(min(sfit$time), max(sfit$time), length = 100)
surv <- survfit(Surv(new_time, cens) ~ 1, data = data.frame(new_time = tseq))
haz <- -diff(log(surv$surv))/diff(surv$time)
# Plot survival and hazard curves, and histogram of simulated survival times
par(mfrow=c(1,3))
plot(surv, xlab = "Time", ylab = "Survival Probability", main = "Survival Function")
plot(surv$time[-1], haz, type = "l",xlab = "Time", ylab = "Hazard Function", main = "Hazard Function")
hist(data$new_time, breaks = 20, xlab = "Survival Time", ylab = "Frequency", main = "Simulated Survival Times")
# Calculate average hazard rate per time period
haz_by_time <- split(haz, factor(floor(surv$time[-1])))
mean_haz <- unlist(lapply(haz_by_time, function(x) mean(x, na.rm = TRUE)))
```
|
Is this an adequate example for simulation and modeling uncertainty in survival and hazard curve estimates?
|
CC BY-SA 4.0
| null |
2023-04-18T14:31:38.380
|
2023-04-27T13:02:42.940
| null | null |
378347
|
[
"r",
"survival",
"simulation",
"cox-model",
"weibull-distribution"
] |
613342
|
2
| null |
613339
|
2
| null |
It does not make a lot of sense, because if we have enough trees, what is described won't happen. The way to ensure this is to monitor how the overall random forest (RF) performance changes as more trees are added (see the CV.SE thread [Do we have to tune the number of trees in a random forest?](https://stats.stackexchange.com/questions/348245/do-we-have-to-tune-the-number-of-trees-in-a-random-forest/348246#348246) for an in-depth discussion of that; Sycorax's answer is gold). Notice that RFs primarily reduce variance, not bias, i.e. any single base learner is "as biased as" the overall ensemble; the ensemble's prediction is just more stable via [bagging](https://en.wikipedia.org/wiki/Bootstrap_aggregating).
| null |
CC BY-SA 4.0
| null |
2023-04-18T14:43:19.763
|
2023-04-18T14:43:19.763
| null | null |
11852
| null |
613343
|
1
|
613421
| null |
4
|
79
|
By Chebyshev's inequality it is known that
$$\mathbb{P}\left(|X-\mu|<k\sigma\right) \geq 1-\frac{1}{k^2}\,.$$
Then does it follow that
$$\mathbb{P}\left((\mu-k\sigma)^2 \leq X^2 \leq (\mu +k\sigma)^2\right) \geq 1-\frac{1}{k^2}$$
assuming $\mu > k\sigma$?
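To spell out the step being relied on: since $\mu - k\sigma > 0$ and $t \mapsto t^2$ is increasing on $[0,\infty)$, the question amounts to whether the event inclusion
$$\{|X-\mu| < k\sigma\} = \{\mu - k\sigma < X < \mu + k\sigma\} \subseteq \{(\mu - k\sigma)^2 \leq X^2 \leq (\mu + k\sigma)^2\}$$
is enough, by monotonicity of probability, to carry the bound over.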
|
Using Chebyshev's inequality on $X$ to inform distribution on $X^2$
|
CC BY-SA 4.0
| null |
2023-04-18T14:46:13.287
|
2023-04-19T06:44:50.840
|
2023-04-18T16:16:36.277
|
362671
|
386010
|
[
"probability",
"probability-inequalities"
] |
613344
|
1
|
613379
| null |
12
|
1203
|
On wikipedia, the definition of a martingale is given as follows:
>
A basic definition of a discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) $X_1, X_2, X_3,\dots$ that satisfies for any time $n$,
$$\mathbb{E}(|X_n|) < \infty$$
$$\mathbb{E}(X_{n+1}\mid X_1\dots,X_n) = X_n.$$
That is, the conditional expected value of the next observation, given all the past observations, is equal to the most recent observation.
while in Rick Durrett's textbook Probability: Theory and Examples, the martingale is defined as follows:
>
In this section we will define martingales and their cousins supermartingales and submartingales, and take the first steps in developing their theory. Let $\mathcal{F}_n$ be a filtration, i.e. an increasing sequence of $\sigma$-fields. A sequence $X_n$ is said to be adapted to $\mathcal{F}_n$ if $X_n\in\mathcal{F}_n$ for all $n$. If $X_n$ is a sequence with
$\mathbb{E}|X_n| < \infty$
$X_n$ is adapted to $\mathcal{F}_n$,
$\mathbb{E}(X_{n+1}|\mathcal{F}_n) = X_n$ for all $n$,
then $X$ is said to be a martingale (with respect to $\mathcal{F}_n$). If in the last definition, $=$ is replaced with $\leq$ or $\geq$, then $X$ is said to be a supermartingale or submartingale, respectively.
The wikipedia definition seems quite intuitive and reasonable to me, that is, a martingale is just a process in which the expected value tomorrow given past history is always equal to today's value (with a special example being a random walk process). My questions are:
Why is it necessary to use sigma-fields instead of just the past history in the definition? Do we gain anything by using the more sophisticated definition? I simply could not think of a situation in which this more sophisticated definition is needed.
I'm particularly interested in these questions as the two definitions seem non-nested, i.e., the Durrett definition does not contain the Wikipedia definition as a special case, since the set of past history $\{X_1,...,X_n\}$ is not a sigma-field (it needs to include at least the empty set too).
It would be great if you could illustrate with a simple example (such as a random walk or other processes).
|
What's the relationship between these two definitions of martingales?
|
CC BY-SA 4.0
| null |
2023-04-18T14:48:31.000
|
2023-04-21T16:30:04.520
|
2023-04-20T01:39:39.460
|
141174
|
224576
|
[
"probability",
"conditional-expectation",
"definition",
"martingale",
"sigma-algebra"
] |
613345
|
2
| null |
613332
|
0
| null |
In principle, the (or at least one) right way to do this is clear (see e.g. [this question & answer](https://stats.stackexchange.com/questions/517400/machine-learning-on-mice-imputed-data/542179#542179)), but it is also time-consuming, which I assume is why it's usually not done: do multiple imputation and build the model separately on each of the multiple imputations (of course being careful to [impute after training-validation-test-splitting](https://stats.stackexchange.com/questions/608120/proper-imputation-of-missing-values-for-machine-learning/608126#608126)); if the new data has no missing values, then the overall prediction is the average of the predictions from each of these models (see e.g. the [example](https://stats.stackexchange.com/questions/605474/can-missing-data-imputations-outperform-default-handling-for-lightgbm/605485#605485) here). If the new data has missing values, then you ideally should impute them coherently, i.e. create as many multiple imputations and use the implicit parameter values from the original multiple imputation for each imputation. That is easiest if you did your multiple imputation explicitly with a Bayesian model fit using MCMC, and a bit less easy for some imputation packages that abstract this away (and don't offer this option, which I think illustrates that people building tools for multiple imputation mostly think of inference on a fixed dataset rather than about prediction).
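As a rough sketch of that first workflow (using the mice package, a binary outcome named `y`, and hypothetical data frames `train` and `newdata`, with no missingness in the new data):
```
library(mice)
set.seed(1)
imp  <- mice(train, m = 5, printFlag = FALSE)    # 5 imputations of the training data
fits <- lapply(1:5, function(i)
  glm(y ~ ., data = complete(imp, i), family = binomial))
# overall prediction for complete new data: average the 5 predicted probabilities
p_new <- rowMeans(sapply(fits, predict, newdata = newdata, type = "response"))
```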
Of course, it's not necessary to fit models separately for each imputation, although that's a common approach see e.g. the discussion in [BDA3](http://www.stat.columbia.edu/%7Egelman/book/BDA3.pdf) starting from page 449 (esp. from 452 onwards, see also Zhou, X. and Reiter, J.P., 2010. [A note on Bayesian inference after multiple imputation](https://doi.org/10.1198/tast.2010.09109). The American Statistician, 64(2), pp.159-163.). E.g. in Bayesian MCMC samplers, there's approaches that jointly look at the likelihood over all imputations. Alternatively, I'd guess you could do things like picking a different imputation randomly for each tree (or even for each node split) in tree-based algorithms (like random forest or gradient boosted decision trees), but I've never seen that implemented.
| null |
CC BY-SA 4.0
| null |
2023-04-18T14:48:51.790
|
2023-04-18T14:48:51.790
| null | null |
86652
| null |
613346
|
1
|
613350
| null |
3
|
54
|
For testing distributions existing in base R, say normal or beta distributions, is there any difference if I use a one-sampled or two-sampled Kolmogorov-Smirnov test with same parameters?
For example:
ks.test(x, "pbeta", shape1, shape2)
Or
ks.test(x, rbeta(n, shape1, shape2))
Of course, with n being big enough; in my case study, x has 100 continuous values between 0 and 1.
|
Are there any differences in R if I use a "pnorm" or create a distribution with rnorm for Kolmogorov-Smirnov?
|
CC BY-SA 4.0
| null |
2023-04-18T15:06:45.070
|
2023-04-18T15:45:26.950
|
2023-04-18T15:19:51.247
|
247165
|
260956
|
[
"r",
"distributions",
"kolmogorov-smirnov-test"
] |
613347
|
1
| null | null |
1
|
40
|
I've been trying to understand how to properly conduct a glmer model but I've come across so much contradictory information, I'm hoping someone could help me out.
- What exactly are the assumptions of my glmer model?
- What functions and packages do you recommend for testing them?
- Can I still test the assumptions if my model used for assumption testing doesn't converge?
- What can you do if the model doesn't converge? I'm grateful for any tips.
This is my model btw:
```
m.cluster1 <- glmer(formula = binary_var ~ continous_pred1 + binary_gender + binary_educationlevel2 + binary_educationlevel3 + binary_languagelevel2 + binary_languagelevel3 + binary_locationdensity2 + binary_locationdensity3 + binary_locationdensity4 +
(1|state), weights=final_weights, data=data, family=binomial(link="logit"), control=glmerControl(optCtrl = list(maxfun=2000000000)))
```
Thank you in advance!
|
Assumptions and troubleshooting for glmer (lme4) Generalized Linear Mixed-Effects Model
|
CC BY-SA 4.0
| null |
2023-04-18T14:43:26.787
|
2023-04-18T21:30:12.253
| null | null | null |
[
"r",
"lme4-nlme"
] |
613349
|
1
| null | null |
0
|
19
|
I am trying to develop an app to help users find the best value-for-money cars based on the real price of car ownership, which is the sum of depreciation, gas consumption, maintenance, and insurance.
I need to convert depreciation prices into an equation. The depreciation curve is not linear and may have several gradients, as shown in the following graph. In most depreciation graphs, the curve initially drops fast and becomes more subtle later.
See the following graph:
[](https://i.stack.imgur.com/1N6fM.png)
How can such an equation be constructed?
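One common functional form for such curves is exponential decay towards a residual value; a sketch with `nls` in R, where the data frame `dat` with columns `age` and `price` and all starting values are hypothetical:
```
# price ~ a * exp(-b * age) + c : fast initial drop, flattening out at roughly c
fit <- nls(price ~ a * exp(-b * age) + c,
           data  = dat,
           start = list(a = 20000, b = 0.3, c = 2000))
summary(fit)
predict(fit, newdata = data.frame(age = 0:10))
```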
|
Creating an equation for car prices depreciation
|
CC BY-SA 4.0
| null |
2023-04-18T15:08:42.723
|
2023-04-21T13:19:41.953
|
2023-04-21T13:19:41.953
|
386011
|
386011
|
[
"regression"
] |
613350
|
2
| null |
613346
|
4
| null |
The second option will depend on the specific data set you generate, so it will be affected by random variation. Chances are that this implies the second test has worse statistical properties (power in particular). You could in principle find out about this by simulating many data sets and estimating power and type I error probability, but I'm not sure whether that's worthwhile, because intuitively it's rather obvious that you sacrifice precision using the second approach.
Chances are also that you are right in believing that, with the sample size n of the generated random data set going to infinity, you may ultimately get something equivalent, but of course large n also means more computational effort. So whereas this is a fun idea, I have a subjective probability north of 99% that version 1 will be superior (I don't give it 100% as I can't prove it, at least not in reasonably short time), as version 2 basically does the same thing but with random noise introduced, which won't help matters.
I add that there is another issue with version 2, which is that the rbeta command uses a pseudo-random number generator that in fact isn't truly random Beta. Chances are this won't make much of a difference, though, unless n is extremely large.
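A rough sketch of the kind of simulation mentioned above (made-up Beta parameters; power against a nearby alternative estimated by the rejection rate at the 5% level):
```
set.seed(1)
B <- 2000; n <- 100; m <- 1000
p1 <- replicate(B, ks.test(rbeta(n, 2, 5), "pbeta", 2, 4)$p.value)                     # version 1
p2 <- replicate(B, suppressWarnings(ks.test(rbeta(n, 2, 5), rbeta(m, 2, 4))$p.value))  # version 2
mean(p1 < 0.05); mean(p2 < 0.05)   # estimated power of each version
```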
PS: whuber in the comments writes: "Imagine, for instance, a proposal in a textbook Binomial(n,p) situation to test the hypothesis p=1/2 by generating a second Binomial(n,1/2) sample and conducting a two-sample t-test, rather than applying the usual one-sample t-test. That should help make the issues more apparent."
| null |
CC BY-SA 4.0
| null |
2023-04-18T15:18:45.510
|
2023-04-18T15:45:26.950
|
2023-04-18T15:45:26.950
|
247165
|
247165
| null |
613351
|
2
| null |
613295
|
1
| null |
Residual connections are used in many different types of architectures (e.g., transformers). Whenever you have a question about whether a technique is going to work, it's best to try it and see if your CV score or whatever you're using improves.
The easiest way to use residual connections in a feed-forward network is to use the same width for all hidden layers. If you want to use different widths, you can use linear layers to reshape the inputs so that they're conformable with the outputs. For example, if your input $x$ to a layer $f$ is $N$-dimensional and the output is $M$-dimensional, then you'd want to introduce an $M\times N$ array of weights $W$ and an $M$-dimensional bias term $b$, and use them so that your layer output is
$$f(x) + (b + Wx)$$
So it's close to a residual connection, but not exactly the same thing.
Also, you'll likely need to pass your input features through a linear layer so you can add them to the output of your first hidden layer.
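A minimal numeric sketch of that projection idea in plain R (no deep learning framework; a single dense layer with ReLU, and all weights random just to show the shapes):
```
set.seed(1)
N <- 8; M <- 16
x   <- rnorm(N)
W_f <- matrix(rnorm(M * N), M, N); b_f <- rnorm(M)   # the layer f's own weights
W   <- matrix(rnorm(M * N), M, N); b   <- rnorm(M)   # the projection shortcut's W and b
f_x <- pmax(W_f %*% x + b_f, 0)                      # f(x): dense layer + ReLU
out <- f_x + (b + W %*% x)                           # f(x) + (b + W x), as above
dim(out)                                             # M x 1: conformable by construction
```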
| null |
CC BY-SA 4.0
| null |
2023-04-18T15:19:43.203
|
2023-04-19T12:53:02.917
|
2023-04-19T12:53:02.917
|
40513
|
40513
| null |
613352
|
1
| null | null |
1
|
33
|
I have data on every species in a genus, and am interested in how two factors relate to each other specifically within that genus. Should I use a statistical test to do this or does it not make sense? I'm thinking it doesn't make sense, because the point of a (NHST) statistical test is to see how confident you can be that the pattern in your sample isn't due to chance, but I have the full population so I know the true values. Is this correct or should I be running a test for other reasons?
|
Should I use null hypothesis significance testing when I have the full population?
|
CC BY-SA 4.0
| null |
2023-04-18T15:20:58.007
|
2023-04-18T15:32:05.503
| null | null |
158612
|
[
"hypothesis-testing",
"sampling",
"sample",
"population"
] |
613354
|
1
|
613381
| null |
4
|
333
|
In the seminal paper by [Tanner and Wong (1987)](https://www.stat.cmu.edu/%7Ebrian/905-2009/all-papers/tanner-wong-1987-with-disc.pdf) on data augmentation, they describe a method for obtaining the posterior distribution $p(\theta|y)$ by data augmentation. Let $Z$ be a latent variable (which will be the augmenting data) so that the posterior distribution $p(\theta|z,y)$ is tractable. Suppose for simplicity $\theta$ and $Z$ are both univariate.
At a high level, my understanding of the process described in Tanner and Wong is the following (please correct if not right):
$$p(\theta|y)=\int p(\theta|z,y)p(z|y)dz$$
$$p(z|y)= \int_\Theta p(z|\phi,y)p(\phi|y)d\phi$$
up to some constant and assuming conditions that allow the exchange of integrals, we see that
\begin{equation}
p(\theta|y)=\int p(\theta|z,y) \int p(z|\phi,y)p(\phi|y)d\phi dz=\int K(\theta,\phi)p(\phi|y)d\phi \hspace{.2cm} (1)
\end{equation}
Under mild conditions, this establishes a recursion,
$$p_{i+1}= \int K p_{i} d\phi$$
whose stationary state is the true distribution $p(\theta|y)$. So you need just run this long enough.
Now in middle term of (1) we see what we need to do. Obtain draws of $z|y$ using draws $\phi^{(s)}$ from $p_{i}(\phi|y)$ and then draws $z^{(s)}$ from $p(z|\phi^{(s)},y)$ (as one would with any posterior predictive distribution that is unobtainable analytically) and then approximate $p_{i+1}(\theta|y)$ using Monte Carlo integration (i.e. a mixture of conditional posteriors). Repeat until $i$ large enough.
This method relies on knowing or being able to draw from $p(\theta|z,y)$ and $p(z|\theta,y)$. So why not just use Gibbs sampling and work with draws of $\theta$ and disregard draws of $z$?
Does Gibbs not work or is it because they serve a different purpose? That is, the Tanner and Wong method gives a mixture distribution which is eventually an approximate of the true posterior $p(\theta|y)$ from which you can work as opposed to Gibbs which just gives you draws from the desired posterior?
|
Monte Carlo Options for Data Augmentation
|
CC BY-SA 4.0
| null |
2023-04-18T15:31:27.097
|
2023-04-19T11:22:04.460
|
2023-04-19T11:22:04.460
|
281443
|
281443
|
[
"bayesian",
"gibbs",
"data-augmentation"
] |
613355
|
2
| null |
613352
|
1
| null |
The point of doing statistical inference of any kind, be it hypothesis testing, creating confidence intervals, or using Bayesian methods, is to use the available data (the known) to infer something about a greater population from which the data are drawn (unknown) and quantify the uncertainty in your inference since you are dealing with the unknown.
If you only have the known, you do not have to infer anything about the unknown, and hypothesis testing is superfluous.
That said, it would be unusual to have the entire population under study. Indeed, the full “population” is typically a data-generating process (DGP) that is infinite. It might be that you really do have the entire population you want to study, but even if you do, it is possible that you really want to know about the DGP, and you can use the data to draw inferences about the DGP.
| null |
CC BY-SA 4.0
| null |
2023-04-18T15:32:05.503
|
2023-04-18T15:32:05.503
| null | null |
247274
| null |
613356
|
2
| null |
613290
|
2
| null |
I think we should have a closer look at your hypothesis:
>
The patients’ decrease in kidney function from baseline to follow-up are caused by a natural/expected decrease, and thus, not their disease.
Given that you are interested in kidney disease vs. natural decrease causing the (extent of) decrease in kidney function, you are interested in a causal question (i.e., does x cause y, or does kidney disease cause more decrease in kidney function as compared to no kidney disease).
In a hypothetical world, we would want to know the decrease in kidney function of an individual when they would have had kidney disease vs. when they would not have had kidney disease. However, an individual has kidney disease or does not have kidney disease. We cannot observe the outcome under both states of the exposure, or in other words, one of the outcomes is counter to fact. Because we do not know these counterfactual outcomes in individuals, we cannot estimate a so called individual causal effect.
Luckily, we can try to get an estimate of a different causal effect; the average causal effect. An average causal effect would be the effect of x on y on a group level, where we compare individuals who are exposed (who do have kidney disease) versus those who are unexposed (who do not have kidney disease). If we meet some assumptions, we could validly estimate the effect of having vs. not having kidney disease on the decrease in kidney function. These assumptions are the following:
- Exchangeability: the two groups you are comparing (kidney disease vs. no kidney disease) have the same distribution of risk factors for the outcome; their chance of their kidney function decreasing by x mL/min/1.73m2 is equal, outside the possibility of kidney disease influencing this.
- Positivity: each individual has a non-zero probability of having kidney disease or not having kidney disease
- Consistency: when we say kidney disease and no kidney disease, it is sufficiently clear what we mean. Kidney disease could mean diabetic nephropathy, glomerulonephritis, etc. In your case you said a specific kidney disease, so possibly within that disease there are further variations, but those variations should not matter for the outcome. An example of no consistency would be studying 'low BMI' as a cause of death, where low BMI could mean high metabolism, amputation of a leg, malnutrition, or other things, that all have very different prognoses for death.
To meet these assumptions, we can choose a design of a study (e.g., a randomized clinical trial, although it is not ethical and rather difficult to randomize individuals to kidney disease or not) and choose specific analyses. Given that a randomized clinical trial would likely not be accepted by any medical ethical review board, you could choose to use observational, non-experimental data. However, in observational data, exchangeability often does not hold and requires analyses such as regression or more advanced methods to approach (although impossible to be certain about).
However, in your question you mention that the data are all on patients with that specific kidney disease. We are therefore missing a group without kidney disease against which to compare kidney function, and so we cannot draw a causal conclusion.
You could describe the decrease in kidney function and look for other data on a healthy population, but this quickly impedes any causal inference. If you just want to describe the decrease in kidney function, you could indeed use a linear regression model, although you then make the assumption the decrease in kidney function over time follows a linear pattern.
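If you do go that purely descriptive route, a minimal R sketch (with simulated, hypothetical eGFR values standing in for your data) could look like the following; note that it only describes the average slope and carries no causal claim:
```
set.seed(1)
d <- data.frame(id = rep(1:30, each = 2),
                months = rep(c(0, 12), times = 30)) # baseline and 1-year visit
d$egfr <- 90 - 0.3 * d$months + rnorm(nrow(d), sd = 5) # hypothetical eGFR values
fit <- lm(egfr ~ months, data = d)
coef(fit) # average change in eGFR per month, descriptive only
```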
To summarize: you are asking a causal question (does having the kidney disease change the decrease in kidney function when compared to no kidney disease [i.e., natural course]), but do not have the data to answer this causal question.
On a side note: your interpretation of the R-squared is correct: your model explains ~70% of variation in the outcome.
If you do have a control group without kidney disease available, you should still be wary of the following:
- Make sure you have an adequate sample size, n = 30 is rather small and will impede many conclusions due to statistical uncertainty
- Be careful in choosing when to start follow-up: there are many biases as a result of picking the wrong moment (see references for more on this)
- Clearly define the question you are interested in: what groups do you want to compare
Here are some references which talk about the above and go more into depth on the topic of causal inference:
- On biases when picking the wrong start of follow-up and other easy to avoid biases when working with observational data: Fu EL, van Diepen M, Xu Y, Trevisan M, Dekker FW, Zoccali C, Jager K, Carrero JJ. Pharmacoepidemiology for nephrologists (part 2): potential biases and how to overcome them. Clin Kidney J. 2020 Dec 14;14(5):1317-1326. doi: 10.1093/ckj/sfaa242. PMID: 33959262; PMCID: PMC8087121.
- Using observational data to emulate a randomized controlled trial: Hernán MA, Robins JM. Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. Am J Epidemiol. 2016 Apr 15;183(8):758-64. doi: 10.1093/aje/kwv254. Epub 2016 Mar 18. PMID: 26994063; PMCID: PMC4832051.
- A book by one of the leaders in causal inference with intuitive and mathematical explanations for drawing causal conclusions: Hernán MA, Robins JM (2020). Causal Inference: What If. Boca Raton: Chapman & Hall/CRC.
- If the book by Hernán is a bit much, Westreich offers a lighter introduction to causal inference: Westreich D (2020). Epidemiology by Design: A Causal Approach to the Health Sciences.
| null |
CC BY-SA 4.0
| null |
2023-04-18T15:35:07.533
|
2023-04-18T15:35:07.533
| null | null |
385890
| null |
613357
|
1
| null | null |
0
|
11
|
I'm coming from [here](https://stats.stackexchange.com/questions/106121/does-it-make-sense-to-combine-pca-and-lda) and [here](https://stats.stackexchange.com/questions/54547). I kind of understand what running PCA before LDA does. However, I don't really understand on what I should perform the LDA. On the correlation matrix? Here I have all variables centered and scaled and ready to be used as "weighted" predictors.
Which would be the steps to follow?
|
Does it make sense to use the correlation matrix from PCA in LDA?
|
CC BY-SA 4.0
| null |
2023-04-18T15:46:37.483
|
2023-04-18T15:46:37.483
| null | null |
339186
|
[
"pca",
"discriminant-analysis"
] |
613358
|
2
| null |
613344
|
5
| null |
What does the notation $\mathbb E[X\mid X_1,\cdots,X_n]$ mean?
Go to the basics: consider $X\in \mathcal L_1(\Omega, \boldsymbol{\mathfrak B}, \mathbb P)$ and let $\mathcal G\subset\boldsymbol{\mathfrak B}$ defined as the sigma algebra generated by a list of random variables on the same space, i.e. $\mathcal G:=\sigma(X_i, ~i\in\mathcal I). $ Then the conditional expectation of $X$ w.r.t. $\mathcal G$ is a $\mathcal G$ observable random variable $\mathbb E[X\mid\mathcal G]:=\mathbb E[X\mid X_i, ~i\in\mathcal I].$
(See the German [Wikipedia](https://de.wikipedia.org/wiki/Bedingter_Erwartungswert#Formale_Definition) on the same for the usage of the notation).
More interesting thing is how (sub)sigma algebra comes in play: sigma algebra represents information. What is$^\dagger$ information? It is the ability of deciding if an event $G$ belonging to a certain paving $\mathcal G \subset\boldsymbol{\mathfrak B}$ has been realized or not. But if we are aware of whether $G$ happens or not, then we can also comment on whether $G^\complement$ occurs or not. In the same vein, if we have the ability to know whether $G_i,~i\in\{1,2,\ldots, n\}$ occurs or not, clearly we can tell about the realization of $\cup_{i=1}^n G_i.$ This indicates the paving $\mathcal G$ that is bearing the information must be a sigma algebra.
Now we would like to add a dynamic aspect to this. That is, we would update our information at discrete units of time. Let $\mathcal G_n$ be the sigma-algebra interpreted as the "information available at time $n.$" If $T:=\{n\in\mathbb Z\mid \alpha\leq n\leq\beta\}$ is an integer interval, then a filter with time set $T$ is a list of sigma algebras $\langle \mathcal G_n\rangle_{n\in T}$ such that $$\forall m, n\in T, ~n\leq m, ~\mathcal G_n\subseteq \mathcal G_m\subseteq\boldsymbol{\mathfrak B};$$ the relation implies as time progresses, the information accumulates and increases.
Another relevant aspect needed to be highlighted is that if $\langle X_n, \mathcal G_n\rangle_{n\in T}$ is a martingale, then $\langle X_n, \sigma(X_i\mid \alpha\leq i\leq n)\rangle_{n\in T}$ is a martingale - this follows from the fact that $\sigma(X_i\mid \alpha\leq i\leq n)\subset \mathcal G_n$ and smoothing. We could have worked with $ \sigma(X_i\mid \alpha\leq i\leq n)$ instead of $\mathcal G_n$ then; however it is not worthless to have some "auxiliary information" all along.
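For a minimal worked example (my own illustration, not taken from the references below): let $X_1, X_2, \ldots$ be independent with $\mathbb E[X_i]=0,$ set $S_n:=X_1+\cdots+X_n$ and $\mathcal G_n:=\sigma(X_1,\ldots,X_n).$ Then $$\mathbb E[S_{n+1}\mid \mathcal G_n]=\mathbb E[S_n+X_{n+1}\mid \mathcal G_n]=S_n+\mathbb E[X_{n+1}]=S_n,$$ since $S_n$ is $\mathcal G_n$-measurable and $X_{n+1}$ is independent of $\mathcal G_n$; hence $\langle S_n,\mathcal G_n\rangle_{n\geq 1}$ is a martingale with respect to its natural filtration.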
---
$^\dagger$ Billingsley explicitly pointed out information used, thus, is "informal, nonmathematical term".
---
## References:
$\rm [I]$ A Probability Path, Sidney Resnick, Birkhäuser, $1999, $ sec. $10.2, ~10.4.$
$\rm [II]$ Probability with a View Toward Statistics, Vol. $\rm I, $ J. Hoffman-Jørgensen, Springer Science$+$Business, $1994, $ sec. $6.1, ~7.1.$
| null |
CC BY-SA 4.0
| null |
2023-04-18T16:13:06.997
|
2023-04-19T19:54:00.943
|
2023-04-19T19:54:00.943
|
362671
|
362671
| null |
613359
|
1
| null | null |
0
|
8
|
I have a question in which I am given the expected results for a weighted lottery as conditional probabilities (entrants receive different numbers of tickets in the lottery based on gender and race). I also have the actual conditional probabilities calculated from the lottery data. How would I conduct a hypothesis test to see whether the lottery is rigged or not?
|
Unequally weighted lottery hypothesis test
|
CC BY-SA 4.0
| null |
2023-04-18T16:14:55.963
|
2023-04-18T16:14:55.963
| null | null |
386016
|
[
"hypothesis-testing",
"conditional-probability",
"weighted-sampling"
] |
613360
|
1
| null | null |
0
|
21
|
I have two main questions - but I'll give a brief background first.
I've run a mediation logistic regression using Hayes Process macro (v4.3), model 4. It's a parallel mediation, with a categorical IV and 4 mediators.
My IV is family type (single parent, social care, multigenerational, etc.), with 'both biological parents' as the reference category. They're being regressed onto aggression scores.
My questions are:
- I'm just writing out my results, and I appreciate that the output breaks it down into each category vs. the reference, but I also note that there is an 'omnibus test of direct effect of X on Y'. Would I be right in thinking that this is the overall effect of family type on aggression? So if this is significant, I could say family structure predicts aggression, and then break each categorical output down?
- Is there a way to work out effect sizes with a multicategorical IV? I note it isn't an option on v4 of Hayes Process, but was on v3. Can I do this another way?
Thank you so much in advance!
|
Hayes process macro (v4.3) - Multicategorical IV Mediation: omnibus and effect size questions?
|
CC BY-SA 4.0
| null |
2023-04-18T16:15:49.727
|
2023-04-18T16:15:49.727
| null | null |
384898
|
[
"regression",
"spss"
] |
613361
|
1
| null | null |
0
|
43
|
So I am trying to solve a risk problem using Monte Carlo and the inverse normal. I run 5000 simulations of invNorm$(x,\mu,\sigma)$, where $x$ is a random number from 0 to 1, and then I count how many of those 5000 simulations are less than or equal to a requirement of .95 to get a risk score. The thing I am trying to understand is that if I just use the CDF Norm$(.95,\mu,\sigma)$, I get roughly the same answer. Why does that happen? Should I just use the normal distribution instead and not have to run 5000 simulations, or is there a better approach?
|
Why when doing a Monte Carlo Simulation using the inverse normal giving me the same answer as just using the normal?
|
CC BY-SA 4.0
| null |
2023-04-18T16:38:14.003
|
2023-04-18T17:15:17.443
| null | null |
386017
|
[
"normal-distribution",
"monte-carlo"
] |
613362
|
2
| null |
613283
|
1
| null |
Response to updated question: If I sample 100 data points from two distributions of the same type, but one with a lower variance and one with a higher variance, would the former one have higher likelihood?
The likelihood $L(\theta|X)$ depends on both the parameter $\theta$ and the random sample $X$, so it's hard to know what is meant by a "higher likelihood." If
- you have one sample $X$ taken from one distribution $P$, and
- all parameters of $P$ are known save the variance $\sigma^2$, and
- you know that $\sigma^2\in\{\sigma^2_1,\sigma^2_2\}$, where $\sigma^2_1<\sigma^2_2$,
then the likelihood ratio $\frac{L(\sigma^2_1|X)}{L(\sigma^2_2|X)}\equiv \prod_{i=1}^{100} \frac{ p(x_i|\sigma^2_1)}{ p(x_i|\sigma^2_2)}$ reflects the evidence in favor of either of the two possible values of $\sigma^2$. A very high dispersion of $X$ will result in $L(\sigma^2_1|X)<L(\sigma^2_2|X)$, but this is a higher likelihood only in the sense that $\sigma_2^2$ is more likely than $\sigma_1^2$ to be the true value of $\sigma^2$. If the dispersion of $X$ is very low, of course, you will have $L(\sigma^2_1|X)>L(\sigma^2_2|X)$.
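To make this concrete, here is a minimal R sketch (assuming a known mean of $0$ and candidate variances $\sigma^2_1=1$ and $\sigma^2_2=4$): the sign of the log-likelihood ratio shows which variance the sample favours.
```
set.seed(1)
x <- rnorm(100, mean = 0, sd = 2) # true sigma^2 = 4
llr <- sum(dnorm(x, 0, sd = 1, log = TRUE)) - sum(dnorm(x, 0, sd = 2, log = TRUE))
llr # negative here, i.e. L(sigma_1^2 | x) < L(sigma_2^2 | x)
```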
| null |
CC BY-SA 4.0
| null |
2023-04-18T16:47:17.263
|
2023-04-18T16:52:15.327
|
2023-04-18T16:52:15.327
|
384444
|
384444
| null |
613363
|
2
| null |
612775
|
3
| null |
What you're looking for is almost exactly what "Metropolis-Hastings for Ratio-of-Uniforms" tries to do. There are unfortunately no known papers on this topic, but what is available are some slides by Luke Tierney: [http://homepage.stat.uiowa.edu/~luke/talks/bormio05.pdf](http://homepage.stat.uiowa.edu/%7Eluke/talks/bormio05.pdf)
Ratio-of-Uniforms is a way to generate random samples from a given distribution $F$ with density $f$ (need not be defined on any bounded set). However, in order to implement Ratio-of-Uniforms, one needs to obtain uniform draws from a bounded set $\mathscr{A}$ where this is:
$$
\mathscr{A} = \left\{ (u,v): 0 \leq u \leq \sqrt{f(v/u)} \right\}\,.
$$
If $(U,V) \sim \text{Unif}(\mathscr{A})$, then $V/U \sim F$. This is typically done by enclosing the set $\mathscr{A}$ within a rectangle and then implementing iid Accept-Reject sampling. However, this can be fairly inefficient if $\mathscr{A}$ is weird and high-dimensional.
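For intuition, here is a minimal Accept-Reject sketch of this construction for a standard normal target (the rectangle bounds $0 \leq u \leq 1$ and $|v| \leq \sqrt{2/e}$ are specific to this example):
```
f <- function(x) exp(-x^2 / 2) # unnormalized N(0, 1) density
n <- 1e5
u <- runif(n, 0, 1) # sup sqrt(f) = 1
v <- runif(n, -sqrt(2 / exp(1)), sqrt(2 / exp(1))) # sup |x| sqrt(f(x))
x <- (v / u)[u <= sqrt(f(v / u))] # keep ratios whose (u, v) fall in A
c(mean(x), var(x), length(x) / n) # roughly 0, 1, and the acceptance rate
```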
Tierney presents a few different ideas on how Markov chains can be constructed to sample uniformly on this set $\mathscr{A}$ instead of rejection sampling.
There is also this Master's thesis project that explores this for a specific problem: [https://etda.libraries.psu.edu/files/final_submissions/1226](https://etda.libraries.psu.edu/files/final_submissions/1226)
| null |
CC BY-SA 4.0
| null |
2023-04-18T16:57:30.057
|
2023-04-18T16:57:30.057
| null | null |
31978
| null |
613366
|
2
| null |
613361
|
2
| null |
Indeed, the proportion of samples below $0.95$ is an approximation of the cumulative distribution at that value.
Given a distribution with cumulative distribution function $F$, and given $X \sim \textrm{Uniform}[0, 1]$, then $F^{-1}(X)$, where $F^{-1}$ is the inverse of $F$, has that distribution. Effectively, $X$ is the generated quantile, and applying $F^{-1}$ finds the value associated with that quantile.
In your case, $F \equiv \Phi$ is the normal cumulative distribution function, and (I assume) `invNorm` is just $\Phi^{-1}$, so your samples of $\Phi^{-1}(X)$ are samples from the given normal distribution. The expected proportion of samples below a threshold $t$ is therefore
$$\mathbb{E}([\Phi^{-1}(X) \leq t]; \mu, \sigma) = \mathbb{E}([X \leq \Phi(t)]; \mu, \sigma) = \Phi(t; \mu, \sigma).$$
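As a quick numerical check, here is a minimal R sketch (with arbitrary $\mu$, $\sigma$ and threshold): the Monte Carlo proportion and the direct CDF value agree up to simulation noise.
```
set.seed(1)
mu <- 0.8; sigma <- 0.1; t <- 0.95
x <- qnorm(runif(5000), mean = mu, sd = sigma) # "invNorm" applied to uniform draws
mean(x <= t) # Monte Carlo proportion below the threshold
pnorm(t, mean = mu, sd = sigma) # direct CDF value
```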
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:15:17.443
|
2023-04-18T17:15:17.443
| null | null |
37041
| null |
613367
|
2
| null |
426925
|
0
| null |
I do not see the ambiguity in your results. For $X_1$, you get that there is a $60\%$ chance of belonging to the third category. For $X_2$, you get that there is a $40\%$ chance of belonging to the third category. The model thinks that $X_1$ is more like a member of the third category than $X_2$.
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:19:55.660
|
2023-04-18T17:19:55.660
| null | null |
247274
| null |
613368
|
2
| null |
601173
|
0
| null |
>
I am wondering how to explain why neural networks are more accurate than linear when using a particular set of features.
Neural networks allow for an enormous number of interactions and nonlinear transformations of the original features. In some regard, neural networks do the feature engineering for you, so you do not have to figure out that a particular interaction matters or that some feature should be squared.
If you get a good fit with a linear model but reliably get an even better fit with a neural network on those same features, it would seem that those nonlinear features and interactions discovered by the neural network matter to the outcome. The math does not work out as cleanly as it does for nested GLMs, but you can think of this as analogous to fitting a model, fitting a more complex model, testing the added features, and getting a low p-value (e.g., partial F-test, ["chunk" test](https://stats.stackexchange.com/q/27429/247274) in more generality).
To some extent, you are seeing the universal approximation theorems in action. Loosely speaking, the various universal approximation theorems say that a decent$^{\dagger}$ function can be approximated arbitrarily well by a neural network of sufficient size. A linear combination of the raw features has no such guarantee, hence the stronger performance of the neural network.
Where you can get into trouble is that linear models also can involve feature interactions and nonlinear features. The universal approximation theorems say that neural networks can approximate decent functions as well as is desired. The Stone-Weierstrass theorem says about the same about polynomial regressions (which are linear models). However, you have to tell the computer what those polynomial features are. You cannot just change `model.add(tf.keras.layers.Dense(32, activation='relu'))` to `model.add(tf.keras.layers.Dense(320, activation='relu'))` to get more nonlinearity. This makes it quite easy to increase model flexibility in a neural network compared to a linear model, for better or for worse.
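As a small illustration of that last point (a hedged sketch on simulated data, not a claim about your models): a linear model on the raw feature misses curvature unless you hand it the polynomial term yourself.
```
set.seed(1)
x <- runif(500, -2, 2)
y <- x^2 + rnorm(500, sd = 0.3) # truly nonlinear relationship
summary(lm(y ~ x))$r.squared # raw feature only: poor fit
summary(lm(y ~ poly(x, 2)))$r.squared # hand-engineered quadratic: good fit
```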
[Cheng et al. (2018)](https://arxiv.org/abs/1806.06850) have an interesting arXiv paper on polynomial regression vs neural networks.
REFERENCE
Cheng, Xi, et al. "Polynomial regression as an alternative to neural nets." arXiv preprint arXiv:1806.06850 (2018).
$^{\dagger}$This is deliberately vague.
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:36:17.953
|
2023-04-18T17:36:17.953
| null | null |
247274
| null |
613369
|
1
| null | null |
2
|
19
|
I am sorry I do not have the right terminology to ask as clearly as I would like.
I have three datasets of biomes. Each gives a biome by latitude and longitude (3 biome maps converted to tables). The biome scheme used in each one is the same so they are comparable i.e. biome 1 means the same thing in every dataset. There are up to 27 biomes in each dataset.
I have combined them into one table by latitude and longitude so only lat-lon pairs with data from all three datasets are present (no missing data).
What I want to do is find out which datasets are most alike, or how correlated they are by lat-lon, so the data is paired.
I also want to be able to expand this analysis to include more datasets, as I am waiting on another two. None of the questions I have read seem to match my combination of categorical data, more than two samples, and paired.
Can anyone recommend a type of analysis for this situation? If it is relevant, I am doing the work in R.
Thank you.
|
How to compare three sets of 'paired' categorical data
|
CC BY-SA 4.0
| null |
2023-04-18T17:38:48.017
|
2023-04-18T17:38:48.017
| null | null |
224090
|
[
"categorical-data",
"paired-data"
] |
613370
|
2
| null |
612778
|
1
| null |
Your Proposition 1 is wrong, because calculating the posterior can require an increasing number of computations for every additional data point.
A simple example that demonstrates it is a mixture distribution: suppose that
$$ x \sim pf_1(x|\theta) + (1-p)f_2(x|\varphi).$$
Namely, $x$ comes from a mixture distribution with two components that are parametrized by $\theta$ and $\varphi$, on which you have some prior $\pi(\theta,\varphi)$. The posterior distribution after observing the first sample $x_1$ will become also a mixture distribution with two components:
$$P(\theta,\varphi|x_1) \propto pf_1(x_1|\theta)\pi(\theta,\varphi) + (1-p)f_2(x_1|\varphi)\pi(\theta,\varphi)$$
The posterior distribution after observing the second sample $x_2$ will then become a mixture distribution with four components :
$$P(\theta,\varphi|x_1,x_2) \propto \left(pf_1(x_1|\theta) + (1-p)f_2(x_1|\varphi)\right)\left(pf_1(x_2|\theta) + (1-p)f_2(x_2|\varphi)\right)\pi(\theta,\varphi)$$
And so on. After observing $n$ samples the posterior will be a mixture of $2^n$ components, so the amount of calculations requires grows exponentioaly.
Since this proposition is wrong, everything else that follows from it is clearly also wrong.
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:47:06.650
|
2023-04-18T17:47:06.650
| null | null |
348492
| null |
613371
|
2
| null |
272708
|
1
| null |
This has issues but could work if you are careful.
On the one hand, this is a form of stepwise regression, which has major drawbacks. In particular, all standard downstream inference is tainted by doing this. If you fit a model, remove insignificant features, and then fit another model on just the significant features, the p-values and confidence intervals lack their standard meaning, as they are calculated without considering the earlier step of feature selection. Even an in-sample measure of model performance like adjusted $R^2$ winds up biased high, since the model degrees of freedom does not account for the variable selection.
For a reference, Frank Harrell discusses a number of issues with stepwise variable selection [here](https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/). The content deals with the mathematics, not the software implementation, so the reference applies whether you use Stata or not.
Further, variable selection is notoriously unstable. If you do cross-validation or bootstrap your data set, you are likely to see selected features come and go.
Finally, what do you do when you fit a model, remove insignificant features, fit a new model on just the significant features, and find that some of those remaining features are insignificant? Do you keep removing insignificant variables? Do you even trust the significance, in light of the above discussion about p-values and confidence intervals lacking their usual meaning?
However, if you do an out-of-sample validation, rather than relying on in-sample measures like the high-biased adjusted $R^2$, [stepwise selection can be competitive with other predictive modeling strategies](https://stats.stackexchange.com/questions/594106/how-competitive-is-stepwise-regression-when-it-comes-to-pure-prediction).
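To see the instability for yourself, here is a minimal simulation sketch (only `x1` truly matters; `step()` is used purely for illustration): the selected feature set changes from one bootstrap resample to the next.
```
set.seed(1)
n <- 100
dat <- as.data.frame(matrix(rnorm(n * 10), n, 10))
names(dat) <- paste0("x", 1:10)
dat$y <- 1 + 0.5 * dat$x1 + rnorm(n)
sel <- replicate(20, {
  idx <- sample(n, replace = TRUE)
  fit <- step(lm(y ~ ., data = dat[idx, ]), trace = 0)
  paste(sort(names(coef(fit))[-1]), collapse = " + ")
})
table(sel) # the chosen model varies across resamples
```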
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:48:44.993
|
2023-04-18T17:48:44.993
| null | null |
247274
| null |
613372
|
2
| null |
342800
|
1
| null |
NO
The point of the ROC curve is to assess model performance across a range of thresholds. In that sense, the ROC curve and the area under it are measures of how well the model is able to separate the two classes. This is separate from applying a threshold and going with the classifications that arise from the two-step model that first makes predictions on a continuum and then uses a threshold to decide on the bin to which each prediction corresponds.
It might be that some thresholds are terrible for your task, and that is okay. If you get to a point where you have to use a threshold, you would just use a different one and be okay with the fact that some thresholds do not work. In fact, if you have a logistic regression that makes probability predictions on the interval $[0,1]$, setting a threshold of $-2$ or $+2$ puts you in a position where every observation is classified the same way no matter what modeling you do, which probably is not acceptable performance. In that situation, there is literally nothing you can do to tune the model to make varied categorical predictions.
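For instance, here is a minimal sketch on simulated data (assuming the `pROC` package is available): the AUC summarizes separation across all thresholds, while applying any particular cutoff is a separate, later step.
```
library(pROC)
set.seed(1)
y <- rbinom(300, 1, 0.3)
score <- y + rnorm(300) # hypothetical model scores
r <- roc(y, score, quiet = TRUE)
auc(r) # threshold-free measure of separation
coords(r, x = 0.5, input = "threshold",
       ret = c("sensitivity", "specificity")) # one particular cutoff
```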
| null |
CC BY-SA 4.0
| null |
2023-04-18T17:58:40.040
|
2023-04-18T17:58:40.040
| null | null |
247274
| null |
613373
|
2
| null |
613139
|
1
| null |
I will offer a possible conversion formula that you might consider. However, as you suggest, the use of this with a rank biserial correlation is unclear. Also note, that unlike the Cohen's $d$ effect size, this will have a dependence on the sample size(s).
This is based on the idea that the 2 independent samples t-test can be viewed as a point biserial correlation. As such, the relationships between t-ratios and correlations $r$ hold.
The effect size and the t-ratio are related as
$$d = t · \sqrt{\frac{1}{n_1}+\frac{1}{n_2}} = t·\sqrt{\frac{n_1+n_2}{n_1n_2}}$$
Solving this for $t^2$, we obtain
$$t^2=d^2·\frac{n_1n_2}{n_1+n_2}$$
Now, the relationship between correlation and t-ratios (for the 2 independent sample context) is:
$$r = \sqrt{\frac{t^2}{t^2+n_1+n_2-2}}$$
Thus,
$$r = \sqrt{\frac{d^2·\frac{n_1n_2}{n_1+n_2}}{d^2·\frac{n_1n_2}{n_1+n_2}+n_1+n_2-2}}$$
or "simplified"
$$r = \sqrt{\frac{d^2}{d^2+\frac{(n_1+n_2-2)(n_1+n_2)}{n_1n_2}}}$$
Again, I provide this without any idea if it is indeed the appropriate transformation in this context.
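A small helper function (my own sketch of the algebra above, going through $t^2$ to avoid simplification slips) could look like this:
```
d_to_r <- function(d, n1, n2) {
  t2 <- d^2 * n1 * n2 / (n1 + n2) # squared t-ratio implied by d
  sqrt(t2 / (t2 + n1 + n2 - 2)) # point-biserial correlation
}
d_to_r(d = 0.5, n1 = 20, n2 = 25)
```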
| null |
CC BY-SA 4.0
| null |
2023-04-18T18:05:48.550
|
2023-04-18T18:05:48.550
| null | null |
199063
| null |
613374
|
1
| null | null |
0
|
38
|
I provide you with a small example in which the response variable takes discrete values. There are no covariates, and no different categories (just one, treatment group)
The idea is to observe if one treatment is performing effect through a scale (response variable). All participants are in the treatment group.
Does it make sense to create a treatment variable with one level and time? Or just time alone?
```
# running 2 models, with and without the categorical variable
library(nlme)
model1 <- lme(Value ~ Time, random = ~ Time | ID,
              data = ex_fib, na.action = na.omit)
model2 <- lme(Value ~ Time + Categorical, random = ~ Time | ID,
              data = ex_fib, na.action = na.omit)
```
Running `model2` seems not to work because it has only one level.
In my mind, `model1` seems to be correct but I was waiting for some advice
```
ex_fib <- structure(list(ID = c(1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L,
4L, 4L, 4L, 5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 7L, 8L, 8L, 8L, 9L,
9L, 9L, 10L, 10L, 10L), Time = c(1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L,
3L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 3L,
1L, 2L, 3L, 1L, 2L, 3L), Categorical = c("A", "A", "A", "A", "A",
"A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A",
"A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A"),
Value = c(10L, 12L, 14L, 5L, 6L, 8L, 15L, 16L, 18L, 7L, 9L, 11L,
13L, 14L, 16L, 12L, 15L, 18L, 6L, 7L, 9L, 11L, 12L, 13L, 8L, 10L,
11L, 9L, 12L, 14L)), class = "data.frame",
row.names = c(NA, -30L))
```
|
Which is the correct `lme` syntax for this repeated measurements problem?
|
CC BY-SA 4.0
| null |
2023-04-18T17:29:33.453
|
2023-05-21T23:58:57.697
|
2023-05-21T23:58:57.697
|
11887
|
339186
|
[
"r",
"mixed-model",
"repeated-measures"
] |
613375
|
1
| null | null |
0
|
11
|
I am seeing a Forest Plot presentation of the data in a meta-analysis. But the standard deviation for the final "weighted mean difference" is missing. I am wondering if it is possible to calculate the standard deviation of it. So that I can use this SD value in G power to estimate the sample size.
The meta-analysis paper title: Efficacy of combination therapy with ezetimibe and statins versus a double dose of statin monotherapy in participants with hypercholesterolemia: a meta-analysis of literature
I need to find the standard deviation to estimate a sample size for a retrospective research that I am about to conduct.
|
How to calculate the standard deviation for the "final weighted mean difference" as in a meta-analysis for the purpose of calculating sample size?
|
CC BY-SA 4.0
| null |
2023-04-18T18:12:07.860
|
2023-04-18T18:14:22.713
|
2023-04-18T18:14:22.713
|
385427
|
385427
|
[
"sample-size"
] |
613376
|
1
| null | null |
1
|
43
|
I have three contingency tables with the variables month and region that contain the frequency of the sighting of a particular bird. Additionally, the tables differ only by counting frequencies from different years.
I am looking for a way to see if there is a significant change in the frequencies between the tables.
A Chi-squared test seems almost plausible, but there are no expected values and I don't need to see a relationship between the two variables, but rather how the frequencies change over time across the three tables.
Any recommendations are appreciated.
|
Test to compare multiple contingency tables
|
CC BY-SA 4.0
| null |
2023-04-18T18:45:15.593
|
2023-04-25T04:06:58.123
|
2023-04-25T04:06:58.123
|
56940
|
385931
|
[
"r",
"hypothesis-testing",
"generalized-linear-model",
"contingency-tables",
"log-linear"
] |
613377
|
2
| null |
612775
|
3
| null |
Since you want to sample uniformly, $p(\xi) = c\, I(\xi \in S)$, so the ratio $\frac{p(\xi^*)}{p(\xi)} = I(\xi^* \in S)$ for all proposals $\xi^*$. In MH you accept with probability $\min\left\{1, \frac{p^*}{p}\frac{q}{q^*}\right\}$,
but with a symmetric proposal you only care about $p$, since the factor $\tfrac{q(\xi\leftarrow \xi^*)}{q(\xi^*\leftarrow \xi)} = 1$ cancels out (this is the original Metropolis algorithm, before Hastings's generalization). Therefore every proposal that lands in $S$ is accepted, and the stationary distribution of $\xi$ is the uniform target $p$.
```
from numpy.random import randn as q   # symmetric Gaussian random-walk proposal

xi = {}                                # stores the chain
G = 10**4                              # number of iterations
x = y = 0                              # start inside the unit disc S
for g in range(G):
    xs, ys = x + q(), y + q()          # propose xi_star ~ q(. | xi)
    if xs**2 + ys**2 <= 1:             # accept iff xi_star lies in S,
        x, y = xs, ys                  # otherwise keep the current state
    xi[g] = x, y
```
```
import numpy as np
from matplotlib.pyplot import plot, show, subplot

xi = np.vstack(list(xi.values()))   # G x 2 array of visited states
subplot(211)
plot(*xi.T, '.')                    # scatter: roughly uniform on the disc
subplot(212)
plot(xi)                            # trace plots of both coordinates
show()
```
Notation:
$p$ target density
$q$ proposal
| null |
CC BY-SA 4.0
| null |
2023-04-18T18:50:05.383
|
2023-05-25T01:51:54.797
|
2023-05-25T01:51:54.797
|
54458
|
54458
| null |
613378
|
2
| null |
613254
|
0
| null |
Basically, you want to visualize the residual variance of the shots over time. You can model them, compute the formal residuals of the model, and then use something like the square root of the absolute value of the residual, but that's probably not going to be intuitive or useful for the typical pool player, who probably isn't a statistician. You want a plot that is appropriate for your audience.
Since you are using degrees, I would simply plot the absolute value of the degrees on the Y-axis and the time or shot number on the X-axis. If you want to get fancy for people, you might get away with overlaying a LOWESS fit.
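A minimal sketch of that plot with made-up data (the LOWESS overlay being the optional, fancier part):
```
set.seed(1)
shot <- 1:60
deg <- rnorm(60, mean = 0, sd = 1 + shot / 30) # error grows over the session
plot(shot, abs(deg), xlab = "Shot number", ylab = "Absolute error (degrees)")
lines(lowess(shot, abs(deg)), lwd = 2) # smooth trend in the spread
```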
| null |
CC BY-SA 4.0
| null |
2023-04-18T18:54:40.153
|
2023-04-18T18:54:40.153
| null | null |
7290
| null |
613379
|
2
| null |
613344
|
16
| null |
Durrett's definition is the general, correct definition of a martingale, while Wikipedia's definition is at best a "restricted definition". The qualifier "with respect to $\mathcal{F}_n$", although it was placed in parentheses, is essential to accurately defining a "martingale": technically speaking, a martingale is a sequence of pairs $(X_n, \mathcal{F}_n)$, not of $\{X_n\}$ alone (but the italicized sentence in the quoted paragraph preceding equation $(2)$ below may to some extent justify the latter convention). Patrick Billingsley's book Probability and Measure makes this important point more explicit (Section 35):
>
The sequence $\{\color{red}{(X_n, \mathscr{F}_n)}: n = 1, 2, \ldots\}$ is a martingale if these four conditions hold:
$\mathscr{F}_n \subset \mathscr{F}_{n + 1}$;
$X_n$ is measurable $\mathscr{F}_n$;
$E[|X_n|] < \infty$;
with probability 1,
\begin{align}
E[X_{n + 1}|\mathscr{F}_n] = X_n.\tag{1}
\end{align}
He continued to explain the role of $\{\mathscr{F}_n\}$ in the definition as follows:
>
Alternatively, the sequence $X_1, X_2, \ldots$ is said to be a martingale relative to the $\sigma$-fields $\mathscr{F}_1, \mathscr{F}_2, \ldots$. Condition 1 is expressed by saying the $\mathscr{F}_n$ form a filtration and condition 2 by saying $X_n$ are adapted to the filtration.
After that, he illustrated the relationship between Wikipedia's "definition" (which actually is just a special martingale) and the general definition above, which should help clear up your confusion:
>
The sequence $X_1, X_2, \ldots$ is defined to be a martingale if it is a martingale relative to some sequence $\mathscr{F}_1, \mathscr{F}_2, \ldots$. In this case, the $\sigma$-fields $\mathscr{G}_n = \sigma(X_1, \ldots, X_n)$ always work: Obviously, $\mathscr{G}_n \subset \mathscr{G}_{n + 1}$ and $X_n$ is measurable $\mathscr{G}_n$, and if $(1)$ holds, then $E[X_{n + 1}|\mathscr{G}_n] =
E[E[X_{n + 1}|\mathscr{F}_n]|\mathscr{G}_n] = E[X_n|\mathscr{G}_n] = X_n$ (by tower property of conditional expectation). For these special $\sigma$-fields $\mathscr{G}_n$, $(1)$ reduces to
\begin{align}
E[X_{n + 1} | X_1, \ldots, X_n] = X_n. \tag{2}
\end{align}
Since $\sigma(X_1, \ldots, X_n) \subset \mathscr{F}_n$ if and only if $X_n$ is measurable $\mathscr{F}_n$ for each $n$, the $\sigma(X_1, \ldots, X_n)$ are the smallest $\sigma$-fields with respect to which the $X_n$ are a martingale.
Some additional comments in response to specific questions you raised:
- The "$X_1, \ldots, X_n$" in the notation "$E[X_{n + 1}|X_1, \ldots, X_n]$" should be interpreted as the $\sigma$-field $\sigma(X_1, \ldots, X_n)$, instead of $n$ isolated random variables. In general, "$E[X|Y]$" is a shorthand for the measure-theoretic conditional expectation $E[X|\sigma(Y)]$. The $\sigma$-field $\sigma(X_1, \ldots, X_n)$, known as the $\sigma$-field generated by the random vector $(X_1, \ldots, X_n)$, is the smallest $\sigma$-field in $\mathscr{F}$ with respect to which $(X_1, \ldots, X_n)$ is measurable. Therefore, while your statement "the set of past history $\{X_1, \ldots, X_n\}$ is not a sigma-field" is trivially true (written in this way, it is just a collection of $n$ random variables), it should not be interpreted in this way when
they appear in equation $(2)$.
- As the quotation block containing equation $(2)$ demonstrates, the Wikipedia's definition is indeed "nested" in Durrett's definition: $\mathscr{G}_n := \sigma(X_1, \ldots, X_n)$ is just one special filtration satisfying Condition 1 and Condition 2. Furthermore, $\mathscr{G}_n$ are the smallest $\sigma$-fields with respect to which the $X_n$ are a martingale. That is, suppose that there exists a filtration $\{\mathscr{F}_n\}$ such that $\{(X_n, \mathscr{F}_n)\}$ is a martingale, then $\{(X_n, \mathscr{G}_n)\}$ must be a martingale as well and $\mathscr{G}_n \subset \mathscr{F}_n$ for each $n$ (recall that in the last bullet, I mentioned that $\mathscr{G}_n$ is the smallest $\sigma$-field in $\mathscr{F}$ with respect to which $(X_1, \ldots, X_n)$ is measurable). For this reason, the filtration $\{\mathscr{G}_n\}$ is referred as a natural filtration in some literature.
- At this point, it should be clear to you that the role of $\sigma$-fields $\mathscr{F}_n$ in martingale's definition is essential, for with the same sequence of random variables $\{X_n\}$, different martingales can be constructed by choosing different filtrations with respect to which $X_n$ are measurable. See Example 35.1 in Billingsley's book for a concrete example, in which he wrote "It is natural and convenient to allow the $\sigma$-fields $\mathscr{F}_n$ larger than the minimal ones ($\sigma(X_1, \ldots, X_n)$)". In other words, the "past history" may well be richer than the sequence of $\{X_1, \ldots, X_n\}$ itself -- it may cover any information up to time $n$ as long as Condition 4 holds.
You asked for an example; Billingsley provides many good ones (e.g., Example 35.1 mentioned above), among which the "gambling/betting system" is quite illuminating (note that the word "martingale" is said to have originated from gambling; see this interesting [vignette](https://www.jehps.net/juin2009/Mansuy.pdf) for details):
>
(p. 458) If $X$ represents the fortune of a gambler after the $n$th play and $\mathscr{F}_n$ represents his information about the game at that time, $(1)$ says that his expected fortune after the next play is the same as his present fortune. Thus a martingale represents a fair game, and sums of independent random variables with mean $0$ give one example.
(p. 463) Consider again the gambler whose fortune after the $n$th play is $X_n$ and whose information about the game at that time is represented by the $\sigma$-field $\mathscr{F}_n$. If $\mathscr{F}_n = \sigma(X_1, \ldots, X_n)$, he knows the sequence of his fortunes and nothing else, but $\mathscr{F}_n$ could be larger.
The last sentence, "but $\mathscr{F}_n$ could be larger.", tells you, with an example, that Durrett's definition is the correct one and that Wikipedia's definition is clearly not general enough. While he did not give a specific example of a "larger $\mathscr{F}_n$", you can easily conceive of some scenarios (e.g., the gambler happens to be an employee of the casino, so in addition to the basic information he would ordinarily have, he also knows the secret mechanism of the roulette wheel).
| null |
CC BY-SA 4.0
| null |
2023-04-18T19:03:42.250
|
2023-04-21T16:30:04.520
|
2023-04-21T16:30:04.520
|
20519
|
20519
| null |
613380
|
1
| null | null |
0
|
12
|
So I’m working on a monitoring model and could use some input on what sort of statistical method would be suitable for this particular case.
Imagine you are a financial institution with corporate customers transferring and receiving money to/from countries all over the world. You want to be alerted when customers diverge from their typical/expected behavior. One simple measure could be monthly aggregated transactions where the counter party is located in a tax haven. By “typical/expected behavior” we are thinking in terms of the industry segment the customer belongs to, e.g. the behavior of a retail company is likely to be very different from that of a shipping company.
One thing we are on the lookout for is when a customer “suddenly” has a big increase in transactions with such countries. The absolute amounts are often not as critical as the relative increase (e.g. a customer has a monthly amount of 5x its monthly average for the last three months). But one source of false positives here, when we compare the current month to immediately preceding months, is a “seasonality effect”. Perhaps it is common for one industry segment to have substantially larger transaction amounts in the summer months, another one just before Christmas, etc.
I’m looking for a smart way to figure out threshold values here. For example: If I am running my monitoring model on 1 April 2023, looking at aggregated transactions per customer for March 2023, what should be the relative increase (compared to say the average of the preceding three months) allowed for customers of different segments without triggering an alert? Imagining data access is not an issue (years of transaction data are available), any suggestions on what methods might be most suitable to look into here?
|
How to best capture/identify industry segment specific "seasonality effects" in transaction data
|
CC BY-SA 4.0
| null |
2023-04-18T19:09:55.643
|
2023-04-18T19:09:55.643
| null | null |
386026
|
[
"seasonality"
] |
613381
|
2
| null |
613354
|
6
| null |
As noted on p. 530 of the Tanner & Wong (1987) paper, the method applies to
$$p_i(\theta|y) = m^{-1}\sum_{j=1}^m p(\theta|z^{(j)},y)$$
even when $m=1$. In this special case, the algorithm is a Gibbs sampler. The algorithm remains valid in converging to the stationary distribution over the iterations for larger values of $m$, for exactly the same reason, hence it is not more or less approximate than the Gibbs sampler, but the additional cost (171mn for 15 iterations!) offers no clear advantage: even the "final" approximation of the stationary distribution can be obtained from the $m=1$ (Gibbs) output. It is somewhat ironic that the authors dismiss the Markov connection in Remark 4 at the bottom of page 538.
| null |
CC BY-SA 4.0
| null |
2023-04-18T19:13:27.147
|
2023-04-18T20:31:14.017
|
2023-04-18T20:31:14.017
|
7224
|
7224
| null |
613382
|
1
| null | null |
1
|
19
|
Consider a very simple causal DAG representing a randomized experiment of $Treatment \rightarrow Death$ where $Treatment$ is randomly assigned and $Death$ is the outcome of interest. A causal DAG represents ("direct") cause and effect relationships, so if we intervene on $Treatment$, i.e., $do(Treatment=take\ drug)$, then we can expect the distribution of $Death$ to change in some way given that there is a directed edge from $Treatment$ to $Death$ in the DAG. However, clearly, we can not intervene on $Death$.
Similarly, in the DAG: $Age \rightarrow Treatment \rightarrow Death$, the assignment of $Treatment$ may be correlated with a person's $Age$ but we can not intervene on age in reality. Non-intervenable variables are commonly included in causal DAGs in practice, but it's not clear to me what the interpretation of the directed edge between $Age$ and $Treatment$ is when the "causal effect of age on treatment" is clearly a nonsensical estimand?
|
Non-intervenable variables in causal directed acyclic graphs
|
CC BY-SA 4.0
| null |
2023-04-18T19:34:58.673
|
2023-04-19T03:25:29.557
|
2023-04-19T03:25:29.557
|
197219
|
197219
|
[
"causality",
"dag"
] |
613384
|
1
| null | null |
0
|
18
|
My banking app provides some predictive analysis on what upcoming bills I have. This got me thinking on how the bank does this analysis. Is it possible to use fourier analysis with my transaction history as input, to identify recurring transactions and predict upcoming bills?
If so, could someone provide an example in R/Python. Thank you.
|
Can you use fourier analysis to identify recurring transactions?
|
CC BY-SA 4.0
| null |
2023-04-18T20:06:13.747
|
2023-04-18T20:06:13.747
| null | null |
107931
|
[
"fourier-transform"
] |
613385
|
2
| null |
610747
|
1
| null |
Yes you correctly read the meaning of the central line in your plot which is the $\mu(x)$ given your relatively simple model. Whether $\mu$ is a probability depends on what your data represents. Just to make sure: If your data can be represented as the outcomes of Bernoulli experiments, then you should use a logistic (or some other binomial) model.
However the confidence interval around it is probably not what you want, since it only represents the uncertainty around the spline and ignores the uncertainty around the intercept!
To demonstrate and present how to do this better with the `predict` function I'll build on the example from `help(betar)`
```
library(mgcv)
set.seed(3);n<-400
dat <- gamSim(1,n=n)
mu <- binomial()$linkinv(dat$f/4-2)
phi <- .5
a <- mu*phi;b <- phi - a;
dat$y <- rbeta(n,a,b)
# sort by x0 to plot lines
dat <- dat[order(dat$x0), ]
# simplified the model to one spline, so that it fits the question
bm2 <- gam(y~s(x0),family=betar(link="logit"), data=dat)
summary(bm2)
plot(bm2, pages=1, trans=plogis,
shift=coef(bm2)[1],
ylim = c(0.45, .6) # fixing ylims to make plots comparable
)
```
This mirrors your setup:
[](https://i.stack.imgur.com/fS1zk.png)
Now i solve the question using `predict`
```
pred_dat <- predict.gam(bm2, se.fit = T)
z_a <- qnorm(.975)
plot(dat$x0, plogis(pred_dat$fit), type = "l", ylim = c(.45, .6))
lines(dat$x0, plogis(pred_dat$fit + z_a * pred_dat$se.fit), lty = 2)
lines(dat$x0, plogis(pred_dat$fit - z_a * pred_dat$se.fit), lty = 2)
```
Now here the confidence intervals are much wider since we include the uncertainty around the intercept and its covariance with the coefficients of the spline, see `vcov(bm2)`.
[](https://i.stack.imgur.com/Tryxv.png)
If you build a model where the intercept is predetermined instead of estimated, the result will look a lot like the first plot:
```
# -1 in the formula means no intercept
bm3 <- gam(y~s(x0) - 1, offset = rep(coef(bm2)[1], n),family=betar(link="logit"), data=dat)
summary(bm3)
pred_dat <- predict.gam(bm3, se.fit = T)
plot(dat$x0, plogis(pred_dat$fit + coef(bm2)[1]), ylim = c(.45, .6), type = "l")
lines(dat$x0, plogis(pred_dat$fit + coef(bm2)[1] + z_a * pred_dat$se.fit), lty = 2)
lines(dat$x0, plogis(pred_dat$fit + coef(bm2)[1] - z_a * pred_dat$se.fit), lty = 2)
```
[](https://i.stack.imgur.com/eCv31.png)
Looking at your plot specifically, I can tell from the dashes extending into the plot area (the rug) that you only have data at the full weeks. If you want a smooth interpolation, you will have to specify the `newdata` argument. Also, with only 4 distinct weeks, 3 edf on your spline plus an intercept, your model is pretty much identical to just using factors.
In regards to the 2nd question, I agree with @ShawnHemelstrand comment.
| null |
CC BY-SA 4.0
| null |
2023-04-18T20:06:50.277
|
2023-04-18T20:06:50.277
| null | null |
341520
| null |
613386
|
2
| null |
493649
|
1
| null |
DAGs are non-parametric. If we assume that all variables in your problem are dichotomous (or categorical) and all assumptions required for valid causal inference are met, you can unbiasedly estimate the causal effect with a saturated model for the outcome adjusting for the 7 variables (alternatively, using inverse probability weighting, or both). Otherwise, and in common applications with limited data, you usually have to make some parametric assumptions to specify a parsimonious model, which can lead to bias due to model misspecification. Fit on observed data is not typically used to guide specification of structural models, although parsimonious models are often preferred for interpretability and reducing model variance.
When using machine learning methods for causal estimation, both regularization and overfitting can result in bias. Naive application of ML methods (i.e. that used for prediction) to estimate causal effects is generally invalid; they can not model individual-level causal effects out of the box. However, there are methods for semi-parametric estimation of causal effects where machine learning methods can be used to model high-dimensional confounders while still modelling the treatment effects parametrically for interpretability.
| null |
CC BY-SA 4.0
| null |
2023-04-18T20:14:44.587
|
2023-04-18T20:14:44.587
| null | null |
197219
| null |
613387
|
2
| null |
613376
|
2
| null |
I would approach this problem by means of a Poisson log-linear model. Without entering into technical details, for which you can check Agresti's book Categorical Data Analysis, Wiley, chap. 8, the model I'd use, or at least a starting point(*), is a Poisson regression. In this model, the statistical unit is the cell of each contingency table and the idea is to model the expected counts as a function of the three available covariates: `month`, `region` and `time`. For instance, if your tables are, say $12 \times 10$, across all the 3 time periods, then you'll have $12*10*3=360$ observations.
The model in `R` should look something like this
```
my_glm <- glm(counts ~ month * region + time, data = mydata, family = poisson())
anova(my_glm, test = "Chisq") # likelihood-ratio tests; `time` is entered last
```
Through the `anova` function (with `test = "Chisq"`) you check whether there is a significant `time` effect. All the other coefficients are there for flexibility reasons and are essentially irrelevant to your aim.
Starting point(*) It may turn out that after some checks the Poisson assumption is not adequate. If that's the case, you could consider fitting an over-dispersed Poisson model (see, e.g. Breslow, N.E. 1984, Extra-Poisson variation in log-linear models, Applied Statistics, 33, 38–44.) an implementation of which can be found in the `glm.poisson.disp` function of the `R` package `dispmod`.
| null |
CC BY-SA 4.0
| null |
2023-04-18T20:17:23.350
|
2023-04-19T18:50:14.997
|
2023-04-19T18:50:14.997
|
56940
|
56940
| null |
613388
|
1
| null | null |
1
|
33
|
We know the result that the OLS estimator with measurement error under the classical errors-in-variables (CEV) assumption is biased and inconsistent, and you can write down the probability limit of $\hat{\beta}_{OLS}$. But what is the distribution of this biased and inconsistent OLS estimator?
I compute and obtain the asymptotic variance, but it depends on $\beta$, which throws me off. I also believe it is normal, but I don't know.
Please assume that the disturbance in the regression equation without measurement error and the measurement error are not necessarily normal, but have zero mean and constant variance.
|
The distribution of biased and inconsistent OLS estimator under CEV assumption
|
CC BY-SA 4.0
| null |
2023-04-18T20:20:58.240
|
2023-04-19T03:56:37.520
|
2023-04-19T03:03:31.843
|
12860
|
12860
|
[
"econometrics",
"measurement-error"
] |
613389
|
2
| null |
272708
|
1
| null |
As you mentioned, `anova` is just conducting a likelihood ratio test for you under the hood. This is, of course, a "valid" method for variable selection, but it fails to be a "good" method in the large-scale setting.
Variable selection methods based on testing mostly fall into the category of "subset selection", which is usually not favored for high-dimensional data. First, they are usually conducted in a "stepwise" manner, meaning only the significance of one variable is examined at a time. When there are a large number of features, this is clearly inefficient. Second, these methods usually involve fitting the model multiple times, one time for each newly selected subset. When the data set is too large to be fitted in an acceptable amount of time, the whole variable selection procedure can be incredibly time-consuming.
For high-dimensional data, shrinkage methods (or "regularization methods", "sparse methods") are preferred. In the regression setting, some of the classic examples are [Ridge regression](https://en.wikipedia.org/wiki/Ridge_regression), [LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)) and [Elastic Net](https://en.wikipedia.org/wiki/Elastic_net_regularization). These methods are far more efficient and can produce models with great generalization abilities. Whether you are modeling for prediction or inference, these methods are definitely worth trying.
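As a minimal illustration (assuming the `glmnet` package is installed; simulated data in which only the first three features matter), the LASSO performs selection and fitting in a single regularized pass:
```
library(glmnet)
set.seed(1)
n <- 200; p <- 50
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:3] %*% c(2, -1, 1)) + rnorm(n)
cv <- cv.glmnet(X, y, alpha = 1) # alpha = 1 is LASSO, alpha = 0 is ridge
coef(cv, s = "lambda.1se") # sparse coefficient vector
```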
| null |
CC BY-SA 4.0
| null |
2023-04-18T20:49:40.363
|
2023-04-18T20:50:48.340
|
2023-04-18T20:50:48.340
|
384465
|
384465
| null |
613390
|
1
|
613739
| null |
1
|
70
|
My problem: I have a fixed, imbalanced, case-control cohort where I am using two (2) low occurrence predictors (rare variants) to try to find an association between the predictors and disease. I will be doing this on a gene-by-gene basis, but want to figure out the minimum amount of the predictor counts I should see to have a well-powered test and reduce the total amount of tests, as a large portion of the genes will have very low predictor counts. I am using R for this analysis.
- The data is:
- Sample ID: ~2963 cases and ~5015 controls
- Binary outcome: disease, no disease
- Per sample number of rare mutations of type 1 from whole exome sequencing data: technically binomial - any value 0 to infinity ; practically a binomial from 0-2 - one homozygous or one heterozygous variant - and heavily 0 weighted with frequency <=1%)
- Per sample number of rare mutations of type 2 from whole exome sequencing data: with the same characteristics and distribution as mutations of type 1
- Covariates: 4 principal components that are related to ancestry and sex
- My predictors are the type 1 and type 2 mutations and my response is disease (hypothesis is that type 1 or type 2 mutations cause disease under a dominant model - whereby a single mutation (heterozygote) leads to disease)
- I am using a generalized linear model to see if the effect of type 1 mutations + type 2 mutations > 0 (H0 - beta_1 + beta_2 = 0 ; H1 - beta_1 + beta_2 > 0) at a significance level of 0.01 lets say. HOWEVER, we can start with a test statistic based on a single parameter (H0 - beta_1 = 0 ; H1 - beta_1 > 0).
Here is a example of data I expect to see:
```
df <- data.frame(id = 1:8001, disease = 0, type1 = 0, type2 = 0, PC1 = seq(-0.009,0.035, by= 0.0000055), sex =0)
df$disease[sample(nrow(df), 3000)] <- 1
df$type1[sample(nrow(df), 30)] <- 1
df$type2[sample(nrow(df), 60)] <- 1
df$sex[sample(nrow(df), 3511)] <- 1
```
I set up the glz like this:
```
test.model.null <- glm(disease ~ PC1 + sex, family = binomial(link = "logit"), data = df)
test.model.full <- glm(disease ~ type1 + type2 + PC1 + sex, family = binomial(link = "logit"), data = df)
```
Question:
- How can I set up simulations to test for how effect size affects power in these models - translating this to the minimum amount of type1 mutations and/or type2 mutations for a well powered test?
I know I can randomize my covariates and then create the model again over and over, but I do not know how to evaluate false positives (for type 1 error I would do simulations with the null model, but how do I know if I have a false positive?) and false negatives (simulations with my full model?).
My simulations development for based on a simplified model with only a single parameter:
Here is my attempt at simulations with an outcome (0 for controls - `Y0`, 1 for cases - `Y1`) and just a single predictor (`$predictor`), where power is tested at a specific predictor occurrence rate (indicated in the `prob` variable in the `binomial()` command), based on [this link](https://web.archive.org/web/20181211055629/http://egap.org/content/power-analysis-simulations-r) I found in another thread in CrossValidated:
```
alpha <- 0.05 # Standard significance level
sims <- 500 # Number of simulations to conduct for each N
significant.experiments <- rep(NA, sims) # Empty object to count significant experiments
#### Loop to conduct experiments "sims" times ####
for (i in 1:sims){
Y0 <- data.frame(outcome = rep(0, 5015), predictor = rbinom(5015, size=2, prob=2/5015)) # 5015 controls get the predictor (0,1, or 2) pulled from binomial distribution at a particular rate
tau <- 1 # Hypothesize treatment effect - gets disease
Y1 <- data.frame(outcome =rep(tau, 2963), predictor = rbinom(2963, size=2, prob=11/2963)) # 2963 cases get the predictor (0,1, or 2) pulled from binomial distribution at a particular rate
Y.sim <- rbind(Y0, Y1)
fit.sim <- glm(Y.sim$outcome ~ Y.sim$predictor, family = binomial(link = "logit")) # Do analysis (Simple regression)
p.value <- summary(fit.sim)$coefficients[2,4] # Extract p-values
significant.experiments[i] <- (p.value <= alpha) # Determine significance according to p <= 0.05
}
power <- mean(significant.experiments) # store average success rate (power)
power
```
Next, I adjusted my code to test for power across a fixed probability of the predictor in cases (`prob.sim.Y0`), compared to a set of probabilities of that same predictor in controls (`prob.sim.Y1`):
```
# Set the probability of having an allele for cases and controls based on an hypothetical # of alleles
prob.sim.Y0 = 1/5015 # we will fix the probability of an allele in controls
prob.sim.Y1 = seq(from = 2/2963, to = 7/2963, by = 1/2963) #and we will vary the probability in cases
#Set up power object and other parameters
powers <- rep(NA, length(prob.sim.Y1)) # Empty object to collect simulation estimates
alpha <- 0.05 # Standard significance level
sims <- 500 # Number of simulations to conduct for each N
for (j in 1:length(prob.sim.Y1)){ #loop over the number of predictor frequencies
significant.experiments <- rep(NA, sims) # Empty object to count significant experiments
#### Loop to conduct experiments "sims" times ####
for (i in 1:sims){
#Ensure there is at least 1 predictor because if there are 0 predictors in
#both cases and controls, the glm does not return a beta coefficient
Y.sim <- data.frame(outcome = 0, predictor = 0)
while(sum(Y.sim$predictor) == 0) {
Y0 <- data.frame(outcome = rep(0, 5015), predictor = rbinom(5015, size=2, prob= prob.sim.Y0)) # 5015 controls get the predictor (0, 1, or 2) pulled from binomial distribution at a particular rate
tau <- 1 # Outcome: Disease status
Y1 <- data.frame(outcome = rep(tau, 2963), predictor = rbinom(2963, size=2, prob= prob.sim.Y1[j])) # 2963 cases get the predictor (0,1, or 2) pulled from binomial distribution at a particular rate
Y.sim <- rbind(Y0, Y1) #creating the joint object
}
fit.sim <- glm(Y.sim$outcome ~ Y.sim$predictor, family = binomial(link = "logit")) # Do analysis (Simple regression)
p.value <- summary(fit.sim)$coefficients[2,4] # Extract p-values
significant.experiments[i] <- (p.value <= alpha) # Determine significance according to p <= 0.05, storing TRUE or FALSE according to whether or not the value was <= 0.05
}
powers[j] <- mean(significant.experiments) # store average success rate (power) at each frequency tested by taking the mean() of a T,F vector - which calculates the frequency of TRUE
}
powers
plot(prob.sim.Y1, powers, ylim=c(0,1))
```
|
Conducting power analysis to minimize number of tests: generalized linear model (GLZ) with binary outcome, zero inflated binomial predictors R
|
CC BY-SA 4.0
| null |
2023-04-18T20:56:33.223
|
2023-04-23T16:38:42.237
|
2023-04-21T21:35:38.097
|
332527
|
332527
|
[
"generalized-linear-model",
"binomial-distribution",
"statistical-power",
"biostatistics",
"effect-size"
] |
613391
|
1
|
613872
| null |
2
|
71
|
Main questions
I'm using conditional logistic regression for a resource selection function (RSF) in an ecology study (more details below). I have no trouble implementing the code but I'm having difficulty grasping what the fitted values represent and how I can ultimately calculate probability of an event for given conditions. I understand that clogit() is actually a Cox Proportional Hazard model with constant time, and I have read up on these models, the help files, and a number of related posts, but so far I have not been able to find answers detailed enough for me to fully understand:
- What does type "survival" return when using predict.coxph? It is not discussed in the help file. My assumption is that it is the probability of an event not occurring for a given row in the hypothetical next time step (as this is a constant time model), but I don't know if that is accurate?
- How can I calculate the predicted probability of event occurrence? I have seen it suggested that 1 - survival should give me this value (which makes sense to me if I'm correct above), but I've also seen it stated that risk/(1 + risk) would provide this as well, which I do not understand.
- More generally, when predicting from a stratified Cox, my understanding is that all predictions are relative? For example, with type "risk", I'd be getting the average change hazard ratio within each strata for given predictor values. Is that understanding correct?
I would greatly appreciate any help. Thank you!
Additional details if needed
My goal is to compare nest locations (cases) to random points (controls) for bird species with several continuous predictors and ultimately predict the probability of nest occurrence based on predictors. I use "nest_id" as a strata with the clogit() function from the survival package. The reason for this is that not all random points were available to all birds because of territoriality etc., and so traditional logistic regression would not be appropriate. Each strata consists of a known nest location and 5 random points within the territory that could have theoretically been selected by that nesting pair. Thus, in my case, the "event" would be nest occurrence, not mortality. Here is my code to illustrate:
```
# My data (rwbl.rsf)
# A tibble: 6 × 5
nest_id status ndvi_pix veg_pix wood_pix
<chr> <dbl> <dbl> <dbl> <dbl>
1 aml0032022 1 0.235 1.39 231.
2 aml0032022 0 0.00778 1.06 232.
3 aml0032022 0 0.174 1.01 201.
4 aml0032022 0 0.306 1.42 207.
5 aml0032022 0 -0.0670 1.22 230.
6 aml0032022 0 -0.228 0.963 228.
# My model
model = clogit(status ~ scale(ndvi_pix) + scale(veg_pix) + scale(wood_pix) + strata(nest_id),
rwbl.rsf, method = "exact")
# Predictions
preds = predict(model, se.fit = T, type = "survival")
preds$fit[1:5]
1 2 3 4 5
0.7315548 0.9185614 0.8761767 0.6907805 0.9302214
```
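One calculation that may be relevant here (a sketch of my own, not part of the original study): in a conditional logit with one used point and several available points per stratum, the within-stratum probability that a given point is the used one is its exponentiated linear predictor normalized over the stratum. Assuming no rows were dropped for missingness, using the `model` and `rwbl.rsf` objects above:
```
# Sketch: within-stratum probability that each point is the used (nest) point,
# i.e. exp(linear predictor) normalized over its nest_id stratum
lp <- predict(model, type = "lp")   # linear predictor from the clogit/coxph fit
p_use <- ave(exp(lp), rwbl.rsf$nest_id, FUN = function(z) z / sum(z))
head(p_use)                         # sums to 1 within each stratum of 1 nest + 5 random points
```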
|
Understanding predicted values for clogit() model via predict.coxph(); calculating survival probability?
|
CC BY-SA 4.0
| null |
2023-04-18T20:55:20.537
|
2023-04-23T20:21:22.060
|
2023-04-19T02:12:31.230
|
386034
|
386034
|
[
"r",
"logistic",
"survival",
"cox-model"
] |
613392
|
2
| null |
613288
|
1
| null |
Your confusion is perhaps due to the fact that there are different modes of convergence of a sequence of random variables. Definitions first. The CLT tells us that:
>
under suitable conditions, the standardized sample average has limiting
distribution $N(0,1)$
that is
$$
\frac{\sqrt{n}(\bar X_n -\mu)}{\sigma}\overset{d}\to N(0,1).
$$
On the other hand, the (Weak) Law of Large Numbers (LLN) deals with averages and tells us that
>
Under suitable conditions, the limiting distribution of the sample average is the
point mass at $\mu$ (the population average)
that is
$\bar X \overset{P}\to \mu.$
The message here is that the CLT is an instance of convergence in distribution whereas the LLN is an instance of convergence in probability. The fact that $\bar X \overset{P}\to \mu$, i.e. that $\bar X-\mu \overset{P}\to 0$ doesn't conflict with CLT because LLN targets the (centred) sample average whereas the CLT targets the standardized sample average.
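A quick simulation (my own illustration) makes the distinction concrete: the spread of the raw sample mean shrinks towards zero (LLN), while the spread of the standardized sample mean stays near one (CLT).
```
# Spread of the raw vs. standardized sample mean as n grows
set.seed(1)
mu <- 2; sigma <- 3
for (n in c(10, 100, 10000)) {
  xbar <- replicate(2000, mean(rnorm(n, mu, sigma)))
  cat("n =", n,
      "| sd(xbar) =", round(sd(xbar), 4),                               # -> 0 (LLN)
      "| sd(sqrt(n)*(xbar - mu)/sigma) =",
      round(sd(sqrt(n) * (xbar - mu) / sigma), 3), "\n")                # stays near 1 (CLT)
}
```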
| null |
CC BY-SA 4.0
| null |
2023-04-18T21:04:49.777
|
2023-04-18T21:04:49.777
| null | null |
56940
| null |
613393
|
1
| null | null |
1
|
18
|
I want to find the Wold representation of $y_t = e_t + \alpha e_{t-1} e_{t-2}$, but I'm having difficulties with the product.
For instance, I tried:
$$
y_t = e_t + \alpha e_{t-1}e_{t-2} \pm e_{t-1}^2 \pm \frac{\alpha^2}{4}e_{t-2}^2 = e_t - e_{t-1}^2 - \frac{\alpha^2}{4}e_{t-2}^2 + \left(e_{t-1}+\frac{\alpha}{2}e_{t-2}\right)^2,
$$
but of course the last term is a problem... Any idea? Thanks
|
compute the Wold representation of this process
|
CC BY-SA 4.0
| null |
2023-04-18T21:21:42.983
|
2023-04-18T21:32:02.630
|
2023-04-18T21:32:02.630
|
385292
|
385292
|
[
"self-study",
"stationarity",
"decomposition"
] |
613394
|
1
| null | null |
1
|
25
|
Is there any non linear dimensionality reduction algorithm that returns a projection matrix $W$, such as Kernel PCA and Kernel LDA does?
The projection matrix $W$ can be a non-linear transformation matrix. My goal is to find a method that doesn't have any optimization requirements when projecting data onto a new dimension, such as Isomap, UMAP etc. have.
|
Is there any non linear dimensionality reduction algorithm that returns a projection matrix?
|
CC BY-SA 4.0
| null |
2023-04-18T21:26:06.207
|
2023-04-18T21:26:06.207
| null | null |
275488
|
[
"machine-learning",
"dimensionality-reduction"
] |
613395
|
2
| null |
613347
|
0
| null |
This is an extremely general question, but I'll give it a shot.
- The assumptions combine the assumptions of generalized linear models and mixed models, among others:
  - the responses are conditionally independent (i.e., once you account for the fixed and random effects)
  - on the link (log-odds) scale, the response variable is a linear function of the fixed covariates
  - the random effects are Normally distributed with zero mean; random effects within a given term are independent and identically distributed (econometricians sometimes use additional assumptions about the independence of random and fixed effects)
Since your responses are binary, there are a couple of assumptions that are either not applicable or not testable (homoscedasticity/constant scale parameter/equidispersion; conformity of the conditional distribution of the response to the assumed distribution). (If you have repeated values of the covariates, it [is possible to test for overdispersion in a binary-response model](http://www.highstat.com/Books/BGS/GLMGLMM/pdfs/HILBE-Can_binary_logistic_models_be_overdispersed2Jul2013.pdf).)
- While performance::check_model() does a good job providing graphical tests of GLMMs, binary responses are especially challenging, so I would recommend the DHARMa package as an alternative (especially `ss <- simulateResiduals(fitted_model); plot(ss)`; a minimal sketch follows below). It would be worth reading through the whole vignette for the package.
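A minimal sketch of that DHARMa workflow, assuming a fitted binomial GLMM object called `fitted_model` from lme4 or glmmTMB (both supported by DHARMa):
```
library(DHARMa)
ss <- simulateResiduals(fittedModel = fitted_model, n = 1000)  # simulation-based scaled residuals
plot(ss)            # QQ plot of scaled residuals plus residuals vs. predicted
testUniformity(ss)  # formal test of the QQ-plot deviation
testOutliers(ss)    # outlier test
```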
Questions 3 and 4 are a little too general to answer here. The answer depends what kind of problem you experience (singular fits; "failed to converge with max|grad|"; "model is nearly unidentifiable"; etc.). General advice:
- read help("troubleshooting", package = "lme4") and help("convergence", package = "lme4")
- read the relevant sections of the GLMM FAQ
- Google the warning messages and/or search here and on Stack Overflow
- the gold standard is usually to run allFit() and see if the results are sufficiently close across a range of optimizers
- double-check to make sure your model makes sense (if necessary, post a question here)
- simplify your model
---
PS a couple of notes about your model:
- it looks like you're constructing your own dummy variables from categorical predictors (binary_*). It's usually a better idea to use factor variables as predictors in the model, allowing R to construct dummies on the fly. (See the contr_code_* functions in the faux package, currently only available from Github, for user-friendly ways to code contrasts.)
- using a weights= argument for binary data is unusual, and might not do what you think/you should be careful, e.g. see here, here (if you have survey data you may want to consider the survey package instead ...)
| null |
CC BY-SA 4.0
| null |
2023-04-18T21:30:12.253
|
2023-04-18T21:30:12.253
| null | null |
2126
| null |
613396
|
2
| null |
613344
|
6
| null |
I agree with the other two answers here, but just wanted to highlight one reason why the $\sigma$-algebra based definition in Durrett, though more sophisticated, is beneficial. This definition emphasises that the conditional expectation $E[X_{n + 1} | X_1, \ldots, X_n]$ only depends on $X_1, \ldots, X_n$ through the $\sigma$-algebra $\mathscr{F}_n=\sigma(X_1, \ldots, X_n)$ that they generate, not on the random variables themselves. So, for example, $$E[X_{n + 1} | X_1, \ldots, X_n]=E[X_{n + 1} | 2 X_1, \ldots, 2X_n]$$ because $$\sigma(X_1, \ldots, X_n)=\sigma(2X_1, \ldots, 2X_n)$$
| null |
CC BY-SA 4.0
| null |
2023-04-18T21:42:25.387
|
2023-04-18T21:42:25.387
| null | null |
86998
| null |
613397
|
1
| null | null |
0
|
69
|
I am starting to learn more advanced statistics and I have come across this problem:
Find the Maximum likelihood estimator of $\theta$ if $X_i$ have the following density:
$f(x-\theta)$
with $f(x-\theta)=\frac{e^{\frac{-|x-\theta|}{2}}}{2\sqrt{2\pi|x-\theta|}}$
I have tried to differentiate the log but cannot find where it is equal to zero.
What I have so far:
$$
V_{X_1,...,X_n}(\theta)=\prod_{i=1}^nf(X_i-\theta)
$$
$$
\log(V_{X_1,...,X_n}(\theta))=(\sum_{i=1}^n-\frac{|X_i-\theta|}{2}-\frac{1}{2}\log(|X_i-\theta|))-n\log(2\sqrt{2\pi})
$$
$$
\frac{\partial\log(V_{X_1,...,X_n}(\theta))}{\partial\theta}=\sum_{i=1}^n-\frac{1}{2}(\boldsymbol{1}_{\theta>X_i}-\boldsymbol{1}_{\theta\leq X_i})-\frac{1}{2}\frac{1}{|X_i-\theta|}
$$
|
MLE of $\frac{e^{-\frac{|x-\theta|}{2}}}{2\sqrt{2\pi|x-\theta|}}$
|
CC BY-SA 4.0
| null |
2023-04-18T21:52:34.527
|
2023-04-19T11:47:25.100
|
2023-04-19T11:47:25.100
|
384407
|
384407
|
[
"self-study",
"maximum-likelihood"
] |
613399
|
2
| null |
613152
|
6
| null |
Putting aside whether reducing class imbalance is even a good thing, propensity score matching or any other matching method would be a terrible way to reduce class imbalance. I presume your strategy would be to find 100 non-diseased cases that are similar on all your covariates (i.e., on the propensity score) to the diseased cases. What you will be left with is a group of diseased and a group of non-diseased patients who are identical to each other, meaning you can't predict which is which from the covariates. The goal of matching is to make it so that covariates don't predict the treatment variable (in this case, disease status), but the entire point of your analysis is to be able to predict disease status. So creating groups that are balanced on the covariates is a terrible idea, essentially ruining your study. Do not do this.
You may feel like propensity score matching sounds like a good method because its goal is to reduce imbalance in the covariates, but covariate imbalance is a completely different concept from class imbalance. Covariate imbalance concerns the association between treatment and the covariates (i.e., the very thing you are trying to study), and class imbalance concerns the sample size of the classes to be predicted. While it's true that matching would eliminate class imbalance by discarding a huge number of your non-diseased cases, it would also make it so that disease status cannot be predicted from the covariates. I reiterate, do not do this.
| null |
CC BY-SA 4.0
| null |
2023-04-18T22:54:14.813
|
2023-04-18T22:54:14.813
| null | null |
116195
| null |
613401
|
2
| null |
613150
|
0
| null |
It's possible to lose information when label encoding a column like this without knowing what the values represent and what they're going to be used for.
Assuming your columns are numerical variables, there are two general situations to consider from a modelling perspective.
## 1. Models that don't implicitly scale the data
In this case you have to take into account exactly what effect your scaling and encoding steps have. If they change the data in different ways, the model has no way of changing them to get to the same final end point.
Inevitably, this will give you different final models.
## 2. Models that do implicitly scale the data
In this case encoding and scaling steps will have exactly the same effect, because whilst information on the magnitude is lost, the relationship is preserved.
The model's implicit scaling will bring them to roughly the same endpoint. Therefore, encoding or scaling are equally safe in this situation.
## Takeaway
In your case, as you already have the scaling step set up, I would continue to use that. If you really want to encode, try both and check it makes no difference. That's the best way to be sure.
| null |
CC BY-SA 4.0
| null |
2023-04-18T23:03:42.963
|
2023-04-19T23:20:29.917
|
2023-04-19T23:20:29.917
|
363857
|
363857
| null |
613402
|
1
| null | null |
0
|
26
|
Consider a stochastic process $\left( x_t \right)_{t \in \mathbb{R}}$ with auto-covariance function $k(t, t') = \mathbb{E}[(x_t - \mu_t) (x_{t'} - \mu_{t'})]$ where $\mu_{t} = \mathbb{E}[x_t]$.
Suppose $k(t, t)$ is constant, meaning that the variance of the process is stationary.
What is such a process called?
Note that every wide-sense stationary (WSS) process has the above property since a WSS process satisfies $k(t, t') = k(\vert t - t' \vert)$ and $k(t, t) = k(0) < \infty$. But the converse is not necessarily valid. Take for instance $k(t, t') = f(\vert t^2 - t'^2 \vert)$ where $f$ is a suitable positive-definite function. A process corresponding to such an auto-covariance (if it exists) is not WSS, but its variance is constant.
|
What a stochastic process with constant variance is called?
|
CC BY-SA 4.0
| null |
2023-04-18T23:30:59.087
|
2023-04-26T06:47:18.017
|
2023-04-26T06:47:18.017
|
53690
|
252938
|
[
"terminology",
"stochastic-processes",
"stationarity"
] |
613403
|
1
| null | null |
0
|
27
|
I have a list of coordinates for where different people live over an eight-year period. They are repeat cross-sections of populations served by several county agencies for free workforce training for low income populations. We suspect (and basically know) that, as the city has gotten more expensive and gentrified, lower income folks are moving further from the city and into various other areas further from service providers.
I could track the overall walk or changes by tracking the "center" of those served on a yearly basis, but how do I get something like a variance or standard deviation of that point in two dimensions? Is there an agreed upon measure of two-dimensional dispersion that I can use to measure this? Or would it have to be two values (dispersion of X and dispersion of Y)? Is this going to be sensitive to changes in sample size on a yearly basis?
|
Best way for measuring dispersion in two dimensional, continuous data
|
CC BY-SA 4.0
| null |
2023-04-18T23:33:18.730
|
2023-04-19T02:51:34.633
| null | null |
186886
|
[
"variance",
"euclidean",
"dispersion"
] |
613404
|
2
| null |
613288
|
2
| null |
#### Both types of convergence hold
I think that the distinction you are referring to gets somewhat lost in a formal explanation, so allow me to give you a more informal heuristic view. Re-arranging the formal convergence in distribution result in the CLT result gives the following asymptotic approximating distribution:
$$\bar{X}_n \overset{\text{Approx}}{\sim} \text{N} \bigg( \mu, \frac{\sigma^2}{n} \bigg)
\quad \quad \quad \text{for large } n.$$
However, as you can see, as $n \rightarrow \infty$ the variance in this approximating distribution approaches zero and so $\bar{X}_n \rightarrow \mu$ in probability. So both of the results you raise are true --- the distribution of the sample mean converges to the normal distribution, and the variance of that distribution converges to zero, such that the sample mean converges (in probability) to the true mean.
| null |
CC BY-SA 4.0
| null |
2023-04-18T23:33:44.993
|
2023-04-18T23:33:44.993
| null | null |
173082
| null |
613405
|
1
| null | null |
1
|
31
|
Suppose I have the model
\begin{align*}
Y_i \sim \text{Poisson}(e^{\beta_0 + \beta_1t_i + \beta_2X_i + \beta_3 t'_i})
\end{align*}
where $\boldsymbol{\beta} = (-1, 0, -10, 0)$ for $t_i = i$ for $i = 1, \cdots, 100$, $X_i = \mathbb{I}(t_i \ge 60)$, and $t_i' = \max(0, t_i - 60)$. You can imagine a scenario where we are observing some count outcome for 100 weeks; at week 60, an intervention occurs and is super effective at eliminating any count outcomes (with $\beta_2 = -10$). We regress against $t_i$ and $t_i'$ to capture the temporal effects before and after intervention, albeit in this example there are no temporal effects.
Here is a reproducible example of simulating from this model and fitting with a Poisson GLM in R:
```
beta0 = -1
beta1 = 0
beta2 = -10
beta3 = 0
t = 1:100
X = (t >= 60)*1
t_prime = pmax(0, t - 60)
lambda = exp(beta0 + beta1*t + beta2*X + beta3*t_prime)
Y = rpois(100, lambda)
dat = data.frame(Y = Y, t = t, X = X, t_prime = t_prime)
mod = glm(Y ~ t + X + t_prime, data = dat, family = poisson)
summary(mod)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -9.412e-01 4.093e-01 -2.299 0.0215 *
t 3.982e-03 1.153e-02 0.345 0.7298
X -1.860e+01 2.891e+03 -0.006 0.9949
t_prime -3.982e-03 1.244e+02 0.000 1.0000
```
We see that testing for $\beta_2 \neq 0$ is very insignificant, while clearly $\beta_2$ is very different from 0. The reason for this insignificance stems from having absolutely no outcomes once the intervention is in place, in which case the model can't differentiate between something like $\beta_2 = -10$ or $\beta_2 = -100$, all of which result in $\exp(\beta_2) \approx 0$; this produces the large standard errors. How can I correctly test for the significance of $\beta_2$?
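For reference, here is a quick comparison (a sketch using the objects above, my own illustration) of the Wald test with a likelihood-ratio test of the nested models; the LR statistic does not collapse under this quasi-separation the way the Wald z-statistic does:
```
# Quick check: Wald vs. likelihood-ratio test for the X term
mod0 <- glm(Y ~ t + t_prime, data = dat, family = poisson)  # model without X
anova(mod0, mod, test = "LRT")  # LR test of beta_2 = 0; unlike the Wald z-test,
                                # it is not destroyed by the huge standard error on X
```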
|
Poisson GLM with rare outcome (very negative coefficient)
|
CC BY-SA 4.0
| null |
2023-04-18T23:49:28.850
|
2023-04-19T01:06:22.220
| null | null |
117159
|
[
"generalized-linear-model",
"zero-inflation",
"rare-events"
] |
613407
|
1
| null | null |
0
|
26
|
I know that Khinchin's Law is for sample mean:
$$\overline X_n=\cfrac 1n\sum_{i=1}^{n}{X_i}$$
$$\lim _{n \rightarrow \infty} \operatorname{Pr}\{\left|\overline{X}_n-\mu\right|<\varepsilon\}=1$$
How to get a law for sample variance:
$$S^2_n=\cfrac{1}{n-1}\sum_{i=1}^{n}(X_i-\overline X_n)^2=\cfrac{1}{n-1}(\sum_{i=1}^{n}X_i^2-n\overline X^2_n)$$
$$\lim _{n \rightarrow \infty} \operatorname{Pr}\{\left|S^2_n-\sigma^2\right|<\varepsilon\}=1$$
|
Khinchin's Law for Variance
|
CC BY-SA 4.0
| null |
2023-04-19T00:08:26.720
|
2023-04-19T00:08:26.720
| null | null |
184355
|
[
"variance",
"mean",
"expected-value",
"law-of-large-numbers"
] |
613408
|
2
| null |
611747
|
0
| null |
The answer by user @krkeane is interesting.
Following is a different kind of answer, which considers the "effective behavior" of the Markov chain transition matrices.
For a given $n$-by-$n$ Markov chain transition matrix, the most important things are:
- Does the Stochastic matrix converge? In other words, are the conditions of the Perron–Frobenius theorem satisfied? The original question mentioned the existence of many zeros, which is natural in practice; so to "encourage", but not guarantee, that a typical stochastic matrix with zeros complies with the Perron–Frobenius theorem, it is typical to replace zeros with very small positive values, and re-normalize row sums to 1, prior to the subsequent numerical processing (see next 2 items).
- What is the steady-state final probability of residing in each state, i.e., what are the values of the stationary distribution given by the leading eigenvector? (The leading eigenvector $\mathbf{v}_1$ should contain only real non-negative numbers which sum to 1, and its associated eigenvalue should have a real value $\lambda_1 = 1$.) Thus, the steady-state distribution of 2 stochastic matrices $M_1$ and $M_2$ can be compared simply by considering the leading eigenvector $\mathbf{v}_1(M_1)$ and $\mathbf{v}_1(M_2)$ from each of the pair of stochastic matrices, and the 2 vectors compared as any 2 discrete distributions of length $n$, with for example the Kullback-Leibler (KL) divergence, etc. (Note that this is a comparison of just a pair of length-$n$ vectors, in contrast to @krkeane's answer which requires $n$ comparisons of length-$n$ pairs of vectors, corresponding to the complete $n \times n$ matrices.)
- How fast does the convergence to the steady-state final probabilities occur? I.e., the approximate dynamic behavior. This is typically the ratio of the magnitude of the $2^{nd}$-largest eigenvalue (which is typically complex valued) to the largest eigenvalue (the leading eigenvalue), which should have real value 1: $|\lambda_2|/|\lambda_1| = |\lambda_2|/1 = |\lambda_2|$. The smaller this ratio (actually simply the value of $|\lambda_2|$), the faster the convergence to steady-state. Thus, the 2 stochastic matrices $M_1$ and $M_2$ can be compared by comparing $|\lambda_2(M_1)|$ to $|\lambda_2(M_2)|$
Items 2 and 3 above give 2 comparison conditions, both of which should be passed in order for a pair of stochastic matrices to be considered "effectively equivalent" in terms of their steady-state and approximate dynamic behavior. Item 1 (replacing zeros with very small positive values) is a practical pre-processing for being able to apply items 2 and 3.
Note: Typical convention for a [Stochastic matrix](https://en.wikipedia.org/wiki/Stochastic_matrix) is that each of its rows sum to 1. Therefore, the leading eigenvector $\mathbf{v}_1$ mentioned in item 2 is the row eigenvector more commonly known as left eigenvector. However, typical numerical analysis libraries customarily compute the right eigenvector(s), i.e., column eigenvectors. Therefore, in order to use a library which computes the right eigenvector(s), it's simply required to input to it the transpose $M^T$ of the stochastic matrix $M$. Of course, $M^T$ has columns which sum to 1, instead of rows.
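A minimal sketch of items 1-3 (my own illustration; `compare_chains` is a hypothetical helper written for this example, not an existing library function):
```
# Compare two row-stochastic matrices by steady-state vector and |lambda_2|
compare_chains <- function(M1, M2, eps = 1e-8) {
  analyze <- function(M) {
    M <- M + eps                 # item 1: replace zeros by tiny positive values
    M <- M / rowSums(M)          # re-normalize rows to sum to 1
    e <- eigen(t(M))             # left eigenvectors of M = right eigenvectors of t(M)
    v <- Re(e$vectors[, 1])      # leading eigenvector (eigenvalue ~ 1)
    list(pi = v / sum(v), lambda2 = Mod(e$values[2]))
  }
  a1 <- analyze(M1); a2 <- analyze(M2)
  c(KL = sum(a1$pi * log(a1$pi / a2$pi)),  # item 2: compare steady-state distributions
    lambda2_M1 = a1$lambda2,               # item 3: compare convergence speeds
    lambda2_M2 = a2$lambda2)
}
M1 <- matrix(c(0.9, 0.1, 0,  0.2, 0.7, 0.1,  0, 0.3, 0.7), 3, byrow = TRUE)
M2 <- matrix(c(0.8, 0.2, 0,  0.1, 0.8, 0.1,  0, 0.4, 0.6), 3, byrow = TRUE)
compare_chains(M1, M2)
```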
| null |
CC BY-SA 4.0
| null |
2023-04-19T00:24:45.970
|
2023-04-19T00:58:15.430
|
2023-04-19T00:58:15.430
|
366449
|
366449
| null |
613409
|
2
| null |
613405
|
0
| null |
If this is how your data is generated, you need way more data to be able to pick up the effect of `X`. If you are interested in significance testing of GLMs, there is a good paper by Lyles (2006) A practical approach to computing power for generalized linear models with nominal, count, or ordinal responses. There's probably an implementation in R somewhere.
However, it may be that in a real life context, there are better (or at least, different) ways of describing the data. For example, a zero-inflated Poisson model, where `X` is a predictor variable for the zero-inflated term. Whether you think this is appropriate should hopefully be dictated by theory.
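A minimal sketch of that zero-inflated specification, assuming the pscl package and the simulated `dat` from the question (note that with those perfectly separated simulated data the zero-inflation part will hit the same identifiability problem, so this only illustrates the model specification):
```
library(pscl)
# count part depends on t, zero-inflation part depends on the intervention X
zip <- zeroinfl(Y ~ t | X, data = dat, dist = "poisson")
summary(zip)
```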
| null |
CC BY-SA 4.0
| null |
2023-04-19T01:06:22.220
|
2023-04-19T01:06:22.220
| null | null |
369002
| null |
613410
|
1
| null | null |
2
|
73
|
I'm interested in performing a Bayesian meta-analysis, specifically, using a random-effects hierarchical model (as described [here](https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/bayesian-ma.html)). Briefly, in this model we assume that the $k$th study's reported (observed) effect size $\hat{\theta}_k$ is an estimate of that study's true effect size $\theta_k$. Further, all $\theta_k$ are considered samples from a higher-level distribution that describes the true overall effect size. The mean of this higher-level distribution $\mu$ is what we're most interested in. So, we have:
$$\hat{\theta}_k \sim N(\theta_k, \sigma^2_k)$$
$$\theta_k \sim N(\mu, \tau^2)$$
In the section of the tutorial describing how to fit the model, the authors write: "Furthermore, we cannot simply use the effect size of each study...as is. We also have to give studies with higher precision (i.e. sample size) a greater weight."
I'm confused by this because I thought the observed standard error $\sigma_k$ we use in the likelihood already weights each individual study's contribution to the estimate of the overall effect $\mu$ (i.e., the smaller $\sigma_k$, the more study $k$ will contribute to the estimate of $\mu$). That is, don't we get that for "free" using Bayes' theorem?
Next, if you can explain why I should take an extra step to weight each study by its precision, can you also explain the mechanics of how you do that? I'm using Python and PyMC rather than R, so it will be extra appreciated if the explanation does NOT involve R-brms syntax.
Note: Below is what I have within the PyMC model definition, where `theta_obs` and `sigma_obs` denote each study's reported effect size and standard error (i.e., $\hat{\theta}_k$ and $\sigma_k$ from the equations above):
```
# Hyper-priors
mu_theta = pm.Normal("mu_theta", mu=0, sigma=1)
tau = pm.Exponential("tau", lam=1)
# Priors
theta = pm.Normal("theta", mu=mu_theta, sigma=tau, shape=n_studies)
# Likelihood
theta_hat = pm.Normal('theta_hat', mu=theta, sigma=sigma_obs, observed=theta_obs)
```
The estimated parameters seem reasonable, so I think I'm on the right track, but appreciate any feedback regarding weighting each study.
|
Bayesian meta-analysis: Why and how to weight individual study's contribution to overall effect?
|
CC BY-SA 4.0
| null |
2023-04-19T01:24:40.500
|
2023-04-19T13:42:53.027
|
2023-04-19T13:42:53.027
|
360122
|
360122
|
[
"bayesian",
"python",
"meta-analysis",
"hierarchical-bayesian",
"pymc"
] |
613411
|
2
| null |
613282
|
0
| null |
I received a response from Prof. Trevor Hastie, the author of the `glmnet` package. (Thanks!)
>
In the glmnet code, we convert an inf for penalty factor to an exclude (another argument which lists the excluded variables),
and then internally reset it to 1 (so as to have minimal effect on the rescaling).
The new version will include it into the help file
I confirmed this using `CVXR`, where I constructed an explicit objective function, and found that it matched perfectly. Check the code below
```
library(glmnet)
library(CVXR)
# simulate data
n <- 500
p <- 10
x <- matrix(rnorm(n*p), ncol=p)
b <- c(1:5, rep(0, 5))
y <- x %*% b + rnorm(n, sd = .5)
# fix penalty value
l <- 0.1
pf=c(1:5/10, rep(Inf, 5)) ## exclude last 5 variables
# fit with glmnet
fit_glmnet <- glmnet(x=x, y=y, standardize=FALSE, intercept=TRUE,
penalty.factor=pf)
betas_glmnet <- as.vector(coef(fit_glmnet, s=l, exact=TRUE, x=x, y=y,
penalty.factor=pf))
# fit with CVXR
## for Inf values in penalty.factor
## 1. exclude corresponding x cols
## 2. convert penalty factor into 1
x.index <- c(pf!=Inf)
beta <- Variable(sum(x.index)+1)
pf.new <- pf
pf.new[pf== Inf] <- 1
pf.scale <- pf.new / mean(pf.new)
## pf.scale <- pf[x.index] / mean(pf[x.index]) / sum(x.index) * p
loss <- l * p_norm(pf.scale[x.index]*beta[-1], 1)
obj <- Minimize( sum((y - beta[1] - x[,x.index] %*% beta[-1])^2) / (2*n) + loss)
## solver
prob <- Problem(obj)
result <- solve(prob, verbose=FALSE)
betas_cvxr <- drop(result$getValue(beta))
## compare results
betas_cvxr <- c(betas_cvxr, rep(0,5))
round(cbind(betas_glmnet, betas_cvxr), 5)
```
| null |
CC BY-SA 4.0
| null |
2023-04-19T02:16:30.133
|
2023-04-19T02:16:30.133
| null | null |
204986
| null |
613412
|
2
| null |
613283
|
0
| null |
"Probability density" is not a single number that can be lower or higher. It's a function $p(x)$ of $x$. Usually it also depends on some additional parameters $q_i$. For example, in the normal distribution, we have $q_1 = \mu$ and $q_2 = \sigma$.
To change variance, you would change these parameters. You could also change the function itself, if you hadn't said "same distribution". When you change those parameters, you will also alter $p(x)$, which may become higher or lower depending on the exact function. So no, it is not true that higher variance will always reduce probability density. For example, for a normal distribution, extreme values will have higher density.
To give a specific counter example, let's talk about the probability of getting $2 < x < 3$:
- With a normal distribution $N(0, 1)$, $p(x > 2) = 0.023$ and $p(x > 3) = 0.001$ therefore $p(2 < x < 3) = 0.022$.
- With a normal distribution $N(0, 4)$, $p(x > 2) = 0.158$ and $p(x > 3) = 0.067$ therefore $p(2 < x < 3) = 0.091$
In this case, the probability of falling between 2 and 3 has increased when the variance went from 1 to 4.
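These probabilities are easy to check directly, for instance in R:
```
# N(0, 1): sd = 1; N(0, 4): sd = 2
pnorm(2, sd = 1, lower.tail = FALSE) - pnorm(3, sd = 1, lower.tail = FALSE)  # ~0.02
pnorm(2, sd = 2, lower.tail = FALSE) - pnorm(3, sd = 2, lower.tail = FALSE)  # ~0.09
```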
I had to change your example a little, because this is trivial:
>
is it true that $P(\mathbf{X_1}; N(\mu_1, \sigma_1^2)) < P(\mathbf{X_2}; N(\mu_2, \sigma_2^2))$?
You've left too many independent variables, so I can say $X_1 = \mu_1$ and $X_2 = \mu_2 + 10 \sigma_2$ and then it is false, or vice versa and it is true.
| null |
CC BY-SA 4.0
| null |
2023-04-19T02:45:33.800
|
2023-04-19T02:46:25.353
|
2023-04-19T02:46:25.353
|
386036
|
386036
| null |
613413
|
1
| null | null |
0
|
24
|
My general understanding of particle filters is that you represent your state as a collection of discrete particles which you then transform using your state propagation equation.
What I don't understand is why you re-sample the particles after propagation? Isn't the whole point to accurately track the distribution?
Secondly, how is this fundamentally different from fitting a Gaussian mixture distribution to the propagated particles and then re-sampling?
|
What is the point of the re-sampling step in a particle filter?
|
CC BY-SA 4.0
| null |
2023-04-19T02:51:01.983
|
2023-04-19T04:41:04.327
| null | null |
218119
|
[
"stochastic-processes",
"filter",
"particle-filter"
] |
613414
|
2
| null |
613403
|
0
| null |
There is a book by Neft (1966) Statistical analysis for areal distributions. He suggests in Chapter IV, and I agree, that the "harmonic mean" of an areal distribution is ideal for describing its central tendency, for a variety of reasons.
If you have a set of data points (coordinates) indexed by $i=1,\cdots ,P$ the harmonic mean center is located at position $j$ where the following expression is minimised:
$$
H_j=\frac{1}{\sum_{i=1}^P \frac{1}{r_{ij}}}
$$
with $r_{ij}$ being the great circle distance between coordinate $j$ and point $i$.
This has some benefits over using Euclidean distance-based measures of center
- it's less sensitive to outliers, as the Euclidean distance-based center is based upon the square of distances
- it will always be based within the area of study, even if the area of interest is concave
- if you are spread over a large area, you will correctly account for the curvature of the earth
The value at the minimum is the measure of dispersion you are interested in.
You should be able to calculate this manually, either by a) using a two-dimensional optimiser built into whatever software you are using, or b) generating a grid of points, calculating $H_j$ at each point, and then finding the minimum.
| null |
CC BY-SA 4.0
| null |
2023-04-19T02:51:34.633
|
2023-04-19T02:51:34.633
| null | null |
369002
| null |
613415
|
1
| null | null |
2
|
55
|
I have conducted an OLS regression with interaction terms in R and used the emmeans package to examine it further. I am not sure whether I am on the right track in interpreting the interaction terms.
- My research question: Does depression moderate the association between abuse and delinquency?
- Hypothesis: Abuse experience will have a greater negative impact on delinquency for people with high levels of depression compared to those with low depression.
IV: abuse (yes=1, no=0);
Moderator: depression (high depression=1, low depression=0);
DV: delinquency (continuous)
Here are my codes:
```
reg <-lm(delinquency~ abuse + age + depression + NBsafety +abuse*depression, data=test)
summary(reg)
reg_a <- emmeans(reg, ~abuse*depression)
reg_a
contrast(reg_a, "revpairwise", by="depression", adjust="none")
emmip(reg, depression~abuse, CIs=TRUE)
reg_b <-emmip(reg, depression~abuse, CIs=TRUE, plotit=FALSE)
reg_b
p <-ggplot(data=reg_b, aes(x=depression, y=yvar, fill=abuse)) + geom_bar(stat="identity", position="dodge")
p
```
And, I got these results:
```
Call:
lm(formula = delinquency ~ abuse + age + depression + NBsafety +
abuse * depression, data = test)
Residuals:
Min 1Q Median 3Q Max
-8.494 -3.130 -1.424 1.780 28.770
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.90783 0.41027 16.837 < 0.0000000000000002
abuse1 0.89760 0.28890 3.107 0.00191
age -0.09954 0.01507 -6.607 0.000000000047
depression1 1.35707 0.27387 4.955 0.000000767370
NBsafety1 -0.88643 0.18767 -4.723 0.000002439185
abuse1:depression1 1.12343 0.57906 1.940 0.05247
Residual standard error: 4.716 on 2714 degrees of freedom
(13 observations deleted due to missingness)
Multiple R-squared: 0.06073, Adjusted R-squared: 0.059
F-statistic: 35.1 on 5 and 2714 DF, p-value: < 0.00000000000000022
>
> reg_a <- emmeans(reg, ~abuse*depression)
> reg_a
abuse depression emmean SE df lower.CL upper.CL
0 0 3.95 0.111 2714 3.73 4.16
1 0 4.84 0.268 2714 4.32 5.37
0 1 5.30 0.251 2714 4.81 5.80
1 1 7.32 0.437 2714 6.47 8.18
Results are averaged over the levels of: NBsafety
Confidence level used: 0.95
>
> contrast(reg_a, "revpairwise", by="depression", adjust="none")
depression = 0:
contrast estimate SE df t.ratio p.value
abuse1 - abuse0 0.898 0.289 2714 3.107 0.0019
depression = 1:
contrast estimate SE df t.ratio p.value
abuse1 - abuse0 2.021 0.504 2714 4.011 0.0001
Results are averaged over the levels of: NBsafety
>
> emmip(reg, depression~abuse, CIs=TRUE)
> emmip(reg, abuse~depression, CIs=TRUE)
>
> reg_b <-emmip(reg, depression~abuse, CIs=TRUE, plotit=FALSE)
> reg_b
depression abuse yvar SE df LCL UCL tvar xvar
0 0 3.95 0.111 2714 3.73 4.16 0 0
1 0 5.30 0.251 2714 4.81 5.80 1 0
0 1 4.84 0.268 2714 4.32 5.37 0 1
1 1 7.32 0.437 2714 6.47 8.18 1 1
Results are averaged over the levels of: NBsafety
Confidence level used: 0.95
>
> ggplot(data=reg_b, aes(x=depression, y=yvar, fill=abuse)) + geom_bar(stat="identity", position="dodge")
```
[](https://i.stack.imgur.com/tTUEM.jpg)
[](https://i.stack.imgur.com/0D7lI.jpg)
If I assume that the interaction term was significant (although it was not, p=0.05247), can I interpret this as the impact of abuse on delinquency being stronger for people with high levels of depression than for those with low depression? This is because the emmeans contrast showed a greater difference for depression=1 (abuse1 - abuse0 = 2.02) than for depression=0 (abuse1 - abuse0 = 0.898), and the slope in the graph is steeper for depression=1 than for depression=0.
Could you please guide me whether I am correct?
|
Interpretation of interaction terms in R
|
CC BY-SA 4.0
| null |
2023-04-19T03:08:47.543
|
2023-04-23T17:12:47.317
| null | null |
385148
|
[
"r",
"interaction",
"interpretation"
] |
613416
|
2
| null |
613388
|
1
| null |
Asymptotically, we know the distribution of $\hat\beta_{OLS}$, because it's just the usual sampling distribution of the OLS estimator
$$\sqrt{n}(\hat\beta_{OLS}-\beta_{OLS})\stackrel{d}{\to} N(0, V)$$
where
$$V=A^{-1}BA^{-1}$$
and $A$ and $B$ are the probability limits of $(X^TX)/n$, $X^T(Y-\mu)^2X/n$ respectively. That's just a fact about the OLS estimator, not depending on the probability model (except for assumptions like finite variance and independence).
Now if $X=Z+\eta$ where $\eta$ has zero mean and constant finite variance, and if the residuals also have constant variance and are independent of $\eta$, the variance simplifies to $\sigma^2(X^TX)^{-1}$. The variance of the slope, in particular, simplifies to $\sigma^2/\mathrm{var}[X]$. Since by assumption
$$\mathrm{var}[X]=\mathrm{var}[Z]+\mathrm{var}[\eta]$$
we've made quite a bit of progress. Especially as you said you already knew what the limiting value $\beta_{OLS}$ was.
In the real world, you don't know $\mathrm{var}[\eta]$, and $\eta$ doesn't have zero mean or constant variance, so things are more difficult, but under the classical assumptions it's not too hard.
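For what it's worth, here is a rough simulation check of the classical-assumption case (the numbers are my own illustration: $\beta = 2$, $\mathrm{var}[Z]=1$, $\mathrm{var}[\eta]=0.25$, $\mathrm{var}[e]=1$):
```
set.seed(1)
n <- 1000
slopes <- replicate(2000, {
  Z <- rnorm(n); eta <- rnorm(n, sd = 0.5); e <- rnorm(n)
  X <- Z + eta
  Y <- 2 * Z + e
  coef(lm(Y ~ X))[2]
})
mean(slopes)  # ~ 2 * var(Z) / (var(Z) + var(eta)) = 1.6, the attenuated limit
sd(slopes)    # ~ sqrt(sigma2 / (n * var(X))), sigma2 = 5 - 1.6^2 * 1.25 = 1.8, i.e. ~0.038
```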
| null |
CC BY-SA 4.0
| null |
2023-04-19T03:56:37.520
|
2023-04-19T03:56:37.520
| null | null |
249135
| null |
613417
|
1
|
613423
| null |
2
|
28
|
I know there is a similar question related to this topic, but I still can't solve my case.
So, I have one year of data (365 days) of car requests. I used 80% (about 280 days) for training and the rest as test data, and I make predictions with ARIMA.
```
Arima_model=sm.tsa.arima.ARIMA(train_df,order=(0,0,0))
result=Arima_model.fit()
future_dates = pd.date_range(start='2022-10-20',periods=73)
predictions = result.predict(start=future_dates[0], end=future_dates[-1])
```
The problem is that all of my predictions return the same value.
I attach the p-values and the autocorrelation of train_df. I don't think it is because my data is insufficient. I have also tried changing the order parameter, but it still does not work.
[](https://i.stack.imgur.com/JbvHn.png)
[](https://i.stack.imgur.com/LMuWi.png)
|
Arima model generate same predictions
|
CC BY-SA 4.0
| null |
2023-04-19T04:02:42.820
|
2023-04-19T06:33:05.467
|
2023-04-19T04:05:49.463
|
386040
|
386040
|
[
"time-series",
"python",
"arima"
] |
613418
|
2
| null |
613413
|
1
| null |
If you had perfect knowledge of the state and the propagation equations, you wouldn't need to resample. You would indeed already be accurately tracking the distribution.
Typically, though, you don't know the state perfectly and your propagation equations are approximations and/or they are intrinsically stochastic. Your particles will (slowly or rapidly) diverge from each other and from the true path of the state. Your updated estimate will be imprecise (because it's based on old data) and/or incorrect (because the propagation equations aren't perfectly correct).
When you acquire more data, you have a chance to fix this divergence. You'd like to keep the particles whose state is more consistent with the data you acquire at a later time and discard those that are less consistent with later data. Resampling does this, but in a smoother way -- it's not an in or out decision, the sampling is proportional to the likelihood given the observed data.
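To make this concrete, here is a minimal one-step sketch of the propagate, weight, and resample cycle (a toy 1-D random-walk state observed with Gaussian noise; the numbers are illustrative only):
```
set.seed(1)
N <- 1000
particles <- rnorm(N)                           # current particle cloud
particles <- particles + rnorm(N, sd = 0.3)     # propagate through the stochastic state equation
y_obs <- 1.2                                    # newly acquired observation
w <- dnorm(y_obs, mean = particles, sd = 0.5)   # likelihood of the observation under each particle
w <- w / sum(w)
particles <- sample(particles, N, replace = TRUE, prob = w)  # resample: keep consistent particles more often
```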
| null |
CC BY-SA 4.0
| null |
2023-04-19T04:41:04.327
|
2023-04-19T04:41:04.327
| null | null |
249135
| null |
613420
|
2
| null |
612952
|
1
| null |
Your reference seems to be using the traditional accumulating eligibility trace; the same type of eligibility trace, equation (3) of the paper [Replacing eligibility trace for action-value learning with function approximation](http://www.cs.hut.fi/%7Eframling/Publications/KF_ESANN2007.pdf) by Framling, may be clearer to understand: only when $s=s_t$ and $a=a_t$ is $e_t(s,a)=γλe_{t−1}(s,a)+1$; otherwise $e_t(s,a)=γλe_{t−1}(s,a)$. And indeed, according to the $Q$-value update equation (1) there, the same $\delta_t$ is used to update the $Q$-values of all $(s,a)$ at time step $t$. Note that in the context of your reference's pseudo-algorithm, it is clear that only for the tabular case of $s=s_t$ and $a=a_t$ at time step $t$ is the accumulating eligibility trace set to its maximum value of $1$, since your reference simply overrides $e_t(s,a)=1$ right after choosing action $a$ in a specific state $s$, while for all other state-action pairs it is simply set to $γλe_{t−1}(s,a)$ due to the one-step trace decay together with the discount. So your reference's algorithm actually simplifies the maximum of the accumulating eligibility trace in equation (3) by dropping the accumulating term.
Finally, to address your main concern of why the same delta applies to all $(s,a)$ at each time step: it is helpful to know that eligibility traces were inspired by the behavior of biological neurons, which reach maximum eligibility for learning, via adjustment of the synaptic weights between neurons, a short time after their activation. Thus, if you follow the true online Sarsa(λ) algorithm using linear function approximation of the action values for all state-action pairs from the book Reinforcement Learning: An Introduction by Sutton and Barto, each component of the eligibility trace, together with the common delta at every episodic time step, affects its corresponding component of the weight vector of the linear action-value function; hence the same delta affects all $Q$-values in your tabular case at every time step.
| null |
CC BY-SA 4.0
| null |
2023-04-19T05:36:41.067
|
2023-04-19T05:36:41.067
| null | null |
371017
| null |
613421
|
2
| null |
613343
|
3
| null |
Consider the graph of $|X-\mu|$ which is like a triangle, and the graph of $k\sigma$ which is like a straight line. In the example image below we use $k=3$
[](https://i.stack.imgur.com/ke8mB.png)
The inequality
$$|X-\mu| < k\sigma$$
translates into $$\mu-k\sigma <X < \mu+k\sigma$$
If you take the square (which is a [non-injective](https://en.m.wikipedia.org/wiki/Bijection,_injection_and_surjection), ie. many-to-one or non-invertible, function) then this covers two regions because the cases $\mu-k\sigma <-X < \mu+k\sigma$ will also satisfy the squared inequality.
So, if $\mu > k\sigma$ then
$$\mathbb{P}\left((\mu-k\sigma)^2 < X^2 < (\mu +k\sigma)^2\right) = \mathbb{P}\left(\mu-k\sigma < -X < \mu +k\sigma \right) +\mathbb{P}\left(\mu-k\sigma < X < \mu +k\sigma \right) \geq \mathbb{P}\left(\mu-k\sigma < X < \mu +k\sigma \right) = \mathbb{P}\left(|X-\mu|<k\sigma\right) \geq 1-\frac{1}{k^2}$$
If $X>0$ then the inequality in the middle becomes an equality.
| null |
CC BY-SA 4.0
| null |
2023-04-19T06:30:28.443
|
2023-04-19T06:44:50.840
|
2023-04-19T06:44:50.840
|
164061
|
164061
| null |
613423
|
2
| null |
613417
|
0
| null |
```
Arima_model=sm.tsa.arima.ARIMA(train_df,order=(0,0,0))
```
This line creates an ARIMA(0,0,0) model, i.e., there is no autoregression (the first zero), no integration (the second zero), and no moving average component (the last zero). Your model is an intercept-only model, so forecasts are naturally flat.
Try [pmdarima.arima.auto_arima](https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.auto_arima.html) for automatic order selection. Don't forget to specify a possible seasonal cycle length of 7 for your time series, since car requests probably have an intra-weekly seasonality.
Also, you might be interested in these resources for time series forecasting: [Resources/books for project on forecasting models](https://stats.stackexchange.com/q/559908/1352).
| null |
CC BY-SA 4.0
| null |
2023-04-19T06:33:05.467
|
2023-04-19T06:33:05.467
| null | null |
1352
| null |
613425
|
2
| null |
589557
|
0
| null |
As stated in Slocum (2009), once the curve flattens there is little further improvement, so you can select between 4 and 6 classes based on your graphs.
Five classes may be a good number, but it depends on whether you prefer more classes or fewer. For example, in a map the classes are used to assign a set of colors; to get the maximum difference among the colors it is sometimes better to use fewer classes, but in that case the resolution of the classification is lower because there are fewer classes.
The algorithm helps, but the final decision is yours.
You can tell whether 4 or 5 classes (or some other number) is better by reviewing the GVF value for each variable.
Yes, a value close to 1 means there are clusters in your data (groups of similar values that are well separated from other groups, also with similar values).
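For illustration, here is a minimal sketch of computing the GVF for a range of class counts (assuming the classInt package; `gvf` is a helper written for this example, and the data are simulated with three obvious clusters):
```
library(classInt)
gvf <- function(x, k) {
  brks <- classIntervals(x, n = k, style = "jenks")$brks
  cls  <- cut(x, breaks = brks, include.lowest = TRUE)
  sdam <- sum((x - mean(x))^2)                                   # deviations from the overall mean
  sdcm <- sum(tapply(x, cls, function(v) sum((v - mean(v))^2)))  # deviations from class means
  1 - sdcm / sdam
}
x <- c(rnorm(100, 10, 2), rnorm(100, 30, 2), rnorm(100, 60, 2))
round(sapply(2:8, gvf, x = x), 3)  # GVF flattens once k exceeds the number of real clusters
```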
| null |
CC BY-SA 4.0
| null |
2023-04-19T06:58:50.543
|
2023-04-19T07:04:05.910
|
2023-04-19T07:04:05.910
|
56940
|
386048
| null |
613426
|
1
| null | null |
2
|
43
|
Suppose I want to learn about a random variable $X$ (e.g., whether the subject is infected with Covid). I have already accessed a random variable $Y$ (e.g., an antigen test), which can help me partially learn about $X$. Now I would like to know given $Y$, how much another random variable $Z$ (e.g., nucleic acid test) can help me additionally learn about $X$.
What is the corresponding concept in information theory? Is it conditional mutual information $I(X; Z \vert Y)$ or total correlation?
Edit: I proposed a question, and asked which concept in information theory is a measure of the additional information about $X$ provided by $Z$ given $Y$. I don't quite know what additional details are required.
|
Which concept applies here?
|
CC BY-SA 4.0
| null |
2023-04-19T07:32:01.690
|
2023-04-19T13:50:15.747
|
2023-04-19T12:51:58.057
|
357424
|
357424
|
[
"experiment-design",
"decision-theory"
] |
613427
|
2
| null |
612978
|
0
| null |
>
...the total number of times that $\hat{P}(x;z) > \hat{P}_y(x;z) $ and $\hat{P}(y;z)>\hat{P}_x(y;z)$ is 7.
Can someone please replicate this result..
It is a little counting error. It should be 8 instead of 7.
[](https://i.stack.imgur.com/w8dmM.png)
Or possibly the table contains an error and the seven had been (correctly) counted on a different table.
| null |
CC BY-SA 4.0
| null |
2023-04-19T07:32:16.553
|
2023-04-19T08:28:34.230
|
2023-04-19T08:28:34.230
|
164061
|
164061
| null |
613428
|
1
| null | null |
0
|
42
|
I have a data set for observations of life stages of red seaweed with the surface area (mm2) taken on day 1 - 8, 10 and 14 (n = 10). Each life stage has been assigned a number 1 - 5 for a life stage and 8 for death.
Culture vessel: each petri dish had 10 individuals identified and then tracked; the 2D surface area was measured and the life stage noted.
Factors:
Treatment - 3 different nutrients
Intensity - 6 different light intensities
Time - Days 1 to 8, 10 and 14.
What I want to find out via a survival analysis is
Time to different stages (germination/holdfast/branching/death)
Time to different stages under the treatment
Time to different stages under the intensity
Time to different stages under treatment x intensity
I am currently using R and getting errors with the Time coding, too. I have Time entered as 1, 2, 3, 4, 5, 6, 7, 8, 10 and 14. Should this be the actual date, or what format should it be in?
Also, should I be considering interval censoring and/or truncation?
Thanks in advance; I've been told this one is a challenge, but the results are very much needed.
Cheers
|
Survival Analysis_Seaweed Life Stages
|
CC BY-SA 4.0
| null |
2023-04-19T07:39:43.267
|
2023-04-19T07:41:03.563
|
2023-04-19T07:41:03.563
|
56940
|
386049
|
[
"survival"
] |
613429
|
2
| null |
613337
|
0
| null |
A `gam` with beta regression (family="[betar](https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/Beta.html)") is more appropriate for modeling probabilities, and it pretty much answers both questions. Adjusting theta=50 and k=15 gives the best fit.
```
library(mgcViz)
library(qgam)
library(ggplot2)
# qgam and gam
qg.5 <- qgamV(prob ~ s(xa, bs="cr", k=15), data = dat, qu = 0.5); summary(qg.5)
btr <- gamV(prob ~ s(xa, bs="cr", k=15), data = dat, method = "REML", family = betar(link = "logit", theta = 50)); summary(btr)
gam.check(btr)
# add fitted values to dat
dat$fit_qg.5 <- qg.5[["fitted.values"]]
pred <- predict(btr, newdata = dat, se.fit = TRUE, type="response")
dat$fit_btr <- pred[["fit"]]
dat$se.fit_btr <- pred[["se.fit"]]
dat$low <- dat$fit_btr - 2 * dat$se.fit_btr
dat$upr <- dat$fit_btr + 2 * dat$se.fit_btr
# predict xa at prob = 0.5
xa_at_prob.5 <- with(dat, approx(fit_btr, xa, xout=0.5)); xa_at_prob.5
# plot
ggplot(dat, aes(x = xa, y = prob)) +
geom_point() +
geom_line(aes(y = fit_qg.5), lwd = 1.2, color = "black") +
geom_line(aes(y = fit_btr), lwd = 1.2, color = "red") +
geom_ribbon(aes(x = xa, ymin = low, ymax = upr), alpha = 0.3, fill = "red") +
geom_vline(xintercept = xa_at_prob.5[["y"]], linetype = "longdash", color = "black", linewidth = 0.6) +
scale_y_continuous(breaks = seq(0, 1, 0.1)) +
scale_x_continuous(breaks = seq(0, 16, 1)) +
geom_hline(yintercept=c(0, 0.5, 1), linetype="solid", color = "black", linewidth = 0.3) +
theme_classic(base_size = 20) +
coord_cartesian(xlim = c(4, 13))
```
Note: The deviance explained cannot be compared between qgam and gam. The number of basis dimension (k) is quite acceptable:
```
Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.
k' edf k-index p-value
s(xa) 14 12 1 0.52
```
[](https://i.stack.imgur.com/z9Xu1.jpg)
Edit:
As relevantly explained [here](https://cran.r-project.org/web/packages/qgam/vignettes/qgam.html) by Matteo Fasiolo (more details [here](https://arxiv.org/pdf/2007.03303.pdf)), "since the variance of prob varies with xa, the bias induced by the smoothed pinball loss used by `qgam` is not constant. This issue can be solved by letting the learning rate change with xa". In this way, the following code almost completely answers question 1, while increasing the qgam deviance explained from 65% to 94%:
```
# qgam with list(...bs="ad")
qg.5_ad <- qgamV(list(prob ~ s(xa, bs="ad", k=20), ~ s(xa)), data = dat, qu = 0.5); summary(qg.5_ad)
qg.25_ad <- qgamV(list(prob ~ s(xa, bs="ad", k=20), ~ s(xa)), data = dat, qu = 0.25)
qg.75_ad <- qgamV(list(prob ~ s(xa, bs="ad", k=20), ~ s(xa)), data = dat, qu = 0.75)
```
providing the blue quantile regression curves below. However, it does not answer question 2 (goodness of fit), which would perhaps require creating a discontinuity starting at the pink area, something that is certainly not possible using smooth effects.
[](https://i.stack.imgur.com/VbJ24.jpg)
| null |
CC BY-SA 4.0
| null |
2023-04-19T07:45:32.293
|
2023-04-20T11:29:03.317
|
2023-04-20T11:29:03.317
|
307344
|
307344
| null |