Fields per record: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
615642
1
null
null
8
189
El Karoui and Purdom wrote a mathematically solid paper on how the bootstrap as a general resampling technique fails in high dimensions: [https://arxiv.org/abs/1608.00696](https://arxiv.org/abs/1608.00696). I think it is a very important piece of work and I wonder why it has so few citations. This led me to imagine that perhaps most practitioners are either unaware of this issue (honestly, the derivations use random matrix theory, which is already quite sophisticated) or that maybe, in real-life datasets, the bootstrap can still be used. I would like to understand if this paper is the current state of the art in our understanding of the bootstrap. Do applied researchers still use the bootstrap when they analyze high dimensional data, and if so, do they trust it? I personally am not aware of any general purpose method that solves this issue of bootstrapping in high dimensions. Maybe real datasets really have a lot of signal in them, so that the bootstrap can be applied without much trouble? What is your take on this issue? Do you agree with the message of the paper, or do you think otherwise? Acknowledgement: Thanks to @Silverfish for suggesting the question framing!
On the high dimensional bootstrap
CC BY-SA 4.0
null
2023-05-12T05:54:09.790
2023-05-15T07:16:49.380
2023-05-13T04:52:45.677
121522
59485
[ "bootstrap", "high-dimensional" ]
615644
2
null
615641
0
null
I am adding this here since I do not have the reputation to comment. The question requires some more clarity on the following statement. > But the problem is i don't know which type of svm should i use, because i extract three lession features (MA, HM, and EX) You will need to provide more information to set the context of your problem. For now I will assume you are using a single-task, single-model setting, which basically means that you have one feature set and a single model. The model will try to learn that feature set and make predictions. Now, if you are only concerned about multiclass vs. multilabel, then the only question you should be asking is: will any of your samples ever fall into multiple classes (normal, mild, moderate, severe, PDR)? That is, can a sample fall into 2 classes, say normal and mild (you need to decide if this makes sense)? If the answer to that question is "No", then use multiclass classification: [https://scikit-learn.org/stable/modules/multiclass.html#multiclass-classification](https://scikit-learn.org/stable/modules/multiclass.html#multiclass-classification) If the answer to that question is "Yes", then use multilabel classification: [https://scikit-learn.org/stable/modules/multiclass.html#multilabel-classification](https://scikit-learn.org/stable/modules/multiclass.html#multilabel-classification) Additional reference: - Multitask classification: https://scikit-learn.org/stable/modules/multiclass.html#multilabel-classification
null
CC BY-SA 4.0
null
2023-05-12T06:40:52.353
2023-05-12T06:40:52.353
null
null
307980
null
615646
1
null
null
1
31
I've found a pretty interesting visualization in the article "High monkeypox vaccine acceptance among male users of smartphone-based online gay-dating apps in Europe, 30 July to 12 August 2022" ([link](https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2022.27.42.2200757#f2)), which shows vaccination acceptance among survey participants by country and subregion, with a 90% CI calculated for each country and subregion. Does anyone know the name of this type of chart and which tools could be used to build a similar one?[](https://i.stack.imgur.com/9CsCt.gif)
Vaccination acceptance by subregion and country of residence: name of this visualization and tools to build a similar one
CC BY-SA 4.0
null
2023-05-12T07:35:14.527
2023-05-15T07:20:53.663
2023-05-15T07:20:53.663
121522
375942
[ "data-visualization", "terminology", "survey" ]
615648
1
null
null
0
7
Consider a process that takes state $X_{t-1}$ to $X_t$. Let us also assume that the process is Markovian. Therefore, $$P(X_t | X_{t-1}, X_{t-2}, X_{t-3}, ...) = P(X_t | X_{t-1}).$$ My question is whether the Markovian property also implies $$P(X_t | X_{t-1}, X_{t+n}) = P(X_t | X_{t-1})$$ where $n>0$?
Transition probability conditioned on the future state and the present state of a Markov process
CC BY-SA 4.0
null
2023-05-12T07:55:25.937
2023-05-12T07:55:25.937
null
null
387799
[ "markov-process" ]
615650
1
null
null
0
6
I am interested in using Krippendorff's Alpha for a dataset containing a large number of raters to assess the reliability of using an assessment tool for case studies, which contains 8 items with ordinal (yes / no) responses (e.g., does this individual meet x criteria?). Typically I have seen kalpha being used to assess the inter-rater reliability of independent items within tools using two or more case studies. For example, the inter-rater reliability of 40 raters of item x would be examined by calculating expected / observed reliability across 2 / 3 / 4 case studies. In the case of the current tool, this would produce 8 kalpha values for items 1 through 8. However, I would instead be interested in calculating a single kalpha value across all eight items for the entire measure, using only one case study that has been rated by 40 raters.
Can you use Krippendorff's Alpha to assess reliability of an entire measure using one case study?
CC BY-SA 4.0
null
2023-05-12T08:02:13.393
2023-05-12T08:02:13.393
null
null
387796
[ "spss", "reliability", "agreement-statistics" ]
615651
2
null
466425
1
null
The internally studentized residuals do have exactly unit variance. Consider a linear regression model $\boldsymbol y=X\boldsymbol\beta+\boldsymbol\varepsilon$, where $\boldsymbol y$ is an $n\times 1$ response vector, $X$ is an $n\times p$ matrix of covariates (fixed), $\boldsymbol \beta$ is a $p\times 1$ vector of parameters, and the error vector $\boldsymbol\varepsilon$ is multivariate normal $N_n(\boldsymbol 0,\sigma^2I)$. The $i$th internally studentized residual is $$r_i=\frac{e_i}{\hat\sigma\sqrt{1-h_{ii}}}\,,$$ where $e_i=y_i-\boldsymbol x_i^T\hat{\boldsymbol\beta}$ is the $i$th residual, $h_{ij}$ is the $(i,j)$th entry of the hat matrix $H=X(X^TX)^{-1}X^T$, and $\hat\sigma^2=\frac1{n-p}\sum_{j=1}^n e_j^2$ is the usual unbiased estimator of $\sigma^2$. Also, $\boldsymbol x_i^T$ is the $i$th row of $X$ and $\hat{\boldsymbol\beta}$ is the least squares estimate of $\boldsymbol\beta$. Note that $e_i \sim N(0,\sigma^2(1-h_{ii}))$ for each $i$. The mean of each $r_i$ is $0$ because $e_i/\hat\sigma$ is symmetric about $0$. So the variance is just the second moment, which one can find using the [distribution of $r_i^2$](https://stats.stackexchange.com/q/400217/119261): $$\frac{r_i^2}{n-p}\sim \text{Beta}\left(\frac12,\frac{n-p-1}{2}\right) \tag{1}$$ So, $$\operatorname{Var}(r_i)=(n-p)\operatorname E\left[\frac{r_i^2}{n-p}\right]=\frac{(n-p)/2}{1/2+(n-p-1)/2}=1$$ --- For a simple derivation of $(1)$, we can use the relationship between $\hat\sigma^2$ and $s_{(i)}^2=\frac1{n-p-1}\sum\limits_{j(\ne i)=1}^n \left(y_j-\boldsymbol x_j^T\hat{\boldsymbol\beta}_{(i)}\right)^2$, where $\hat{\boldsymbol\beta}_{(i)}$ is the least squares estimate of $\boldsymbol\beta$ with the $i$th case removed. First we need the following [formula](https://en.wikipedia.org/wiki/Leverage_(statistics)#Relation_to_influence_functions) for $\operatorname{DFBETA}_i$: $$\operatorname{DFBETA}_i := \hat{\boldsymbol\beta}-\hat{\boldsymbol\beta}_{(i)} = \frac{(X^TX)^{-1}\boldsymbol x_i e_i}{1-h_{ii}} \tag{2}$$ Then, \begin{align} (n-p-1)s_{(i)}^2 &= \sum_{j(\ne i)=1}^n \left[(y_j-\boldsymbol x_j^T\hat{\boldsymbol\beta})+\boldsymbol x_j^T(\hat{\boldsymbol\beta}-\hat{\boldsymbol\beta}_{(i)})\right]^2 \\&=\sum_{j(\ne i)=1}^n \left[e_j+\frac{h_{ji}e_i}{1-h_{ii}}\right]^2 \\&=\sum_{j=1}^n \left[e_j+\frac{h_{ij}e_i}{1-h_{ii}}\right]^2 - \left[e_i+\frac{h_{ii}e_i}{1-h_{ii}}\right]^2 \\&=\sum_{j=1}^n e_j^2 + \frac{e_i^2}{(1-h_{ii})^2}h_{ii} - \frac{e_i^2}{(1-h_{ii})^2} \\&=(n-p)\hat\sigma^2 - \frac{e_i^2}{1-h_{ii}} \end{align} In the penultimate step, we have used $h_{ii}=\sum_{j=1}^n h_{ij}^2$ and $\sum_{j=1}^n h_{ij}e_j=0$, which follow from $H=H^2$ and $H\boldsymbol e=\boldsymbol 0$ respectively. Now $(\star)$ follows from \begin{align} \frac{r_i^2}{n-p}&=\frac{e_i^2/(1-h_{ii})}{(n-p)\hat\sigma^2} \\&=\frac{\frac{e_i^2}{\sigma^2(1-h_{ii})}}{\frac{(n-p-1)s_{(i)}^2}{\sigma^2}+\frac{e_i^2}{\sigma^2(1-h_{ii})}} \\&=\frac{U}{U+V}\,, \end{align} where $U=\frac{e_i^2}{\sigma^2(1-h_{ii})}\sim \chi^2_1$ and $V=\frac{(n-p-1)s_{(i)}^2}{\sigma^2}\sim \chi^2_{n-p-1}$ are independently distributed. --- For proving $(2)$, we first define $X_{(i)}$ and $\boldsymbol y_{(i)}$ as the $X$ matrix and $\boldsymbol y$ vector without their $i$th rows. 
Then $$\hat{\boldsymbol\beta}_{(i)} =(X_{(i)}^TX_{(i)})^{-1}X_{(i)}^T\boldsymbol y_{(i)}$$ Now using $$X^TX=X_{(i)}^TX_{(i)}+\boldsymbol x_i\boldsymbol x_i^T$$ and $$X^T\boldsymbol y=X_{(i)}^T\boldsymbol y_{(i)}+\boldsymbol x_i y_i$$ combined with the [Sherman-Morrison formula](https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula) leads to $$\hat{\boldsymbol\beta}_{(i)}=\hat{\boldsymbol\beta}-\frac{(X^TX)^{-1}\boldsymbol x_i e_i}{1-h_{ii}}$$
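For a quick numerical sanity check of the unit-variance result (not part of the proof): R's `rstandard()` computes exactly these internally studentized residuals, so a small simulation with a fixed design should return a variance very close to $1$.
```
set.seed(1)
n <- 30; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))   # fixed design with intercept
r1 <- replicate(20000, {
  y <- drop(X %*% c(1, 2, -1)) + rnorm(n)      # normal linear model
  rstandard(lm(y ~ X - 1))[1]                  # internally studentized residual r_1
})
c(mean(r1), var(r1))                           # approximately 0 and 1
```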
null
CC BY-SA 4.0
null
2023-05-12T08:14:42.517
2023-05-12T15:04:16.433
2023-05-12T15:04:16.433
119261
119261
null
615652
1
null
null
0
24
I have been reading several articles on the disaggregation of between and within effects in mixed models, such as Curran and Bauer's (2011) excellent article in which they state "More careful work is needed to understand how zbi and zwti can best be obtained when the TVC is binary or ordinal", where in this case zbi refers to the between effect estimate and zwti to the within effect estimate of a time varying covariate z. Forgive my long-winded question, but I think it is important to carefully convey my query by laying the baseline for what I understand of the topic and also by putting it into context first, so my concerns are clear. In my research I am exploring the effect of a time-varying binary covariate (social media use) on a continuous outcome, sleep duration (mins). Both were assessed daily over two weeks, and participants could either report no social media use before bed (0) or social media use before bed (1) as the two levels of the predictor. It is clearly of intrinsic value to disaggregate the between and within effects of my variable in case they are different. For example, more average social media use might correlate with poorer quality sleep in general (between effect); however, people might also use social media as a tool to wind down at night when they are feeling stressed, and so higher than normal use might in fact be associated with better quality sleep (within effect). In reality I don't expect this to be the case; nonetheless it's important to disaggregate the effects (this point has been made by a reviewer). Ordinarily the between-persons component is generated by obtaining an estimate for person-specific mean values of a predictor, whereas the within-persons component is generated by calculating an estimate for the person-specific deviation for that variable (i.e., the value of said predictor on any given night minus the person-specific mean of that predictor). For a continuous variable (for argument's sake let's pretend this is the number of minutes spent using social media), a person-specific mean would therefore be the average value of the predictor (average amount of time spent using social media) across all 14 nights. The person-specific deviation for that variable, as mentioned above, is calculated by subtracting this mean from the individual values on each night to represent the number of minutes spent on social media relative to a person's individual average time spent on social media. Negative scores on this person-specific deviation variable therefore represent less time on social media than they would normally spend, whereas positive values represent more time on social media than they would normally spend. The issue arises when considering how this looks for a binary predictor. The person-specific mean is no longer an average amount of time spent on social media before bed; it instead becomes the proportion of nights on which they used social media before bed. This is still comparable to the person-specific mean of a continuous variable because a higher individual mean still roughly equates to more time spent on social media over the course of the two weeks. Examples of person-specific means include someone who never used social media (scored 0 on all possible occasions and therefore has a mean of 0), someone who always used social media (scored 1 on all possible occasions and therefore has a mean of 1), or anyone in between, whereby scores closer to 1 represent a higher mean (proportion) of days where social media was used before bed.
No problem here. However, the person-specific deviation is then still calculated in the same way as for a continuous variable (i.e., the person-specific mean is subtracted from the individual scores for each day). Given the individual scores can only be 0 or 1, the resulting person-specific deviation variable then depends on how high or low their mean was. Those with higher mean scores (let's take a mean of .93 for example) would have 14 days of data, 13 days of which they used social media before bed and 1 day on which they did not. On the days they did, their person-specific deviation would be .07 (1 - .93), and therefore represent only a slight departure away from their mean performance, in favour of having used social media more than they normally would. The 1 day they did not would have a score of -.93 (0 - .93) and therefore represent a drastic departure away from their mean performance, in favour of using social media less than they normally would. Thus plotting these scores for all participants (as seen below) would illustrate that higher magnitudes of negative scores represent a more unusual non-performance of the behaviour, while higher magnitudes of positive scores represent a more unusual performance of the behaviour. Doesn't this then make the coefficient that is generated from the model uninterpretable? After all, a one-unit change in this person-specific deviation variable is not equivalent at all points of the scale. For instance, a one-unit change from -1 to 0 represents a change from very unusual abstinence to performing (or not performing) the behaviour as you normally would, whereas a one-unit change from 0 to 1 represents a change from performing (or not performing) the behaviour as you normally would to very unusual performance. [](https://i.stack.imgur.com/BB9Tj.png) I have looked through other sources but cannot find an answer to whether or not it is truly acceptable to extract a within-persons effect in this manner for binary predictor variables. The only thing I have to go on is Curran and Bauer's advice from above. Is anyone able to advise me on whether (and why) I am safe to interpret the within-person coefficient for my binary variable as I would for a continuous variable? ## References Curran, P. J., & Bauer, D. J. (2011). The disaggregation of within-person and between-person effects in longitudinal models of change. Annual Review of Psychology, 62, 583–619. [https://doi.org/10.1146/annurev.psych.093008.100356](https://doi.org/10.1146/annurev.psych.093008.100356)
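For concreteness, the mechanical construction being discussed is the same person-mean centring used for continuous covariates; a minimal sketch (hypothetical data frame `dat` with hypothetical columns `id`, `sm_use` and `sleep_mins`) would be the following. This does not answer the interpretability question; it only makes the construction under discussion explicit.
```
library(dplyr)
dat <- dat %>%
  group_by(id) %>%
  mutate(sm_between = mean(sm_use),            # proportion of nights with use (between part)
         sm_within  = sm_use - sm_between) %>% # 0/1 score minus that proportion (within part)
  ungroup()
# e.g. lme4::lmer(sleep_mins ~ sm_between + sm_within + (1 | id), data = dat)
```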
Is it correct to calculate a within-person estimate for the effect of a binary predictor on an outcome variable in a linear mixed model?
CC BY-SA 4.0
null
2023-05-12T08:15:39.060
2023-05-15T03:19:45.897
2023-05-15T03:19:45.897
342904
342904
[ "mixed-model", "repeated-measures" ]
615653
2
null
615594
1
null
> Can someone please explain why the scaled chi-square difference test favors the model with more parameters (i.e., Model A) while the AIC, BIC, and Sample-Size adjusted BIC favor the model with fewer parameters (i.e., Model B)? The LRT is a test statistic, information criteria are not. The former are used to conduct null-hypothesis significance tests, testing an exact null hypothesis without adjusting for model parsimony (in terms of how many parameters are estimated). The latter adjust for parsimony, and are used to descriptively compare models in a way that balances parsimony and fit. They are not used for the same purpose, so they cannot really contradict each other. > Which fit statistic(s) should I rely on and report in this situation? Report them all. - The LRT is only "statistically significant" relative to an arbitrary p value. You reported only < .05 (the actual p value is .038), so that must be your alpha level: a common convention but rather liberal (1 out of 20 true null hypotheses will be rejected). - AIC is lower for the model that estimates fewer free parameters. It is designed to select the model that is most likely to predict new data. - BIC is also lower for the model that estimates fewer free parameters. It is a poor approximation of a Bayes factor (making many simplifying assumptions), and asymptotically selects the true data-generating model when it is among the competitors. When the true model is not among those considered, then its behavior is less understood. I prefer AIC because it is about out-of-sample prediction and doesn't assume any model is a perfect representation of real data-generating processes, but it is up to you to transparently report your own values. Here is some reading you might find helpful. Vrieze, S. I. (2012). Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychological Methods, 17(2), 228–243. [https://doi.org/10.1037/a0027127](https://doi.org/10.1037/a0027127) Bollen, K. A., Harden, J. J., Ray, S., & Zavisca, J. (2014). BIC and alternative Bayesian information criteria in the selection of structural equation models. Structural Equation Modeling, 21(1), 1-19. [https://doi.org/10.1080/10705511.2014.856691](https://doi.org/10.1080/10705511.2014.856691) Another observation: Your scaling factor is basically 1 (within rounding to 2 decimals), so the scaling factor in each model is only making a negligible adjustment. That implies your data do not have enough excess kurtosis to cause a problem for your SEs and test statistics. If you use standard MLE, you don't need the more complex formulas for a chi-squared difference test: [https://www.statmodel.com/chidiff.shtml](https://www.statmodel.com/chidiff.shtml)
null
CC BY-SA 4.0
null
2023-05-12T08:18:51.467
2023-05-12T08:18:51.467
null
null
335062
null
615654
2
null
615572
1
null
To see why $P(A\mid B) + P(A \mid B^C) = 1$ is not generally true, consider the case where $A$ is always guaranteed to happen, regardless of what happens with $B$. Then both $P(A \mid B)$ and $P(A \mid B^{C})$ are equal to $1$, so you obtain $2=1$.
null
CC BY-SA 4.0
null
2023-05-12T08:29:54.667
2023-05-12T08:29:54.667
null
null
366672
null
615655
1
null
null
0
29
When using a regressor ("generated regressor") that is generated in a first-stage equation and used in a second-stage equation, then standard errors will be understated (here is a readable introduction, [https://onlinelibrary.wiley.com/doi/full/10.1111/1475-679X.12470](https://onlinelibrary.wiley.com/doi/full/10.1111/1475-679X.12470)). This is because the standard error bias associated with a generated regressor derives from the sampling error inherent in the first-step regression coefficient estimates, used in creating the generated regressor. In the minimal working example below, you may think of "y_hat_fs", the predicted values of the first stage as some type of competition metric, that is generated and used in the second stage regression as a regressor (In general, the two datasets are not identical and may have a different number of observations, ids can be thought to be "stores" or "gas stations". In the second-stage regression, the standard error of the variable labelled "generated variable" is biased. A solution to this problem is the "pairs cluster bootstrap", which includes the first and the second stage regression. This involved multiple steps (see source above, also for a pair cluster bootstrap): - Randomly draw with replacement a bootstrap sample of size N from the sample used in the first-step regression, where N is the number of observations in the full sample of the first-step regression - Estimate the first-step regression using the bootstrap sample - Use the first-step regression output to generate the regressor of interest, that is, the generated regressor - Estimate the second-step regression using the bootstrap sample and the generated regressor, and store the coefficient estimates - Repeat this process a large number of times - Use the standard deviations of the collected coefficient estimates as the bootstrapped standard errors. There are many packages that allow bootstrapping the second stage, however, none of the packages (to my knowledge) allow for "the pairs bootstrap". I'm especially unsure how to derive the correct standard errors from a bunch of collected coefficient estimates, see step 6. Here is a minimal simplified working example (regressions are run using the fixest package for efficiency reasons, the regressions run on the bootstrap sample have 17 million observations, the second stage consists of 160,000 observations). Any help (or hint to resources) on how to conduct a "pair bootstrap" in R is highly appreciated. ``` library("tidyverse") library("fixest") set.seed(123456789) # First Stage~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Data first Stage --------- data_fs <- tibble(x1 = rnorm(100), x2 = rnorm(100), y = rnorm(100), cluster = rep(letters[1:4],25), time = rep(2020:2024,20), id = rep(c("id1", "id2", "id3", "id4", "id5"), 20)) # First Stage Regression, which produces generated regressor. fsreg <- feols(y~ x1+x2| id + time, cluster = ~cluster, data = data_fs) # Get Table etable(fsreg) # Get predicted values y_hat_fs <- fsreg$fitted.values # Second Stage ~~~~~~~~~~~~~~~~~~~~~ # Data Second Stage data_ss <- tibble(z = rnorm(100), x1 = rnorm(100), x2 = rnorm(100), cluster =(rep(letters[1:4], 25)), time = rep(2020:2024, 20), id = rep(c("id1", "id2", "id3", "id4"),25), y_hat_fs = y_hat_fs) # Second Stage Regression ~~~~~~~~~~~ ss_reg <- feols(z ~ x1 + x2 +y_hat_fs| id+ time, cluster = ~cluster, data = data_ss) #Get Table etable(ss_reg, dict = c(y_hat_fs= "Generated Variable")) ```
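For what it's worth, here is a minimal sketch of the six steps above, assuming the objects from the MWE are already in the workspace (`library(fixest)` loaded, `data_fs` and `data_ss` created) and, as in the MWE, that the two datasets have the same number of rows with row i of one corresponding to row i of the other, so resampling the same row indices keeps the first- and second-stage observations paired. For a *cluster* pairs bootstrap one would instead sample whole clusters with replacement and stack their rows.
```
B <- 999
boot_coefs <- replicate(B, {
  idx <- sample(nrow(data_fs), replace = TRUE)                   # step 1: resample pairs
  fs_b <- feols(y ~ x1 + x2 | id + time, data = data_fs[idx, ])  # step 2: first stage
  ss_b <- data_ss[idx, ]
  ss_b$y_hat_fs <- fitted(fs_b)                                  # step 3: generated regressor
  coef(feols(z ~ x1 + x2 + y_hat_fs | id + time, data = ss_b))["y_hat_fs"]  # steps 4-5
})
sd(boot_coefs)  # step 6: bootstrapped standard error of the generated-regressor coefficient
```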
Pairs (Cluster) Bootstrap R
CC BY-SA 4.0
null
2023-05-12T08:30:17.393
2023-05-12T11:47:32.887
2023-05-12T11:47:32.887
3277
387801
[ "r", "bootstrap", "standard-error", "cluster-sample" ]
615656
1
615679
null
3
27
I have recently been reading more about causal inference so am trying to conceptually think about model specification in more detail. From reading (e.g. [this paper](https://doi.org/10.1177/25152459221095823)), we adjust for confounders which, by definition, cause both the independent and dependent variable. But how does adjusting for only variables associated with either the independent or dependent variable impact regression models? In my field of infant MRI imaging, age at scan is adjusted for in all studies because brain structure dramatically changes over the first weeks of life, so age at scan is strongly correlated with MRI feature (independent variable). But I would not expect age at scan to be correlated with e.g. developmental outcome at 1 year of age (dependent variable). To me, it doesn't make sense to adjust for age at scan as there is no causal pathway. I run a basic simulation in R to see the effects, and age at scan is significant in the model and altered the coefficient for imaging feature (even when age at scan and outcome are uncorrelated). What am I missing? ``` library(tidyverse) library(faux) library(GGally) # define data # age_scan strongly correlated with mri_feature # mri_feature moderately correlated with outcome # assume all had outcome at same date set.seed(42) dat <- rnorm_multi( n = 1000, mu = c(40, 0.4, 100), sd = c(3, 0.1, 15), r = c(0.8, 0, 0.5), varnames = c("age_scan", "mri_feature", "outcome") ) glimpse(dat) ggpairs(dat) # linear models m0 <- lm(outcome ~ mri_feature, data = dat) m1 <- lm(outcome ~ mri_feature + age_scan, data = dat) summary(m0) summary(m1) ```
In regression, should we adjust for variables only associated with the independent or dependent variable?
CC BY-SA 4.0
null
2023-05-12T08:55:19.250
2023-05-12T13:37:05.443
2023-05-12T08:59:24.523
387802
387802
[ "regression", "simulation", "causality", "confounding" ]
615657
1
615697
null
1
57
In the code shown at the bottom of this post, I plot survival curves for the `lung` dataset from the `survival` package using a fitted exponential model, using the K-M nonparametric model, and run/show simulations using the exponential model. I use bootstrapping, resampling from the original data with replacement to create multiple bootstrap samples using `sample()`. For each bootstrap sample, the code fits the exponential distribution using the `survreg()` function. This process is repeated, generating a distribution of estimates, representing the variability and uncertainty of the exponential statistical model. My objective with this ultimately is given a partial survival curve (say 500 periods of the lung dataset), generating conservative simulations for periods 501-1000. I don't show that in this code example. When drafting similar code for the Weibull distribution, I use both bootstrapping (with `sample()` function) and additionally simulated uncertainty of the Weibull parameters using `MASS:mvrnorm()`, to derive a nicely dispersed range of simulation outcomes. However, in this exponential model example, the exponential distribution has only one parameter, the rate (λ) parameter; so `MASS:mvrnorm()` makes no sense in this case. To introduce more dispersion in outcomes in the below code I use `rnorm(1, mean = 0, sd = 0.05)` in the `sim_params` section (all commented out in the code and in the below illustration to not introduce this additional uncertainty factor), which as the code is currently drafted is subjective (by manually inputting the SDEV value) and not grounded in the actual data unlike my use of `MASS:mvrnorm()` for the Weibull distribution. So my questions are (1) is there a way to ground this parameter uncertainty factor (`sim_params...`) in the actual `lung` data? and (2) is this method of modeling uncertainty both using bootstrapping with `sample()` and modeling uncertainty in the distribution parameters themselves (in the `sim_params` section) theoretically valid? 
The image below only shows the results of running the code with only bootstrap resampling functioning, and showing a run of 2000 simulations: [](https://i.stack.imgur.com/mYIFa.png) Code: ``` library(survival) num_simulations <- 2000 # Fit the exponential model to the dataset fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential") time <- seq(0, 1000, by = 1) # Compute the exponential survival function using fitted model survival <- 1 - pexp(time, rate = 1 / exp(fit$coef)) # Generate bootstrap samples and fit exponential models to each sample bootstrap_fits <- lapply(1:num_simulations, function(i) { sample_data <- lung[sample(nrow(lung), replace = TRUE), ] fit <- survreg(Surv(time, status) ~ 1, data = sample_data, dist = "exponential") return(fit) }) # Generate random distribution parameter estimates for simulations sim_params <- sapply(bootstrap_fits, function(fit) { rate <- fit$coef params <- rate # this is a bypass of "perturbation" below # perturbation <- rnorm(1, mean = 0, sd = 0.05) # Adjust sd for simulation dispersion # perturbed_rate <- rate + perturbation # params <- perturbed_rate return(params) }) # Compute the survival curves for each simulation using the sampled parameters sim_curves <- sapply( 1:num_simulations, function(i) 1 - pexp(time, rate = 1 / exp(sim_params[i])) ) plot(time, survival, type = "n", xlab = "Time", ylab = "Survival Probability", main = "Survival Plot of Lung Dataset") sim_lines <- data.frame( time = time, do.call(cbind, lapply(1:num_simulations, function(i) { curve <- sim_curves[, i] lines(time, curve, col = "lightblue", lty = "solid", lwd = 0.25) return(curve) }))) colnames(sim_lines)[-1] <- paste0("surv", 1:num_simulations) # Compute and add to the plot the Kaplan-Meier survival curve for the dataset lines(survfit(Surv(time, status) ~ 1, data = lung), col = "blue", lwd = 1) # Plot the exponential survival curve lines(time, survival, type = "l", xlab = "Time", ylab = "Survival Probability", col = "red", lwd = 3) legend("topright", legend = c("Fitted exponential model", "Kaplan-Meier & confidence intervals", "Simulations"), col = c("red", "blue", "lightblue"), lwd = c(3, 1, 0.25), lty = c(1, 1, 1), # 1 = solid, 2 = dashed bty = "n") ```
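One possible way to ground the perturbation in the data, rather than in a hand-picked SD, is to draw the exponential model's single coefficient from its estimated (asymptotic normal) sampling distribution, using the variance matrix reported by `survreg()`. This is a sketch of the one-parameter analogue of the `MASS::mvrnorm()` step used for the Weibull, not a claim that it is the only valid approach:
```
library(survival)
fit <- survreg(Surv(time, status) ~ 1, data = lung, dist = "exponential")
se_coef <- sqrt(vcov(fit)[1, 1])                 # estimated SE of the intercept (log scale)
sim_coefs <- rnorm(2000, mean = fit$coef, sd = se_coef)
time <- seq(0, 1000, by = 1)
sim_curves <- sapply(sim_coefs, function(b) 1 - pexp(time, rate = 1 / exp(b)))
```
Note that the bootstrap over resampled datasets already reflects parameter uncertainty, so layering this parametric draw on top of it arguably counts that uncertainty twice; whether that extra conservatism is desirable is a modelling choice.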
How to appropriately model the uncertainty of the exponential distribution model when running survival simulations?
CC BY-SA 4.0
null
2023-05-12T09:00:06.383
2023-05-17T18:17:00.617
null
null
378347
[ "r", "survival", "bootstrap", "simulation", "exponential-distribution" ]
615658
1
null
null
1
37
When doing logistic regression with binary cross entropy (or equivalently cross entropy) loss, the model tries to approximate conditional probabilities as $$ P(X\mid C) = \sigma(m(C)), $$ where $X$ is an event, $C$ is some condition, $m$ is our (in case of logistic regression linear) model, and $\sigma(y) = \frac{1}{1+e^{-y}}$. According to [wikipedia](https://en.wikipedia.org/wiki/Loss_functions_for_classification#Logistic_loss), we know that the loss is minimized by setting $$ m(C) = logit(P(X\mid C)) $$ where $$ logit(y) = \log\frac{y}{1-y}. $$ My first question is the following, do we know the expectation of the model? Is it equal to the expectation of the logit? Or, formally, $$ E(m(C)) \stackrel{?}{=} E(logit(P(X\mid C)) $$ Second, if it is equal, can we somehow express this as a function of $I_X$, the indicator function of X, in the form $$ E(f(I_X)\mid C) = E(m(C)), $$ where we are looking for some $f$? Edit: In hindsight this question was not just unclear, but incorrect in its formulas, sorry. What I actually meant to ask is if it's true that the output of the model (the logits) approximate the conditional expectation of something. Such as $$ E(m(C)) \stackrel{?}{=} E(logit(P(X\mid C))\mid C) $$
Expectation of the logit in logistic regression?
CC BY-SA 4.0
null
2023-05-12T09:26:45.527
2023-05-12T10:33:41.190
2023-05-12T10:33:41.190
387806
387806
[ "regression", "logistic" ]
615659
1
615746
null
1
42
Following the same [m3 model approach proposed by Gavin Simpson](https://stats.stackexchange.com/questions/403772/different-ways-of-modelling-interactions-between-continuous-and-categorical-pred?rq=1) on my data, I compared an ordered factor model (m3_of) to a non-ordered factor model (m3). The global shapes trend to be similar; however the m3_of plot displays a marked wigglyness for both groups. Question 1: How to explain this difference of wigglyness between both models? Question 2: Comparing both models using `itsadug::compareML(m3, m3_of)`, the best model is m3 (lower AIC); however, m3_of has a slightly better deviance explained (9.67% vs 9.63%), knowing that such model advantageously allow a more interpretable comparison (groupof=1 compared to the reference groupof=0). Hence, what is the more appropriate model? Thanks for any help, advice, reference. Data: ``` n = 180,000 bmk: outcome, continuous, positive delay: predictor, continuous, positive group: factor (n=2) groupof: ordered factor (n=2, groupof=0 being the reference groupof) medu: random effect (n=91 different medical units) ``` Models: ``` m3 <- bam(bmk ~ group + s(delay, k=20, by = group) + s(delay, k=20, medu, bs = "fs"), data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), discrete = TRUE) m3_of <- bam(bmk ~ groupof + s(delay, k=20) + s(delay, k=20, by = groupof) + s(delay, k=20, medu, bs = "fs"), data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), discrete = TRUE) ``` Plots: ``` par(mfrow = c(1,2), cex = 1.1) plot_smooth(m3, view="delay", plot_all="group", rm.ranef=FALSE, n.grid = 50, col=c("blue","red"), xlim=c(0,90), ylim=c(11.5,14.5), main = "m3") plot_smooth(m3_of, view="delay", plot_all="groupof", rm.ranef=FALSE, n.grid = 50, col=c("blue","red"), xlim=c(0,90), ylim=c(11.5,14.5), main = "m3_of") ``` [](https://i.stack.imgur.com/eEFQc.png) ``` gratia::appraise(m3) ``` [](https://i.stack.imgur.com/vJBlF.png) Summary(m3): ``` > summary(m3) Family: inverse.gaussian Link function: identity Formula: bmk ~ group + s(delay, k = 20, by = group) + s(delay, k = 20, medu, bs = "fs") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.76764 0.19840 59.31 <2e-16 *** group1 0.32901 0.02879 11.43 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(delay):group0 1.001 1.002 2.903 0.0882 . s(delay):group1 1.002 1.003 17.792 2.46e-05 *** s(delay,medu) 145.751 1626.000 10.319 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0792 Deviance explained = 9.63% fREML = -1.8252e+05 Scale est. = 0.0089033 n = 179659 > gam.check(m3) k' edf k-index p-value s(delay):group0 19 1 0.98 0.54 s(delay):group1 19 1 0.98 0.48 s(delay,medu) 1700 146 0.98 0.48 ``` Summary(m3_of): ``` > summary(m3_of) Family: inverse.gaussian Link function: identity Formula: bmk ~ groupof + s(delay, k = 20) + s(delay, k = 20, by = groupof) + s(delay, k = 20, medu, bs = "fs") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.70146 0.14033 83.39 <2e-16 *** groupof1 0.33219 0.02974 11.17 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(delay) 9.370 11.457 2.188 0.01100 * s(delay):groupof1 1.004 1.007 10.200 0.00134 ** s(delay,medu) 176.807 1626.000 10.313 < 2e-16 *** --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0792 Deviance explained = 9.67% fREML = -1.8246e+05 Scale est. = 0.0089023 n = 179659 > gam.check(m3_of) k' edf k-index p-value s(delay) 19.00 9.37 0.98 0.69 s(delay):groupof1 19.00 1.00 0.98 0.68 s(delay,medu) 1700.00 176.81 0.98 0.68 ``` Compare models: ``` itsadug::compareML(m3, m3_of) Model m3 preferred: lower fREML score (56.679), and equal df (0.000). ----- Model Score Edf Difference Df 1 m3_of -182464.7 9 2 m3 -182521.3 9 56.679 0.000 AIC difference: -60.87, model m3 has lower AIC. ```
Hierarchical gam (HGAM) with 'by' ordered factor and random smooth 'fs' effect: why so wiggly?
CC BY-SA 4.0
null
2023-05-12T09:27:30.910
2023-05-13T08:43:50.970
null
null
307344
[ "r", "mixed-model", "categorical-data", "mgcv" ]
615660
1
null
null
2
12
I am doing an observational study looking at the association between a baseline exposure (binary) and the first instance of an abnormal blood test result (binary) among people with serial blood tests. So far I have used Cox regression. However, the problem is that the baseline exposure is associated with the number of blood test results. What survival analysis method can be used to account for this?
Study design when exposure more likely to lead to test for outcome
CC BY-SA 4.0
null
2023-05-12T09:33:16.513
2023-05-12T09:33:16.513
null
null
387807
[ "regression", "survival", "bias", "hazard", "proportional-hazards" ]
615663
1
null
null
0
26
I would like to get your opinion/feedback on the following problem: I have repeated measures "$y$" that are observed for each age $x$ (e.g., 30:90), and each age is repeated for each time $t$ (2000:2010).

|y |x |t |
|-|-|-|
|value# |30 |2000 |
|value# |31 |2000 |
|... |... |... |
|value# |30 |2010 |
|value# |31 |2010 |
|... |... |... |

I noticed that, for each time, the $y$'s over ages are distributed following a skew-normal distribution, and this property holds for all $t$. Therefore I would write $y_x \mid t \sim SN(\mu,\sigma^2,\lambda)$. I am wondering whether this also allows us to write the model as $y_{x,t} \sim SN(\mu,\sigma^2,\lambda)$. Thanks
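In case it helps, a minimal sketch (assuming the `sn` package and a long-format data frame `dat` with columns `y`, `x`, `t`) of how the year-by-year skew-normal claim could be checked is:
```
library(sn)
fit_2000 <- selm(y ~ 1, family = "SN", data = subset(dat, t == 2000))
summary(fit_2000)   # parameter estimates (CP by default; param.type = "DP" gives xi, omega, alpha)
plot(fit_2000)      # diagnostics against the fitted skew-normal
# repeat (e.g. looping over unique(dat$t)) to see whether the fit holds for every year
```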
Distributional assumption on repeated measures
CC BY-SA 4.0
null
2023-05-12T10:29:08.610
2023-05-12T10:29:08.610
null
null
308083
[ "repeated-measures", "conditional-probability", "skew-normal-distribution" ]
615664
1
null
null
0
20
Suppose I have a piecewise distribution with density $f$ on $[0,1)$ such that $f=f_1$ on $[0,0.5)$ and $f=f_2$ on $[0.5,1)$, where both $f_1$ and $f_2$ are densities. Is it necessarily a mixture distribution? My reasoning is that we can always introduce a latent variable $z$ distributed uniformly on $[0,1)$ and then choose either $f_1$ or $f_2$ depending on whether $z>0.5$ or not. What is the difference between a piecewise distribution and a mixture distribution here?
Is every piecewise distribution a mixture distribution?
CC BY-SA 4.0
null
2023-05-12T10:53:18.580
2023-05-12T10:53:18.580
null
null
207047
[ "mixture-distribution", "piecewise-pdf" ]
615665
1
615667
null
1
30
I have been searching for this, but I couldn't find a solution. I know we could calculate the values at which the quantiles of a violin plot lie. But I was wondering how we could calculate the width of a violin plot at a given value y. Here is some reproducible code: ``` set.seed(7) df = data.frame(y = rnorm(100, 2, 1)) library(ggplot2) ggplot(df, aes(x = "", y = y)) + geom_violin(fill = "blue") ``` ![](https://i.imgur.com/SMM0fZ2.png) Created on 2023-05-12 with [reprex v2.0.2](https://reprex.tidyverse.org) So imagine I would like to calculate the violin width at y = 3. How are we supposed to calculate this using the y values?
How to calculate the width of a violin plot at a given value
CC BY-SA 4.0
null
2023-05-12T11:01:31.947
2023-05-12T11:49:32.040
2023-05-12T11:28:24.380
323003
323003
[ "r", "probability", "distributions", "quantiles" ]
615666
2
null
259502
1
null
I found this discussion very helpful for the analysis I need to conduct for my thesis. I am not sure if I understand it right, that the multiple trials you are talking about are with the same sample? In my case I have a control and a treatment group, and each respondent goes through 4 questions where he/she has to choose between train and plane. I now want to analyse the difference between control and treatment group over all four choices combined. So I think I could use the approach with a weighted model and the dependent variable "proportion_train" (which represents how often the train was chosen out of the four choices). I am not sure about the interpretation of the coefficients of this model then. I know these are most likely the log-odds-ratios (or equivalently differences in log-odds). But do these log-odds-ratios show the probability combined over all trials (which I want to find out), or the per-trial probability? Now I read somewhere else that I need to account for the correlation within the individuals by including random intercepts for the individual IDs - as the four questions were answered by the same individuals (but different individuals in control and treatment group of course). None of you here included such a random intercept for the individuals, so I am wondering if it's necessary or not?
null
CC BY-SA 4.0
null
2023-05-12T11:38:06.047
2023-05-12T11:43:41.750
2023-05-12T11:43:41.750
387813
387813
null
615667
2
null
615665
1
null
The violins in ggplot are created with the 'density' function from core R. The settings are to use a Gaussian kernel with the `nrd0` algorithm to define the bandwidth (see [here](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/bandwidth.html)). [](https://i.stack.imgur.com/EXp2P.png) This code shows how to generate the density that is used for plotting the violin ``` ### this will plot the result from the denisty function plot(density(df$y, bw = "nrd0", kernel = "gaussian"), ylim = c(0,1)) ### with the bandwidth computed with nrd0, which is 0.3419, ### we can also manually compute the density as following ### the values of x used for plotting x = seq(min(df$y),max(df$y),0.1) ### compute a sum of Gaussians ### centered at y ### (note that the 'density' function ### uses a faster Fourier transform ### to make the computation, ### but this loop shows more intuively ### what happens) width = sapply(x, FUN = function(x) { mean(dnorm(df$y-x,0, 0.3419)) }) ### add points to the graph points(x,width) ```
null
CC BY-SA 4.0
null
2023-05-12T11:43:46.443
2023-05-12T11:49:32.040
2023-05-12T11:49:32.040
164061
164061
null
615668
1
615672
null
2
23
I'm dealing with a population where there are $5$ categories $A, B, C, D, E$ and I would like to make the claim that $A$ is the most common category in the population (i.e., the most frequent). I thought about the following: use a proportion difference test (e.g., a z-test) $4$ times to check that the proportion of $A$ is larger than each of the others, which would hence imply that it has the highest proportion. However, I read that the z-test applies to proportions from different populations, whereas here all proportions are part of the same population, and I'm at a loss as to what to do. Thank you for your time.
At loss regarding a suitable hypothesis test for proportions in the same population
CC BY-SA 4.0
null
2023-05-12T11:48:43.550
2023-05-12T12:50:30.640
null
null
293012
[ "hypothesis-testing", "proportion", "z-test" ]
615669
1
616017
null
2
27
I'm following the Bi-Encoder architecture (see [here](https://www.sbert.net/examples/applications/cross-encoder/README.html)) in order to build a dense retrieval (search) system. Formally, my network encodes a query q and an item description d based on fixed representations from [Sentence Transformers](https://huggingface.co/sentence-transformers) denoted as SBERT(q) and SBERT(d), respectively. It then learns a transformation (the 'pooling' in the picture below) that maximizes the cosine similarity between positive examples (where the query and item description match) and minimize the similarity between negative examples (randomly assigned query/description pairs). I use an MSE loss. [](https://i.stack.imgur.com/rQVOP.png) Now, when I train my network, I observe that it (always) converges to producing a cosine similarity of 0.5 for all examples, provided that my labels are equally distributed as {0, 1}. If I adjust the balance of the positive/negative examples, it converges to whatever minimizes the MSE loss while still producing the same output (within a fractional range) for both positive and negative examples. What could be going wrong? My dataset isn't the largest, only a few thousand examples. I would say that the queries are fairly semantically related to the descriptions, so it shouldn't be too hard to learn this mapping. The offline-computed sentence representations for the queries and descriptions also look reasonable. I have tried smaller and bigger networks for the pooling transformations, all with the same effect.
Why does my bi-encoder converge to the mean square of the [0,1] label distribution?
CC BY-SA 4.0
null
2023-05-12T12:04:09.837
2023-05-16T11:08:08.253
null
null
26012
[ "machine-learning", "neural-networks", "information-retrieval", "siamese" ]
615670
1
null
null
0
12
I have a 2-step ML pipeline (classification, binary, balanced dataset, about 300 samples and tens of thousands of features): step one I train 7 algorithms (XGB, SVM etc.) and step two I train a RFC on these predictions. I do that with 5x5 nested cross-validation, without repeat. Now, I have 23 different targets (0/1 labels) to predict, so I build 23 independent models. It turns out that some of these models have performances above 0.9 AUC, some are lower, some are close to 0.5, and one of them has a AUC of 0.29. Consistent across multiple runs with different subparts of the dataset, so a bug is excluded. [](https://i.stack.imgur.com/K194E.png) I don't get how that is possible. The pipeline is the same for all targets so I am sure I do not invert labels or mix up samples, as it works very well for some targets. How can a model be consistently doing exactly the opposite than it is supposed to do?
Consistently very low (<0.5) AUC performance of pipeline, cannot explain why
CC BY-SA 4.0
null
2023-05-12T12:06:21.930
2023-05-12T12:06:21.930
null
null
186873
[ "classification", "cross-validation", "auc" ]
615671
1
null
null
0
7
I'm having trouble understanding how the probability of belonging to each class is being calculated. I'd appreciate it if someone could help me out! Assume the dependent variable has 3 ordinal classes (low, medium, high), and assume for simplicity the following coefficients for a model with 2 predictors:

- b1 : 2
- b2 : -3

And the intercept for low|medium:

- b0 : 1.5

And the intercept for medium|high:

- a0 : 0.5

How do we find the probability that a new observation x0 = (x1, x2) = (1, 2) belongs to each class? Can you show how, using the equation for `polr`? Thanks in advance for your help!
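For reference, here is a small sketch (using the `housing` data shipped with MASS rather than the numbers above) of how `polr`'s class probabilities follow from its parameterisation $P(Y \le k) = \operatorname{logit}^{-1}(\zeta_k - \eta)$ with $\eta = b_1 x_1 + b_2 x_2 + \dots$ and no intercept in $\eta$:
```
library(MASS)
fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
new <- housing[nrow(housing), ]
eta <- sum(coef(fit) * model.matrix(~ Infl + Type + Cont, new)[, -1])
cum <- plogis(fit$zeta - eta)                     # P(Y <= Low), P(Y <= Medium)
manual <- c(cum[1], diff(cum), 1 - cum[2])        # P(Low), P(Medium), P(High)
rbind(manual, predict(fit, new, type = "probs"))  # the two rows should match
```
With the numbers above, $\eta = 2\cdot 1 + (-3)\cdot 2 = -4$ would go into the same two cumulative formulas (bearing in mind that `polr` reports its cutpoints in increasing order).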
Probability to belong in a class in a Proportional Odds Logistic Regression
CC BY-SA 4.0
null
2023-05-12T12:27:14.337
2023-05-12T13:46:23.530
2023-05-12T13:46:23.530
56940
387814
[ "ordered-logit", "polr" ]
615672
2
null
615668
2
null
If I were asked to "prove" (i.e., statistically confirm using some null hypothesis statistical testing (NHST) protocol) that one category for a categorical variable was maximal for a sample (from a single population), my first exploration approach would be the following: First, run a chi-square goodness-of-fit test with all 5 categories where we would start by assuming equiprobability. If you have a statistically significant p-value (which you would want in this case if the goal/desire is to see that the first category is the mode), then you can reject $$H_o : \pi_A = \pi_B = \pi_C = \pi_D = \pi_E$$ If this comes out to not be statistically significant, then you really can't claim that any one category is more frequent than the other with the current sample size. Let's assume you do find a statistically significant result at the first step and let's assume the first category $A$ is both the one you wish to show is the mode and indeed has the largest observed frequency in the sample. My next step would be to run a follow-up chi-square test with all the values for the categorical variable EXCEPT for $A$. If this NHST comes up to be not statistically significant, then we can assume the probabilities for the remaining categories are (nearly) identical. Next, estimate this common probability $$\pi_\text{pooled} = \frac{k_B+k_C+k_D+k_E}{N} \div 4$$ and run a single sample proportion NHST to confirm that $\pi_A$ is bigger than this pooled proportion estimate $\pi_\text{pooled}$. The key here would be to decide what to do if at the second step the probabilities are not the same. In that case, I would probably try an iterative/recursive process to obtain subsets of nearly equiprobable categories. Again, this is how I would explore the given data to see if there was any evidence to suggest one of the categories was indeed the modal category. If you wish to prove this more formally, a different strategy may be required.
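A minimal sketch of this exploratory sequence, with made-up counts purely for illustration (category $A$ is the candidate mode):
```
counts <- c(A = 120, B = 80, C = 75, D = 78, E = 82)   # hypothetical observed frequencies
chisq.test(counts)        # step 1: test equiprobability across all five categories
chisq.test(counts[-1])    # step 2: test equiprobability among B, C, D, E only
N <- sum(counts)
pi_pooled <- sum(counts[-1]) / (4 * N)                 # pooled estimate for B..E
prop.test(counts["A"], N, p = pi_pooled, alternative = "greater")  # step 3
```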
null
CC BY-SA 4.0
null
2023-05-12T12:50:30.640
2023-05-12T12:50:30.640
null
null
199063
null
615673
1
615676
null
8
627
I'm not sure how to 'responsibly' balance my model to account for this. I could predict a probability and give that to the business ('predict_proba' in sklearn), but past experience has taught me that I should be the one to handle this (they put the threshold at 0.9 because they felt safer that way). I'm considering resampling the training data in some way... Let's say I'm in charge of an algorithm that removes rotten apples, which is extremely hard (let's say that without bootstrapping it classifies 10% of the rotten apples incorrectly as good apples, and 20% of the good apples as bad apples, but that is not relevant for this question I think). The ratio between good & bad apples is 1000/1. The company sells apples for 50 cents. Selling a rotten apple costs them on average 30 dollars, as a customer will often return it and not come back. Catching a rotten apple also costs them 1 cent for returning it to the farmer. Sending a good apple to the farmer costs them 4 euros, as it kills the relationship with the farmer and the apple gets returned. Without any additional work on the features and modelling, could I resample (bootstrap) the training data (ratio good & bad is 1000/1) in such a way as to align with the cost ratio of the apple value, such that the model is most likely to create the most value? Or are there other ways?
What do I do when a false negative is far more expensive than a false positive?
CC BY-SA 4.0
null
2023-05-12T12:51:22.250
2023-05-12T13:23:49.020
null
null
387815
[ "probability", "mathematical-statistics", "sampling", "modeling", "econometrics" ]
615674
1
null
null
0
29
I am calculating a multiple regression with a sample of 128 and I was wondering what distribution would best describe this residuals QQ plot. It looks like a Poisson distribution to me; is that correct? Should I then proceed by fitting a Poisson regression model to address this violation of normality? Thanks for your help! [](https://i.stack.imgur.com/tVaLk.png)
How do I interpret this QQ plot?
CC BY-SA 4.0
null
2023-05-12T13:02:20.220
2023-05-12T13:02:20.220
null
null
387817
[ "multiple-regression", "generalized-linear-model", "qq-plot" ]
615675
1
null
null
1
23
Which regularisation method, L2 or L1, gives a lower variance? $ f(\beta) = \sum_i (\hat{y}_i - y_i)^2 + \lambda \sum_j \beta_j^2 \rightarrow L2 $ $ f(\beta) = \sum_i (\hat{y}_i - y_i)^2 + \lambda \sum_j |\beta_j| \rightarrow L1 $ Is it correct to say that L2 regularisation would give a lower variance in the resulting model as it penalises the weights more harshly? Or would L1 regularisation give lower variance as it enforces sparsity in the model and uses fewer features altogether? From this, is it correct to say that the method which gives lower variance would result in a higher bias?
L1 vs L2 variance?
CC BY-SA 4.0
null
2023-05-12T13:13:20.407
2023-05-12T13:13:20.407
null
null
null
[ "regression", "machine-learning", "lasso", "regularization", "ridge-regression" ]
615676
2
null
615673
11
null
Let's assume that your model is well-calibrated (see [calibration](/questions/tagged/calibration)). If it isn't, you can [calibrate it](https://scikit-learn.org/stable/modules/calibration.html); if there are other issues, they need to be solved accordingly. None of this needs re-sampling of the data. Then, the problem is picking the right threshold for making the predictions. You already seem to have all the pieces for doing it! The cost matrix for your problem, with all amounts put on a single currency scale, is:

| |is good |is rotten |
|---|-------|---------|
|predicted as good |+0.50 |-30 |
|predicted as rotten |-4 |-0.01 |

With this information, you can pick a threshold and make the positive prediction when the predicted probability is greater than the threshold. After doing this, for each prediction assign the appropriate value from the matrix above, and calculate the average value per apple. You can do this for different thresholds and just pick the threshold that maximizes this expected value.
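A minimal sketch of that threshold scan, using simulated stand-ins for the calibrated probabilities and the question's amounts put on one scale (0.50, -30, -4, -0.01); every name and number here is illustrative, not taken from the actual pipeline:
```
set.seed(1)
n <- 100000
is_rotten <- rbinom(n, 1, 1 / 1001)                                # ~1000:1 good to rotten
p_rotten  <- plogis(qlogis(1 / 1001) + 3 * is_rotten + rnorm(n))   # stand-in for model output
value_at <- function(thr) {
  pred_rotten <- p_rotten > thr
  mean(ifelse(pred_rotten,
              ifelse(is_rotten, -0.01, -4),    # caught rotten / good apple sent back
              ifelse(is_rotten, -30, 0.50)))   # rotten apple sold / good apple sold
}
thr_grid <- seq(0.0005, 0.2, by = 0.0005)
thr_grid[which.max(sapply(thr_grid, value_at))]   # threshold with the highest average value
```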
null
CC BY-SA 4.0
null
2023-05-12T13:23:49.020
2023-05-12T13:23:49.020
null
null
35989
null
615677
1
616023
null
1
24
The reference papers for marginal structural models only talk about handling monotone censoring using IPCW. How should one deal with intermittent missing visits? Does it make sense to use only the available visits and multiply the weights of the available visits?
How to handle intermittent missing visits in marginal structural model?
CC BY-SA 4.0
null
2023-05-12T13:32:20.523
2023-05-16T12:04:08.803
null
null
298204
[ "causality", "missing-data", "censoring", "marginal-model" ]
615678
1
null
null
0
20
I found this discussion very helpful for the analysis I need to conduct for my thesis: [In using the cbind() function in R for a logistic regression on a $2 \times 2$ table, what is the explicit functional form of the regression equation?](https://stats.stackexchange.com/questions/259502/in-using-the-cbind-function-in-r-for-a-logistic-regression-on-a-2-times-2-t/615666#615666) I am not sure if I understand it right, that the multiple trials that are mentioned in the discussion are done within the same sample? (but I think so) In my case I have a control and a treatment group, and each respondent goes through 4 questions where he/she has to choose between train and plane. I now want to analyse the difference between control and treatment group over all four choices combined. So I think I could use the approach with a weighted logistic regression model and the dependent variable "proportion_train" (which represents how often the train was chosen out of the four choices). I am also not sure about the interpretation of the coefficients of this model then. I know these are most likely the log-odds-ratios (or equivalently differences in log-odds). But do these log-odds-ratios show the probability combined over all trials (which I want to find out), or the per-trial probability? Also, in some other forum someone used "family=quasibinomial" for a logistic regression model with proportion data. How do I find out if for my data I have to use "family=binomial" or "family=quasibinomial"? Or can you in general say that for a weighted logistic regression model with proportion data as dependent variable, the family is binomial? I also read somewhere else that one needs to account for the correlation within the individuals by including random intercepts for the individual IDs (I guess as e. g. in my case the four questions were answered by the same individuals (but different individuals in control and treatment group of course). In the discussion I mentioned above, no one included such a random intercept for the individuals in the weighted logistic regression model (with proportion data as dependent variable), so I am wondering if it's necessary or not? Thanks for your support!
Logistic regression with proportion data (interpretation of the log-odds-ratios, family binomial or quasibinomial, include random intercepts?)
CC BY-SA 4.0
null
2023-05-12T13:33:50.653
2023-05-12T13:33:50.653
null
null
387813
[ "r", "regression", "logistic", "binomial-distribution" ]
615679
2
null
615656
1
null
The answer is that, since there is no connection with both the cause and the effect, it is impossible for age at scan to be a confounding variable. It cannot set up a backdoor path from the cause to the effect. The usual way to treat such variables is basically, "Eh." That is, you can include them in your model (thus conditioning on them) or not, but it shouldn't affect your results much at all. Now you're saying that including it in the model does affect the outcome. To me that suggests two possibilities: - That there actually is a causal path from age at scan to developmental outcome, and that causal path would be simply $Z\to X\to Y,$ where $Z$ is the age at scan, $X$ is the MRI feature, and $Y$ is the developmental outcome. In such a case, we should not be surprised that $Z$ is correlated with $Y.$ The question is: should you include it in your model? I guess I would ask the question: by how much does the MRI feature coefficient change when you include vs. don't include $Z?$ If this really is the right causal diagram, then I would leave $Z$ out, but it shouldn't hurt to include it. - That the assumption that age at scan is not directly causally related to developmental outcome is flawed, and that in your graphical model you should also include the direct edge $Z\to Y,$ making $Z$ a true confounder: the backdoor path is $X\leftarrow Z\to Y$. In this scenario, it would be clear that you should include $Z$ in your model in order to condition on it. In conclusion: you're probably safer including $Z$ in your model; in the second scenario you must, and in the first scenario it shouldn't hurt.
null
CC BY-SA 4.0
null
2023-05-12T13:37:05.443
2023-05-12T13:37:05.443
null
null
76484
null
615680
2
null
615553
3
null
We need to note that $\sigma^{2}$ is an unknown positive constant and $\mathbf{V}=\operatorname{diag}(\lambda_{1},\cdots,\lambda_{n})$ ($\lambda_{i}>0$ are not all the same; $\sigma^{2}=0$ rarely occurs in real-world data). Assuming Model $(2)$ is true, if we misuse the formula $$\operatorname{Var}(\hat{\boldsymbol{\beta}}_{OLS})=\sigma^{2}(\mathbf{X}^{\prime}\mathbf{X})^{-1}$$ which only holds in Model $(1)$ to calculate the variance of $$\hat{\boldsymbol{\beta}}_{OLS}=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{y}$$ in Model $(2)$, then we will inevitably get the false result $$\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{OLS}\big|\text{ Model (1)}\right)=\sigma^{2}(\mathbf{X}^{\prime}\mathbf{X})^{-1},$$ since the true value should be $$\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{OLS}\big|\text{ Model (2)}\right)=\sigma^{2}\mathbf{(X^{\prime}X)^{-1}} \mathbf{X^{\prime}VX} \mathbf{(X^{\prime}X)^{-1}}.$$ Under the assumption that all eigenvalues $\lambda_{i}$ of $\mathbf{V}$ are not less than $1$, then $$\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{OLS}\big|\text{ Model (1)}\right)\leq\operatorname{Var}\left(\hat{\boldsymbol{\beta}}_{OLS}\big|\text{ Model (2)}\right).$$ In fact, it is easy to see that $\mathbf{X}^{\prime} \mathbf{(V-I_{n})} \mathbf{X}\ge \mathbf{0}$ under this assumption. Note that both $\mathbf{X}^{\prime} \mathbf{V} \mathbf{X}$ and $\mathbf{X}^{\prime} \mathbf{X}$ are positive definite matrices, since $\mathbf{X}$ has full column rank. Actually, whether the true variance of the estimator $\hat{\boldsymbol{\beta}}_{OLS}$ for $\boldsymbol{\beta}$ in Model (2) is overestimated or underestimated by the true variance of the estimator $\hat{\boldsymbol{\beta}}_{OLS}$ for $\boldsymbol{\beta}$ in Model (1) depends on the values of $\lambda_{i}$.
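A small numerical check of the sandwich expression above (a toy simulation, not part of the argument): under Model $(2)$ the Monte Carlo variance of the OLS slope matches $\sigma^{2}\mathbf{(X^{\prime}X)^{-1}X^{\prime}VX(X^{\prime}X)^{-1}}$, while $\sigma^{2}(\mathbf{X}^{\prime}\mathbf{X})^{-1}$ understates it when all $\lambda_i \ge 1$:
```
set.seed(1)
n <- 50; X <- cbind(1, rnorm(n)); beta <- c(1, 2); sigma2 <- 1
lambda <- runif(n, 1, 5)                        # diagonal of V, all >= 1
bhat <- replicate(5000, {
  y <- drop(X %*% beta) + rnorm(n, sd = sqrt(sigma2 * lambda))
  solve(crossprod(X), crossprod(X, y))[2]       # OLS slope
})
XtXi <- solve(crossprod(X))
var(bhat)                                                       # Monte Carlo variance
(sigma2 * XtXi %*% t(X) %*% diag(lambda) %*% X %*% XtXi)[2, 2]  # sandwich formula (matches)
(sigma2 * XtXi)[2, 2]                                           # naive formula (too small)
```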
null
CC BY-SA 4.0
null
2023-05-12T13:42:08.200
2023-05-13T07:01:01.670
2023-05-13T07:01:01.670
371966
371966
null
615681
2
null
615563
3
null
$-b/a$ is the intercept with the x-axis and may sometimes have a specific meaning. For example, in a [Lineweaver Burk plot](https://en.m.wikipedia.org/wiki/Lineweaver%E2%80%93Burk_plot) the related ratio $a/b$ equals the Michaelis constant of the Michaelis-Menten kinetics model, $a/b = K_m$ (and the x-axis intercept itself is $-b/a = -1/K_m$). That is, when we plot $1/v$ versus $1/c$ then the Michaelis-Menten equation $$v = \frac{Vc}{K_m +c}$$ becomes equivalent to your linear regression equation $$\underbrace{(1/v)}_{y} = \underbrace{\frac{Km}{V}}_{a} \underbrace{ (1/c)}_{x} + \underbrace{\frac{1}{V}}_{b}$$ and the ratio of regression coefficients is $$a/b = (K_m/V)/(1/V) = K_m$$ --- In linear regression it doesn't have any specific name that I am aware of (except possibly something like 'the negative of the x-axis intercept'). --- In your specific example it may play some role, but it is not very clear. It depends on how the measurement errors of your device arise. The error might increase with the magnitude of $y$, but this need not be linear. For example, the reading of some voltage on a digital meter has a round-off error, and that error is the same whether the value is large or small. The same is true for many other instruments which have a relatively similar error for smaller and larger values. It might potentially be better to quantify the error of the observed/estimated concentration as a function of the concentration $x$ and express something like the difference in two concentrations that is noticeable at some given power. Certainly a larger intercept $b$ and a smaller slope $a$ will make this relative error worse, but it is better to express it more directly. The ratio $a/b$ doesn't cover all of the nuances.
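To make the Lineweaver–Burk case concrete, here is a small R sketch with made-up values of $V$ and $K_m$ (purely illustrative) that fits the linearized form and recovers $K_m$ as the ratio $a/b$:
```r
set.seed(1)
V_true <- 2; Km_true <- 5
conc <- c(0.5, 1, 2, 5, 10, 20)                       # substrate concentrations
v <- V_true * conc / (Km_true + conc) *
     exp(rnorm(length(conc), sd = 0.02))              # Michaelis-Menten rates with a little noise

fit <- lm(I(1/v) ~ I(1/conc))                         # Lineweaver-Burk: 1/v = (Km/V)(1/c) + 1/V
b <- unname(coef(fit)[1])                             # intercept = 1/V
a <- unname(coef(fit)[2])                             # slope     = Km/V
c(V = 1/b, Km = a/b, x_intercept = -b/a)              # a/b recovers Km; -b/a is approximately -1/Km
```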
null
CC BY-SA 4.0
null
2023-05-12T13:46:01.450
2023-05-12T14:43:18.553
2023-05-12T14:43:18.553
164061
164061
null
615682
2
null
615553
0
null
A couple of notes: - Weighted least squares uses a weight matrix $W$, and (when the weights are correctly specified, i.e. $W \propto V^{-1}$) the variance of the parameter estimates is $\sigma^2 (X^TWX)^{-1}$. - No particular choice of $W$ will bias the estimate of $\beta$, and so this class of estimators can be compared in terms of their variance. - Choosing $W=V^{-1}$ is a natural choice and, in fact, gives the best linear unbiased estimator according to the Gauss-Markov (Aitken) theorem. As such, it's true that, if the variance matrix is anything other than $\sigma^2 I$, then $\sigma^2 (X^TV^{-1}X)^{-1} = \sigma^{*2}(X^TX)^{-1} + \text{something positive}$. Note I have to put a $^*$ on the second variance because it's a different parameter (a small numerical sketch comparing the OLS and GLS variances follows below). - The actual variance $V$ of a heteroscedastic model is rarely known. For correlated data analysis, the usual process of generalized least squares (GLS) has to do with estimating $V$ with $\hat{V}$ using the EM algorithm and some assumptions about the covariance structure. You can extend this problem to consider estimating a heteroscedastic variance matrix (with 0 off-diagonal entries) using residual analyses, splines, or the like. Like the Behrens-Fisher problem, there's no guarantee that this estimator is optimal, and there are cases where the OLS parameter variance is less than (or greater than) that of any given GLS procedure.
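A minimal matrix-algebra sketch of the Gauss-Markov/Aitken point (a toy example of my own that pretends the diagonal $V$ is known, which, as the last bullet notes, is rarely true in practice): with $W = V^{-1}$ the resulting variance is never larger than the true variance of OLS.
```r
set.seed(1)
n <- 50
X <- cbind(1, rnorm(n))
V <- diag(runif(n, 0.5, 3))            # heteroscedastic variances, pretended known (sigma^2 = 1)
W <- solve(V)

XtX_inv <- solve(t(X) %*% X)
var_ols <- XtX_inv %*% t(X) %*% V %*% X %*% XtX_inv   # true variance of the OLS estimator
var_gls <- solve(t(X) %*% W %*% X)                    # variance of the W = V^-1 (GLS) estimator

diag(var_ols)
diag(var_gls)                           # no larger than the OLS variances, as the theorem promises
```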
null
CC BY-SA 4.0
null
2023-05-12T14:07:26.273
2023-05-12T14:07:26.273
null
null
8013
null
615684
1
null
null
0
21
A company has been collecting water chemistry data annually for 20+ years to monitor water quality. Now they're wondering if they can reduce their sampling frequency to once every 2 or 3 years and still detect exceedances when they occur. What would be the best approach to assess this? My first instinct is to use a Monte Carlo simulation to look at every possible 2/3-year sampling schedule in their 20+ year dataset, see how much it deviates from the "true" annual record, and then calculate how many exceedances it would have missed detecting. But I understand that a Monte Carlo analysis usually needs thousands of combinations to run effectively, and I would only have 10-20 possible combinations. Is it still possible to determine this with a different analysis? I also thought a Bayesian analysis might be appropriate, but I think that would require a very deep understanding of the interactions between the various water chemistry variables and how their changes affect one another. Or is it more about uncertainty? For instance, could I say "If you're sampling annually you have an X% chance of detecting an exceedance if it occurs, if you're sampling every 2 years you have a Y% chance, and if you sample every 3 years you have a Z% chance"?
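To illustrate the enumeration idea, here is a rough R sketch with purely made-up numbers (the distribution, threshold and record length are placeholders, not the real data): for each possible start year of a reduced schedule, count the fraction of exceedance years that would have gone unobserved.
```r
set.seed(1)
n_years <- 20
conc <- rlnorm(n_years, meanlog = 1, sdlog = 0.5)   # hypothetical annual concentrations
threshold <- 4                                       # hypothetical exceedance limit
exceed_years <- which(conc > threshold)

missed_fraction <- function(every, start) {
  sampled <- seq(start, n_years, by = every)
  missed  <- setdiff(exceed_years, sampled)
  length(missed) / max(1, length(exceed_years))
}

sapply(1:2, function(s) missed_fraction(2, s))   # the two possible every-2-year schedules
sapply(1:3, function(s) missed_fraction(3, s))   # the three possible every-3-year schedules
```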
Can I decrease the sampling frequency and still have accurate results?
CC BY-SA 4.0
null
2023-05-12T14:17:54.097
2023-05-12T18:44:34.180
2023-05-12T18:44:34.180
176966
176966
[ "confidence-interval", "sampling", "monte-carlo", "uncertainty", "threshold" ]
615685
1
615863
null
1
18
Assume that I want to efficiently draw samples from a (for simplicity bivariate) joint distribution $p(x,y)$, with $x \in \mathbb{R}$ and $y \in \mathbb{R}$. I don't have a closed-form expression for $p(x,y)$, but I can prescribe a number of features and limits that should describe it fully. As a concrete example, consider these limits and properties: - Variable limit 1: $-25 \leq x \leq 25$ - Variable limit 2: $-25 \leq y \leq 25$ - Variable limit 3: $-5 \leq x + y \leq 5$ - Joint uniformity: $p(x,y) = \text{constant}$ for all $(x,y)$ in the support above One way to sample this distribution would be via [rejection sampling](https://en.wikipedia.org/wiki/Rejection_sampling), i.e., just drawing independent samples $x \sim \mathcal{U}[-25,25]$ and $y \sim \mathcal{U}[-25,25]$ and then rejecting all samples that violate variable limit 3. However, depending on the narrow nature of limit 3 (consider for instance the limiting case $0 \leq x + y \leq 0$), this might reject virtually all samples. Is there a more efficient way to sample from a distribution like this?
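For concreteness, here is a minimal R sketch of the rejection-sampling baseline just described (my own code, with the acceptance rate printed to show how it shrinks as limit 3 narrows):
```r
set.seed(1)
n_prop <- 1e6
x <- runif(n_prop, -25, 25)
y <- runif(n_prop, -25, 25)
keep <- (x + y >= -5) & (x + y <= 5)    # variable limit 3

mean(keep)                              # acceptance rate, about 0.19 here, shrinking as the band narrows
samples <- cbind(x, y)[keep, ]
head(samples)
```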
Drawing samples from a joint distribution defined by limits?
CC BY-SA 4.0
null
2023-05-12T14:29:38.880
2023-05-14T23:18:44.017
null
null
191317
[ "sampling", "linear-model", "linear", "uniform-distribution", "accept-reject" ]
615687
1
null
null
0
19
I'm interested in exploring experiment design, and the analysis therein, through the Bayesian lens. I haven't seen much content on this; it would seem that Bayesians are more concentrated in the analysis of observational studies rather than RCTs. Could any books, courses, or YT playlists be recommended that discuss experiment design through the Bayesian lens?
Bayesian Experiment Design Resources
CC BY-SA 4.0
null
2023-05-12T14:31:25.233
2023-05-12T14:31:25.233
null
null
288172
[ "bayesian" ]
615688
2
null
615553
0
null
Simple example: Imagine we estimate just the mean of a sample and we have individuals with the same mean but different variations $$X_i\sim N(\mu,\sigma_i)$$ The OLS estimate of the mean will be $$\hat\mu = \frac{\sum_{i=1}^{n} X_i}{n}$$ and has a sampling variation of $$\text{var}(\hat\mu) = \frac{\sum_{i=1}^{n} \sigma_i^2}{n^2}$$ The estimate of the sampling variation will be based on the sum of squared residuals $s^2$ whose mean will be close to... ... (I am getting slightly confused how to deal with this easily)... ... let's try to simulate it ``` set.seed(1) sim = function(n=10) { ### create sample X = c(rnorm(n, mean = 0, sd = 1), rnorm(n, mean = 0, sd = 0.01)) ### linear model mod = lm(X~1) ### estimate standard error sqrt(sum(mod$residuals^2)/(2*n-1))/sqrt(2*n) } n = 10 real = sqrt(n*1^2+n*0.01^2)/(2*n) s = replicate(10^5,sim()) hist(s, breaks = seq(0,0.35,0.01), xlim = c(0,0.35)) lines(real*c(1,1),c(0,10000), lwd = 2, lty = 2) mean(s) ## 0.1540672 real ## 0.1581218 ``` [](https://i.stack.imgur.com/z7yZH.png) So if we perform OLS a hundred thousand times on simulated samples with size twenty among which ten values of $\sigma_i = 1$ and a ten values of $\sigma_i = 0.01$, then we get no clear sign of overestimating the sample variance. --- Possibly the point 2 is relating to the idea that the sample estimate is not efficient and that some form of [generalized least squares](https://en.wikipedia.org/wiki/Generalized_least_squares) will perform better.
null
CC BY-SA 4.0
null
2023-05-12T14:36:45.897
2023-05-12T14:36:45.897
null
null
164061
null
615689
1
null
null
0
16
Dear Cross Validated community, We are working on an uncertainty & sensitivity analysis using a mathematical optimization model. More specifically, we have a set of uncertain parameters, which follow specific probability distributions, and we sample them to perform Monte Carlo simulations with our model. For the sensitivity analysis, our method of choice is called Monte Carlo filtering. Essentially, first, we divide the model outputs from the Monte Carlo simulation into two subsets ('good' or 'bad') based on a given criterion (e.g. cost < 100). Then, we map this division into the input sample space to obtain 'good' and 'bad' input samples for each parameter. As the key part of the method, we then perform a two-sample Kolmogorov-Smirnov test for each input parameter separately. The two samples used for the test are the 'good' and 'bad' input sample subsets from the previous division. As a metric for parameter importance, if the result says that the two subsets are from different distributions, then this parameter is deemed important, as there is a higher probability that high or low values of the parameter will lead to a 'good' or 'bad' outcome. If not, then the parameter is unimportant, as regardless of its value, it can lead to either 'good' or 'bad' model outputs. This method is proven, tested, and used extensively in model-based studies. However, in most cases, the input parameters used in the Monte Carlo simulations and the subsequent K-S tests are sampled from continuous distributions. My question then is: If we have one input parameter that is sampled from a discrete uniform distribution, e.g. with a range of [1,10], does our method still work or does it break because the two-sample KS-test only works with continuous distributions (as I have also read in numerous threads here, but with contradicting info on the Mathematics stackexchange forum [here](https://math.stackexchange.com/questions/3577453/two-sample-kolmogorov-smirnov-test))? From my perspective, on the one hand, the test would still give us the distance between the two empirical CDFs, even if the distributions are discrete, and we can use this information to compare to the condition value ($c(\alpha) \cdot \sqrt{\frac{n + m}{n \cdot m}}$). Alternatively, we could sample this parameter as a continuous uniform distribution $U[1,10]$ to adhere to the KS-test requirements and simply round to the nearest integer in the model before we run our simulations. Any feedback on this? Any help will be greatly appreciated!
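For concreteness, here is a tiny R sketch of the filtering-plus-KS step on invented data (the toy model, threshold and parameter ranges are made up purely for illustration); note that base R's ks.test still runs when one input is discrete, but warns that ties make the p-value approximate:
```r
set.seed(1)
n  <- 1000
x1 <- runif(n, 0, 1)                                # continuous uncertain parameter
x2 <- sample(1:10, n, replace = TRUE)               # discrete uniform parameter
cost <- 60 + 60 * x1 + 1 * x2 + rnorm(n, sd = 5)    # stand-in model output
good <- cost < 100                                  # 'good' / 'bad' criterion

ks.test(x1[good], x1[!good])   # influential parameter: large D statistic, small p-value
ks.test(x2[good], x2[!good])   # discrete input: still returns D, but warns about ties
```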
Two-sample Kolmogorov Smirnov test for global sensitivity analysis - How to treat discrete distributions?
CC BY-SA 4.0
null
2023-05-12T14:51:57.650
2023-05-12T14:51:57.650
null
null
72554
[ "hypothesis-testing", "monte-carlo", "kolmogorov-smirnov-test", "sensitivity-analysis" ]
615691
1
615694
null
0
30
I have data for 72 employees, each of whom have between 10-30 evaluations from various supervisors. Each evaluation is anonymized, so we don't know who gave each evaluation, and we also can't be sure that multiple evaluations aren't written by the same person over different time frames. If I group my employees into two bins that are roughly equal size (group A and group B) to run some kind of proportion test, and treat each evaluation as a new datapoint, am I violating the independence assumption?
Does this example violate the independence assumption?
CC BY-SA 4.0
null
2023-05-12T15:00:12.850
2023-05-12T15:18:14.747
null
null
338681
[ "experiment-design", "independence" ]
615692
2
null
615691
1
null
These data points are almost certainly not independent. It doesn't matter whether many evaluations are written by the same person, as there are certainly many evaluations written about the same person. The dozens of evaluations of some particular employee are almost certainly not independent of one another, no matter who writes them. I would expect that an employee who has many good evaluations is more likely than usual to get another good evaluation - some employees consistently get good evaluations, while some consistently get poor evaluations. I would be very surprised if each employee had an equivalent random assortment of good and poor evaluations. There will be even greater dependence if the evaluation is also written by the same person, but as long as it's about the same person, it's not going to be independent.
null
CC BY-SA 4.0
null
2023-05-12T15:12:39.603
2023-05-12T15:18:14.747
2023-05-12T15:18:14.747
76825
76825
null
615693
2
null
104828
2
null
In your comment, the interview question you were asked was "give an example of statistical distribution, other than normal distribution, which is closed under affine transformation". The example to which the question refers is the fact that if you have a normally distributed random variable $X$, say $X\sim \text{N}(\mu,\sigma^2)$, then an affine transformation is also normally distributed: $aX+b \sim \text{N}(a\mu+b,a^2\sigma^2)$. The terminology in Statistics for a family of distributions which is 'closed under affine transformation' is a $\textbf{location-scale family}$. One example which would answer the question is the continuous uniform distribution. If $X\sim U[\alpha,\beta]$, and $Y= aX+b$ with $a>0$, then $$Y\sim U[a\alpha+b, a\beta+b]$$ (for $a<0$ the two endpoints are simply swapped).
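A quick simulation check of this claim, with made-up values of $a$, $b$, $\alpha$ and $\beta$:
```r
set.seed(1)
x <- runif(1e5, min = 2, max = 5)   # X ~ U[2, 5], i.e. alpha = 2, beta = 5
a <- 3; b <- 1
y <- a * x + b                      # affine transformation
range(y)                            # approximately (7, 16) = (a*alpha + b, a*beta + b)
hist(y, breaks = 50)                # still flat, i.e. uniform on [a*alpha + b, a*beta + b]
```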
null
CC BY-SA 4.0
null
2023-05-12T15:13:27.463
2023-05-12T15:13:27.463
null
null
387823
null
615694
2
null
615691
1
null
Yes, it is very likely. The evaluations for each employee are likely to be correlated, and if the same supervisor writes multiple evaluations, those are likely correlated. Each evaluation is in a cluster with the other evaluations for the same employee, and in a cluster with the other evaluations by the same supervisor. Note that we call measurements "clustered" when we think they are likely to be informative about other measurements that share a common characteristic. For example, if five different supervisors give an employee a very high rating, we can do better than a random guess on how supervisor 6 will rate an employee (all our evidence suggests that this employee should get a high rating). We know even more if we are aware that supervisor 6 tends to be either lenient or harsh with their evaluations. A test that does not account for these clusters will likely underestimate the variance and thus is more likely to lead to an incorrect test conclusion. If the evaluations are anonymous though, there is not much you can do about it.
null
CC BY-SA 4.0
null
2023-05-12T15:14:56.103
2023-05-12T15:14:56.103
null
null
288048
null
615695
2
null
349828
0
null
One important piece of information is probably this statement from the text: "The misclassified instances are assigned a greater weight when computing information gain." It simply means that these instances have a greater contribution when computing the model loss. So the gradients you compute to modify the model weights are more influenced by these instances, so that the resulting model can classify them correctly. In other words, the loss computation is no longer a plain average of individual losses but a weighted average of individual losses - the model receives a greater punishment for misclassifying some instances than others.
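As a toy numerical sketch of that last sentence (the losses and weights below are made up and not tied to any particular boosting library):
```r
loss_i <- c(0.2, 0.1, 1.5, 0.3)   # per-instance losses; instance 3 was badly misclassified
w <- c(1, 1, 3, 1)                # the misclassified instance is given a larger weight
w <- w / sum(w)

mean(loss_i)                      # plain average of the individual losses
sum(w * loss_i)                   # weighted average: instance 3 now contributes much more
```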
null
CC BY-SA 4.0
null
2023-05-12T15:18:42.980
2023-05-12T15:18:42.980
null
null
387830
null
615696
1
null
null
2
35
I have this kind of equation that I'd like to estimate using R: $t_i = \alpha_1 f(x_{1i}, y_{1i})+\alpha_2 f(x_{2i},y_{2i})+\epsilon_i$, with $t_i \in [0,1]$. $\alpha_1$ and $\alpha_2$ are known. I observe $x_{1i}, x_{2i}, y_{1i}$ and $y_{2i}$. I'm not sure of the best way to estimate this model. I thought about generalized additive models, but I have two problems: the two estimated $f$'s will be different and I want them to be exactly the same, and I also want $f$ to be multivariate. Maybe there's a way to add constraints to GAM regressions in R to force the functions to be the same, but I don't know how to do that. I also thought of using an answer similar to the one given on [this topic](https://stackoverflow.com/questions/40752550/defining-gam-function-type), but I'm not sure that this solution is suited to estimating a multivariate function. If anyone has an idea or some code that might help, I would very much appreciate it. EDIT: $t_i$ is between 0 and 1 as it is a proportion by definition. The $\alpha_i$ are weights and sum to 1. And $f(x_{ji}, y_{ji}) \in [0,1]$ by definition; it represents a probability: $f(x_{ji}, y_{ji})=\mathbb{P}(\nu_i>k+g(x_{ji}, y_{ji}))$ where $k$ is an unknown constant not depending on $i$, $\nu_i$ can be assumed to follow a normal distribution, and $g(x_{ji}, y_{ji})$ is unknown. $x_i$ and $y_i$ are nonnegative by definition. $g$ itself does not depend on $i$ or $j$ (only its arguments do).
How to use generalized additive models where all the functions to estimate are the same?
CC BY-SA 4.0
null
2023-05-12T15:24:43.020
2023-05-12T17:45:27.837
2023-05-12T17:45:27.837
387831
387831
[ "r", "regression", "multivariate-analysis", "generalized-additive-model", "splines" ]
615697
2
null
615657
2
null
You have to separate out some different types of "uncertainty" here. The models you fit take the form: $$\log(T)\sim \beta_0 + W, $$ where $\beta_0$ is your `fit$coef` and $W$ represents a standard minimum extreme value distribution. From the perspective of individuals modeled this way, the distribution of $W$ represents a major source of uncertainty. Even if you know $\beta_0$ exactly, the event times among individuals will have a wide distribution, in this case following (in the log scale of time) a standard minimum extreme value distribution. The next source of uncertainty is in your estimate of $\beta_0$. Under the theory of fitting such a model via maximum likelihood, the estimate of $\beta_0$ has an asymptotically normal distribution. In this situation with only one coefficient to estimate, that's just a simple case of the more general multivariate normal distribution of multiple coefficient estimates. You can get that asymptotic normal estimate directly from the first exponential `fit` to the `lung` data set similarly to how you would with more complicated models ``` fit$icoef # (Intercept) # 6.044474 sqrt(vcov(fit)) # (Intercept) # (Intercept) 0.07784989 ``` and sample in this situation from the corresponding one-dimensional normal distribution. That sampling should be done in this scale of coefficient estimates before you do any transformation to exponential rate values. The data themselves thus provide this estimate of uncertainty in modeling. Your subjective choice of a standard deviation value is unnecessary. Bootstrapping provides a different estimate of uncertainty in modeling, by repeating the modeling on multiple bootstrap samples of the full original data set. Among other things, that can provide a check on how well the assumption of asymptotic normality of the original coefficient estimates holds. Ideally, the distribution of coefficient estimates among fits to bootstrap samples should be similar to the normal distribution estimated in the original model. Bootstrapping also can be used to estimate the "optimism" in coefficient estimates due to overfitting and to generate optimism-corrected calibration curves. See the `validate()` and `calibrate()` functions of the [rms package](https://cran.r-project.org/package=rms) in R. If your ultimate interest is in the uncertainty of event times among individuals, however, then you also must consider the fundamental variability imposed by the underlying minimum extreme value distribution. In practice, that typically overwhelms the variability in the estimates of $\beta_0$. Here's an example of how little variability in a coefficient estimate can matter. Here are the distributions of 300 individual log-survival times drawn from each of the following exponential distributions: at the point estimate of your rate, and at rates equivalent to its upper and lower 95% limits (see note *** below). [](https://i.stack.imgur.com/rZrnM.png) You could make the same point analytically, but this has the advantage of also displaying the sampling variability given the specified distributions. The differences associated with the error in the coefficient estimates are essentially lost among the overall widths of the distributions. *** These simulations, as illustrated in this image and per the code below, are of event times, not model parameters. 
Technically, this is not sampling directly from a standard minimum extreme value distribution $W$; this example takes advantage of the simplicity of the exponential model (with scale $σ$ in the term $σW$ fixed at 1) to sample directly from an exponential survival distribution with a fixed rate parameter. This following link shows a correct way to sample from a minimum extreme value distribution for this general type of parametric survival model: [Simulate a Weibull regression model](https://stats.stackexchange.com/questions/591943/simulate-a-weibull-regression-model) Code: ``` set.seed(203) Point_Est <- rexp(300, rate=1/exp(6.044474)) LCL_Est <- rexp(300, rate=1/exp(6.044474+1.96*0.07784989)) UCL_Est <- rexp(300, rate=1/exp(6.044474-1.96*0.07784989)) plot(density(log(UCL_Est)), col="red", bty="n", xlab="log(Survival Time)", ylab="density", main="Survival time distributions") lines(density(log(Point_Est)), col="black") lines(density(log(LCL_Est)) ,col="blue") legend("topleft", bty="n", legend = "Black, point estimate\nRed, 95% upper limit for rate\nBlue, 95% lower limit for rate") ```
null
CC BY-SA 4.0
null
2023-05-12T15:42:07.393
2023-05-17T18:17:00.617
2023-05-17T18:17:00.617
378347
28500
null
615698
1
null
null
2
19
What is the recommended approach for selecting lag values in a univariate time series forecasting problem, specifically for input variables in a feedforward neural network (FFNN)? In my research project, I am analyzing monthly electricity consumption data using data mining techniques, and I obtained lag values based on ACF values and cut-off lags. However, all the Pearson and Spearman correlations of the original data set and each and every lag value came out to be significant, and I am unsure of how many lag values to use as input variables. What should be the best course of action in such a situation?
Selecting optimal lag values for Neural Network in univariate time series forecasting - How many lags to use as input variables?
CC BY-SA 4.0
null
2023-05-12T15:43:23.923
2023-05-12T15:43:23.923
null
null
384755
[ "machine-learning", "time-series", "forecasting", "lags", "univariate" ]
615699
1
null
null
0
16
I have data that is the result of measurements $f(x)$ at points $x$. These measurements fluctuate (call it noise), also differently for each $x$ so we have that $\sigma(x)$ are the fluctuations. By measuring several times at each $x$ I can estimate the fluctuations as well as $f$ (which would be the average). So then I have something that looks like this: [](https://i.stack.imgur.com/aQWUG.png) Now my goal is, given one measurement $f$, to tell which value of $x$ produced such measurement, which is probably a typical problem in many disciplines. Due to the shape of $f$ and the fluctuations, then there will be regions in which there will be few doubts about the value of $x$ and regions in which many values of $x$ could be possible, so there is a "prediction power", I hope the following drawing explains what I mean: [](https://i.stack.imgur.com/W3cjt.png) I want to quantify this. What are typical measures for this? Intuitively I think of something like this: If $f(x)$ was sampled at discrete points $x_1,x_2,...$ with equal spacing, i.e. $x_i-x_j=\Delta x ~ \forall ~ i,j$, then a possible measure of this "prediction power" at point $x_i$ would be $\frac{\Delta f_i}{\sigma_i}$. Now, since this is probably a super common problem, I would guess there are already better measures of this (or similar).
How to quantify prediction power?
CC BY-SA 4.0
null
2023-05-12T16:03:03.003
2023-05-12T16:03:03.003
null
null
313385
[ "regression", "noise" ]
615700
2
null
615523
2
null
It seems like you might benefit immensely from scaling your outcomes to put them on more even footing. This way, the large values in one variable do not dominate the work. This is completely legitimate, as it is equivalent to a unit conversion (such as meters to kilometers). Two ideas come to mind (a short sketch of both follows below). - Force all outcomes to be in the interval $[0,1]$ by subtracting the minimum value and then dividing by the range. You can then apply the inverse transformation to predictions on this scale to get predictions on the original scale. If you have holdout data, you would apply the same transformation you used on the training data, not the transformation based on the minimum and range of the holdout data. - Standardize each variable by subtracting its mean and then dividing by its standard deviation. This forces the transformed variables to all have means of zero and standard deviations of one. If you have holdout data, you would apply the same transformation you used on the training data, not the transformation based on the mean and standard deviation of the holdout data. After applying transformations such as these, you may find the usual loss functions to work much better. Neither of these is assured to work, but they might, and I hope they spark some ideas if they do not.
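Here is a minimal R sketch of both ideas with made-up numbers, including the point about reusing the training-set statistics on holdout data:
```r
train <- c(3, 10, 250, 7, 42)    # hypothetical outcome values
test  <- c(5, 300)               # hypothetical holdout values

# 1. Scale to [0, 1] using the *training* minimum and range
rng <- range(train)
scale01   <- function(z) (z - rng[1]) / diff(rng)
unscale01 <- function(z) z * diff(rng) + rng[1]
scale01(train)
scale01(test)                    # holdout values may fall outside [0, 1]; that is expected
unscale01(scale01(train))        # the inverse transformation recovers the original scale

# 2. Standardize using the *training* mean and standard deviation
m <- mean(train); s <- sd(train)
(train - m) / s
(test - m) / s
```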
null
CC BY-SA 4.0
null
2023-05-12T16:31:20.673
2023-05-12T18:44:26.683
2023-05-12T18:44:26.683
247274
247274
null
615701
2
null
577710
0
null
AIC/BIC take parsimony into account. RMSEA doesn't. You don't tell us about the chi-square and df (which would help) but in this situation I'd use AIC/BIC.
null
CC BY-SA 4.0
null
2023-05-12T16:46:47.970
2023-05-12T16:46:47.970
null
null
17072
null
615703
1
null
null
0
9
I have two sets of population estimates, one for a population that is eligible for a program and one for a population that is not. I don't have specific observations from these populations, just estimates of population characteristics. For example, in the eligible=0 population, I have the proportion belonging to several age groups, the proportion belonging to four different race categories, and the proportion metro/non-metro, all adding up to 1. To help target outreach, I'd like to model eligibility based on the actual values of those characteristics. For each real observation, we will have race, age and metro/non-metro. Ideally I could get the probability of eligibility based on those characteristics, but other outputs are acceptable as well. Is this possible? Any advice on where to start?
Modeling binary outcome based on population characteristics
CC BY-SA 4.0
null
2023-05-12T17:00:07.327
2023-05-12T17:00:07.327
null
null
387838
[ "multiple-regression" ]
615704
1
null
null
0
9
Can someone explain what the term I(y=k) stands for in the equation for p_mk? [](https://i.stack.imgur.com/vbN5K.png)
Classification criteria equation of decision trees
CC BY-SA 4.0
null
2023-05-12T17:16:36.363
2023-05-12T17:22:19.643
null
null
377767
[ "scikit-learn", "decision" ]
615705
1
null
null
2
20
I was reading [Reverse Engineering a Neural Network's Clever Solution to Binary Addition](https://cprimozic.net/blog/reverse-engineering-a-small-neural-network/). Without repeating the whole article, it seems the network figured out addition was already a part of its toolset (maybe the addition in x*W + B?), so instead of coming up with a logic gate-based adder like a human might, it "cheated". For some reason, I immediately thought of ReLU activation when reading this. We know that ReLU and its cousins (SwiGLU, ELU, GELU) work better in practice than sigmoid/tanh. The vanishing gradient problem in sigmoid/tanh is often said to be the reason, and I'm mostly satisfied with that answer, but it's still not completely intuitive to me why ReLU works better. Is it right to think of ReLU as adding conditional statements to the network's toolset? With ReLU, the network can do boolean operations. For example, it's easy to come up with weights such that C = ReLU(Linear(A, B)) will reliably activate iff A and B both activate. We've effectively given the network the tools to build an AND gate when it needs one. Whereas with sigmoid/tanh something like this would be difficult due to the overall smoothness. I know this is a bit handwavy, but I find it much easier to work with ideas when they "fit" in my head perfectly, so I'd love to get an intuitive explanation for ReLU superiority.
With ReLU activation, are we adding if conditions to a neural network's toolset?
CC BY-SA 4.0
null
2023-05-12T17:19:37.043
2023-05-12T17:21:07.560
2023-05-12T17:21:07.560
387837
387837
[ "machine-learning", "activation-function" ]
615706
2
null
615704
1
null
Here, $I$ represents an indicator function, which takes a value of 1 when its condition is true, and 0 otherwise. In this case, $I(y=k)$ returns 1 when $y$ is equal to $k$, and 0 otherwise. The formula therefore just counts up the number of instances whose class value is equal to $k$, and divides by the total number of instances to get a proportion.
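In code, the whole expression for $p_{mk}$ is just a class proportion. For instance, in R (a generic illustration, not scikit-learn's actual implementation):
```r
y <- c("a", "b", "a", "c", "a")   # class labels of the instances that reach node m
k <- "a"
mean(y == k)                      # (y == k) is the indicator I(y = k); its mean is the proportion, here 0.6
```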
null
CC BY-SA 4.0
null
2023-05-12T17:22:19.643
2023-05-12T17:22:19.643
null
null
76825
null
615707
1
null
null
0
20
I want to fit an ARIMA(1,1,0)(0,1,1)[12] with drift, with an additionnal MA at lag 3 as when I have fitted an ARIMA(1,1,0)(0,1,1)[12] with drift model, I have seen there was still autocorrelation in my residuals at lag 3. Therefore I am trying to add an additional MA or AR term at lag 3. However, the arima function in R does not allow me to do this in a straightforward manner. Indeed, I have tried the following, without success : ``` # set initial values init <- rep(NA, 4) # specify that you want an AR(1) term, an MA(1) term at lag 12, # an MA(1) term at lag 3, and a constant (drift) init[c(1, 2, 3, 4)] <- c(0.5, -0.5, -0.5, 0) eq1cbis <- arima(prod_periode_1, order=c(1,1,0), seasonal=c(0,1,1), fixed=init, include.mean=T) eq1cbis ``` The init vector is defined as init <- rep(NA, 4). Then, initial estimates for these parameters are set using: init[c(1, 2, 3, 4)] <- c(0.5, -0.5, -0.5, 0) This sets initial estimates for the AR(1) term, seasonal MA(1) term, additional MA(1) term at lag 3, and the drift respectively. These are just initial estimates for the optimization routine used to fit the ARIMA model. The actual estimates of these parameters after the model is fitted may be different (setting initial estimates can sometimes help the optimization routine converge to a solution, especially in complex models, which is not the case here though). If we put NA for an element in the init vector, it means we are asking the arima() function to estimate this parameter. If we put a numeric value, we are asking the function to fix the parameter at this value. In my case, I'm asking the function to estimate all parameters because I haven't set any of the elements in the init vector to a numeric value. However, this does not work as I get the following message of error from R : ``` Error in arima(prod_periode_1, order = c(1, 1, 0), seasonal = c(0, 1, : longueur de 'fixed' incorrecte ``` This is because the ARIMA(1,1,0)(0,1,1)[12] model with drift, as specified by order=c(1,1,0) and seasonal=c(0,1,1), only has three parameters: one for the AR(1) term, one for the seasonal MA(1) term, and one for the drift (or mean). This is why I'm getting an error when I try to pass a fixed vector of length 4. So it seems the arima() function in R doesn't allow adding a specific MA term at a certain lag in a straightforward manner, and neither does the Arima() function from package forecast. The standard ARIMA model only has parameters for the p most recent AR terms and the q most recent MA terms. If I want an MA term at lag 3 specifically, I would have to specify order=c(1,1,3) for an ARIMA(1,1,3) model, but this model would also include MA terms at lags 1 and 2, which isn't what I want... How could I fit the model I want on R ? Here is the structure of my dataset for reproducibility : ``` structure(c(22951.429, 21465.026, 19334.531, 19319.365, 19923.664, 21275.248), tsp = c(1981, 1981.41666666667, 12), class = "ts") ``` Thanks for any help !
what R function can be used to fit an additional MA term at lag 3 to my ARIMA model?
CC BY-SA 4.0
0
2023-05-12T17:37:38.303
2023-05-12T17:37:38.303
null
null
364061
[ "r", "time-series", "arima", "univariate" ]
615708
2
null
615531
2
null
You have not told us what you consider a valid kernel, so I can't comment on that. But I can give you a reason why one likes to consider positive semi-definite kernels and not just positive definite ones. Just consider the following kernel: $$ K:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\quad,\quad K(x,y)=xy.$$ This may be just the canonical inner product on $\mathbb{R}$, but it defines a decent real reproducing kernel Hilbert space (RKHS) and $K$ is its reproducing kernel. The kernel even has a name: the linear kernel. But notice that the kernel matrix for the points $x=1$ and $y=2$ is $$ \begin{pmatrix} 1 & 2\\2 & 4\end{pmatrix},$$ hence only positive semi-definite (its determinant is zero), not positive definite. This is no accident. Consider the canonical feature map of this RKHS. This is the map which associates with a point $x\in\mathbb{R}$ the function (= vector in the RKHS) $k_x$ with the property that $k_x(z)=K(x,z)=xz.$ It is not difficult to see that for any two points $x,y$ the respective functions $k_x$ and $k_y$ are scalar multiples of each other. Since the canonical features span the RKHS, you can conclude that the feature space is one-dimensional, and in a one-dimensional space each Gram matrix (= kernel matrix) has at most rank one. This generalizes to feature spaces of arbitrary but finite dimension, where the kernel is defined on an infinite set. If your feature space is $n$-dimensional, you can pick $n+1$ points; the corresponding feature vectors will be linearly dependent and their Gram/kernel matrix only semi-definite. To conclude, insisting on positive definite kernels would exclude finite-dimensional RKHSs, which does not make a lot of sense. And of course, if you want all kernel matrices on an infinite domain of definition to be invertible, you need a kernel with an infinite-dimensional feature space.
null
CC BY-SA 4.0
null
2023-05-12T17:46:59.010
2023-05-12T17:46:59.010
null
null
8298
null
615710
2
null
615636
1
null
If you think about one experiment as one control and one treatment that you compare, and each replicate as running the experiment once, you have 10 replicates. I think this is what he means.
null
CC BY-SA 4.0
null
2023-05-12T18:18:44.580
2023-05-12T18:18:44.580
null
null
288048
null
615711
1
615719
null
3
60
I used [emmeans](https://cran.r-project.org/package=emmeans) functions (with help from this site) to obtain pairwise comparisons for different levels of variables in a model with interactions. [Interpreting an interaction term in the context of study](https://stats.stackexchange.com/questions/614669/interpreting-an-interaction-term-in-the-context-of-study) The model `int` was estimated with `lrm()` from the [rms package](https://cran.r-project.org/package=rms). The plot below was plotted with `emmip(int, ses~fam, type = "response", CIs = T, levels = 0.95)` and I also tested `type = "linear.predictor"`. These two options returned exactly the same plot. [](https://i.stack.imgur.com/yFyBh.png) Using the emmip function with response mode to plot the effects, I expected to get the probabilities for the levels of the ordinal response, but the y axis has values between -3 and -1. I also printed the plotted means. The means and their contrasts were obtained with `emmeans(model, specs = pairwise ~ ses*fam )` and I also extracted `$contrasts`, adding `%>%confint(adjust = "tukey")`. [](https://i.stack.imgur.com/WDuum.png) I cannot work out what these numbers are and why they are negative. What do they reflect and how do they relate to the pairwise comparisons from emmeans? I expected them to be probabilities for each outcome adding to 1. In response to the comments: So the negative numbers come from the "Latent Variable" that underlies the ordinal regression model. A [vignette for emmeans](https://cran.r-project.org/web/packages/emmeans/vignettes/models.html#O) describes different ways ("modes") to express the results. With respect to `rms` models, it says: > rms models have an additional mode. With mode = "middle" (this is the default), the middle intercept is used, comparable to the default for rms::Predict(). This is quite similar in concept to mode = "latent", where all intercepts are averaged together. The intercepts of my model are: ``` cat1|cat2 2.198193 cat2|cat3 3.158535 ``` Could I ask how these means relate to the contrasts that can be obtained for the combinations of all the levels of ses and family type (from emmeans contrasts)? [](https://i.stack.imgur.com/86HVE.png) For example, in the contrasts there is a significant difference between low ses family a and low ses family c, but the means of the predicted response are not different, as their confidence intervals overlap. And is it possible, and does it make sense, to "remove" the average intercepts from the plot?
Why are emmip( "response") y axis numbers not probabilities for ordinal regression?
CC BY-SA 4.0
null
2023-05-12T18:21:03.887
2023-05-14T13:32:48.253
2023-05-13T18:16:44.540
28500
303717
[ "interaction", "interpretation", "multiple-comparisons", "ordinal-data", "lsmeans" ]
615713
1
null
null
0
19
I have a dataset with many observations with NA's in one variable (almost a third of them). Actually, it is a numerical variable with some values being zero (these zeroes mean 'no data'). My options: - Try to impute: Due to the characteristics of the data, I suspect imputation would be inaccurate and would create too much noise, even with mice or missForest. - Remove observations: A third of my dataset? No way! - Remove the variable: It's a very important variable. I think I can get a lot of information from it. - Convert zeros to NA and let the algorithm handle them. So I'm searching for an algorithm that can handle those NA's, so that I don't have to remove that variable, lose a lot of observations, or introduce noise. My dataset has 80000 observations and some categorical variables have 1000 or even more different values. After reading some different sources, I'm not sure how these two families of algorithms (Random Forest, Gradient Boosting) handle NA's. My idea was that they can handle them, so I can leave some NA's, which they just omit if necessary, but still use the non-NA values of that variable for training the model. So, with caret, I have read that I only must set na.action = na.pass, and no preprocessing (do not specify preProcess, leave it at its default value NULL). Would Random Forest (or ranger) work fine this way? And gradient boosting (or XGBoost)? Thanks :)
NA's in Random Forest and Gradient Boosting
CC BY-SA 4.0
null
2023-05-12T18:26:24.810
2023-05-26T07:45:40.537
2023-05-26T07:45:40.537
178468
381118
[ "r", "random-forest", "missing-data", "boosting" ]
615714
1
null
null
0
22
Dear members of the statistics forum, I am interested in a process that selects genes from a gene pool and divides them into two groups according to the treatment. My objective is to estimate the "real" read count of each gene, in order to identify the most enriched gene group per treatment. My model accounts for an unmeasured underlying effect that influences both treatment and read counts, which is crucial for detecting real enrichment. The data is organized into clusters of gene groups, where each gene is further clustered into sgRNA (specific inhibitors of each gene). I am using a model with various parameters, where X is the variable that indicates 0 for no treatment and 1 for treatment. I would like to seek your advice and suggestions on the model I have developed, which seems to provide results in the right direction. $$ \begin{align} read_count &\sim \text{NB}(\mu,\theta_s)\tag{11} \\ \theta_s &\sim \text{Gamma} (1,\bar{\theta_s} \sim \text{Gamma}(1,1))\\ ln(\mu) &= \alpha + \beta_{gene} + (\alpha_{intercept} + \beta_{gene}) \times X + \lambda_{sgRNA} +( \alpha_{intercept} + \lambda_{sgrna}) \times X\\ \beta_{gene} &\sim \text{MvNormal}(\beta, \Sigma)\\ \beta &= \text{Student-t}(\bar{\nu} = 5, \bar{\mu} = 0, \bar{\sigma} = 1)\\ \Sigma &= lkj(3,\sim Exp(1))\\ \lambda_{sgrna} &\sim \text{MvNormal}(\lambda, \Sigma')\\ \lambda &= \text{Student-t}(\tilde{\nu} = 5, \tilde{\mu} = 0, \tilde{\sigma} = 1)\\ \Sigma' &= lkj(3,\sim Exp(1))\\ \end{align} $$ $\beta$ and $\lambda$ represent the slopes describing the change in read counts between treatment groups (X). Since my variance is much higher than the mean, I am fitting my data with a negative binomial model. To improve detection, I believe that an instrumental variable should be included. The variable read-pool influences sgRNA and gene reads but not the treatment condition. Therefore, read-pool can be a good instrumental variable with covariate $\epsilon$. Here is the model I have tried, which was performing very poorly: ### Model 13 - Student's t (varying effect prior) with error $$ \begin{align} read_count &\sim \text{NB}(\mu,\theta_s)\tag{12} \\ \theta_s &\sim \text{HalfNormal}(0.5) \\ ln(\mu) &\sim \text{MvNormal}(\begin{bmatrix}\mu_{y}\\ \mu_{\epsilon}\ \end{bmatrix}, \Sigma_{y,sgrna})\\ ln(\mu_{y}) &= \alpha + \beta_{gene} + (\alpha_{intercept} + \beta_{gene}) \times X + ( \alpha_{intercept} + \lambda_{sgrna}) \times X \\ ln(\mu_{\epsilon}) &= \lambda_{sgRNA} + (\epsilon_{sgrna} + \epsilon_{sgrna}) \times Pool\\ \epsilon_{gene} &= \kappa_{gene} + \kappa_{gene} \times Pool \\ \beta_{gene} &\sim \text{MvNormal}(\beta, \Sigma)\\ \beta &= \text{Student-t}(\nu = 5, \bar{\mu} = 0, \bar{\sigma_{\beta}} = 1)\\ \Sigma &= lkj(3,\sim Exp(1))\\ \lambda_{sgrna} &\sim \text{MvNormal}(\lambda, \Sigma') \\ \lambda &= \text{Student-t}(\nu = 5, \tilde{\mu} = 0, \tilde{\sigma_{\lambda}} = 1)\\ \Sigma' &= lkj(3,\sim Exp(1))\\ \end{align} $$ I would like to mention that I have generated synthetic data and am using the PyMC library for fitting the Bayesian hierarchical model. I am seeking advice on the model I have developed and any suggestions for potential improvements, considering the nature of the data and the use of PyMC for fitting the model.
Seeking advice on a Bayesian hierarchical model for count data with overdispersion and instrumental variable:
CC BY-SA 4.0
null
2023-05-12T18:26:30.173
2023-05-12T18:26:30.173
null
null
387839
[ "bayesian", "glmm", "instrumental-variables", "overdispersion" ]
615716
2
null
615331
0
null
Thank you very much for this complete answer @BenBolker. I guess things do get messy with my data because my responses are non-Gaussian. Your example has quite the design I have (just simpler). Because my responses are abundance counts I am using a negative binomial distribution to model abundances (negative binomial was a better fit than poison because of overdispersion). I made a few tests and indeed, if I use the Gaussian distribution I get the exact same Wald statistics and p-values for the mixed model (but not always) and the one adding up abundances from multiple samples of the same pond. But for the negative binomial distribution, it is not the same. In this case, p-values are smaller for the mixed model approach. See this example with simulated data: ``` > dd <- expand.grid(pond = factor(1:10), + sample = factor(1:4)) > dd$treat <- ifelse(as.numeric(dd$pond) <= 5, "C", "T") > > set.seed(2) > s1_C <- rnbinom(n = 5, mu = 140, size = 50) > s1_T <- rnbinom(n = 5, mu = 130, size = 50) > s2_C <- rnbinom(n = 5, mu = 140, size = 50) > s2_T <- rnbinom(n = 5, mu = 130, size = 50) > s3_C <- rnbinom(n = 5, mu = 140, size = 50) > s3_T <- rnbinom(n = 5, mu = 130, size = 50) > s4_C <- rnbinom(n = 5, mu = 140, size = 50) > s4_T <- rnbinom(n = 5, mu = 130, size = 50) > > dd$y <- c(s1_C, s1_T, s2_C, s2_T, s3_C, s3_T, s4_C, s4_T) > > dd_summaryzed <- aggregate(dd, y ~ pond, FUN = sum) > dd_summaryzed <- merge(dd_summaryzed, unique(dd[c("pond", "treat")])) > > library(glmmTMB) > library(car) > > mod_gaussian <- glmmTMB(y ~ treat + (1|pond), data = dd, family = "gaussian") > Anova(mod_gaussian) Analysis of Deviance Table (Type II Wald chisquare tests) Response: y Chisq Df Pr(>Chisq) treat 0.5421 1 0.4616 > > mod_gaussian_sum <- glmmTMB(y ~ treat, data = dd_summaryzed, family = "gaussian") > Anova(mod_gaussian_sum) Analysis of Deviance Table (Type II Wald chisquare tests) Response: y Chisq Df Pr(>Chisq) treat 0.5421 1 0.4616 > > mod_negbin <- glmmTMB(y ~ treat + (1|pond), data = dd, family = "nbinom1") > Anova(mod_negbin) Analysis of Deviance Table (Type II Wald chisquare tests) Response: y Chisq Df Pr(>Chisq) treat 0.48 1 0.4884 > > mod_negbin_sum <- glmmTMB(y ~ treat, data = dd_summaryzed, family = "nbinom1") > Anova(mod_negbin_sum) Analysis of Deviance Table (Type II Wald chisquare tests) Response: y Chisq Df Pr(>Chisq) treat 0.6612 1 0.4161 ``` Now an example with real data: ``` > Abundances <- rowSums(com_SS4[isolation_SS2_3_4 == "120",]) > Abundances_summarized <- tapply(Abundances, INDEX = as.character(ID_SS2_3_4[isolation_SS2_3_4 == "120"]), FUN = sum) > > data <- data.frame(ID = ID_SS2_3_4[isolation_SS2_3_4 == "120"], treatments = treatments_SS2_3_4[isolation_SS2_3_4 == "120"], ab = Abundances) > data_summaryzed <- data.frame(ID = ID_SS1[isolation_SS1 == "120"], treatments = treatments_SS1[isolation_SS1 == "120"], ab = Abundances_summarized) > > library(glmmTMB) > library(car) > > mod1 <- glmmTMB(ab ~ treatments + (1|ID), data = data, family = "gaussian") > Anova(mod1) Analysis of Deviance Table (Type II Wald chisquare tests) Response: ab Chisq Df Pr(>Chisq) treatments 4.9448 2 0.08438 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > > mod2 <- glmmTMB(ab ~ treatments, data = data_summaryzed, family = "gaussian") > Anova(mod2) Analysis of Deviance Table (Type II Wald chisquare tests) Response: ab Chisq Df Pr(>Chisq) treatments 4.9448 2 0.08438 . --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > > mod1 <- glmmTMB(ab ~ treatments + (1|ID), data = data, family = "nbinom1") > Anova(mod1) Analysis of Deviance Table (Type II Wald chisquare tests) Response: ab Chisq Df Pr(>Chisq) treatments 5.4888 2 0.06429 . --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 > > mod2 <- glmmTMB(ab ~ treatments, data = data_summaryzed, family = "nbinom1") > Anova(mod2) Analysis of Deviance Table (Type II Wald chisquare tests) Response: ab Chisq Df Pr(>Chisq) treatments 4.3773 2 0.1121 ``` Note that in this case, p-values are 'almost' significant only for the mixed model. Indeed, if run all of my analyses without the mixed-model approach I lose a lot of significant results. Maybe I do have some variation among samples of the same pond. For instance, some taxa may be more prone to be sampled in the first samples while others in the last ones.
null
CC BY-SA 4.0
null
2023-05-12T19:10:20.640
2023-05-12T19:10:20.640
null
null
171805
null
615718
1
null
null
1
42
I am having trouble understanding how to answer the following question and what method to use. The final answer is $415800$ > Suppose there are 4 committees A, B, C, and D. 11 candidates are randomly assigned to these 4 committees. Each candidate can only be assigned to 1 committee. In how many ways can we randomly assign the 11 candidates to these four committees such that one committee consists of 1 member, one committee consists of 4 members, another committee consists of 4 members, and another committee consists of 2 members? Now if I knew how many candidates were needed for each of the $A,B,C,D$ committees (e.g., 1, 4, 4, 2 candidates for $A,B,C$, and $D$, respectively), the number of ways would be $$11 \cdot {10\choose 4} \cdot {6\choose 4} \cdot {2\choose 2} = 34650$$ But how do I deal with the fact that the committee sizes are not determined?
Question on permutation and combination
CC BY-SA 4.0
null
2023-05-12T20:14:15.907
2023-05-18T19:57:49.633
2023-05-12T20:35:38.253
384861
384861
[ "probability", "combinatorics", "permutation" ]
615719
2
null
615711
2
null
There are several ways to report the estimated outcomes from an ordinal regression model. With the `emmeans` package you can choose among them with a `mode` argument submitted to `emmeans()` or `ref_grid()`. Latent variable According to a [vignette page](https://cran.r-project.org/web/packages/emmeans/vignettes/models.html#O), the default is `mode = "latent"`, and > With mode = "latent", the reference-grid predictions are made on the scale of the latent variable implied by the model. The scale and location of this latent variable are arbitrary... The results you show are based on that underlying default ["latent variable" model](https://stats.oarc.ucla.edu/r/dae/ordinal-logistic-regression/), an assumed continuous response (arbitrarily centered and scaled) that is converted to ordered categories as its value passes associated thresholds. The idea is illustrated in Figure 6.4 of these [course notes](https://grodri.github.io/glms/notes/c6s5). That "latent variable" model is fundamental and can be used to obtain other types of outcome estimates. As you discovered, the "latent variable" for a scenario here is just the linear predictor $X' \beta$, where $X$ is the vector of predictor values and $\beta$ is the corresponding vector of coefficient estimates. When you ask `emmeans` for estimates of marginal means, it takes the grid of values of variables that you specify (in this case, the combinations of the predictors `ses` and `fam`) and returns the model's estimates for those values (by default, at the average values of variables that you don't specify). As the vignette notes, however, "The scale and location of this latent variable are arbitrary." In the parameterization used by the `lrm()` and `orm()` functions in the [rms package](https://cran.r-project.org/package=rms), the linear predictor is associated with the class probabilities: $$\Pr(Y \ge j|X)=\frac{1}{1+\exp\left(-(\alpha_j + X' \beta)\right)} $$ where you have classes of outcome $Y$ labeled with increasing values of $j$ and the $\alpha_j$ are the corresponding intercepts estimated by the model. Clearly, you can shift the linear predictor $X' \beta$ by any offset and get the same outcome probability estimates if you make corresponding changes to the values of the $\alpha_j$ intercepts. From that perspective, you can display the latent-variable estimates with any offset shift that you want. The `mode="middle"` choice will tend to center the estimates around 0, if you want to display the results in terms of the latent variable. It doesn't matter that the model estimates you displayed have negative values. If you ask for probability estimates, the corresponding values of $\alpha_j$ (your two intercept estimates) will put everything together properly. Probability estimates If you want outcome-class probabilities, you need to specify a different `mode`. There is a `mode="prob"` argument that can report class probability estimates for supported model types. There's a "gotcha" with that mode, however, explained in the section of the vignette on [multinomial responses](https://cran.r-project.org/web/packages/emmeans/vignettes/models.html#N), of which ordinal models are a special case: > Please note that, because the probabilities sum to 1 (and the latent values sum to 0) over the multivariate-response levels, all sensible results from emmeans() must involve that response as one of the factors. 
For example, if resp is a response with k levels, emmeans(model, ~ resp | trt) will yield the estimated multinomial distribution for each trt; but emmeans(model, ~ trt) will just yield the average probability of 1/k for each trt. If you want to display class probabilities, you need to include the outcome among the variables that you specify to `emmeans()`. For example, to get a plot of outcome-class probabilities as a function of both `ses` and `fam`, you could specify: ``` emmip(int, outcome ~ fam|ses, mode = "prob", CIs = TRUE) ``` That will display estimates of outcome probabilities as a function of `fam`, in a separate facet for each `ses` level. The outcome probabilities for each combination of `fam` and `ses` will sum to 1, as expected. A warning about implementation: in the older version of `emmeans` I'm using on this computer (emmeans_1.6.2-1), the `mode="prob"` argument didn't seem to work on an `lrm()` model, while it worked OK with an `orm()` model (which provided the same coefficient estimates). I don't know if that's still the case with newer versions. Contrasts and statistical significance A useful estimate of the "significance" of your predictors with respect to outcome includes all the terms involving each predictor. If you use a model generated by the `rms` package, the `anova()` function applied to the model provides a very useful display of overall significance and the significance of interaction and nonlinear terms in the model, via Wald tests. At the other extreme of complexity, the pairwise comparisons among all predictor combinations take into account the corresponding coefficient estimates $\beta$, their variances, and their covariances. It applies the formula for the [variance of a weighted sum of variables](https://en.wikipedia.org/wiki/Variance#Weighted_sum_of_variables) to determine if any of those coefficient combinations differs significantly from 0, with multiple-comparison corrections as specified. Of course, the correction for multiple comparisons becomes more restrictive as the number of comparisons increases. If there are particular comparisons of major interest, it can be better to restrict your analysis to those rather than evaluating all pairwise comparisons. For those pairwise differences, any constant offset shift in the linear predictors doesn't matter, as the offset will cancel out when you take the differences. That's another reason not to worry about the negative values displayed for the latent variables. The point estimates of both of the contrasts that you show agree with the differences of the corresponding `emmean` values in the table you show. The standard error of `low fam_a - low fam_b`, however, is more than twice as large as for `low fam_a - low fam_c`, presumably because of a small number of cases in `low fam_b`. Although the point estimate of `low fam_a - low fam_b` is larger than for `low fam_a - low fam_c`, the larger standard error means that the first contrast isn't significant at $p < 0.05$ while the second is. A final warning: overlapping confidence intervals can be consistent with a significant difference between two scenarios. The [emmeans FAQ](https://cran.r-project.org/web/packages/emmeans/vignettes/FAQs.html#CIerror) emphasizes that point. [This answer](https://stats.stackexchange.com/a/18259/28500) goes into detail with respect to t-tests in simple scenarios, in which non-overlap of 95% CI is approximately equivalent to $p < 0.005$ for the difference between means.
null
CC BY-SA 4.0
null
2023-05-12T20:29:20.217
2023-05-14T13:32:48.253
2023-05-14T13:32:48.253
28500
28500
null
615720
1
null
null
0
14
I have a task where I need to perform ROC analysis and measure the AUC of the ROC curve, but the data is image data. I have pairs of images (which contain real-valued pixels) and masks (which contain 0 or 1, if the pixel is negative or positive, respectively). I have measured AUC using standard tools (sklearn); however, I am aware that there are biases in this measurement because: 1- I have no notion of uncertainty around the AUC number. 2- I know that neighbouring pixels are correlated because of spatial proximity. The ROC-AUC metric expects the score/label pairs to be independent from each other, so I need to account for the fact that they aren't, which could be introducing a bias to the AUC I am measuring. However, I have little idea of how to proceed, and I am unable to find the proper keywords - for some reason, searching with what I described isn't yielding much progress. --- Here are my attempts. Please tell me if I am heading in the right direction: 1- For this problem, I may be able to use bootstrapping to get a notion of uncertainty around the AUC. But how do I choose the number of bootstrap samples? 2- I have read that you can account for autocorrelations by modelling the behaviour with a regression predicting the score given a set of neighbouring voxels, then applying this regressor to the entire data and considering the predicted scores to be independent of each other. Does this make any sense? --- What am I missing here? Cheers!!
How to measure uncertainty and account for spatial bias when conducting ROC analysis on image data?
CC BY-SA 4.0
null
2023-05-12T20:30:11.540
2023-05-12T20:30:11.540
null
null
387846
[ "bias", "roc", "uncertainty", "image-segmentation" ]
615721
2
null
276067
0
null
As others have pointed out, the loss is the negative log of the probability assigned to the correct class, so exp(-loss) = probability of correct classification. For example, losses of -log(.9), -log(.8), -log(.7), -log(.6), and -log(.5), i.e. .11, .22, .36, .51, and .69, correspond to probabilities of correct classification of 90%, 80%, 70%, 60%, 50%. Thinking evaluatively, a random classifier in a balanced classification problem will, on average, make correct predictions a fraction 1/n_classes of the time, so the loss of a random classifier would be -log(1/n_classes). By the log laws, this is equal to log((1/n_classes)^-1) = log(n_classes). For n_classes in [2,10], random classifiers should therefore produce losses in [log(2), log(10)], i.e. (0.69, 1.10, 1.39, 1.61, 1.79, 1.95, 2.08, 2.20, 2.30) respectively, if you're looking for some concrete numerical benchmarks. For unbalanced classification refer to Fed Zee's answer, and for determining the significance of a better-than-random log loss look at significance testing with binomial distributions.
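A tiny sketch that just reproduces the arithmetic behind those benchmarks:
```python
import numpy as np

# log loss of a classifier that assigns probability p to the correct class
for p in (0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"p = {p:.1f}  ->  loss = {-np.log(p):.2f}")

# expected log loss of a uniform-random classifier on a balanced k-class problem
for k in range(2, 11):
    print(f"{k} classes  ->  random-classifier loss = {np.log(k):.2f}")
```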
null
CC BY-SA 4.0
null
2023-05-12T20:42:18.787
2023-05-12T20:42:18.787
null
null
313877
null
615722
1
null
null
2
51
I am doing a meta-analysis where my effect size is the odds ratio. There is a study with multiple odds ratios for different time points. For example: I want the odds ratio for a positive UDS associated with the intake of amphetamines. My end goal is to combine the odds ratios across the various time points into a single odds ratio that best represents the odds of a positive UDS for opioids given any previous or current amphetamine-positive UDS. [](https://i.stack.imgur.com/EY8qp.png)
How to combine multiple odds ratio from different time points
CC BY-SA 4.0
null
2023-05-12T20:46:15.517
2023-05-12T21:11:37.240
2023-05-12T21:11:37.240
387848
387848
[ "meta-analysis", "odds-ratio" ]
615724
1
615963
null
3
113
I've been working to better understand GLS by manually fitting the parameters in R. In the example below I fit the coefs of a GLS from the `nlme` package in R. However, I'm trying now to calculate the standard errors on those coefs. I don't understand how the variance-covariance matrix for a fitted GLS model is calculated (i.e., `vcov(glsModel)` in R). ``` library(nlme) set.seed(3415) n <- 25 x <- rnorm(n = n) B0 <- 10 B1 <- 0.5 phi <- 0.8 epsilon <- arima.sim(model = list(c(1,0,0), ar = phi),n=n, sd=0.5) y <- B0 + B1 * x + epsilon # Fit a gls with ar1 errors fixed to phi so I can # compare results easily... gls1 <- gls(y~x,correlation = corAR1(value = phi,fixed=TRUE)) coef(gls1) ############################ # Fit the GLS model ############################ # 1. Create design matrix X <- cbind(1, x) # equivalent to model.matrix(gls1) # 2. Get the variance-covariance matrix of errors V <- toeplitz(phi^(1:n-1)) * 0.5^2 # 3. Fit GLS model using matrix algebra Sigma <- solve(V) XtX <- t(X) %*% Sigma %*% X Xty <- t(X) %*% Sigma %*% y betas <- solve(XtX) %*% Xty betas # equivalent to coef(gls1) ############################ ## Get std errors ############################ # 1. Get residuals e <- as.matrix(y-betas[1] - betas[2]*x) e # equivalent to resid(gls1) # 2. calc variance-cov mat for the fitted model -- aka vcov(gls1) vcov(gls1) #?? ``` I've tried several approaches to getting `vcov(gls1)` manually but I'm flailing about. Can anybody shed some light?
Manually calculate the variance-covariance matrix for a fitted GLS model -- i.e., vcov(glsModel)
CC BY-SA 4.0
null
2023-05-12T21:05:37.160
2023-05-15T21:43:19.000
null
null
111024
[ "r", "generalized-linear-model", "lme4-nlme", "generalized-least-squares" ]
615725
1
null
null
0
28
I know that the reason we use multi-head attention in transformers instead of only a single head is to attend to different parts of the input instead of just one part. In attention we use the softmax function to assign attention weights to every input token. We can't attend to different parts of the input using one attention head because of the softmax function, but what if we used a sigmoid function instead of softmax? For example: suppose one token in a sentence wants to attend to both the first token and the last one. If we used only single-head attention with softmax, it would give approximately 0.5 attention weight to each of the first and last tokens, and that's not right, so we solve this by using multi-head attention. Can't we also solve that issue by using a sigmoid to give an attention weight of (approximately) 1 to both the first and last tokens?
Why can't we replace the multi-head attention layer in transformers by a single head with sigmoid function?
CC BY-SA 4.0
null
2023-05-12T21:22:01.593
2023-05-12T21:22:01.593
null
null
116480
[ "neural-networks", "transformers", "attention" ]
615726
1
null
null
1
35
I am running vif(model) to identify multicollinearity between predictor variables in the model. Is there a rule of thumb on the gvif value that indicates presence of multicollinear variables?
GVIF in binomial logistic regression
CC BY-SA 4.0
null
2023-05-12T21:34:44.703
2023-05-13T00:57:42.000
2023-05-12T23:26:27.843
11887
387372
[ "r", "logistic", "variance-inflation-factor" ]
615727
1
null
null
2
101
I'm sending a number of DNA test kits to customers each day. The customers swab their cheeks to gather DNA and send back the kit for processing. I have data about each test kit that has been sent over the past two years. The data includes the date it was sent, the state it was sent to, and the date it was received back (if it has been received). We process each kit the day it is returned. I'm trying to forecast the number of kits that will be returned each day, looking forward 60 days, so I can staff accordingly for processing. Each state will process its own tests, so I need to estimate the number of kits to be processed in each state for each of the next 60 days (3000 estimates, updated daily). Many customers return their kits within one week, but some take a few weeks, sometimes up to two months. Some never return their kits at all. (We can treat kits sent more than two months ago as if they will never be returned if it simplifies the problem.) We will update the forecasts each day. Forecasts far out are expected to be less reliable; most of the kits that will be processed in 60 days haven't been sent out or even ordered yet. Forecasts for dates within the next few days should be more reliable, since we know how many kits are outstanding. A confidence interval estimate, e.g. 30-50 kits at 80% confidence in NY on June 1, 2023, would be great, but a point estimate would be helpful too. The data looks like this: ``` Sent,Returned,State 2023-01-02,2023-01-10,CA 2023-01-02,2023-01-15,NY 2023-01-04,NA,CA ``` What's a good way of modelling this problem? I've been looking into using ARIMA on the number of kits returned each day, but I'm not sure how to make the predictions factor in the number sent but not yet returned. I also started looking at Cox survival modelling, but I'm not clear on how to predict the number that would be returned each day. I also thought about using multiple linear regression with 60 columns for the number of kits sent 1 day ago, 2 days ago, etc., but I'm not sure how to deal with the sparseness as we try to predict further and further out.
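As an illustration of that last idea, here is a minimal sketch of building the per-state lag features (kits sent 1-60 days ago) with pandas; the file name is assumed, and this only constructs a design matrix for one state, not the model itself:
```python
import pandas as pd

df = pd.read_csv("kits.csv", parse_dates=["Sent", "Returned"])  # file name assumed

# daily counts of kits sent, per state
sent = (df.groupby(["State", "Sent"]).size().rename("sent").reset_index()
          .pivot(index="Sent", columns="State", values="sent")
          .asfreq("D").fillna(0))

# daily counts of kits returned, per state (the forecasting target)
returned = (df.dropna(subset=["Returned"])
              .groupby(["State", "Returned"]).size().rename("returned").reset_index()
              .pivot(index="Returned", columns="State", values="returned")
              .asfreq("D").fillna(0))

# lagged "sent" predictors for one state, e.g. NY
state = "NY"
X = pd.concat({f"sent_lag_{k}": sent[state].shift(k) for k in range(1, 61)}, axis=1)
y = returned[state].reindex(X.index).fillna(0)
X = X.dropna()
y = y.loc[X.index]
```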
Estimating Number of DNA Tests to Process Each Day
CC BY-SA 4.0
null
2023-05-12T22:15:52.320
2023-05-19T17:14:42.980
2023-05-17T11:57:09.693
22311
3712
[ "survival" ]
615728
1
null
null
1
33
Are there any known statistical methods or laws that can be applied towards the detection of categorical data masquerading as continuous? Categorical data can masquerade (or be "obfuscated" in computer parlance) as continuous by a simple replacement scheme such as: - Original data (categorical): A, B, B, B, A, C - Replacement scheme (mapping): A -> 1.234, B -> -3.1416, C -> 0.0 - Masquerading as continuous it becomes: 1.234, -3.1416, -3.1416, -3.1416, 1.234, 0.0
How to detect categorical data masquerading as continuous?
CC BY-SA 4.0
null
2023-05-12T22:55:10.927
2023-05-13T23:31:37.990
null
null
151135
[ "distributions", "categorical-data", "categorical-encoding", "continuous-data" ]
615729
1
null
null
1
18
1. How can the asymptotic distribution formula below be proved? 2. How is the Ljung-Box test statistic derived? [](https://i.stack.imgur.com/yNmUU.png)
proof of asymptotic distribution of autocorrelation
CC BY-SA 4.0
null
2023-05-12T22:58:49.777
2023-05-12T22:58:49.777
null
null
249098
[ "autocorrelation", "asymptotics" ]
615730
1
null
null
1
35
Is there a way to get an estimate of good scaling parameters (namely mean and variance) for a Gaussian Process kernel serving as a surrogate model to a Reinforcement Learning reward function for optimization? Considering the reward function to be a scalar objective of 3 input variables, each having box constraints i.e. l1<x1<u1, l2<x2<u2, l3<x3<u3 and r= f(x1,x2,x3). Using the following from sklearn: ``` kernel = ConstantKernel(1.0) * RBF(1.0) gauss_pr = GaussianProcessRegressor(kernel) ``` Also, is it advisable to use a Gaussian Process as a surrogate or would another method be more beneficial? Any help would be appreciated. Thanks.
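One way to avoid guessing the kernel scales is to rescale the inputs using the box constraints and let per-dimension length scales be fitted by maximum marginal likelihood within explicit bounds. The sketch below only illustrates that idea; the bound values, the placeholder constraints, and the toy reward data are assumptions, not recommendations.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# placeholder box constraints for (x1, x2, x3); substitute your l1..u3
lower = np.array([0.0, 0.0, 0.0])
upper = np.array([1.0, 10.0, 100.0])

def scale(X):
    # map each input dimension to [0, 1] so one set of length-scale bounds is sensible
    return (X - lower) / (upper - lower)

# one length scale per input dimension, with bounds; the constant kernel absorbs signal variance
kernel = (ConstantKernel(1.0, constant_value_bounds=(1e-3, 1e3))
          * RBF(length_scale=np.ones(3), length_scale_bounds=(1e-2, 1e2)))

gauss_pr = GaussianProcessRegressor(kernel=kernel,
                                    normalize_y=True,         # centres/scales the reward values
                                    n_restarts_optimizer=10,  # hyperparameters fitted by marginal likelihood
                                    random_state=0)

# toy data standing in for (x, reward) pairs collected from the RL loop
rng = np.random.default_rng(0)
X = rng.uniform(lower, upper, size=(30, 3))
r = np.sin(X[:, 0]) + 0.1 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.1, 30)

gauss_pr.fit(scale(X), r)
print(gauss_pr.kernel_)   # fitted signal variance and per-dimension length scales
```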
Best way to fit a Gaussian Process surrogate model to an RL Reward function
CC BY-SA 4.0
null
2023-05-12T23:15:33.093
2023-05-12T23:26:10.020
2023-05-12T23:26:10.020
272157
272157
[ "reinforcement-learning", "gaussian-process", "bayesian-optimization", "surrogate" ]
615731
1
null
null
0
56
In a multiple linear regression, why can most predictors be non-significant ($p > 0.05$) while the overall ANOVA F-test is significant? What could be some of the reasons? Or could it be a calculation error? [](https://i.stack.imgur.com/A7jP0.png) Any help is appreciated. I am trying to explain why the majority of the coefficients are not significant even though the overall regression model is significant.
Multiple linear regression: F-test significant, but most predictors are not
CC BY-SA 4.0
null
2023-05-13T00:26:57.387
2023-05-13T10:20:33.603
2023-05-13T10:20:33.603
169343
387853
[ "regression", "multiple-regression", "regression-coefficients", "linear" ]
615733
2
null
615726
0
null
John Fox talks about the variance inflation factor (VIF) and generalized variance inflation factor (GVIF) in his book (2016). He did not give an explicit rule of thumb but argues that > ... the linear relationship among $X$s must be very strong before collinearity seriously impairs the precision of estimation: It is not until $R_j$ approaches .9 that the precision of estimation is halved (p. 342) Considering that $\text{VIF} = 1/(1-R_j^2)$, we may decide on a cut-off value (but there are other things to consider, see my answer to a related [question](https://stats.stackexchange.com/questions/342161/how-do-i-know-what-my-vif-limits-should-be-for-collinearity-should-be-when-doing/342181#342181)). Now, Fox recommends using $\sqrt{\text{VIF}}$ instead of VIF, because "the precision of estimation of $B_j$ [slope coefficient] is most naturally expressed as the width of the confidence interval for this parameter, and because the width of the confidence interval is proportional to the standard deviation of $B_j$ (not its variance)" (p. 342). In a few pages, Fox discusses GVIF and suggests reporting $\text{GVIF}^\frac{1}{2df}$ where df is the number of coefficients, which is analogous to reporting $\sqrt{\text{VIF}}$ (p. 358), and makes GVIF comparable across dimensions (p.460). By the way, Fox visited Cross Validated to answer a question on GVIF ([here](https://stats.stackexchange.com/questions/70679/which-variance-inflation-factor-should-i-be-using-textgvif-or-textgvif)). So, I would like to argue that whatever cut-off value you decide on for $\sqrt{\text{VIF}}$ applies to $\text{GVIF}^\frac{1}{2df}$. But again, whether that cut-off value is appropriate or not, is open to discussion. --- Fox, John. 2016. Applied Regression Analysis and Generalized Linear Models. 3rd ed. Los Angeles: Sage Publications.
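For reference, a small sketch of how this looks with the `car` package on a toy logistic model: when the model contains a factor (df > 1), `vif()` reports GVIF, Df and GVIF^(1/(2*Df)), and it is that last column that plays the role of sqrt(VIF).
```
library(car)

set.seed(1)
dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200),
                  group = factor(sample(letters[1:3], 200, replace = TRUE)))
dat$y <- rbinom(200, 1, plogis(0.5 * dat$x1 - 0.3 * dat$x2))

fit <- glm(y ~ x1 + x2 + group, family = binomial, data = dat)

vif(fit)
## with a factor in the model, the output has columns GVIF, Df, GVIF^(1/(2*Df));
## compare the last column with whatever threshold you would use for sqrt(VIF)
```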
null
CC BY-SA 4.0
null
2023-05-13T00:57:42.000
2023-05-13T00:57:42.000
null
null
109647
null
615734
1
null
null
1
25
For many generalized linear models (GLMs), the effect of changing a predictor $x_m \in X$ on the predicted mean outcome $E[\hat{y}|X]$ depends on the values of other predictors in the model $x_{n \neq m} \in X$. This is true even when we assume additivity of the linear predictor. My understanding is that this is because the GLM link functions are non-linear transformations of the model's parameters. For example, the probit function changes more rapidly at the extreme of its domain so that $Probit'(0.1)$ > $Probit'(0.5)$, making the effect of a given change in the linear predictor dependent on the value of the linear predictor. Assuming away interactions, am I then correct in understanding that the dependence between the effect of $x_m$ and the values of $x_n$ has nothing to do with the assumed error distribution? It would then follow that, for a GLM with an identity link function and a Poisson error distribution, the effect of $x_m$ is independent of $x_n$ -- and we might call it a linear model in contrast with the typical Poisson model assuming a log link.
Do most GLM predictors' effects depend on other covariates' values because of non-linear link function, error distribution, or both?
CC BY-SA 4.0
null
2023-05-13T01:55:32.053
2023-05-13T01:55:32.053
null
null
120828
[ "regression", "logistic", "multiple-regression", "generalized-linear-model", "linear-model" ]
615735
2
null
615029
0
null
The answer to your question is no, the p-value or the proximity of the estimated effect to the null hypothesis (in this case, I assume it's zero or no effect) are not good metrics for predicting how well a model will perform on new data. The p-value is a measure of the strength of evidence in support of a statistical hypothesis, not a measure of the prediction accuracy of a model. A smaller p-value suggests that we have stronger evidence to reject the null hypothesis, not that our model will necessarily perform better on unseen data. In the same vein, the estimated effect size being closer to zero does not imply that a model will perform better on new data. It simply means that, according to the data you used to estimate the effect size, the effect is close to zero. When you want to assess how well a model is likely to perform on new data, you are concerned with the model's predictive accuracy or generalizability, not the statistical significance of its parameters. Model comparison based on predictive performance is typically done through cross-validation or using a separate validation dataset. Metrics such as mean squared error (for regression problems) or accuracy, precision, recall, or AUC-ROC (for classification problems) are often used. In essence, while p-values and effect sizes are useful for understanding the properties of your model and the relationships in your data, they do not directly provide information about the predictive performance of your model. Moreover, be cautious about the use of a p-value threshold (like 0.05) to determine the validity of a model or effect. The p-value is a continuous measure of evidence, and the choice of a particular threshold like 0.05 is somewhat arbitrary and can lead to dichotomization of results, which is a topic of ongoing debate in the statistics community.
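As a concrete illustration of judging models by out-of-sample performance rather than p-values, here is a minimal scikit-learn sketch; the built-in dataset and the two candidate models are only placeholders:
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)   # placeholder data

for name, model in [("logistic", LogisticRegression(max_iter=5000)),
                    ("forest", RandomForestClassifier(random_state=0))]:
    # 5-fold cross-validated AUC: an estimate of performance on unseen data
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(name, scores.mean().round(3), "+/-", scores.std().round(3))
```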
null
CC BY-SA 4.0
null
2023-05-13T02:02:38.167
2023-05-13T02:02:38.167
null
null
387856
null
615736
2
null
614609
0
null
You're dealing with an interesting challenge. The short answer is that, in general, SVMs are not easily updated with new training data without retraining, but there are alternative approaches you can consider.

- Incremental learning: Some machine learning models support incremental learning, which allows you to update the model with new data without needing to retrain on the entire dataset. Models such as Naive Bayes, k-Nearest Neighbors (k-NN), and certain neural networks support incremental learning. Note, however, that these models may not be as effective as SVMs for OCR.
- Online SVM: There are variations of SVM, known as Online SVMs, that can be updated incrementally. This could be an option if you wish to stick to SVMs.
- Transfer Learning: For your use case, transfer learning could be a very effective approach. Deep learning models, specifically Convolutional Neural Networks (CNNs), have achieved state-of-the-art results in OCR tasks. You can provide a pre-trained CNN model to your users, which can then be fine-tuned on specific fonts that users come across. This fine-tuning process typically requires much less data and computational resources than training a model from scratch.
- Active Learning: This is a semi-supervised learning approach where the model identifies instances in the dataset for which it is least confident and requests their labels from the user. This approach can be very efficient when labeling data is expensive or time-consuming.

In your situation, I would recommend exploring transfer learning with a deep learning model. This would allow users to improve the model with new fonts without needing to retrain from scratch. It's also worth noting that deep learning models tend to perform better than SVMs for image-based tasks like OCR.

Remember, any model improvement approach will require careful design of the user interface and user experience to make the process intuitive and efficient for the user.
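To illustrate the incremental-learning option in code, here is a minimal sketch with scikit-learn's `SGDClassifier`, whose `partial_fit` accepts new batches without retraining from scratch; the digits data is only a small OCR-flavoured stand-in for your font samples.
```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)                 # toy OCR-style data (stand-in)
classes = np.unique(y)

clf = SGDClassifier(loss="hinge", random_state=0)   # hinge loss gives a linear-SVM-style model, trained online

# initial training batch: the first call to partial_fit must be told all possible classes
clf.partial_fit(X[:1000], y[:1000], classes=classes)

# later, update on newly user-labelled samples (e.g. a new font) without retraining from scratch
clf.partial_fit(X[1000:1200], y[1000:1200])

print("accuracy on held-out samples:", clf.score(X[1200:], y[1200:]).round(3))
```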
null
CC BY-SA 4.0
null
2023-05-13T02:05:26.100
2023-05-13T02:05:26.100
null
null
387856
null
615737
1
null
null
0
7
In the [original paper on Isolation Forest by Liu, Ting and Zhou](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf), the authors cite some problems in creating anomaly scores out of the path length $h(x)$ of a point $x$ in an iTree. The text reads as follows (section 2, following definition of "Path Length") - > The difficulty in deriving such a score from $h(x)$ is that while the maximum possible height of iTree grows in the order of $n$, the average height grows in the order of $log(n)$. Normalization of $h(x)$ by any of the above terms is either not bounded or cannot be directly compared. I understand the logic about the growth order of maximum possible height and average height for an iTree. I do not understand how - use of them to normalize leads to unboundedness of anomaly scores. - And then, what type comparison is being referred to, that becomes difficult. Can these points please be clarified? Please feel free to use mathematical formulations to drive the point. In fact, it is preferred if someone starts with the formulation for the anticipated normalization step itself (just so there is no ambiguity).
Significance of challenges cited, in creating anomaly scores out of path length, in original paper for Isolation Forest
CC BY-SA 4.0
null
2023-05-13T03:22:46.477
2023-05-13T03:22:46.477
null
null
331772
[ "machine-learning", "random-forest", "anomaly-detection", "isolation-forest" ]
615739
1
null
null
0
16
Some references and discussions state that the chi-square test is used for comparison analysis, but others say it is also used for correlation. I am confused: is the chi-square test a test of correlation, of comparison, or both? If it is both, when should it be used for correlation and when for comparison, other than going by the stated hypothesis (association vs. comparison)? Also, if we run a chi-square test in SPSS, it is found both under the Crosstabs menu and under Non-parametric Tests, which set up the analysis in different ways. Which menu should be used for correlation and which for comparison?
Chi-square: correlation or comparison test?
CC BY-SA 4.0
null
2023-05-13T06:34:18.870
2023-05-13T07:23:49.110
null
null
387864
[ "correlation", "chi-squared-test", "method-comparison" ]
615740
2
null
615739
0
null
I personally have not heard the chi square test being used to measure correlation. That said, if you want to test if the population correlation is 0, the sample correlation is asymptotically a centered Normal under the null. So its square will be a scaled chi square and hence I can see that the test statistic will be chi squared under the null. The Pearson chi square test can be used to test if a sample comes from a specified distribution, which is I guess what you mean by comparison?
null
CC BY-SA 4.0
null
2023-05-13T07:23:49.110
2023-05-13T07:23:49.110
null
null
59485
null
615742
1
null
null
0
13
I divided participants into different pairs according to their language proficiency score, and now I ask participants in the different pairs to have conversations, so every participant now has a pair ID (e.g., pair 1, 2, 3, ...). I want to investigate how the pause frequency in participants' individual speech in the conversation predicts their language proficiency scores. So I used a linear mixed model with pair ID as a random effect, pause frequency and pause length as fixed effects, and the language proficiency score as the dependent variable. I used the linear mixed model to address the non-independence of the data. Formula in R: ``` Language proficiency score ~ pause frequency + pause length + (1|pair ID) ``` The model runs successfully. However, I found that the marginal R-squared value is pretty small, and it seems that a large share of the variance is explained by the random effect. Is this because I paired participants according to their proficiency score and then included pair ID as a random effect in a model predicting that same proficiency score (so the random effect is not really random)? How can I solve this problem?
Using linear mixed models to model nested data
CC BY-SA 4.0
null
2023-05-13T07:33:44.067
2023-05-14T02:10:34.067
2023-05-14T02:10:34.067
362671
387867
[ "r", "non-independent" ]
615743
1
null
null
0
13
I'm running an ab test for an on-demand service where one group receives compensation in response to poor service, while the other group does not. The hypothesis is that "compensating for poor experience has a measurable effect on reorder rate". However, I don't know how to control for the variation that not all customers in each group will experience the same quality of service - they could vary by number of orders in observation period and number of poor service experiences, as well as potentially other features. It seems like the effect of compensation should control for these variables, but I don't know how to do that? Would it be possible to analyse various subgroups within the datasets? Or should I fit some kind of model?
Measuring treatment effect when the dosage is not under your control
CC BY-SA 4.0
null
2023-05-13T08:03:46.313
2023-05-13T08:03:46.313
null
null
87437
[ "hypothesis-testing", "statistical-significance", "mixed-model", "experiment-design", "stratification" ]
615744
2
null
366410
0
null
From what I observed in MATLAB: - there is an IMM state vector that corresponds to the first model provided - when mixing, the other states are converted into the IMM state vector with the "switchimm" function The switchimm function converts a state vector from one motion model to another by putting 0's where a dimension does not exist in the input model. For the covariance matrix, it sets the variance to 100 where the dimension does not exist. [https://fr.mathworks.com/help/fusion/ref/switchimm.html](https://fr.mathworks.com/help/fusion/ref/switchimm.html) ``` x = [1; 2; 3; 3]; %constvel = [x; vx; y; vy]; constturn = [x; vx; y; vy; w] disp(switchimm("constvel", x, "constturn")) >> test 1 2 3 3 0 ``` More elaborate approaches also exist: [https://ieeexplore.ieee.org/abstract/document/6324701](https://ieeexplore.ieee.org/abstract/document/6324701) [https://ieeexplore.ieee.org/document/7376231](https://ieeexplore.ieee.org/document/7376231)
null
CC BY-SA 4.0
null
2023-05-13T08:30:17.493
2023-05-13T08:30:17.493
null
null
385289
null
615746
2
null
615659
1
null
By removing 'k=20', the wiglyness disappears. I deduce (without being able to demonstrate it) that the ordered factor-based model as proposed [here](https://stats.stackexchange.com/questions/403772/different-ways-of-modelling-interactions-between-continuous-and-categorical-pred?rq=1) should not have too many degrees of freedom to be comparable to the corresponding model without ordered factor. Note that using `itsadug::compareML`, the non-ordered factor-based model (m4) has still the lower AIC. Models: ``` m4 <- bam(bmk ~ group + s(delay, by = group) + s(delay, medu, bs = "fs"), data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), discrete = TRUE) m4_of <- bam(bmk ~ groupof + s(delay) + s(delay, by = groupof) + s(delay, medu, bs = "fs"), data = dat, method = 'fREML', family = inverse.gaussian(link="identity"), discrete = TRUE) ``` Summary(m4): without 'k=20' ``` > summary(m4) Family: inverse.gaussian Link function: identity Formula: bmk ~ group + s(delay, by = group) + s(delay, medu, bs = "fs") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.77303 0.20189 58.31 <2e-16 *** group1 0.32838 0.02879 11.41 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(delay):group0 1.002 1.004 3.777 0.0517 . s(delay):group1 1.003 1.006 20.979 4.48e-06 *** s(delay,medu) 143.147 849.000 19.762 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0792 Deviance explained = 9.63% fREML = -1.8252e+05 Scale est. = 0.0089037 n = 179659 > gam.check(m4) Basis dimension (k) checking results. Low p-value (k-index<1) may indicate that k is too low, especially if edf is close to k'. k' edf k-index p-value s(delay):group0 9 1 0.98 0.60 s(delay):group1 9 1 0.98 0.64 s(delay,medu) 850 143 0.98 0.64 ``` Summary(m4_of): without 'k=20' ``` > summary(m4_of) Family: inverse.gaussian Link function: identity Formula: bmk ~ groupof + s(delay) + s(delay, by = groupof) + s(delay, medu, bs = "fs") Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 11.70962 0.16160 72.46 <2e-16 *** groupof1 0.33310 0.02948 11.30 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(delay) 1.004 1.007 0.004 0.97162 s(delay):groupof1 1.003 1.006 10.304 0.00129 ** s(delay,medu) 167.479 848.000 19.769 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.0792 Deviance explained = 9.65% fREML = -1.8249e+05 Scale est. = 0.008902 n = 179659 > gam.check(m4_of) Basis dimension (k) checking results. Low p-value (k-index<1) may indicate that k is too low, especially if edf is close to k'. k' edf k-index p-value s(delay) 9 1 0.97 0.34 s(delay):groupof1 9 1 0.97 0.32 s(delay,medu) 850 167 0.97 0.38 ``` Compare models: ``` > itsadug::compareML(m4, m4_of) Model m4 preferred: lower fREML score (35.737), and equal df (0.000). ----- Model Score Edf Difference Df 1 m4_of -182486.6 9 2 m4 -182522.4 9 35.737 0.000 AIC difference: -13.44, model m4 has lower AIC. ``` Plots: [](https://i.stack.imgur.com/gr7JJ.jpg)
null
CC BY-SA 4.0
null
2023-05-13T08:43:50.970
2023-05-13T08:43:50.970
null
null
307344
null
615747
1
null
null
0
17
Mathematically, what expression built by averaging Simple moving averages comes close to a 10-day Exponential moving average (a span of 10 means a decay factor of 0.818181)? E.g., the 10D Exponential average is close to some weighted average of the (3D, 5D, 7D, 10D) Simple moving averages: what should the weights be? Context for this: I want to run an exponentially-weighted least squares regression with Huber loss (MAE for outliers). But this regression does not converge, so I am thinking of averaging non-weighted least squares fits with different lookbacks instead.
Exponential Averaging using Simple averaging
CC BY-SA 4.0
null
2023-05-13T08:48:33.147
2023-05-13T09:12:55.517
2023-05-13T09:12:55.517
387870
387870
[ "regression", "moving-average", "exponential-smoothing" ]
615748
1
615751
null
3
90
I have plotted the qqplot of the residuals that my model generates with the python module [statsmodel](https://www.statsmodels.org/stable/index.html) sm.qqplot(data, line ='r') and it looks like this [](https://i.stack.imgur.com/qzQGK.png) The points are placed on a straight line but the sample quantiles do not correspond to the theoretical quantiles expected from a normal distribution. What does it mean? Furthermore, I also tried using the scipy function [probplot](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html#scipy-stats-probplot) probplot(data,dist='norm',plot=plt) and I got [](https://i.stack.imgur.com/JRs21.png) I don't understand: are points on the y-axis the sorted values or the quantiles? the scipy documentation says > probplot generates a probability plot, which should not be confused with a Q-Q or a P-P plot. Statsmodels has more extensive functionality of this type, see statsmodels.api.ProbPlot.
Help me understand this qqplot
CC BY-SA 4.0
null
2023-05-13T08:55:32.873
2023-05-13T12:12:41.260
2023-05-13T09:37:15.190
275569
275569
[ "normal-distribution", "residuals", "normality-assumption", "qq-plot" ]
615749
1
615759
null
2
174
A mountain climber has been lost on a mountain, on either slope A or slope B. The head of the rescue mission believes that the climber is on A with probability 0.6. Suppose she has 12 rescue parties, all equally competent, each of which will locate the climber with probability 0.3 if it searches the correct slope. Suppose they work independently of each other. If the mission leader sends 8 teams to slope A and 4 teams to slope B, what is the probability that they rescue the climber? Here's my train of thought: we need at least one team to find the climber, so I can just take $ 1- P(\textrm{no teams rescue)}$, where $ P(\textrm{no teams rescue}) = ((0.6)(0.7))^8 + ((0.4)(0.7))^4 . $ Can someone tell me what is wrong with my way of thinking? The correct answer is $0.87.$
Probability of rescue
CC BY-SA 4.0
null
2023-05-13T09:13:53.517
2023-05-13T14:02:46.817
2023-05-13T14:02:46.817
362671
387750
[ "probability", "self-study" ]
615750
1
null
null
0
35
I am trying to predict the revenues of a portfolio of items. I want to simulate the revenues in a particular market situation in which they might increase. Each item's revenues is made up of 3 components: Revenues per item = constant * percentage_increase_for_A(item) * A(item) + constant * percentage_increase_for_B(item) * B(item) + constant * percentage_increase_for_C(item) * C(item) As you can see, each of the percentage increases depends on the item since each item has a different one depending on item type and region. A, B and C are just the current base values of each item that contribute to revenues. The percentage increases are extracted from normal distributions. I simulate each item's revenues using a Monte Carlo simulation, where the variable is the percentage increase for an item. So I use a for cycle that fully (until the specified number of simulation is reached) simulates an item before going to the next item (and simulate it). To make sure that items of same type and region are simulated the same way (same "simulation path" for the percentage increases values), I have fixed the seed using Python's numpy seed. At the end I have something like this: [](https://i.stack.imgur.com/bDo0a.png) I want to know what is the mean, median and some percentiles of the TOTAL revenues of the portfolio of items and obtain a probability distribution for the total revenues. What I plan to do is summing the revenues of each item at each simulation. So, for example, for simulation 0 I will have \$20,000.00 + \$10,000.00 + \$30,000.00 + \$15,000.00 = \$75,000.00 total revenues for the portfolio of items. Doing this for each simulation, I will have various values of the total revenues and will be able to plot the probability distribution and get various percentiles. Would this way of summing to get the total revenues probability distribution be correct? I don't think I have to go through convolutions or anything like that in my case, is it correct? Although, I am wondering this: am I summing percentiles by doing this? I know that, for example, the median of the sum is not equal to the sum of the medians. The revenues of each item in a single simulation are not percentiles by themselves but they are if you consider the revenues of one single item at all the simulations. Hence, since I go simulation by simulation when summing, I assume I am not doing anything wrong. Is it correct? Please note: since the actual problem is a bit different from the one I described here, it might happen that the probability distributions of the revenues of some items don't follow a normal distribution, while others do. Would my approach be still correct then? Another question is this: from a statistical point of view, can the revenues of each item be considered random variables or is the only actual random variable the percentage increase described above, extracted from a normal distribution?
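A small sketch of the summation step being described, with a random matrix standing in for the per-item simulation output (one row per simulation, one column per item; the numbers are placeholders):
```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_items = 10_000, 4

# placeholder for the per-item simulation output: revenues[s, i] is item i in simulation s
revenues = rng.normal(loc=[20_000, 10_000, 30_000, 15_000], scale=2_000,
                      size=(n_sims, n_items))

# summing within each simulation (row) gives one portfolio total per simulated scenario,
# so no percentiles are ever added together
total = revenues.sum(axis=1)

print("mean:", total.mean())
print("median:", np.median(total))
print("5th/95th percentiles:", np.percentile(total, [5, 95]))
```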
Sum of (random) distributions
CC BY-SA 4.0
null
2023-05-13T09:26:38.603
2023-05-13T09:59:15.280
2023-05-13T09:59:15.280
383746
383746
[ "probability", "distributions", "random-variable", "simulation", "monte-carlo" ]
615751
2
null
615748
4
null
It's the same plot. I am not an expert on your software, but the following is a confident series of guesses. The sorted residuals are one and the same as the quantiles in this context. On the vertical axis are your residuals and on the horizontal axis are what you would get on average with a sample of the same size drawn from a normal distribution with the same mean (zero) and SD. If all points fell on the line, you would have a perfect normal distribution, but that is just an ideal. In fact experienced statistical people would expect faking of data in that case as readily as a genuine perfect fit. In practice you have slightly fatter tails in the residuals than a normal distribution, which is not in itself cause for alarm. In essence, the model passes this particular health check. That doesn't mean that there might not be other diagnostics that would point to a better model. It takes a bit of experience to know how much variability is acceptable and how much points to systematic departures that need to be addressed. One handle is a line-up test that goes back at least to Shewhart. Call up a random number routine to get several normal quantile plots, all drawn from a a normal with zero mean and the same SD. Then does the observed quantile plot stick out as very different from the fake plots. The idea is similar to a line-up in police procedure: show not just the suspect but other people too in a line-up and see whether a witness identifies the suspect. Another handle, and an even better one, is whether you can identify a change to the model that improves the quantile plot.
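A minimal sketch of the line-up idea in Python, with a heavy-tailed sample standing in for your residuals: draw several fake samples from a normal with zero mean and the same SD, hide the real data among them, and see whether it stands out.
```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
data = rng.standard_t(df=5, size=200)        # stand-in for your residuals
n, sd = len(data), np.std(data, ddof=1)

fig, axes = plt.subplots(2, 4, figsize=(12, 6))
axes = axes.ravel()
pos = rng.integers(len(axes))                # hide the real data in a random panel

for i, ax in enumerate(axes):
    sample = data if i == pos else rng.normal(0.0, sd, size=n)
    sm.qqplot(sample, line="r", ax=ax)

plt.tight_layout()
plt.show()
print("real data are in panel", pos)
```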
null
CC BY-SA 4.0
null
2023-05-13T09:31:10.510
2023-05-13T12:12:41.260
2023-05-13T12:12:41.260
22047
22047
null
615753
2
null
601782
1
null
Not sure this is still relevant, but answering anyway in case someone else is also looking for this. The formulation looks good to me, but looking at the code snippet you copied, I think the loss term should actually always use $\epsilon_\theta$, since we're interested in the fine-tuned model's prediction for the regularization prompt. So the final formula becomes $$ \mathbb{E}_{x,c,\epsilon,t}\Bigl[ \lVert \epsilon - \epsilon_\theta(z_t,t,c) \rVert_2^2 + \lambda \lVert \epsilon' - \epsilon_\theta (z'_{t'},t',c_{pr}) \rVert_2^2 \Bigr] $$ where, for an example in which we try to generate dogs, $c$ is the conditioning vector for "a sks dog" and $c_{pr}$ the one for "a dog". Further, $z'_{t'}$ is the latent encoding of a sample from the regularization dataset with $t'$ steps of noise added to it. The quote with the "ancestral sampler" refers to the creation of the regularization dataset. In this case, they generate images using the frozen model directly, which produces $z'_0$, to which we can add noise and use our fine-tuned model to predict the regularization class prompt.
null
CC BY-SA 4.0
null
2023-05-13T10:45:29.757
2023-05-13T10:45:29.757
null
null
387875
null
615754
1
null
null
0
31
I am looking to perform a [two-way ANOVA](https://www.scribbr.com/statistics/two-way-anova/) on the following dataset:
```
YEAR TOURNAMENT      WINNER          RUNNER-UP
2023 Australian Open Novak Djokovic  Stefanos Tsitsipas
2022 U.S. Open       Carlos Alcaraz  Casper Rudd
2022 Wimbledon       Novak Djokovic  Nick Kyrgios
2022 French Open     Rafael Nadal    Casper Rudd
2022 Australian Open Rafael Nadal    Daniil Medvedev

WINNER_NATIONALITY WINNER_ATP_RANKING RUNNER-UP_ATP_RANKING
Serbian            1.0                3.0
Spanish            2.0                5.0
Serbian            NaN                25.0
Spanish            5.0                8.0
Spanish            5.0                2.0

WINNER_LEFT_OR_RIGHT_HANDED TOURNAMENT_SURFACE      WINNER_PRIZE
right                       Plexicushion Prestige   2050000.0
right                       DecoTurf - outdoors     2600000.0
right                       Grass / Outdoor         2507460.0
left                        Clay                    1870000.0
left                        Plexicushion Prestige   4400000.0
```

I will be testing the following hypotheses:

- Do right-handed players have an advantage over left-handed players?
- Does playing on a particular type of surface benefit a player's chances of winning?

The independent variable `WINNER_LEFT_OR_RIGHT_HANDED` has an effect on the dependent variable `WINNER`.

The independent variable `WINNER` has an effect on the dependent variable `TOURNAMENT_SURFACE`.

What other possible hypotheses can be tested on the given dataset? Are the hypotheses I am trying to answer valid, and are there other possible sources of variation in the data that I can take into account?
Two way Anova Hypothesis Testing
CC BY-SA 4.0
null
2023-05-13T10:56:36.757
2023-05-13T23:23:41.237
2023-05-13T23:23:41.237
17072
387877
[ "hypothesis-testing", "anova", "dataset" ]
615755
1
null
null
0
17
I'm running MICE for 100 imputations with big data (~600k rows). Due to storage restrictions at work (which I am not permitted to change), I can't save all 100 imputations in one go, and I'd hit memory issues in R anyway. I am thinking of running mice with 1 imputation (m = 1), then saving that imputation to Excel, deleting the imputation in R, and repeating this 100 times. This will only work if running the MICE command 100 separate times with m = 1 is functionally equivalent to running it once with m = 100. I trialled this by running MICE with m = 2 and then twice with m = 1, and the outputs were different, which is what I was hoping for. However, it would be useful if I could find this information for certain.
MICE multiple imputation in R - imputation number
CC BY-SA 4.0
null
2023-05-13T11:08:33.007
2023-05-13T11:18:41.910
null
null
387879
[ "r", "data-imputation", "multiple-imputation", "mice" ]
615756
2
null
443486
0
null
Logistic regression was invented by a statistician, for statisticians. SVMs are a true ML algorithm. Random Forests are a statistician's take on Machine Learning. Since you explicitly ask about "ML algorithms", I suppose you are not interested in inference, probabilities, confidence intervals and all other things statistics can offer to facilitate human understanding of the problem. The next question to clarify is how much do you know about the data generating process, how many observations (points) you have and how many variables (dimensions) per observation. Logistic regression tries to predict class probabilities and model them by the logistic function. This works best when the classes are normally distributed, with same covariance matrices$^*$. The more your data deviate from that assumption$-$and that's more likely with high-dimensional data,$-$the poorer the performance of logistic regression. SVMs, on the other hand, simply attempt to make good binary predictions, without caring about probabilities at all. They will try to find a classification rule which is most likely to produce correct predictions on new data, without making assumptions about the process behind them. They tend to work well on high-dimensional data (your images of cats and dogs, for example). The disadvantage is that they are resource-hungry: the memory and computational costs rise more than quadratically with the dataset size. My experience with Random Forests is limited. A colleague of mine (a statistician) has chosen it for a project because of the human-interpretability of its results, but that's not really the point if you are only interested in the algorithm's performance. So, my highly subjective and personal rule of the thumb would be: - Low dimensional data, compact (ideally likely normally distributed) classes, interpretability is desired $\rightarrow$ logistic regression - Low dimensional data, unknown and/or complex distribution, interpretability is desired $\rightarrow$ Random Forest - Small-to-moderate, high-dimensional dataset (< 100,000 points), interpretability of the results is not an issue $\rightarrow$ SVM - Everything else $\rightarrow$ neural networks --- *It is possible to construct artificial cases which deviate from this rule, e.g. having Poisson-distributed classes, but I have never encountered such a constellation in practice.
null
CC BY-SA 4.0
null
2023-05-13T11:09:31.033
2023-05-13T11:09:31.033
null
null
169343
null
615757
1
null
null
2
55
All I am given is the following information and I am asked to find the sample size. I just cannot see where to start, and I am wondering whether some of the information in the document got cut off, because I don't see how to recover the sample size from what is shown.
```
>lm(formula = sales ~ price + advert + I(advert^2), data = Andy)
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 109.7190     6.7990  16.137  < 2e-16 ***
price        -7.6400     1.0459  -7.304 3.24e-10 ***
advert       12.1512     3.5562   3.417  0.00105 **
I(advert^2)  -2.7680     0.9406  -2.943  0.00439 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
            (Intercept)       price      advert I(advert^2)
(Intercept)   46.227019 -6.42611301 -11.6009601  2.93902634
price         -6.426113  1.09398815   0.3004062 -0.08561906
advert       -11.600960  0.30040624  12.6463020 -3.28874574
I(advert^2)    2.939026 -0.08561906  -3.2887457  0.88477357
```
How do you find the sample size given R output that omits the degrees of freedom?
CC BY-SA 4.0
null
2023-05-13T11:17:34.653
2023-05-13T13:44:44.617
2023-05-13T13:44:44.617
199063
371754
[ "regression", "multiple-regression", "anova", "interpretation" ]
615758
2
null
615755
1
null
You can change the seed argument in each iteration of your mice call to ensure a different imputation.
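A minimal sketch of that loop, using the `nhanes` toy data that ships with `mice` as a stand-in for your own data frame; the file naming and the CSV export are just illustrative choices:
```
library(mice)

df <- nhanes   # stand-in; replace with your own data frame

for (i in 1:100) {
  imp <- mice(df, m = 1, seed = i, printFlag = FALSE)   # a different seed per run
  completed <- complete(imp, 1)
  write.csv(completed, sprintf("imputation_%03d.csv", i), row.names = FALSE)
  rm(imp, completed); gc()   # free memory before the next run
}
```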
null
CC BY-SA 4.0
null
2023-05-13T11:18:41.910
2023-05-13T11:18:41.910
null
null
49630
null
615759
2
null
615749
6
null
Your reasoning is correct but you used $0.3$ as the probability of “no success” while it should be $1-0.3=0.7$. If you swap the numbers you'll get the correct result. You also used unnecessary braces there. The calculation becomes $$ 1-(0.6 \times 0.7^8 + 0.4 \times 0.7^4) \approx 0.87 $$ With your braces, it was as if each of the rescue parties could succeed on either of the slopes, while it's $8$ and $4$ parties assigned to the slopes and the probabilities of success of the expeditions.
null
CC BY-SA 4.0
null
2023-05-13T11:23:33.293
2023-05-13T12:52:08.590
2023-05-13T12:52:08.590
35989
35989
null
615760
1
null
null
0
18
In some sources [1](https://www.youtube.com/watch?v=5gW0PO7g6pY) one reads that the frailty model expands on the cox proportional hazards model $$h_i(t|x_i)=h_0(t)\exp(\beta x_i)$$ by adding a frailty term $z$ like so $$h_i(t|x_i,z_i)=h_0(t)\exp(\beta x_i+z_i)$$ In other sources I see frailty figuring as a factor “acting multiplicatively” [2](https://www.demogr.mpg.de/papers/working/wp-2003-032.pdf) on the baseline hazard function $h_0(t)$ $$h_i(t|x_i,z_i)=z_ih_0(t)\exp(\beta x_i)$$ How are individual frailty and shared frailty correctly expressed? [1](https://www.youtube.com/watch?v=5gW0PO7g6pY): [Predicting Horse Race Winners Using Advanced Statistical Methods](https://www.youtube.com/watch?v=5gW0PO7g6pY) [2](https://www.demogr.mpg.de/papers/working/wp-2003-032.pdf): [Frailty Models](https://www.demogr.mpg.de/papers/working/wp-2003-032.pdf)
How is a frailty model in survival analysis expressed
CC BY-SA 4.0
null
2023-05-13T11:37:07.883
2023-05-13T14:35:41.853
null
null
387878
[ "cox-model", "frailty" ]
615761
1
null
null
0
14
The question is: what happens if I use, e.g., the ImageNet standard mean and variance (normalization statistics) on the LSUN dataset, and then train a classifier on the LSUN dataset? Can I expect worse results? How much does it matter?
How important is the standard mean and variance of a dataset to train a classifier?
CC BY-SA 4.0
null
2023-05-13T12:11:26.990
2023-05-13T12:11:26.990
null
null
132997
[ "classification", "normalization", "anomaly-detection" ]