Columns: Id (string, 1-6 chars), PostTypeId (string, 7 classes), AcceptedAnswerId (string, 1-6 chars), ParentId (string, 1-6 chars), Score (string, 1-4 chars), ViewCount (string, 1-7 chars), Body (string, 0-38.7k chars), Title (string, 15-150 chars), ContentLicense (string, 3 classes), FavoriteCount (string, 3 classes), CreationDate (string, 23 chars), LastActivityDate (string, 23 chars), LastEditDate (string, 23 chars), LastEditorUserId (string, 1-6 chars), OwnerUserId (string, 1-6 chars), Tags (list)
617200
1
null
null
0
30
In my textbook, $p$-values are defined as follows: $$p\text{-value} = P(T \text{ is at least as extreme as } t \text{ given } H_0 \text{ is true}) $$ where $T$ is the test statistic. It goes on to say that > The smaller the $p$-value, the stronger the evidence against $H_0$. Here is my issue. Surely if the probability that $|T| \geq t$ is smaller, then the probability that $|T|<t$ is larger. That is, the values are more likely to congregate about $0$. This would seem to provide greater evidence for the null hypothesis.
Trouble understanding $p$-values
CC BY-SA 4.0
null
2023-05-29T11:07:57.153
2023-05-29T11:17:00.827
2023-05-29T11:17:00.827
362671
389061
[ "hypothesis-testing", "p-value", "interpretation" ]
617201
1
617212
null
0
40
Given that the correlation coefficient can only be between -1 and 1, it can never increase the steepness of the slope when calculating the slope of a regression line. Say the correlation coefficient is r = 0.946, the standard deviation along the x-axis is Sx = 0.816, and the standard deviation along the y-axis is Sy = 2.160 (the same values as in the reference [Khan Academy video](https://www.youtube.com/watch?v=FGesqq22TCM&t=342s), to follow along with the example); then the slope is m = r(Sy/Sx). Here Sy/Sx would be the slope if the correlation coefficient were 1 and all the data points matched the line perfectly. When we multiply it by r = 0.946, however, the steepness of the slope decreases. My question is, how do we know that the only possibility here is that the slope decreases and not increases? The correlation coefficient only indicates how closely the data points match the regression line; it says nothing about the direction (steeper or shallower) in which the inaccuracy points. So why do we multiply Sy/Sx by r when we know that an r with an absolute value of less than 1 can only decrease the steepness of the slope, never increase it? Edit: This is how Sal Khan explains the concept in his lesson video, that the slope decreases after multiplying by the correlation coefficient.
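For reference, here is the arithmetic with the numbers quoted above, as I understand the formula (a quick R check):

```
r <- 0.946; s_x <- 0.816; s_y <- 2.160
s_y / s_x        # slope if all points fell exactly on the line: about 2.65
r * (s_y / s_x)  # fitted slope: about 2.50, shallower because |r| < 1
```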
while calculating the slope of a regression line, why do we multiply the slope by the correlation coefficient when it can only decrease the steepness?
CC BY-SA 4.0
null
2023-05-29T11:27:53.443
2023-05-29T18:06:14.797
2023-05-29T18:06:14.797
237561
321293
[ "regression", "self-study", "regression-coefficients" ]
617202
1
617217
null
6
409
I was reading about normal distributions and the Central Limit Theorem (CLT) and I came up with a question. Why do we bother with machine learning techniques when the CLT gives us permission to assume that the mean of our data has a normal distribution? I can understand the use of deep learning when we have images as data, or any other type of data that does not have a clear numerical representation. But when it comes to data that are numerical in nature, how can one theoretically justify the use of machine learning techniques over simple statistical tools?
Why don't we use normal distribution in every problem?
CC BY-SA 4.0
null
2023-05-29T11:28:41.713
2023-05-30T00:54:17.960
2023-05-29T14:37:35.693
199063
385032
[ "machine-learning", "central-limit-theorem" ]
617203
2
null
615543
0
null
A note on notation: the quantity $\sum_{j=1}^r X_{pj}$ (the number of books child p receives) appears a lot in your argument. I suggest giving this a more concise name. I'll call it $S_p$. Everything is correct until the penultimate line. Your expression for $\mathbb P(S_p=1,S_q=0)$ is only correct when $p \neq q$.
null
CC BY-SA 4.0
null
2023-05-29T11:54:34.480
2023-05-29T12:03:41.377
2023-05-29T12:03:41.377
319175
319175
null
617204
1
null
null
0
16
I wondered if anyone could help with the following issue… Rather than just assessing whether there is a difference between two means (considering the variance of each sample), I would like to be able to assess the difference between multiple samples’ variances, or coefficients of variation (i.e. answering the question ‘does one condition vary more than another?’). I would like to actually compare the variance/coefficient of variation between 4 different stockings, for the pressure they apply in a single/constant group of 15 individuals (4 x repeated measures). This is to assess whether different brands (including a custom fitted stocking) have a larger or smaller "spread" of applied pressures. For two related samples I could use the Sokal (1980) test ([https://academic.oup.com/sysbio/article/29/1/50/1655230](https://academic.oup.com/sysbio/article/29/1/50/1655230)). However, this test doesn’t work for 3 or more conditions. In the final paragraph of the Sokal paper, the authors suggest using “Levene's test… applied to logarithmically transformed variables, using a randomized complete blocks design”. Another more recent paper (citing Sokal) did this recently: [https://www.researchgate.net/publication/336597568_A_Novel_Method_for_Assessing_Enamel_Thickness_Distribution_in_the_Anterior_Dentition_as_a_Signal_for_Gouging_and_Other_Extractive_Foraging_Behaviors_in_Gummivorous_Mammals](https://www.researchgate.net/publication/336597568_A_Novel_Method_for_Assessing_Enamel_Thickness_Distribution_in_the_Anterior_Dentition_as_a_Signal_for_Gouging_and_Other_Extractive_Foraging_Behaviors_in_Gummivorous_Mammals) However, I was under the impression that Levene’s test assumed independent samples? ([https://www.spss-tutorials.com/levenes-test-in-spss/#:~:text=Levene's%20test%20basically%20requires%20two,is%2C%20not%20nominal%20or%20ordinal](https://www.spss-tutorials.com/levenes-test-in-spss/#:%7E:text=Levene%27s%20test%20basically%20requires%20two,is%2C%20not%20nominal%20or%20ordinal)), and I would like to do this with related observations! Finally, the Feltz and Miller (1996) test also compares CVs between independent groups ([https://cran.r-project.org/web/packages/cvequality/vignettes/how_to_test_CVs.html](https://cran.r-project.org/web/packages/cvequality/vignettes/how_to_test_CVs.html)). However... again, this test seems to have been used with related samples, with the example in the associated R package being used to analyse differences in artifact length versus breadth or width... even though each measure is taken for a single artifact (repeated/related measures). - Is this because, as we're not comparing multiple measures of the same factor (e.g. test-retest; Length 1, Length 2 --> difference --> CV etc.), there's an argument to treat these as independent measurements? If you have any thoughts or suggestions, I’d be really grateful for any advice.
Equality of variance - three or more related samples
CC BY-SA 4.0
null
2023-05-29T12:12:06.640
2023-05-29T12:12:06.640
null
null
389066
[ "variance", "coefficient-of-variation" ]
617205
1
617210
null
0
13
I'm conducting a survival analysis using a Cox proportional hazards model. The failure event in the analysis is crime. I have a binary covariate (1 = yes, 0 = no) for which I get a huge hazard ratio – usually between 2,000 and 4,000 (depending on the specific model). I checked and it turns out that around 90% of observations that have the value of "yes" for this variable have 'failed' (i.e., committed a crime). So I understand why the HR is so high, but is it problematic for my model? Can such a case be considered overfitting? Thanks!
Overfitting in Cox PH model
CC BY-SA 4.0
null
2023-05-29T12:27:36.893
2023-05-29T13:42:58.977
null
null
279322
[ "cox-model", "overfitting" ]
617207
1
null
null
1
99
Let $X_{1},X_{2},X_{3}$ be i.i.d. samples from $N(\mu,\sigma^2)$. Let $u$ denote $$u=\frac{X_{(3)}-X_{(1)}}{S_{3}}\,,$$ where $X_{(i)}$ denotes the $i$th order statistic, and $$S_{3}=\sqrt{\frac{\sum_{i=1}^3(X_i-\bar{X})^2}{2}}$$ Now I know the density function of $u$ is $$f(u)=\frac{3}{\pi}\left(1-\frac{u^2}{4}\right)^{-\frac{1}{2}},\quad(\sqrt{3}\le u\le2)$$ But how do we get that? Does anyone have any ideas? I would appreciate some help. EDITED: Thank you guys for the help! I got the formula from a paper by E.S. Pearson, published in 1964. [](https://i.stack.imgur.com/c4Rpe.png) For $n>3$, it's not possible to find the exact distribution of $u$. For $n=3$, Pearson quoted this relation from Thomson (1955): $$u=2\cos\left({\frac{1}{6}\pi(1-F(u))}\right)\quad (\text{it's easy to find }\sqrt{3}\le u\le2 \text{ here})$$ Then we can just take the arccos and take the derivative to get the formula. I read the paper by Thomson from 1955, only to find [](https://i.stack.imgur.com/tOqJN.png) Finally, I found the paper by Lieblein from 1952: [](https://i.stack.imgur.com/FEpWB.png) But I can't see the relation between this paper and the problem we are discussing, let alone EASILY solve the problem from Lieblein's results. That's all the information I have now.
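As a sanity check (not a derivation), a quick simulation in R reproduces the quoted density:

```
set.seed(1)
u <- replicate(1e5, { x <- rnorm(3); (max(x) - min(x)) / sd(x) })
hist(u, breaks = 60, freq = FALSE)
curve(3 / pi / sqrt(1 - x^2 / 4), from = sqrt(3), to = 1.999, add = TRUE, col = "red")
```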
Distribution of the ratio of sample range to sample standard deviation for normal when n=3
CC BY-SA 4.0
null
2023-05-29T13:19:49.290
2023-05-30T15:21:51.917
2023-05-30T15:21:51.917
119261
389070
[ "probability", "distributions", "mathematical-statistics", "normal-distribution" ]
617208
1
null
null
1
41
I am asked to test a hypothesis that a manufacturing line makes p% faulty parts in a month; it's assumed that the p% is independent of the month. My approach is as simple as it gets: take a random sample from this month, get the proportion, and proceed with my test normally. However, I am only provided with daily data (for example, on day 1, 4000 parts are produced and 150 were faulty). Does this violate the i.i.d. assumption for the data, given that I can only take "batches" of parts into my sample and those parts are dependent because they are produced on the same day? I suppose I can try to show there is no statistical significance in a chi-square test or something, but the best I can do using this strategy is show independence across the different weekdays or days of the month; I can't rule out some special event or factors that I am not considering. For example, workers get tired, so the probability of getting a faulty part increases as the day goes by, and so the probability of a part being faulty depends on how many parts were produced before it. But if I can assume that there are no "special events", could I argue that the samples are IID if I show independence across months and weekdays? Edit: I realized that proving independence across weekdays and days of the month will only prove that different batches are independent. Any suggestions on how I could handle this?
IID assumption in proportion hyp test
CC BY-SA 4.0
null
2023-05-29T13:26:54.120
2023-05-31T18:19:25.113
2023-05-29T13:33:39.363
389071
389071
[ "hypothesis-testing", "iid" ]
617209
1
null
null
0
21
I am currently looking at the effect of the introduction of Carbon Tax/ETS on corporate investment behavior. I have panel data for the period 2000-2022 and have grouped the firms by industry and by whether they are in a country that has introduced such a policy or not. I have found 5 industries where there visually seems to be some kind of parallel trend. However, as the treatments have been introduced at different points in time, I cannot really identify a clear breaking point (and of course a visual check does not account for differences across firms and countries), and in general I am thinking about a staggered DiD in this case. Would such an approach be valid here to estimate the effect of the introduction of such a policy? I probably also need to make sure that the firms and countries are the same across industries. [](https://i.stack.imgur.com/rJnQD.png) [](https://i.stack.imgur.com/bgQpj.png) [](https://i.stack.imgur.com/y6WIM.png) [](https://i.stack.imgur.com/XuQCY.png)
Can a staggered DiD approach be valid in a setting of visual parallel trends but no clear treatment because of multiple different treatment points?
CC BY-SA 4.0
null
2023-05-29T13:40:30.600
2023-05-29T13:40:30.600
null
null
389073
[ "difference-in-difference" ]
617210
2
null
617205
0
null
This seems related to the problem of [perfect separation](https://stats.stackexchange.com/q/11109/28500) in logistic regression. If a set of predictors is adequate to completely determine the outcome, you get very high odds ratios along with high standard errors of the regression coefficients. In your case you perhaps don't have perfect separation but something close to it. You need to think carefully about this, based on your understanding of the subject matter. It's possible that this is a real association. You might consider a penalized model, which is also a way to deal with perfect separation in logistic regression. You could just penalize the coefficient for this predictor, leaving the others as is. You can do that with a `ridge` term in a model fit by the R `coxph()` function, or use the [glmnet package](https://cran.r-project.org/package=glmnet). The risk is that you've accidentally introduced a problem like [survivorship bias](https://en.wikipedia.org/wiki/Survivorship_bias) into your study. That's why you need to approach this from the perspective of your understanding of the subject matter.
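A minimal sketch of the ridge-penalty idea with `coxph()` (the data frame `dat` and its column names here are hypothetical placeholders, not from the question):

```
library(survival)
# dat: one row per subject, with follow-up time, crime indicator (1 = event),
# the nearly-separating binary covariate x_binary, and other predictors.
fit <- coxph(Surv(time, crime) ~ ridge(x_binary, theta = 1) + age + prior_record,
             data = dat)
summary(fit)  # the penalized coefficient for x_binary is shrunk toward 0
```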
null
CC BY-SA 4.0
null
2023-05-29T13:42:58.977
2023-05-29T13:42:58.977
null
null
28500
null
617211
1
null
null
1
7
I have a number of embeddings (300-dimensional FastText vectors for each instance of each class) that I apply a classifier to (Logistic Regression for now). I want to visualize the embeddings as well as the decision boundary as part of model debugging so I can see which classes are not linearly separable, which instances are misclassified etc. I'm not sure if using PCA or K-PCA is a good idea here. Or even t-SNE (since it's non-linear when Logistic Regression is a Linear Model so maybe the separation found by t-SNE can't be achieved by LR?) I'm looking for a procedure that will maintain the same structure (if two instances are close in the higher dimension they should still be so in the 2-D one) while making sure that the decision boundary is still correct. How should I go about this? Thanks.
How to properly visualize high-dimensional embeddings along with the decision boundary in 2D?
CC BY-SA 4.0
null
2023-05-29T13:57:15.130
2023-05-29T13:57:15.130
null
null
206828
[ "machine-learning", "logistic", "classification", "dimensionality-reduction" ]
617212
2
null
617201
1
null
In the formula for the unstandardized bivariate linear regression slope coefficient, $b = r \cdot (SD_Y/SD_X)$, it may help to think of the standard deviations $SD_Y$ and $SD_X$ as mere scaling parameters (they "adjust" $r$ for the different--and often arbitrary--units of measurement of $Y$ and $X$). If you consider the standardized regression weight $b_S$ (where $SD_Y = SD_X = 1$ due to standardization, and thus the regression weight does not depend on the scaling/units of measurement of $Y$ and $X$), the correlation $r$ does reflect the steepness of the regression line directly: $b_S = r$.
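A quick R illustration of both identities, using simulated data (purely for intuition):

```
set.seed(1)
x <- rnorm(100)
y <- 2 + 0.5 * x + rnorm(100)
b   <- coef(lm(y ~ x))[2]                  # unstandardized slope
b_s <- coef(lm(scale(y) ~ scale(x)))[2]    # standardized slope
c(b = b, r_sdratio = cor(x, y) * sd(y) / sd(x), b_s = b_s, r = cor(x, y))
# b matches r*(SD_Y/SD_X) and b_s matches r, up to floating-point error
```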
null
CC BY-SA 4.0
null
2023-05-29T13:58:54.223
2023-05-29T13:58:54.223
null
null
388334
null
617213
2
null
617199
2
null
If we write it with characteristic polynomials the relations will look like below $$Y_t=(1-B)X_t, \ \ \ \ \ \ \ X_t(1-\phi_1B-\phi_2B^2)=W_t$$ Then, the relation between $Y_t$ and $W_t$ can be written with $$Y_t=\frac{1-B}{1-\phi_1B-\phi_2B^2}W_t \rightarrow Y_t(1-\phi_1B-\phi_2B^2)=(1-B)W_t$$ Converting it back yields the following ARMA(2,1) model. $$Y_t-\phi_1 Y_{t-1}-\phi_2 Y_{t-2}=W_t-W_{t-1}$$
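A short simulation check of the algebra (the AR coefficients are arbitrary, just for illustration):

```
set.seed(42)
phi1 <- 0.5; phi2 <- 0.3; n <- 500
w <- rnorm(n)
x <- numeric(n); x[1:2] <- w[1:2]
for (t in 3:n) x[t] <- phi1 * x[t - 1] + phi2 * x[t - 2] + w[t]
y <- diff(x)  # y_t = (1 - B) x_t
# y_t - phi1*y_{t-1} - phi2*y_{t-2} should equal w_t - w_{t-1}
lhs <- y[3:(n - 1)] - phi1 * y[2:(n - 2)] - phi2 * y[1:(n - 3)]
rhs <- w[4:n] - w[3:(n - 1)]
max(abs(lhs - rhs))  # essentially zero
```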
null
CC BY-SA 4.0
null
2023-05-29T14:20:03.447
2023-05-29T14:20:03.447
null
null
204068
null
617214
1
null
null
1
13
Suppose that we are in a Bayesian context, where we have a $K\times K$ matrix $n$ as a parameter, and we assume that $$n_{ij}\sim Pois(w*w_{ij})$$ where $w\sim Gamma(N+1,1)$ and $w_{ij}$ is another parameter which follows a Dirichlet distribution. Can the following be seen as a different parametrization for $n_{ij}$? Since the average of $w$ is $N+1$, and $\mathbb{E}[n_{ij}]=(N+1)w_{ij}$, then $$n_{ij}\sim Mult(N+1,w_{ij})$$
Bayesian reparametrization are they equivalent?
CC BY-SA 4.0
null
2023-05-29T14:24:04.217
2023-05-29T15:18:04.803
2023-05-29T15:18:04.803
71679
208406
[ "bayesian", "poisson-distribution", "multivariate-normal-distribution", "parameterization", "dirichlet-distribution" ]
617215
1
null
null
1
33
I need to compute the posterior distribution of a parameter $\theta$ conditionally on signal $t$. $\theta$ is uniformly distributed in $[0,1]$, while $t=\theta+\eta$ where $\eta$ represents a noise, distributed uniformly in $[-0.5,0.5]$. I know Bayes' rule but I don't know how to apply it when distributions, instead of single probabilities, are involved. Can someone show me the actual computations for this?
Computing a posterior distribution
CC BY-SA 4.0
null
2023-05-29T14:34:13.483
2023-05-29T17:19:06.280
null
null
389081
[ "distributions", "bayesian", "posterior" ]
617216
1
null
null
0
16
Can anyone explain why the forecast of my VAR model is bad? Below is the relevant plot: [](https://i.stack.imgur.com/biBuN.png)
Why is the forecast of my VAR Model so bad?
CC BY-SA 4.0
null
2023-05-29T14:35:58.980
2023-05-29T18:27:00.057
2023-05-29T18:27:00.057
53690
389079
[ "forecasting", "vector-autoregression" ]
617217
2
null
617202
6
null
The CLT does not give one permission to assume the mean is normally distributed in any and all circumstances. The mean of a sample, when viewed as a random variable obtained from many different samples from a distribution, has its own distribution. The CLT gives criteria for when that distribution is normal or when it approaches normality. First, if the underlying distribution is normally distributed, then the distribution of the means is normal...regardless of sample size. Second, if the underlying distribution is not normally distributed, then the distribution of the means approaches a normal distribution for large enough sample sizes. Thus, there are instances when the distribution of the means is most definitely not normally distributed.
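A small simulation illustrating the second point, using a skewed (exponential) underlying distribution:

```
set.seed(1)
means_n5   <- replicate(1e4, mean(rexp(5)))    # small n: distribution of means still skewed
means_n200 <- replicate(1e4, mean(rexp(200)))  # large n: close to normal
par(mfrow = c(1, 2))
hist(means_n5, breaks = 50, main = "n = 5")
hist(means_n200, breaks = 50, main = "n = 200")
```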
null
CC BY-SA 4.0
null
2023-05-29T14:41:09.357
2023-05-29T14:41:09.357
null
null
199063
null
617218
1
null
null
1
21
```
model = flexsurvreg(Surv(START, STOP, EVENT) ~ predictor1 + predictor2,
                    data = infection, dist = "gamma")
p = ggflexsurvplot(model, xlab = "Time (Months)", censor = F, conf.int = T,
                   fun = "survival")
```
I am trying to do visual predictive checks such as a KM curve overlaid with the model prediction and prediction interval. I was able to produce the plot with `ggflexsurvplot`, but I am unable to interpret it. 1. Is the KM curve in that plot for the first event only? Is the model fit curve for all events? How can I interpret that plot? What other meaningful plots/tables can I produce to interpret the effect of the predictors on infection event times, such as survival probability curves by predictor? 2. The covariate value is constant within each interval, and it has a full history of values from the start of time. Can I use the above gamma AFT model in flexsurv? The reason for asking is that flexsurv says the likelihood "is not valid in general however for other forms of dependence on covariates, e.g. accelerated failure time models". 3. If not, can I use a gamma PH model in flexsurv for the above counting-process (Andersen-Gill) approach? Note: this concerns repeated time-to-infection events, meaning patients can have multiple events. The covariate is the independent variable (predictor). This predictor is a drug amount which varies daily (time-varying). The dataset has start and end times for the daily predictor value and an event (0/1) variable. This is encoded in the above model. The hypothesis to test is whether drug amount causes infection events. For multiple events for a subject, time was encoded as start = 0, end = 1, then 1-2, 2-3, 3-4; left truncation, right censored (counting-process approach, Andersen-Gill method).
can gamma AFT model be used in flexsurv for time varying covariate?
CC BY-SA 4.0
null
2023-05-29T15:01:26.800
2023-05-29T18:04:47.137
2023-05-29T16:58:56.320
297005
297005
[ "survival" ]
617219
1
null
null
0
7
A common prior for the unknown variance of a normal distribution (with given mean $\mu$) is the scaled inverse $\chi^2$ distribution. It turns out this is a conjugate prior! A simple proof can be found here [https://real-statistics.com/bayesian-statistics/bayesian-statistics-normal-data/conjugate-priors-normal-distribution/normal-conjugate-priors-proofs/](https://real-statistics.com/bayesian-statistics/bayesian-statistics-normal-data/conjugate-priors-normal-distribution/normal-conjugate-priors-proofs/). However, it seems (given the prior is Scaled-Inv-$\chi^2(v_0,s_0^2)$ and we have $n$ samples) that the posterior should be Scaled-Inv-$\chi^2(v_1,s_1^2)$ with $$v_1=v_0+n,~~~s_1^2=\frac{n}{v_1}s^2+\frac{v_0}{v_1}s_0^2,$$ where $$s^2=\frac{1}{n}\sum_{i=1}^n (x_i-\mu)^2.$$ However, this is not what is given in the link I provided. Which one is correct?
Bayesian Inference on a Normal Distribution. Unknown Variance, Known Mean. Scaled Inverse Chi squared as a Conjugate Prior
CC BY-SA 4.0
null
2023-05-29T15:08:05.980
2023-05-29T15:19:53.067
2023-05-29T15:19:53.067
389076
389076
[ "bayesian", "normal-distribution", "inference", "conjugate-prior" ]
617220
1
null
null
3
57
In a study of tweets pre- and post-metoo (set as Nov 2017), we are looking at whether there are gender differences in the use of masculine language in tweets for male and female social media users. We have tweets collected for the period 2008-23 for 400 users. Some of these users were only active on Twitter before metoo and some only after metoo, but a majority of them have tweets in both the pre- and post-metoo periods. We are currently specifying the model as follows: ``` M1 = lmer(masculine_lang ~ gender * post_metoo + tweet_created_year + (1|user_id)) ``` where tweet_created_year is a factor variable. We are not sure if this is a correct specification. We have two major concerns. - One, should we be using tweet_created_year as a factor or as numeric (with year 2008 coded as 0, 2009 as 1, and so on)? - Two, should we include tweet_created_year as a random effect as follows: `M2 = lmer(masculine_lang ~ gender * post_metoo + tweet_created_year + (1 + tweet_created_year|user_id))` Alternatively, should we have the following specification? ``` M3 = lmer(masculine_lang ~ gender * post_metoo + (1|user_id) + (1|user_id:tweet_created_year)) ``` We are confused about which is the right approach, as each one gives a different result for the test of the interaction hypothesis that gender * post_metoo is significant. Also, if we are interested in seeing how the trajectories of masculine language use by men and women users change post-metoo (i.e., in the years 2018-23), how should we go about it? Much appreciated.
How to specify random effects for panel/longitudinal data with level 2 predictor?
CC BY-SA 4.0
null
2023-05-29T15:13:36.733
2023-05-30T10:23:41.750
2023-05-29T15:18:59.780
389082
389082
[ "r", "regression", "mixed-model", "lme4-nlme", "panel-data" ]
617221
2
null
617185
1
null
We want to solve for the MLEs in terms of the known parameters and the data, so you should solve independently for each $\hat\lambda$.
null
CC BY-SA 4.0
null
2023-05-29T15:19:28.887
2023-05-29T15:19:28.887
null
null
319175
null
617222
2
null
617202
8
null
Under certain regularity conditions the CLT does indeed guarantee that a properly normalized sum of random variables converges to a Gaussian limit. But even in classical problems those conditions aren't always met. --- The "law of rare events" gives one example of where a sum of independent random variables converges to a non-Gaussian limit. Suppose we have a random process where at each step $n$, we observe $n$ independent binary random variables $X_{n1}, X_{n2}, \dots, X_{nn}$ where $P(X_{nk} = 1) = p_{nk}$ and $P(X_{nk} = 0) = 1 - p_{nk}$ (so they are Bernoulli with success probability $p_{nk}$). It turns out that if $\sum_{k=1}^n p_{nk} \to \lambda \in (0,\infty)$ as $n\to\infty$ and $\max_{1\leq k \leq n} p_{nk} \to 0$ then $\sum_{k=1}^n X_{nk} \stackrel{\text d}\to \text{Pois}(\lambda)$. This means that if we have a collection of binary random variables where the probability that any one of them is 1 goes to zero, but the collection as a whole maintains a steady expected number of 1s, then the sum will have a Poisson limit, not a Gaussian limit. This is theorem 3.6.1 in Durrett's Probability: Theory and Examples, available [here](https://services.math.duke.edu/%7Ertd/PTE/PTE5_011119.pdf). --- Beyond this, we aren't always interested in means. Suppose we have $X_1, \dots, X_n \stackrel{\text{iid}}\sim \text{Unif}(\theta, \theta+1)$ and we want to estimate $\theta$. It turns out that $X_{(1)} := \min_{1\leq k \leq n} X_k$ is about the best estimator we could consider if we're using squared loss (in a [minimax](https://en.wikipedia.org/wiki/Minimax_estimator) sense). If we normalize $X_{(1)}$ by subtracting $\theta$ and dividing by $1/n$ we get a non-Gaussian limit: $$ \begin{aligned} P\left(\frac{X_{(1)} - \theta}{1/n} \leq t\right) &= 1 - P\left(X_{(1)} - \theta > t/n\right) \\ &= 1 - P\left(X_1 - \theta > t/n\right)^n \\ &= 1 - (1 - t/n)^n \\ &\to 1 - e^{-t} \end{aligned} $$ as $n\to\infty$, hence $n(X_{(1)} - \theta) \stackrel{\text d}\to \text{Exp}(1)$, i.e. we get an Exponential distribution as our limit rather than a Gaussian. --- These are two very classical examples of tidy problems where we don't get a Gaussian limit. If we're doing "real world" modeling then all bets can be off. We may have deep dependence relationships that prevent the CLT from applying, like if our data have temporal or spatial correlations. Or maybe it's a non-stationary process so there isn't one "mean" for things to be Gaussian around. Or we might be predicting/forecasting or studying non-asymptotic problems where there is no sense of "converging" to a limit. The CLT and friends are great but there is a lot of behavior that they fail to describe.
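Both limits are easy to see in a quick simulation (R, purely illustrative):

```
set.seed(1)
n <- 1000
# Law of rare events: a sum of n Bernoulli(lambda/n) variables is close to Poisson(lambda)
lambda <- 3
sums <- replicate(1e4, sum(rbinom(n, size = 1, prob = lambda / n)))
rbind(observed = table(factor(sums, levels = 0:7)) / 1e4,
      poisson  = dpois(0:7, lambda))
# Minimum of Uniform(theta, theta + 1): n * (min - theta) is close to Exp(1)
theta <- 2
mins <- replicate(1e4, n * (min(runif(n, theta, theta + 1)) - theta))
c(mean = mean(mins), var = var(mins))  # both near 1, as for an Exp(1) variable
```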
null
CC BY-SA 4.0
null
2023-05-29T15:34:00.943
2023-05-29T15:34:00.943
null
null
30005
null
617223
1
null
null
0
7
I have data ranging from 2008 to 2016. I have been studying effects on the % satisfaction of creditors in debt relief (Czech Republic). I have used a modified "One vs. All" method, where I forgo the maximisation step of P and evaluate the size of the effects using the underlying models. In this case I use 4 models where the explained variable changes depending on the satisfaction group (i.e., 4 groups of satisfaction). The model I have used is something like this: ``` Y = debt + income + sex + marital status + age + ... (total 13 explanatory variables) ``` I could see inflation playing a big role if I predicted a time series, or maybe just used debt or income to predict % satisfaction (simple regression), but I don't think inflation plays a considerable role in the model when I am predicting in space instead of predicting in time. The data support that claim: I have rerun the whole model using deflated debt and income and the outcome was the same (changes were within a 0.001 range). Am I completely wrong?
Should you deflate prices by index when doing multiple regression?
CC BY-SA 4.0
null
2023-05-29T15:37:34.423
2023-05-29T15:46:27.180
2023-05-29T15:46:27.180
362671
389083
[ "regression", "logistic" ]
617224
1
null
null
0
11
If I have some very heterogeneous and noisy data, for example, about the gut microbiome, are there any methods to get any useful information from them? The simplest thing I can assume is to calculate the relative representation of pathological bacteria over healthy ones.
Methods of feature derivation
CC BY-SA 4.0
null
2023-05-29T15:40:59.623
2023-05-29T15:40:59.623
null
null
385391
[ "feature-selection", "biostatistics", "feature-engineering", "bioinformatics", "noise" ]
617225
1
null
null
0
18
So I'm studying for an exam, and I can't for the life of me figure out how to solve this question. [](https://i.stack.imgur.com/gaiP4.png) Does anyone have any tips on how to do it? There are no aids allowed on the test.
Identify what periodogram most closely matches the sample ACF
CC BY-SA 4.0
null
2023-05-29T15:43:09.960
2023-05-29T15:43:09.960
null
null
389084
[ "time-series", "acf-pacf", "spectral-analysis" ]
617226
1
null
null
0
10
Say that you had data for the point spread of a basketball game: Team A points - Team B points (where team A is the home team and team B the away team; if there is no home and away team, then team A is randomly allocated). You run the regression $spread=\beta_1 home + \beta_2 away + \beta_3 neither$, i.e., regress the point spread on three dummy variables and no intercept. When conducting inference it is obvious that $\beta_3$ should be statistically indistinguishable from zero because of the random allocation to team A and team B (the sign of the point spread). However, I'm wondering if a necessary restriction to test in this model would be $\beta_1 + \beta_2 = 0$, because a home team achieving some point spread means the away team losing out. I feel like I am misinterpreting the model by imposing this restriction, but I am struggling to convince myself why this is. Any help would be much appreciated.
Testing sum of dummy variables
CC BY-SA 4.0
null
2023-05-29T15:44:04.327
2023-05-29T15:44:04.327
null
null
389086
[ "regression", "inference", "causality", "categorical-encoding" ]
617227
2
null
617172
0
null
One classic mistake is assuming that something is a binary, when it's not. Giving only two options, assumed to be mutually exclusive and complete, makes the survey difficult to answer when reality is in-between or outside the binary in some way. When things are multifaceted and non-mutually exclusive, survey questions must reflect that: Allow for choosing multiple among multiple options, including self-specified and no-op.
null
CC BY-SA 4.0
null
2023-05-29T15:55:58.837
2023-05-29T15:55:58.837
null
null
93623
null
617228
1
617245
null
9
2995
I found some data on the number of self-employed people per one thousand inhabitants in Europe [here](https://ec.europa.eu/eurostat/databrowser/view/LFSA_ESGAED__custom_6375615/default/table). However, in the year 2020, for instance, some statistical units have over one thousand self-employed people per one thousand inhabitants. How is this possible? Am I mistaken? I am attempting to find and interpret a correlation between several variables concerning labour, such as self-employment, and the satisfaction level in European countries.
Can statistical units measured per thousand inhabitants be bigger than 1000?
CC BY-SA 4.0
null
2023-05-29T16:26:02.447
2023-05-30T21:41:07.180
2023-05-30T12:06:46.897
362671
389089
[ "distributions", "correlation", "units" ]
617229
1
null
null
3
37
I am running a random forest for classification of data with three classes in R, and each class has around 20 samples. I am partitioning the data into train and test in an 80:20 ratio using the caret package. As the sample size is low, I used a for loop from 1 to 100 to build 100 models to see how accuracy changes with each partitioning. I obtained an accuracy of 1 for all 100 models, and all test data can be classified 100% in two steps. The only change is in the top n features based on the Gini index. I was using all top n features of the 100 models, and using the median of the scores for the common features across them. I wanted to ask for your suggestions, as an accuracy of 1 is an indicator of over-fitting.
# Updating to add an example script to illustrate the RF model I built
```
for (i in 1:100) {
  # partition
  set.seed(i)
  df.training.samples <- df$type %>% createDataPartition(p = 0.8, list = FALSE)
  df.train.data <- data.frame(df[df.training.samples, ], check.names = T)
  df.test.data <- data.frame(df[-df.training.samples, ], check.names = T)
  # build model
  set.seed(i)
  df.tree <- randomForest(type ~ ., data = df.train.data, importance = TRUE,
                          ntree = 1000, maxnodes = 3)
  # add model to list
  model.list[[i]] <- df.tree
  # performance on test data
  df.actual <- df.test.data$type
  df.predicted <- predict(df.tree, df.test.data)
  acc.list[i] <- calc_acc(actual = df.actual, predicted = df.predicted)
}
```
Random Forest with Test Accuracy of 1
CC BY-SA 4.0
null
2023-05-29T16:36:42.297
2023-05-29T20:30:55.863
2023-05-29T20:30:55.863
389088
389088
[ "r", "machine-learning", "random-forest", "overfitting" ]
617230
2
null
617215
0
null
I'll try to answer it because I gave it some thought, but I'm not 100% sure, so I would like someone to correct me if possible. If I understand correctly, you have the following process: there is some underlying true signal $\theta$ and you observe a corrupted signal $t$, where the corruption is $\eta$. So, if we repeatedly receive a signal, the noise is going to be different each time, and we will end up with some data that look like $t_{1}, t_{2},..., t_{n}$ with noises $\eta_{1},\eta_{2},...,\eta_{n}$. In your case you have only one observation $t$. In order to find the posterior of $\theta$ you use Bayes' theorem $p(\theta|t,\eta) = \frac{p(t|\theta,\eta)p(\theta,\eta)}{p(t,\eta)}$ I assume that $\eta\perp t,\theta $ so we can rewrite Bayes' theorem as $p(\theta|t,\eta) = \frac{p(t|\theta,\eta)p(\theta)p(\eta)}{p(t)p(\eta)}\propto p(t|\theta,\eta)p(\theta)$ since we are only interested in terms involving $\theta$. Now, I've seen that the marginal of $t$, $p(t)$, can be calculated with the following convolution, since $t=\theta +\eta$ is the sum of independent random variables: $p_{t}(t) = \int p_{\theta}(\theta)p_{\eta}(t-\theta)d\theta$ Next we want to obtain the conditional distribution $p_{t}(t|\theta,\eta)$; since we have the marginalization through the convolution, we just have to take a step back and get rid of the integral? (I'm not 100% sure about this). So, I guess the conditional will be $p_{t}(t|\theta,\eta) = p_{\theta}(\theta)p_{\eta}(t-\theta) = \mathbb{I}_{\theta\in(0,1)}\mathbb{I}_{(t-\theta)\in(-0.5,0.5)} = \mathbb{I}_{\theta \in (t-0.5,t+0.5)}$ Now getting back to the posterior $p_{\theta}(\theta|t,\eta)\propto p_{t}(t|\theta,\eta)p_{\theta}(\theta) = \mathbb{I}_{\theta \in (t-0.5,t+0.5)}\mathbb{I}_{\theta\in(0,1)}=\mathbb{I}_{\theta \in (t-0.5,t+0.5)\cap(0,1)}$ So, the posterior distribution of $\theta$ is uniform over the interval $(t-0.5,t+0.5)\cap(0,1)$. Now, because I wasn't sure about my approach, I did a numerical example in `R` and it seems that it is correct.
```
eta = runif(10000, -0.5, 0.5)
theta = runif(1, 0, 1)
t = theta + eta
plot(t)
abline(h = theta, col = 'red')
mean(t)
mean(t - 0.5) # this should be approx equal to min(t)
mean(t + 0.5) # this should be approx equal to max(t)
min(t)
max(t)
```
null
CC BY-SA 4.0
null
2023-05-29T16:49:46.677
2023-05-29T16:49:46.677
null
null
208406
null
617231
1
null
null
1
11
I'm running a few simple mediation models with differing IVs, but the same M and DV. IV_1 = race (0 = white, 1 = minority) IV_2 = racial dissimilarity (continuous variable) M = engagement (continuous variable) DV = turnover (binary; 0 = did not turn over, 1 = did turn over) How can I compare the direct and indirect effects of the models? For instance, let's say in model 1: IV_1 > M > DV, the direct effect of race on engagement is .17 (p < .001). In model 2: IV_2 > M > DV, the direct effect of racial dissimilarity on engagement is .48 (p < .001). Should I do a bootstrap t-test similar to when you compare the means of two samples? Or is there another way to compare the direct effects? Examine the overall model fit of each? Similarly, how can I compare the indirect effects of the models? Let's say in model 1: IV_1 > M > DV, the significant indirect effect = -.07. In model 2: IV_2 > M > DV, the significant indirect effect = -.16. Since the DV is binary, is it as simple as comparing the odds ratios? Or do I need to do some sort of bootstrapped hypothesis test on the difference? Or is it comparing overall model fit (though this wouldn't parse apart direct and indirect effects...maybe doing so isn't needed...?)? Part of the paper's hypothesis is that racial dissimilarity is a more impactful way of conceptualizing diversity, as opposed to categorical race, which is why I'm looking to compare the direct and indirect effects of the models.
Comparing direct and indirect effects between mediation models with different IVs
CC BY-SA 4.0
null
2023-05-29T17:02:01.123
2023-05-29T17:02:01.123
null
null
389091
[ "mediation" ]
617232
1
null
null
0
35
I need to compare two distributions $p$ and $q$. I don't have direct access to the distribution $p$; I want to approximate it by a distribution $q$ that I construct iteratively by choosing design points. Which metric should I choose? KL divergence? The JS metric? ...
Which metric to compare two probability density?
CC BY-SA 4.0
null
2023-05-29T17:09:11.620
2023-05-29T18:06:47.277
2023-05-29T18:06:47.277
260660
389092
[ "density-function", "kullback-leibler", "approximation", "metric", "divergence" ]
617233
2
null
617228
8
null
(THIS IS NO LONGER THE CORRECT ANSWER BUT IS KEPT UP AS AN INTERESTING IDEA. [THIS](https://stats.stackexchange.com/a/617245/247274) IS THE CORRECT ANSWER.) I find it plausible that the calculation, while reported as the number of self-employed people per $1000$ people (which cannot exceed $1000$), is actually the number of self-employment jobs per $1000$ people. With $2020$ being a major COVID year with a lot of work-from-home, I find it plausible that people took on self-employment side hustles, perhaps in such a large quantity that there were more such side hustles than people. This could lead to more than $1000$ per $1000$ people without breaking the mathematics.
null
CC BY-SA 4.0
null
2023-05-29T17:11:03.503
2023-05-30T21:35:04.300
2023-05-30T21:35:04.300
247274
247274
null
617234
2
null
617229
0
null
A perfect score is all but screaming at you that overfitting has occurred. In order to check for overfitting, it is common to test our models on data that were not used for model training, which you have done and still achieved that perfect score. Because your performance is too strong to seem reasonable, my suspicion is that you have leaked data from your training data into the test data, allowing the model to cheat and see the test data before it is supposed to. (This could be considered analogous to a student stealing the exam to look at the questions and then scoring high on the test. The high score might sound like mastery of the material, but it comes from cheating, not from knowing what he is doing.)
null
CC BY-SA 4.0
null
2023-05-29T17:17:31.610
2023-05-29T17:17:31.610
null
null
247274
null
617235
2
null
617215
0
null
The prior probability distribution of $\theta$ is $$ f_\theta(u) \, du = \begin{cases} 1\, du & \text{if } 0<u<1, \\ 0 \, du & \text{otherwise.} \end{cases} $$ The likelihood function is $$ L_{\theta\,\mid\,t}(u) = \begin{cases} 2 & \text{if } t-\tfrac12<u<t+\tfrac12, \\ 0 & \text{otherwise.} \end{cases} $$ (Note that I wrote $\text{“}du\text{”}$ for the prior and not for the likelihood. That is worth understanding.) Therefore the posterior probability distribution is \begin{align} & \text{constant} \times L_{\theta\,\mid\,t}(u) f_\theta(u)\, du \\[6pt]= {} & \begin{cases} \text{positive constant}\cdot du & \text{if } \max\{0,t-\tfrac12\} <u < \min\{ 1, t + \tfrac 12 \}, \\ 0\,du & \text{otherwise.} \end{cases} \end{align} That positive constant must be the reciprocal of the length of the interval, so that the integral will be $1.$ Thus the posterior distribution is uniform on the interval $$ \left( \max\{0,t-\tfrac12\}, \, \min\{1,t+\tfrac12\} \right). $$
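A crude simulation check of this answer (conditioning on $t$ only approximately, just to visualize the result; the tolerance and example value of $t$ are arbitrary):

```
set.seed(1)
t_obs <- 0.3
theta <- runif(1e6)                     # prior draws for theta
t_sim <- theta + runif(1e6, -0.5, 0.5)  # simulated signals
keep  <- abs(t_sim - t_obs) < 0.005     # keep draws whose signal is near t_obs
hist(theta[keep], breaks = 40)          # roughly flat on (max(0, t-1/2), min(1, t+1/2))
range(theta[keep])                      # approximately (0, 0.8) for t_obs = 0.3
```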
null
CC BY-SA 4.0
null
2023-05-29T17:19:06.280
2023-05-29T17:19:06.280
null
null
5176
null
617236
1
617319
null
0
44
What is the difference between the following four random effect structures in R?
```
m1 = lmer(dv ~ pred1 + (time|id), data = df, REML = FALSE)
m2 = lmer(dv ~ pred1 + time + (time|id), data = df, REML = FALSE)
m3 = lmer(dv ~ pred1 + (1|id) + (1|time:id), data = df, REML = FALSE)
m4 = lmer(dv ~ pred1 + time + (1|id) + (1|time:id), data = df, REML = FALSE)
```
Is there any reason for using one over the other? Do they offer different advantages?
Difference between four random effects structure
CC BY-SA 4.0
null
2023-05-29T17:55:27.903
2023-05-30T14:32:02.357
null
null
145482
[ "r", "regression", "mixed-model", "lme4-nlme", "panel-data" ]
617237
2
null
617218
0
null
According to section 2 of its [vignette](https://cran.r-project.org/web/packages/flexsurv/vignettes/flexsurv.pdf), the assumptions of the [flexsurv package](https://cran.r-project.org/package=flexsurv) include: > The individual survival times are also independent, so that flexsurv does not currently support shared frailty, clustered or random effects models. That poses two problems for your application. One is that, even without time-varying covariates, you would have to do extra work to deal with the lack of independence among observations on the same individual. That alone could be dealt with fairly easily, for example by modeling on repeated bootstrap samples where the bootstraps were done per individual instead of per observation. The second problem posed by the independence problem is more difficult. Unlike a proportional hazards model, an accelerated failure time (AFT) model requires the entire history of a covariate. This is simply explained in Section 2 of a [vignette](https://cran.r-project.org/web/packages/eha/vignettes/parametric.html) of the [eha package](https://cran.r-project.org/package=eha). The independence assumption made by `flexsurv` is why you find the statement you quote from Section 3.1 of the [flexsurv vignette](https://cran.r-project.org/web/packages/flexsurv/vignettes/flexsurv.pdf) that the necessary assumptions are: > not valid in general however for other forms of dependence on covariates, e.g. accelerated failure time models. The `aftreg()` function of the `eha` package does allow for an `id` argument to keep track of data by individual. That allows reconstruction of the entire covariate history for an individual so that you can fit an AFT model properly. That package does not, however, allow for gamma survival models. Alternatively, if it would make sense to reset `time = 0` at each event time for an individual (which can be done in the context of an Andersen-Gill model), then you could fit a gamma model with `flexsurvreg()` and then deal with the lack of independence via bootstrapping by individual. Think carefully about why you are modeling in the particular way you have chosen so far. You might be able to accomplish what you want in other ways, for example via a proportional hazards model instead. The question about `ggsurvplot()` from the [survminer package](https://cran.r-project.org/package=survminer) is perhaps too software-specific to be on-topic on this site. In general, if you have recurrent events you want a plot of cumulative incidence rather than a usual Kaplan-Meier plot of survival probability over time. I'm not sure how that particular package handled your data.
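For what it's worth, a sketch of the `eha::aftreg()` call pattern described above, using the column names from the question plus a hypothetical subject identifier (`subject_id`):

```
library(eha)
# infection: counting-process data with one row per interval, as in the question;
# subject_id is a hypothetical column identifying the individual.
fit_aft <- aftreg(Surv(START, STOP, EVENT) ~ predictor1 + predictor2,
                  data = infection, id = infection$subject_id,
                  dist = "weibull")  # eha does not offer a gamma distribution
summary(fit_aft)
```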
null
CC BY-SA 4.0
null
2023-05-29T18:04:47.137
2023-05-29T18:04:47.137
null
null
28500
null
617238
1
null
null
1
7
I have three proportions, H1, H2, and H3, and my null hypothesis H0 would be H2-H1=H3-H2. What would the test statistic be here? I tried to use a proportion test for H_0: P2=P1, but that test requires the sample sizes for P2 and P1, and in my case H1, H2, and H3 all have different sample sizes. What test should I go for? Or should I use a statistical test for trend if that works better?
Hypothesis Test on Three proportional data with different sample size
CC BY-SA 4.0
null
2023-05-29T18:39:21.827
2023-05-29T18:39:21.827
null
null
389098
[ "r", "hypothesis-testing", "mathematical-statistics", "statistical-significance", "trend" ]
617239
2
null
14797
0
null
### Why does chi-square testing use the expected count as the variance? You can make the jump from the standardized residuals $$\epsilon_i = \frac{O_i-E_i}{\sqrt{N(E_i/N)(1-E_i/N)}}$$ to the terms as used in the $\chi^2$ expression $$x_i = \frac{O_i-E_i}{\sqrt{E_i}}$$ This is the topic of the question [Obtaining the chi-squared test statistic via geometry](https://stats.stackexchange.com/questions/616847/) and the approach by the answer from Aksakal and the article by Pearson from 1900. --- However, it might be easier to imagine the multinomial distribution as the joint distribution of Poisson distributed variables, $O_i \sim Poisson(Np_i)$, constrained by the total sum, $T = \sum_i O_i \sim Poisson(N)$, being equal to $N$. - The unconstrained joint distribution is $$P(O_1=o_1,\dots,O_n=o_n) = \prod_{i=1}^n \frac{{(Np_i)}^{o_i} e^{-Np_i}}{o_i!}$$ and the constrained distribution is $$\begin{array}{} P(O_1=o_1,\dots,O_n=o_n| T=N)& = &\frac{P(O_1=o_1,\dots,O_n=o_n, T=N)}{P(T=N)} \\ &=& \frac{P(O_1=o_1,\dots,O_n=o_n)}{P(T=N)}\\& = &\frac{\prod_{i=1}^n \frac{{(Np_i)}^{o_i} e^{-Np_i}}{o_i!}}{\frac{{N}^{N} e^{-N}}{N!}} \\&=& \frac{N! }{\prod_{i=1}^n o_i!}\prod_{i=1}^n{p_i}^{o_i}\end{array}$$ - When $N$ is large, we can approximate the Poisson distribution with a multivariate normal distribution with the same constraint. - Following that, we can normalize that multivariate normal distribution by using the divisions with $\sqrt{E_i}$ (which are not standard deviations of the multinomial distribution, but they are the standard deviations of the joint Poisson distribution). - The divisions with $\sqrt{E_i}$ change the constraint $\sum_i O_i = N$ into a different (but still linear) constraint $\sum_i x_i \sqrt{E_i} = \sum_i (O_i - E_i) = 0$. - The standardized multivariate normal distribution is spherically symmetric and the constraint $\sum_i x_i \sqrt{E_i} = 0$ is similar to constraining only a single variable. - The distribution of the sum of squares of the remaining $n-1$ variables is $\chi^2$ distributed with $n-1$ degrees of freedom. --- ### Isn't the variance here clearly not simply the expected value? > As a simple illustration of my confusion, what if we were testing whether two processes are significantly different, one that generates 500 As and 500 Bs with very small variance, and the other that generates 550 As and 450 Bs with very small variance (rarely generating 551 As and 449 Bs)? Isn't the variance here clearly not simply the expected value? The chi-squared test refers to multinomially distributed data or to count data, where each event, a specific count falling into a specific bin, is independent from the others. A process that generates these large numbers with such small variance is likely a different type of process, and the chi-squared test is not applicable. For the Poisson distribution and the Binomial/multinomial distribution, the mean and the variance are related and both can be estimated with a single observed count variable. With other distributions this is not the case and you need several observations from which you can estimate the variance.
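A quick simulation confirming the end result, namely that the statistic built from multinomial counts follows a $\chi^2$ distribution with $n-1$ degrees of freedom (the cell probabilities are arbitrary):

```
set.seed(1)
p <- c(0.2, 0.3, 0.5); N <- 1000
stat <- replicate(1e4, {
  o <- rmultinom(1, N, p)
  sum((o - N * p)^2 / (N * p))
})
c(mean = mean(stat), var = var(stat))  # close to k - 1 = 2 and 2(k - 1) = 4
```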
null
CC BY-SA 4.0
null
2023-05-29T18:39:36.780
2023-05-29T19:33:59.043
2023-05-29T19:33:59.043
164061
164061
null
617240
1
null
null
1
17
Step one in Judea Pearl's causal inference book is to define your graphical causal model. The second step is identification of the estimand for estimation in step 3. Are there any cases where identification may not be possible, i.e., where our dowhy expression cannot be expressed in terms of conditional expectations?
In what cases is identification not possible in causal inference?
CC BY-SA 4.0
null
2023-05-29T18:52:48.033
2023-05-29T21:40:48.810
null
null
250242
[ "estimation", "causality", "identifiability" ]
617241
1
617244
null
2
253
Does MLE give a distribution of the possible parameter values? I don’t think so, but I’m not sure. I think MLE selects the best parameter assuming that we have, for each parameter value, the probability of the current dataset given that value. (But when we don’t actually have all those values, we can still infer the best parameter subject to this criterion from other clues.) These probabilities each belong to a different conditional distribution and don’t even add up to one. Am I correct? Is there any way we can get a distribution of possible parameters like in the Bayesian estimation case?
Does MLE give a distribution of the possible parameter values?
CC BY-SA 4.0
null
2023-05-29T18:56:19.713
2023-05-29T19:07:58.983
null
null
106930
[ "distributions", "maximum-likelihood" ]
617242
1
617257
null
1
27
Let's say in a research study, participants receive training, and I measure their learning gains before and after the training. At the beginning of the study, before the training, I also administered a questionnaire to measure participants' motivation using a Likert-type scale. I want to find out if the training has an effect, considering the role of motivation. Is this simply a two-way repeated-measures ANOVA? What if, instead of pre- and post-tests, I compared two types of training (with different participants in each training) and measured learning gains after each training while also measuring participants' motivation at the very beginning? Would this simply be a two-way independent-measures ANOVA? Normally, if I used gender, a categorical variable, instead of motivation, I would be sure about the design. But I am not sure how to interpret it when a continuous (or ordinal) variable is involved. I appreciate any insight.
Is it still a factorial ANOVA when one independent variable is a continuous or ordinal variable
CC BY-SA 4.0
null
2023-05-29T19:03:43.753
2023-05-29T21:41:35.183
2023-05-29T19:15:24.397
91142
91142
[ "anova", "repeated-measures", "two-way" ]
617243
1
null
null
1
22
I am new to time series analysis. I need to determine whether my series is seasonal or not, and whether it requires differencing for building an ARIMA model, if that is possible. The time series data are monthly. Thank you! [](https://i.stack.imgur.com/cML2A.png)
SARIMA model selection for my data
CC BY-SA 4.0
null
2023-05-29T19:04:21.863
2023-05-30T10:00:51.827
2023-05-30T10:00:51.827
53690
389101
[ "time-series", "arima", "model-selection", "seasonality", "differencing" ]
617244
2
null
617241
8
null
A maximum likelihood estimator (MLE) gives a point estimate of a parameter. In that regard, no, you do not get a distribution. What might be reasonable to regard as a frequentist analogue of a Bayesian posterior distribution is the sampling distribution, but you get a sampling distribution from any frequentist estimation technique, not just MLE. Ways of obtaining sampling distributions range from calculations based on theory to bootstrapping the data and calculating an estimate for each bootstrap sample. I would not want to interpret a sampling distribution as a posterior distribution, however, as parameters in frequentist statistics have fixed (unknown) values instead of distributions like they do in Bayesian statistics.
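A small R sketch of the bootstrap route, using exponential data where the MLE of the rate is 1/mean (the data here are simulated, purely for illustration):

```
set.seed(1)
x <- rexp(50, rate = 2)
rate_hat <- 1 / mean(x)                                   # MLE: a single point estimate
boot <- replicate(2000, 1 / mean(sample(x, replace = TRUE)))
hist(boot, breaks = 40)                                   # bootstrap sampling distribution
quantile(boot, c(0.025, 0.975))                           # a percentile confidence interval
```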
null
CC BY-SA 4.0
null
2023-05-29T19:07:58.983
2023-05-29T19:07:58.983
null
null
247274
null
617245
2
null
617228
40
null
This is not a rate per one thousand people; this is the absolute number of people, with one unit equal to 1,000 people. So if you see something like 3,258.1, it simply means 3,258,100 people. This is not very explicit and not well-documented (to say the least), but you can see it in the "Unit of measure: Thousand persons" part of the table. The meaning of this mention itself is not well-documented on the page, but is explained on [another page of Eurostat](https://ec.europa.eu/eurostat/cache/metadata/EN/employ_esms.htm#unit_measure1678715083351): > Unit of measure Most results measure number of persons (thousands). Some indicators are reported as rates (employment, unemployment rates). Some variables are reported in other units (ages in years, working time in hours, etc.). Here is a screenshot of where to find this mention: [](https://i.stack.imgur.com/EamDV.png) Note that Eurostat has a [multilingual user support team](https://ec.europa.eu/eurostat/contact-us/user-support) that you can contact (even by phone, a rarity) in case you have doubts about interpreting their data.
null
CC BY-SA 4.0
null
2023-05-29T19:47:09.200
2023-05-29T19:57:37.973
2023-05-29T19:57:37.973
164936
164936
null
617247
1
null
null
3
10
Are there any competitions/challenges/datasets fit for testing Pearl's graphical causal inference methods? I do not necessarily mean live competitions. I would expect these setups to be different than any other ordinary dataset found in Kaggle, as we should be expecting the following components to be part of the setup - - A fully/partly specified causal model (DAG) of the system. - Ground truth or target variable values for different interventions to measure the causal model's performance. - Ground truth/target variable values for counterfactuals. This is optional. My aim is to test out majorly the inferential capabilities; challenges around causal discovery I want to avoid for now (however, if someone wants to cite such resources please feel free to). Hence the necessity of the DAG being specified, at least partially. It would be better if I can find examples both for non-time-series and time-series scenarios. This is because time-series datasets might be a bit more difficult to handle than non-time-series ones.
Competitions/datasets fit for exploring Pearl's graphical causal models
CC BY-SA 4.0
null
2023-05-29T19:53:47.973
2023-05-29T19:53:47.973
null
null
331772
[ "references", "causality", "graphical-model", "causal-diagram" ]
617248
1
617306
null
0
22
I have a dataset containing observations of wave lengths in milliseconds and the corresponding durations of noise in milliseconds. Each observation is labeled with a group (A or B) and a subject. I want to determine if the proportions of noise in the waves of group A are greater than or equal to those in group B. To quantify the noise proportion, I have a metric called "p," which is calculated by dividing the duration of the noise by the duration of the wave. Here's an example of my dataset: ``` grupo Subject noise(s) length(s) p A X1 1094 1520 0.719820213 A X2 150 1852 0.081245657 ... ... ... ... ... B X26 113906 136779 0.832774474 B X27 83327 142258 0.585743053 B X28 112903 147737 0.764213143 ``` I would like to know the proper way to perform a proportion test to compare the noise proportions between group A and group B. Should I average out the "p" column for each group and then proceed with the test? If so, how can I perform this test using a statistical software or library? Why Do I ask? I'm facing confusion regarding the appropriate approach for conducting a proportion test in a specific scenario. Typically, when performing a proportion test, we have the counts (n) and the total sample sizes (N) for each group. However, in my case, I have individual observations represented by proportions, and I need to calculate a statistic to assess if the noise proportion differs significantly between group A and group B. To clarify, the noise proportion is calculated by dividing the duration of the noise by the duration of the corresponding wave for each observation. It's important to note that a simple t-test comparing the mean noise durations is not appropriate because the noise duration is dependent on the wave duration. Therefore, I believe a proportion test is more suitable for this analysis. I would greatly appreciate guidance on how to proceed in this situation. Specifically, I'm looking for suggestions on the appropriate statistical methods, and if possible, references to articles, books, or research papers that discuss similar approaches. Thank you in advance for any insights or resources you can provide. here is my full dataset: ``` df <- data.frame( grupo = c("A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B"), Subject = c("X1", "X2", "X3", "X4", "X5", "X6", "X7", "X15", "X16", "X17", "X18", "X19", "X20", "X21", "X29", "X30", "X8", "X9", "X10", "X11", "X12", "X13", "X14", "X22", "X23", "X24", "X25", "X26", "X27", "X28"), noise = c(1094, 150, 8303, 1203, 2133, 1443, 9117, 5177, 4482, 40057, 46129, 90512, 20294, 90888, 76439, 56250, 12095, 8046, 4141, 31651, 58280, 28082, 38608, 46389, 93565, 40294, 97831, 113906, 83327, 112903), length = c(1520, 1852, 12478, 16241, 21720, 27199, 32678, 76510, 81989, 87468, 92947, 98426, 103905, 109384, 153216, 158695, 38157, 43636, 49115, 54594, 60073, 65552, 71031, 114863, 120342, 125821, 131300, 136779, 142258, 147737), p = c(0.719820213, 0.081245657, 0.665404337, 0.074051034, 0.098219923, 0.053041614, 0.278977528, 0.067662175, 0.054660867, 0.457964591, 0.496291317, 0.919593764, 0.195312579, 0.830902015, 0.498893504, 0.354454487, 0.316979866, 0.184394298, 0.084321059, 0.579747946, 0.97014446, 0.428392012, 0.54353371, 0.403864003, 0.777487753, 0.320251289, 0.745093182, 0.832774474, 0.585743053, 0.764213143) ) ```
How to perform a proportion test comparing noise proportions in different groups?
CC BY-SA 4.0
null
2023-05-29T20:11:51.753
2023-05-30T12:41:15.160
null
null
389105
[ "hypothesis-testing", "t-test", "p-value", "chi-squared-test", "proportion" ]
617249
1
null
null
1
11
I want to understand how quantized networks can calculate activations like sigmoid and tanh. I stumbled over [this question](http://www.immobilienscout24.de/expose/140187433) which mentions the implementation of TF-Lite Micro as an example. Specifically, the implementation of the tanh function in [internal/reference/integer_ops/tanh.h lines 62 and 94](https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h). I can see that a LUT based implementation using interpolation is used. However, I don't understand where the parameters `input_multiplier` and `input_left_shift` come from. Besides, I want to understand what exactly the bit shift operations are used for here. Is there some additional information on this available? I couldn't find any more in-depth information regarding quantization and activation functions.
How to use Activation Functions in Quantized Neural Networks?
CC BY-SA 4.0
null
2023-05-29T20:33:09.960
2023-05-29T21:00:50.770
null
null
389106
[ "neural-networks", "tensorflow" ]
617250
1
null
null
0
16
I am using Automatic Differentiation Variational Inference (ADVI) for Bayesian inference in PyMC. I am using the Adam optimizer with learning rate = 0.0005, number of Monte Carlo objective samples = 2, 20,000 iterations and a full batch. Below is the -log(loss) graph. [](https://i.stack.imgur.com/KkkqN.png) It took 24 hours to run the above optimization process. I wonder what the best values are for the learning rate and the other parameters to make the optimization faster and less noisy?
How to decide the learning rate and number of iterations in ADVI with Adam?
CC BY-SA 4.0
null
2023-05-29T20:58:58.667
2023-05-29T20:58:58.667
null
null
140835
[ "machine-learning", "bayesian", "optimization", "gradient-descent" ]
617251
2
null
617249
0
null
There is no accepted generic approach. Essentially, the underlying activation function is re-implemented at a lower precision, i.e., with lower-precision arithmetic operations and operands that represent the activation function. The bit-shift operations probably mimic the internal rescaling performed by the operators. See also [QPyTorch: A Low-Precision Arithmetic Simulation Framework](https://ieeexplore.ieee.org/abstract/document/9463516).
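As a rough, generic illustration of the lookup-table idea (this is not TFLite Micro's actual implementation, and the quantization scales below are made up for the example), a tanh over int8 inputs can be reduced to a single table lookup:

```python
import numpy as np

# Assumed (hypothetical) quantization parameters for this sketch
IN_SCALE = 0.05       # real_input  = IN_SCALE  * int8_value
OUT_SCALE = 1 / 127   # tanh output in [-1, 1] mapped back to int8

# Pre-compute one output entry per possible int8 input value
int8_inputs = np.arange(-128, 128, dtype=np.int32)
lut = np.clip(np.round(np.tanh(int8_inputs * IN_SCALE) / OUT_SCALE), -128, 127).astype(np.int8)

def quantized_tanh(x_int8):
    # Shift the index range from [-128, 127] to [0, 255] and look up the result
    return lut[np.asarray(x_int8, dtype=np.int32) + 128]

print(quantized_tanh([-100, 0, 50]))   # int8 approximations of tanh
```

My understanding (not verified against the TFLite source) is that parameters such as `input_multiplier` and `input_left_shift` are a fixed-point encoding of a floating-point input rescaling factor, so the rescaling can be done with integer multiplies and bit shifts instead of float arithmetic; TFLite's version also interpolates between table entries rather than using a plain per-value lookup.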
null
CC BY-SA 4.0
null
2023-05-29T21:00:50.770
2023-05-29T21:00:50.770
null
null
254337
null
617252
1
null
null
0
27
Let X be uniformly distributed on [1,3] and Z=1/X. How can I find the distribution and the mean of Z? First, I thought Z would be uniformly distributed on [1/3,1], in which case calculating the mean would be straightforward. But after thinking it through, I am sure that is wrong, since the distribution of Z is denser a little above 1/3 than a little below 1.
If X ~ U[1,3], what is the distribution of Z = (1/X) and what is the mean of Z?
CC BY-SA 4.0
null
2023-05-29T21:16:23.737
2023-05-29T21:16:23.737
null
null
140428
[ "distributions", "uniform-distribution", "transform" ]
617253
2
null
617207
1
null
This is an amazing result! Here is how you could derive a similar one for uniform distributions; it could be useful at least for similar questions. We can use the fact that if $X_1, X_2, X_3$ are i.i.d. uniform random variables, then the distribution of $X_{(2)}$ conditioned on $X_{(1)}=a$ and $X_{(3)}=a+b$ is uniform on $[a, a+b]$. Therefore, we can write $X_{(2)}=a +b*T$, where $T$ is uniform on $[0,1]$. Expressing the sample variance in terms of $b$ and $T$, we find after some calculation: $$S_3^2 = \frac{1}{3}b^2(T^2-T+1),$$ and obtain the ratio $$U =\frac{b}{S_3} = \left(\frac{T^2-T+1}{3}\right)^{-\frac{1}{2}} = \sqrt{3}(T^2-T+1)^{-1/2}.$$ Since $T\in[0,1]$, we have $T^2-T+1 \in [3/4, 1]$, thus the ratio $U$ takes only values in $[\sqrt{3}, 2]$. This particular result also holds if the $X_i$ are not uniformly distributed, as $X_{(2)}$ always lies between $X_{(1)}$ and $X_{(3)}$. To get the distribution of $U$, we need to calculate $$ \begin{aligned} F_U(u) = P(U \leq u) &= P\left(\sqrt{3}\left({T^2-T+1}\right)^{-\frac{1}{2}}\leq u\right)\\ &= P\left((T-1/2)^2 \geq \frac{3}{u^2}-\frac{3}{4}\right) \\&= 2 F_T\left(\frac{1}{2}-\sqrt{\frac{3}{u^2}-\frac{3}{4}}\right) \end{aligned} $$ Taking the derivative, we find \begin{align}\label{eq:udens}\tag{$\ast$} f_U(u) &= 2\sqrt{3}\left(\frac{1}{u^2}-\frac{1}{4}\right)^{-1/2}u^{-3}f_T\left(\frac{1}{2}-\sqrt{3}\left(\frac{1}{u^2}-\frac{1}{4}\right)^{1/2}\right) \\&= 2\sqrt{3}\left(1-\frac{u^2}{4}\right)^{-1/2}u^{-2}f_T\left(\frac{1}{2}-\sqrt{3}\left(\frac{1}{u^2}-\frac{1}{4}\right)^{1/2}\right). \end{align} #### Uniform r.v. $X_i$ Using that $T$ is uniform when the $X_i$ are uniform, we obtain, for $u\in[\sqrt{3}, 2]$, that $$ F_U(u)=1-2\sqrt{3}\left(\frac{1}{u^2}-\frac{1}{4}\right)^{1/2} $$ and $$f_U(u) = \frac{2\sqrt{3}}{u^2\sqrt{1-\frac{u^2}{4}}}.$$ #### Normal r.v. $X_i$ For normally distributed variables, the distribution of $T=(X_{(2)}-X_{(1)})/(X_{(3)}-X_{(1)})$ is not uniform. The resulting density of $U$ looks nicer, however, than the one for uniform $X$. There must be an ingenious trick to derive it; if someone knows it, please post. (See Kinfinity's edit: the result has been known since the 1950s and was apparently obtained by hard work solving integrals.) The result for uniform variables is not too different from the result for normals, though. Due to the symmetry and smoothness of the normal density, $X_{(2)}$ is approximately uniform given $X_{(1)}$ and $X_{(3)}$. The paper by Lieblein (1952) gives the distribution of a closely related statistic, $Y_1=\min(Y_{11}, 1-Y_{11})$, where $Y_{11}=(X_{(2)}-X_{(1)})/(X_{(3)}-X_{(1)})$ (called $T$ above), by heroic integration (no Wolfram Alpha at hand at that time). The density of $Y_1$'s distribution is calculated as Eq. (9) in Lieblein, and from that we find $$ f_T(t) = \frac{3\sqrt{3}}{2\pi(1-t+t^2)}. $$ Now we can go on and plug this density into \eqref{eq:udens}.
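A quick Monte Carlo check of the uniform-case density derived above (just a sketch; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(size=(200_000, 3)), axis=1)   # three i.i.d. uniforms per row, ordered
b = x[:, 2] - x[:, 0]                                  # range X_(3) - X_(1)
s = x.std(axis=1, ddof=1)                              # sample standard deviation S_3
u = b / s

print(u.min(), u.max())                                # stays inside [sqrt(3), 2]

# Compare the empirical CDF at a few points with F_U(u) = 1 - 2*sqrt(3)*sqrt(1/u^2 - 1/4)
for point in (1.75, 1.85, 1.95):
    emp = (u <= point).mean()
    theo = 1 - 2 * np.sqrt(3) * np.sqrt(1 / point**2 - 0.25)
    print(point, round(emp, 4), round(theo, 4))
```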
null
CC BY-SA 4.0
null
2023-05-29T21:17:10.557
2023-05-30T09:19:06.130
2023-05-30T09:19:06.130
237561
237561
null
617255
1
null
null
0
13
I am working on a logistic regression problem with 2 classes (let's say 0 and 1) where the positive class (1) is rare. In this case, I have found that the Fractional-Random-Weight (FRW) bootstrap is a good procedure for logistic regression. I was wondering if I could get clarification on: i) the concepts behind the FRW bootstrap for logistic regression; ii) example code for the FRW bootstrap in R.
FRW bootstrap in R
CC BY-SA 4.0
null
2023-05-29T21:40:19.140
2023-05-29T21:40:19.140
null
null
389108
[ "regression", "logistic", "rare-events" ]
617256
2
null
617240
1
null
The causal effect of $X$ on $Y$ is not identifiable in a number of cases. Pearl's Causality: Models, Reasoning, and Inference, 2nd Ed. (2009), on p. 90, has three examples. The simplest possible such example is the graph consisting of the two vertices $X\to Y$ together with a confounding bow between $X$ and $Y$ (represented by a bidirected dashed arc). In such a case, a do-expression will not be reducible to an expression containing only ordinary conditional probabilities that you can evaluate from the (right) data.
null
CC BY-SA 4.0
null
2023-05-29T21:40:48.810
2023-05-29T21:40:48.810
null
null
76484
null
617257
2
null
617242
1
null
The first scenario described could be analyzed as a repeated-measures ANOVA with one between-subjects factor (where pre vs. post is the repeated measure and the motivation classification is the between-subjects factor). The second scenario would be a two-way ANOVA where motivation and treatment are the factors. In the first scenario, the dependent variable is the score (observed at pre and post). In the second scenario, the dependent variable is the gain (from pre to post). However, in these cases, if motivation was measured as an ordinal variable, you would be treating it here as a categorical variable. Thus, if you have a statistically significant effect (or interaction), then you can only deduce that there is some difference for at least one of the levels of the motivation factor. If you wish to model the motivation variable as a true ordinal variable or as a scalar variable (if it was measured as such), then the key is to remember that ANOVA analyses are just multiple regression models. And, in these models, we can include categorical variables using an ordinal dummy coding scheme (using k-1 indicator variables, where k is the number of Likert-type options) or we can include the variable as a single scalar predictor variable, as sketched below. Happy to elaborate more if needed.
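To make the "ANOVA is just regression" point concrete, here is a small sketch (in Python/statsmodels, with toy data standing in for the real study; the same models can of course be fitted in R, SPSS, etc.):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: Likert-type motivation (1-5), a two-level treatment, and a pre-to-post gain score
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "motivation": rng.integers(1, 6, size=120),
    "treatment": rng.choice(["control", "treatment"], size=120),
    "gain": rng.normal(size=120),
})

# Motivation treated as a categorical factor: the classic two-way ANOVA on gain scores
fit_cat = smf.ols("gain ~ C(motivation) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(fit_cat, typ=2))

# Motivation treated as a single scalar predictor in the same regression framework
fit_num = smf.ols("gain ~ motivation * C(treatment)", data=df).fit()
print(fit_num.params)
```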
null
CC BY-SA 4.0
null
2023-05-29T21:41:35.183
2023-05-29T21:41:35.183
null
null
199063
null
617258
1
null
null
0
19
Consider the context of [this answer](https://stats.stackexchange.com/a/306387/280102): It claims that $$E[\hat{MSE}_{out}] = E[\hat{MSE}_{in}] + 2/N\sum_{i=1}^NCov(y_i, \hat{y_i})$$ However, the reference (ESLII Section 7.4; PDF page 248) does not prove this. According to [this post](https://stats.stackexchange.com/questions/228394/what-is-the-difference-between-in-sample-error-and-training-error-and-intuition), $2/N\sum_{i=1}^NCov(y_i, \hat{y_i})$ is actually the difference between in-sample error and the training error. (In the first post, $\hat{MSE}_{out}$ is the out-of-sample estimate of the MSE, which does not appear in section 7.4) Can we prove the equation in the first post if the model is assumed to be consistent?
How to express out of sample MSE in terms of training MSE?
CC BY-SA 4.0
null
2023-05-29T22:00:59.950
2023-05-29T23:41:10.607
2023-05-29T23:41:10.607
280102
280102
[ "machine-learning" ]
617259
1
null
null
0
24
We gathered driving data from two cohorts of drivers belonging to the same age group. The first cohort, Group A, utilized System A (treatment group), whereas Group B drove vehicles without this system equipped. These two groups were based in different locations. The primary objective of our study is to evaluate the impact, if any, of System A on drivers' safety or overall driving outcomes. However, the vehicles operated by Group A were a more luxurious and newer model. These are confounding variables (the vehicle model and its year of production), which could impact driving outcomes. There was no pre-intervention data for Group A, so these confounding variables are nested within the treatment. Group B (control group) drove vehicles without System A equipped. Would propensity scores or instrumental variables be a potential solution for dealing with these nested confounding variables? However, there is no variance in the confounding variables within the treatment group. Could you recommend an approach to this issue? Thank you.
Confounding variables are nested within treatment and cannot be measured; how can I address the influence of these confounding factors?
CC BY-SA 4.0
null
2023-05-29T22:16:53.637
2023-05-29T22:16:53.637
null
null
276935
[ "causality", "propensity-scores", "instrumental-variables", "observational-study", "regression-discontinuity" ]
617260
1
null
null
0
5
I am working with a dataset that consists of three different factory groups, each producing laptops and receiving feedback reports on their products. Each observation includes the total number of laptops made, the number of feedback reports received, and the count of negative feedback. My objective is to analyze the variation in the proportion of negative feedback among these factory groups and describe the variability of these proportions. For instance, here is a snippet of my dataset: ``` Factory Group Subject Laptops Made Feedback_Reports_Received Negative Feedback A X1 1548 10 4 A X2 5624 5 3 A X3 5214 9 5 ... C X30 8362 6 5 ``` To begin with, I am planning to perform a chi-square test for independence for each factory group to determine if the proportion of negative feedback is the same across groups. However, I'm uncertain about the best way to describe the variability of the proportions. One approach I have considered is calculating the coefficient of variation (CV) for the mean of the proportion of negative feedback. For instance, considering the 'Factory Group A' in the dataset above, I calculate the percentage of negative feedback for each observation, resulting in proportions ranging from 0.25 to 1.00. Then, by computing the CV of these proportions, I aim to describe the variability of the proportion of negative feedback within 'Factory Group A.' My question is: Is it appropriate to use the coefficient of variation (CV) to quantify the variability of proportions in this context? If not, are there alternative or better approaches to consider when analyzing and describing the variability of proportions, similar to the coefficient of variation used for continuous variables? Additionally, I would appreciate if you could reference or refer me to some articles or papers on the matter thank you so much guys indeed. Thank you in advance! here is my full dataset: python users import pandas as pd ``` data = { 'Factory Group': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C'], 'Subject': ['X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X7', 'X8', 'X9', 'X10', 'X11', 'X12', 'X13', 'X14', 'X15', 'X16', 'X17', 'X18', 'X19', 'X20', 'X21', 'X22', 'X23', 'X24', 'X25', 'X26', 'X27', 'X28', 'X29', 'X30'], 'Laptops Made': [1548, 5624, 5214, 521, 5245, 214, 2214, 521, 150, 521, 189, 619, 1050, 1480, 1910, 2340, 2770, 3200, 3631, 4061, 4491, 4921, 5351, 5781, 6212, 6642, 7072, 7502, 7932, 8362], 'Feedback_Reports_Received': [10, 5, 9, 8, 7, 6, 12, 20, 19, 3, 14, 14, 15, 16, 16, 8, 6, 4, 5, 5, 9, 10, 10, 10, 10, 5, 3, 7, 21, 6], 'Negative Feedback': [4, 3, 5, 8, 7, 3, 3, 19, 6, 2, 9, 1, 13, 5, 5, 2, 2, 2, 2, 5, 6, 1, 9, 7, 2, 2, 1, 7, 10, 5] } df = pd.DataFrame(data) print(df) ``` R users ``` df <- data.frame( 'Factory Group' = c('A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C'), 'Subject' = c('X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X7', 'X8', 'X9', 'X10', 'X11', 'X12', 'X13', 'X14', 'X15', 'X16', 'X17', 'X18', 'X19', 'X20', 'X21', ```
Assessing Variation in Proportions and Describing Proportion Variability for Categorical Data Analysis
CC BY-SA 4.0
null
2023-05-29T23:31:55.250
2023-05-29T23:33:40.460
2023-05-29T23:33:40.460
389105
389105
[ "hypothesis-testing", "variance", "inference", "proportion", "variability" ]
617261
2
null
450377
1
null
Did you ever figure out the cause of the endless run for the compois family with glmmtmb? I am having the same issue and have found no resources on a solution.
null
CC BY-SA 4.0
null
2023-05-30T00:41:49.107
2023-05-30T00:41:49.107
null
null
389115
null
617262
2
null
617202
1
null
First, the CLT doesn't guarantee that the mean of samples will be normally distributed. But more importantly, what you seem to be getting at is that the CLT is sufficient for the expected value of your estimator to be equal to the population parameter of the mean. This is known as being "unbiased". Taking the population mean (assuming it's known) does result in an unbiased estimator, but in a trivial and not very useful way. Just having an estimator that's unbiased is an exceedingly unimpressive accomplishment. The mean is the absolute floor of machine learning performance. It's a benchmark that every other method has to improve upon, otherwise the method is pointless. For instance, suppose you're trying to predict what a student's college grades will be based on their high school grades and SAT score. The simplest machine learning model would be to simply take the mean of all the students at the college, and give that as output. That would be an unbiased estimate, but in a pointless way. The goal of machine learning is to see how much better you can do than just taking the mean of all the X overall. It's about getting as educated of a guess about each individual in a sample, not predicting what the average over the whole sample will be. It's about identifying features of particular Xs that are as informative as possible, and getting the mean of the Xs that have those particular features, rather than of all the Xs. For instance, if you have a student with a high school GPA of 3.4 and an SAT of 1400, you want to know what the mean college GPA of all students with high school GPA of 3.4 and an SAT of 1400 is, not what the mean college GPA over all students is.
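A small illustration of the "mean is the floor" point with scikit-learn (toy data, purely illustrative): the dummy model always predicts the overall training mean, and any model worth using has to beat it on held-out data.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                       # say, standardized high-school GPA and SAT
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)   # ignores the features entirely
model = LinearRegression().fit(X_tr, y_tr)                   # uses the features

print("mean-only MSE:", mean_squared_error(y_te, baseline.predict(X_te)))
print("regression MSE:", mean_squared_error(y_te, model.predict(X_te)))
```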
null
CC BY-SA 4.0
null
2023-05-30T00:54:17.960
2023-05-30T00:54:17.960
null
null
179204
null
617263
1
null
null
1
18
I have a set of intensities from unordered independent events (with no date or timestamps), many of which constitute extremes, and I want to generate an extreme value distribution. The only information I have other than the intensities is the average number of events that occur per year. My approach is to generate a set of $N$ GEV distributions using max pool statistics on resampled data as a form of bootstrapping / Monte Carlo simulation. For example, I randomize the events, then generate blocks of $L$ years (using the average events per year) from which I can calculate block maximums and fit a GEV. The resampling and GEV fitting is performed $N$ times so that confidence intervals can be generated for extreme quantiles. One challenge when scaling this approach (I have this sort of data over many locations and analysis is high throughput) is that a larger block size $L$ produces better results in certain areas. For example, sometimes the GEV intensities explode at low return frequencies (an unlikely/impossible scenario). To address this, I perform the bootstrapping described (of $N$ simulations) above at each location for hyperparameter $L = 10, 20, 30, 40, 50, 60, 70, 80, 90, 100$. I have done some investigation and found the Cramer von Mises test to be sufficient for determining if $L$ is suitable for a single GEV produced from a single resampled dataset. In other words, when I fail to reject $H_0$ of the Cramer von Mises test that "the sample data comes from the fitted GEV distribution," I find that the GEV quantiles are not spurious. Now I want to scale the test for all $N$ GEVs/resampled datasets. How can I use $N$ Cramer von Mises tests (for an overall type-I error rate $\alpha=0.05$) to determine if a block size $L$ is suitable? Is it appropriate to use a multiple test correction (like Bonferroni by using $\frac{\alpha}{N}$)? I am not sure if this setup constitutes familywise comparison. Is it appropriate to use the average p-value from $N$ tests, or construct a 95% CI for the p-value? Maybe I can determine if the block size is suitable by checking if less than ${\alpha}*N$ tests reject the $H_0$?
Statistical assessment of block size for bootstrapped distribution fitting
CC BY-SA 4.0
null
2023-05-30T01:33:56.037
2023-05-30T14:49:36.043
2023-05-30T14:49:36.043
305921
305921
[ "bootstrap", "multiple-comparisons", "hyperparameter", "extreme-value" ]
617264
1
null
null
1
11
In a multiple linear regression model, can I use an index (a composite food security index) as the dependent variable and also an index as one of the independent variables — for example, a livelihood vulnerability index for each household, along with other determinants? Can I get an answer with a proper reference?
Can we see the relationship between two index variables?
CC BY-SA 4.0
null
2023-05-30T01:38:29.393
2023-05-30T01:38:29.393
null
null
389117
[ "regression", "econometrics", "economics" ]
617265
2
null
598401
1
null
Bambi author here. Bambi uses formulae under the hood to construct design matrices and the `bs()` transformation is implemented in there (here [https://github.com/bambinos/formulae/blob/5c28351e5c429e367008a43a1ad7042509e6c5e6/formulae/transforms.py#L228-L361](https://github.com/bambinos/formulae/blob/5c28351e5c429e367008a43a1ad7042509e6c5e6/formulae/transforms.py#L228-L361)). The documentation of the transformation says it requires only the interior knots. The upper and lower bounds of the data are always used as the outermost knots.
null
CC BY-SA 4.0
null
2023-05-30T01:59:22.047
2023-05-30T01:59:22.047
null
null
389120
null
617266
1
null
null
0
25
I would like to obtain the p-value of an interaction term in a logistic regression model. I have tried two methods to accomplish this. First, I fitted a logistic model that includes the interaction term and obtained the p-value returned by the model (SELECT_GroupGene12:CDKiY, p-value = 0.994). Here's the code and summary of the model: ``` fit1 <- glm(CDKi_Response3 ~ SELECT_Group * CDKi + Patient_Dx_Age + Histology, tmp_Clin, family = 'binomial') summary(fit1) ``` The output is as follows: ``` Call: glm(formula = CDKi_Response3 ~ SELECT_Group * CDKi + Patient_Dx_Age + Histology, family = "binomial", data = tmp_Clin) Deviance Residuals: Min 1Q Median 3Q Max -1.1448 -0.9129 -0.7868 1.3027 1.6270 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.41881 1.80366 0.232 0.816 SELECT_GroupGene12 18.67230 3761.85360 0.005 0.996 CDKiY -0.46924 0.61670 -0.761 0.447 Patient_Dx_Age -0.01417 0.03301 -0.429 0.668 HistologyILC -18.03547 6522.63863 -0.003 0.998 HistologyOther -18.19134 6522.63862 -0.003 0.998 Histologyuk 19.05416 6522.63863 0.003 0.998 SELECT_GroupGene12:CDKiY -36.34491 4978.39200 -0.007 0.994 (Dispersion parameter for binomial family taken to be 1) Null deviance: 75.025 on 56 degrees of freedom Residual deviance: 60.682 on 49 degrees of freedom AIC: 76.682 Number of Fisher Scoring iterations: 17 ``` Another method I tried involves fitting two logistic regression models, one with the interaction term and one without. I obtained the p-value (0.01513) using the anova function: ``` fit1 <- glm(CDKi_Response3 ~ SELECT_Group * CDKi + Patient_Dx_Age + Histology, tmp_Clin, family = 'binomial') fit2 <- glm(CDKi_Response3 ~ SELECT_Group + CDKi + Patient_Dx_Age + Histology, tmp_Clin, family = 'binomial') anova(fit1, fit2, test = 'Chisq') ``` The output is as follows ``` Analysis of Deviance Table Model 1: CDKi_Response3 ~ SELECT_Group * CDKi + Patient_Dx_Age + Histology Model 2: CDKi_Response3 ~ SELECT_Group + CDKi + Patient_Dx_Age + Histology Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 49 60.682 2 50 66.564 -1 -5.8811 0.0153 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` Here is the distribution of the outcome variable CDKi_Response3 stratified by SELECT_Group and CDKi. Upon reviewing this distribution, it appears that there is a significant interaction between SELECT_Group and CDKi. ``` table(tmp_Clin$CDKi_Response3, tmp_Clin$SELECT_Group, tmp_Clin$CDKi) ``` The table output is as follows: ``` CDKi = N Gene1 Gene12 0 16 0 1 11 3 CDKi = Y Gene1 Gene12 0 16 4 1 7 0 ``` I'm wondering why different pvalues were obtained by these two methods and which method should be used. Thank you.
Different p-values of interaction terms in a logistic regression model obtained by two methods
CC BY-SA 4.0
null
2023-05-30T02:20:51.433
2023-05-30T15:08:34.057
null
null
339496
[ "logistic", "mathematical-statistics", "interaction" ]
617267
1
null
null
0
16
Let $\mathbf{u}_t = (u_{1t},u_{2t},\ldots,u_{pt})^{\prime}$ be a $p \times 1$ (stationary) random vector, and let $\mathbf{\Sigma}_{u}$ be the covariance matrix of $\mathbf{u}_t$. Further, denote the precision matrix $\boldsymbol{\Omega}_u=\boldsymbol{\Sigma}_u^{-1}$. Then there is an interesting result linking the precision matrix $\boldsymbol{\Omega}_u$ and conditional uncorrelatedness, which may be stated as in the following paragraph: ``the sparsity of $\boldsymbol{\Omega}_u=\left(\omega_{u, i j}\right)_{p \times p}$ encodes the conditional uncorrelatedness (or conditional independence in the case of Gaussian distribution) relationships between all variables in the $p$ dimensional vector $\mathbf{u}_t$. More specifically, for $p$ nodes $U_1, \ldots, U_p$, each corresponding to one element of $\mathbf{u}_t, U_i$ and $U_j$ are connected if and only if $\omega_{u, i j} \neq 0$, meaning that $u_{i t}$ and $u_{j t}$ are uncorrelated conditioning on all the other dimensions $\left\{u_{k t}\right\}_{k \neq i, j}$.'' My question is: is there a proof for this interesting result? I have verified it only for $p=3$, but how about the more general case?
Precision matrix and conditional uncorrelatedness
CC BY-SA 4.0
null
2023-05-30T02:30:18.460
2023-05-30T02:30:18.460
null
null
31982
[ "covariance-matrix", "partial-correlation", "precision-matrix" ]
617268
1
null
null
0
22
I have some vector $x$, with mean $\mu_x$ and standard deviation $\sigma_x$, coming from an unknown distribution. I’d like to perform the linear transformation $y = Wx + b$, where $W$ is a matrix and $b$ a vector. I know everything about $W$ and $b$, however I’m only given $\mu_x$ and $\sigma_x$ (i.e., I don't have $x$'s individual values). Is it possible to obtain an expression for the output mean and standard deviation, $\mu_y$ and $\sigma_y$, using only $\mu_x, \sigma_x$ and any statistics from $W$ and $b$? If not, is there any other information can I include about $x$ to make this possible?
Finding mean/std of matrix-vector multiplication
CC BY-SA 4.0
null
2023-05-30T03:08:37.010
2023-05-30T03:08:37.010
null
null
389125
[ "machine-learning", "probability", "distributions", "mathematical-statistics", "random-variable" ]
617270
1
null
null
1
38
In Bishop's PRML, section 1.5.2, the author introduces a loss function for classification, the expected loss, $$ E[L]=\sum_k \sum_j \int_{R_j} L_{kj}\,p(\textbf{x},C_k)\,d\textbf{x} $$ where $L_{kj}$ is the loss matrix and $p(\textbf{x},C_k)$ is the joint probability distribution of the input $\textbf{x}$ and the assigned class $C_k$. From that, the author derives that the expected loss for choosing a region $R_j$ becomes $$ \sum_k L_{kj}\,p(C_k|\textbf{x}) $$ How is this derived?
Minimizing the expected loss (PRML)
CC BY-SA 4.0
null
2023-05-30T03:46:26.673
2023-06-01T20:52:27.123
null
null
358344
[ "loss-functions", "pattern-recognition", "decision-theory", "artificial-intelligence" ]
617271
1
null
null
1
16
I've seen something like this in many "prescriptive analytics" software packages (one example [here](https://datastories.com/learn/case-potato-baking/)): [](https://i.stack.imgur.com/KvZcK.png) Basically a supervised ML model has been trained to predict a KPI (target variable). After training, one can explore (with a slider) how adjusting values of each individual feature may influence the output. There is also a related "optimiser" page, where input constraints can be set to maximize/minimise some KPI of interest: [](https://i.stack.imgur.com/y6yJ1.png) I would like to know what is happening under the hood, but obviously this software is not open-source (I do not need to know exactly what approaches are used in this particular example, but how one would develop something similar with open-source alternatives). Specifically: - how can one explore how adjusting the inputs would influence the outputs (a link to Python code / a library would help)? I am very familiar with scikit-learn, but have not seen anything like that there. - a recommended open-source optimiser that could perform such optimisation given a set of constraints (I doubt that state-of-the-art proprietary optimisers like Gurobi are being used). See the documentation for the software I mentioned, although it is not very transparent.
Exploring how adjusting the input variables influence the target variable?
CC BY-SA 4.0
null
2023-05-30T04:52:00.170
2023-05-30T06:05:28.410
2023-05-30T05:03:36.043
134691
134691
[ "machine-learning", "predictive-models", "optimization", "interpretation" ]
617274
1
null
null
3
33
We all know that for the OLS model, if you center both $X$ and $Y$, the estimated intercept will be 0. I was curious whether we can do a similar thing for Quantile Regression. Would it be possible to subtract the corresponding quantile from $X$ and $Y$, so that the estimated intercept would be 0 as well? Honestly, I suspect this procedure is wrong, but I can't think of other ways to exclude the intercept in Quantile Regression... Does anyone have thoughts about this question?
Intercept in Quantile Regression
CC BY-SA 4.0
null
2023-05-30T06:03:49.217
2023-05-30T06:03:49.217
null
null
372999
[ "regression", "quantile-regression", "intercept" ]
617275
2
null
617271
0
null
The slider settings give feature values. Send those feature values through the model the same as you would do for any other combination of feature values for which you want to know the predicted outcome. As far as a code pipeline, the sliders come from some interactive visualization. I have explored interactive plots in `matplotlib` and `bokeh` (have some posts on my Stack Overflow profile, I think), and other libraries with various capabilities must exist. Once the interactive tool gets values from the sliders, those form the input fields of the prediction method, such as `model.predict` in `sklearn`. [This](https://scikit-learn.org/stable/tutorial/statistical_inference/supervised_learning.html) documentation page discusses the `predict` method for `sklearn` in particular, but any software package that does predictive modeling must have something similar.
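Concretely, the slider logic reduces to: hold all features at some reference values, vary one feature over a grid supplied by the widget, and call the fitted model's `predict` on each candidate row. A minimal sketch with scikit-learn (toy model and data; the interactive library only supplies the grid value):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy model standing in for the trained KPI model
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=300)
model = RandomForestRegressor(random_state=0).fit(X, y)

def what_if(model, reference_row, feature_index, grid):
    """Predicted KPI as one feature is varied while the others are held at reference values."""
    rows = np.tile(reference_row, (len(grid), 1))
    rows[:, feature_index] = grid
    return model.predict(rows)

reference = X.mean(axis=0)                              # e.g., the current operating point
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 25)    # the values a slider would sweep over
print(what_if(model, reference, 0, grid))
```

The optimiser page is conceptually the same, except that instead of a human moving sliders, an optimisation routine (e.g., `scipy.optimize.minimize` or a grid/evolutionary search) proposes the feature values subject to the user's constraints, and the model's predictions serve as the objective function.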
null
CC BY-SA 4.0
null
2023-05-30T06:05:28.410
2023-05-30T06:05:28.410
null
null
247274
null
617276
1
617282
null
5
164
When using a Poisson GLM, its dispersion parameter can be estimated as "residual deviance/degrees of freedom". But when analyzing Poisson data with a Poisson model (i.e., not overdispersed), I found that this estimated dispersion is not around 1 based on simulation. The deviation from 1 is strongly influenced by the sample size and the mean of the Poisson distribution. Because I was expecting it to be distributed around 1, I am confused. ``` nsim <- 1000 disp <- rep(NA,nsim) n <- 1000 for(i in 1:nsim){ disp[i] <- glm(rpois(n,1)~1,family=poisson)$deviance/(n-1) } hist(disp) ``` [](https://i.stack.imgur.com/aJcVt.jpg)
dispersion parameter in Poisson models
CC BY-SA 4.0
null
2023-05-30T07:05:07.980
2023-05-30T08:33:36.187
null
null
44163
[ "generalized-linear-model", "overdispersion", "dispersion" ]
617277
2
null
616882
1
null
After following the advice of the above comment, I came to the conclusion that the envisaged model can be fitted by replacing the NAs with zero. It seems that the coefficients of the simulated model can be accurately estimated by the following model: ``` lm(y ~ thesis + satisfaction_without_NA, data = dat) ``` Generating the following output: ``` Call: lm(formula = y ~ thesis + satisfaction_without_NA, data = dat) Coefficients: (Intercept) thesisYes satisfaction_without_NA 8.310 7.018 1.053 ``` Thanks a lot for the input and best greetings :-)
null
CC BY-SA 4.0
null
2023-05-30T07:11:12.637
2023-05-30T07:11:12.637
null
null
388806
null
617278
1
null
null
0
32
We have a manufacturing process $M$ with an unknown reliability $R \in (0,1)$, and, at the end of it, an automatic tool that sorts the good and bad products with a known efficiency of $E \in (0,1)$. Here is what I mean by 'reliability': - Let us denote by $X$ the random variable generated by a manufacturing process that represents the quality of a product and that can take on values in the set $\{ 'good','bad'\}$. Then, if the latter process has a reliability of $R$, we have $\mathbb{P}(X = 'good') = R$. We can show that the final reliability $R_f$ of $M$ is given by: $$ R_f = \frac{R}{R + (1-E)(1-R)} \ \ \ \ \ (1). $$ This is equivalent to $$ \mathbb{P}(X = 'good'|R) = R_f = \frac{R}{R + (1-E)(1-R)} $$ The question is how to estimate $R_f$, since $R$ is unknown. My first step was to suppose that $R \geq 0.5$, since it is a necessary condition for $R_f \geq E$ and since we can strongly hope that $M$ performs better than $0.5$. My second step was to set a uniform prior $p_R$ for $R$ on the interval $(0.5,1)$. Then, this leads to: $$ P(X = 'good') = \int_{0.5}^1 \mathbb{P}(X = 'good'|R) p_R(r) dr $$ $$ = \int_{0.5}^1 \frac{r}{r + (1-E)(1-r)} \cdot 2 dr $$ $$ = \frac{2}{E} [r - K\ln(K+r)]\big|_{0.5}^1 $$ $$ = \frac{2}{E} \cdot \left(0.5 + K\ln\left( \frac{K + 0.5}{K + 1} \right) \right) \ \ \ (2) $$ where $K = \frac{1-E}{E}$. Can one of you challenge this result a bit? Is it a good idea to use the uniform prior as a noninformative prior, or would other priors be more appropriate? Furthermore, something seems strange. Why does expression (2) go to infinity when $E$ goes to zero? It should never be larger than one.
Bayesian model for the reliability of a manufacturing process with noninformative prior
CC BY-SA 4.0
null
2023-05-30T08:01:49.960
2023-05-30T15:00:38.950
2023-05-30T15:00:38.950
383929
383929
[ "probability", "bayesian", "conditional-probability", "reliability" ]
617279
1
null
null
0
10
We're analyzing patient MRI sections of the brain for tumour volume. We do this manually, frame by frame, drawing the borders of the tumour. To ensure more accurate results, we have used two different raters with similar experience. There are differences between the two raters' results for different patients. What would be the best method to merge these two sets of data?
How can I merge data from two different raters for the same dataset?
CC BY-SA 4.0
null
2023-05-30T08:10:01.627
2023-05-30T08:10:01.627
null
null
389136
[ "neuroscience", "neuroimaging" ]
617280
2
null
617220
2
null
ORIGINAL ANSWER Without knowing much about this (interesting!) subject, I'm not sure you need the year in the model at all. Perhaps you need it for some other reason, but not to investigate gender differences in the use of masculine language or in gender differences in the pre-post metoo change in the use of masculine language. So, I think the following model would work (I assume "post_metoo" is a two-level categorical variable indicating whether the tweet was posted prior to or after metoo): ``` mod<-lmer(masculine_lang ~ (1|user_id)+gender*post_metoo) ``` When it comes to trajectories, I'd suggest using latent (growth) curve models with gender as a time-invariant covariate. See for instance [this](https://www.publichealth.columbia.edu/research/population-health-methods/latent-growth-curve-analysis) and [this](https://lavaan.ugent.be/tutorial/growth.html) resource. EDITED IN: I see you do need the year. I suppose you can enter it as a categorical predictor with each year as level. Or if you believe year can be considered as a continuous predictor, you can enter it as numeric too.
null
CC BY-SA 4.0
null
2023-05-30T08:10:18.430
2023-05-30T10:23:41.750
2023-05-30T10:23:41.750
357710
357710
null
617281
1
null
null
0
43
So far I have been using this process: 1) split the data into training and test sets; 2) do model selection (p, d, q, P, D, Q, etc.) using the training data (in this case, I used auto_arima) ``` from pmdarima import auto_arima arima_model=auto_arima(x_train,start_p=1, start_q=1, d=1, test='adf', max_p=10, max_q=10,trace=True,seasonal=True) ``` - Doing the CV: I use the whole data set, with the hyperparameters found in step 2. The starting window is the training set from step 1: I train the model, forecast h steps, calculate metrics, then slide/expand the window (depending on rolling vs sliding) and forecast/evaluate again, repeating until the end. ``` import numpy as np from sklearn.metrics import mean_squared_error, mean_absolute_error from pmdarima.model_selection import RollingForecastCV from statsmodels.tsa.arima.model import ARIMA cv2 = RollingForecastCV(step=1, h=5, initial=window_size) cv_generator2 = cv2.split(d1) rmse2=[] mae2=[] for i in range(0,iterations): a=next(cv_generator2) model = ARIMA(d1.iloc[a[0]], order=(3,1,3)) model_fit = model.fit() yt_forecasted = model_fit.forecast(steps=5) rmse2.append(np.sqrt(mean_squared_error(d1.iloc[a[1]].to_numpy().flatten(), yt_forecasted))) mae2.append(mean_absolute_error(d1.iloc[a[1]].to_numpy().flatten(), yt_forecasted)) ``` - average the metric somehow (in this case I'm just taking a simple average). However, I have been discussing with Bing Chat and it claims that the model selection with auto_arima should be done in each iteration of the loop: > "The auto_arima function is used within the for loop in the example code to select the best ARIMA model for each training set. This allows the model to adapt to changes in the data over time. If the auto_arima function was used before the for loop, it would only select a single ARIMA model based on the first training set. This model may not be the best fit for subsequent training sets. Using the SARIMAX function within the for loop allows us to refit the selected ARIMA model to each training set. This ensures that the model is updated with the most recent data and can make accurate predictions for each test set." Is this a valid method, and if so, is it better than the steps I've been using? In fact, are the steps I'm using valid?
Exact steps for rolling window CV evaluation or sliding window CV evaluation for SARIMA
CC BY-SA 4.0
null
2023-05-30T08:10:46.413
2023-06-01T02:05:52.123
2023-06-01T02:05:52.123
301533
301533
[ "python", "forecasting", "cross-validation", "arima", "moving-window" ]
617282
2
null
617276
7
null
That's correct! You've found out why `glm` doesn't use deviance/df as an estimate of dispersion: it's not a very good one. It uses the better estimate based on the variance of the Pearson residuals (though for `family=poisson` it doesn't need to estimate). The estimate is bad because the deviance residuals aren't actually that close to $N(0,1)$ under the model -- they can't be, because $Poisson(1)$ is discrete, and $N(0,1)$ isn't. However, the Pearson residuals do have variance very close to 1 (exactly 1 if you didn't need to estimate the mean), and so give a better estimate of the dispersion. ``` > r<-replicate(1000,glm(rpois(n,1)~1,family=poisson)$deviance/(n-1)) > mean(r) [1] 1.146889 > s<-replicate(1000,summary(glm(rpois(n,1)~1,family=quasipoisson))$dispersion) > mean(s) [1] 1.000165 ``` For the Poisson family, the bias looks like this with varying mean. The blue is the deviance-based estimate; the orange is the estimate based on Pearson residuals [](https://i.stack.imgur.com/h6oQr.png) This is a related phenomenon to $\chi^2$ tests in contingency tables not being very accurate with small cell counts -- again, the $\chi^2$ approximation is based on Poisson distributions being approximately Normal, and when the mean is 1 they aren't. Note, however, that a bit of perspective is useful here. The dispersion estimate based on the deviance is biased when the mean is 1, but it's not all that biased. If you want to know the dispersion to within 10%, you need large sample sizes and you need to know the distribution is accurately Poisson. The bias in the estimator probably isn't your biggest problem.
null
CC BY-SA 4.0
null
2023-05-30T08:24:14.603
2023-05-30T08:33:36.187
2023-05-30T08:33:36.187
249135
249135
null
617283
1
null
null
0
10
I am using a PROCESS macro model and the indirect effect (path a*b) is significant. However, the relationship between the predictor (X) and mediator (M) variables is non-significant. How should that be interpreted? Can it still be claimed that M mediates the relationship between X and Y when there is no relationship between X and M?
Mediation Analysis through Process Macro Model
CC BY-SA 4.0
null
2023-05-30T09:00:05.953
2023-05-30T10:13:55.560
null
null
389143
[ "mediation" ]
617284
1
null
null
3
125
I want to calculate similarity between various samples, but I'm limited to only knowing the IQR and medians. I came across [a similar problem here](https://stats.stackexchange.com/questions/564532/how-to-test-statistical-difference-given-only-median-and-iqr) but the top answer states that the newly proposed statistic follows the same distribution, but doesn't explain why it does: $$t = \frac{\text{Mean}_1 - \text{Mean}_2}{\sqrt{\dfrac{s_1^2}{N_1}+\dfrac{s_2^2}{N_2}}}$$ $$u = \frac{\text{Median}_1 - \text{Median}_2}{\sqrt{\pi/2}\,\sqrt{J_1+J_2}}$$ where $$J_1=\frac{IQR_1^2}{1.82 N_1}, \ \ J_2 = \frac{IQR_2^2}{1.82 N_2}$$ I understand that essentially, we're comparing average values against variability. Specifically, I don't understand the following parts which should make these distributions similar: - Why do we divide the difference in medians by the square root of $\pi/2$ ? - Why do we divide the IQR by 1.82?
How to transform t-statistic into a statistic using median and IQR?
CC BY-SA 4.0
null
2023-05-30T09:03:05.393
2023-05-30T15:51:51.907
null
null
389140
[ "hypothesis-testing", "t-test", "median" ]
617285
1
null
null
0
8
I'm trying to understand a few details about the NT-Xent loss defined in the paper "[A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709)" by Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. The loss is defined as $$\mathcal{l}_{i,j} = -\log\frac{\exp(sim(z_i,z_j)/\tau)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]} \exp(sim(z_i,z_k)/\tau)}$$ where $z_i$ and $z_j$ represent two augmentations of the same image. What I don't understand is: in the denominator, I see that we want to exclude the point $z_i$ using the indicator function, but shouldn't we also exclude $z_j$? Otherwise we will have $k=j$ for some $k$. Essentially, why do we keep the positive sample in the denominator?
Definition of negatives in NT-Xent loss
CC BY-SA 4.0
null
2023-05-30T09:12:31.257
2023-05-30T15:17:07.157
2023-05-30T15:17:07.157
22311
254488
[ "self-supervised-learning" ]
617287
1
null
null
0
12
I have a (Bayesian) logistic mixed-effects model and a linear one, both fitted with brms. Both are fitted on data from human subjects with repeated measurements; therefore, I have included a subject identifier as a random effect. Now, what I would like to quantify is how much the subject factor matters in each model, and to compare the logistic and linear models. For instance, I would like to answer the question of whether the subject mattered more in the logistic than in the linear model, i.e., whether the logistic model is more individual-dependent/subjective than the linear one. Can I infer this information with ICC (intra-class correlation) coefficients, and how cautious do I have to be about the model type (logistic vs linear)? I have calculated ICCs with the icc function from the performance R package ([https://rdrr.io/cran/performance/man/icc.html](https://rdrr.io/cran/performance/man/icc.html)). However, it seems like the equations are different for logistic and linear models.
Comparing ICCs between logistic and linear mixed-effects models
CC BY-SA 4.0
null
2023-05-30T09:56:40.133
2023-05-30T09:56:40.133
null
null
256941
[ "mixed-model", "model-evaluation", "intraclass-correlation", "brms" ]
617288
1
null
null
-1
28
I did a Wilcoxon test in R but I don't understand how to interpret the W value, or what it means in comparison with another test I did. I ran the test on 2 data sets and the results are: W = 9, p-value = 0.1 alternative hypothesis: true location shift is not equal to 0 W = 4, p-value = 1 alternative hypothesis: true location shift is not equal to 0 I understand that the results are not significant because the p-value is greater than 0.05, but what does a W value of 9 or 4 actually mean, and what does this mean in terms of the first data set compared to the second data set? Also, what does "alternative hypothesis: true location shift is not equal to 0" mean? Thank you!
How to interpret the W value of a Wilcoxon test in R
CC BY-SA 4.0
null
2023-05-30T08:33:09.977
2023-05-31T21:01:43.260
2023-05-31T21:01:43.260
11887
389090
[ "r", "wilcoxon-mann-whitney-test" ]
617289
2
null
617288
0
null
W is the rank-based test statistic, and its exact definition depends on which test you did (paired signed-rank or unpaired rank-sum). A value of W that is unusually large or unusually small relative to what is expected under the null hypothesis points to a difference between groups, but W by itself does not determine the p-value: W has to be compared to its null distribution, which depends on the sample sizes. "True location shift is not equal to 0" describes the two-sided alternative hypothesis, namely that one distribution is shifted relative to the other.
null
CC BY-SA 4.0
null
2023-05-30T08:48:07.793
2023-05-30T08:48:07.793
null
null
387599
null
617290
2
null
617284
5
null
The answer that you cited says: This is because the variances of sample medians are $\pi/2$ times the variances of sample means, and the IQRs are 1.82 times the standard deviations. Here are more details: $t$-distributed variables with $d$ degrees of freedom are defined as the ratio of a standard normal (N(0,1)) distributed variable $X$ and an independent variable $S$, for which $d\cdot S^2$ has a $\chi^2$ distribution with $d$ degrees of freedom. If sample 1 and sample 2 both come from an $N(\mu,\sigma^2)$ distribution, then the numerator $(\mathrm{Mean}_1 - \mathrm{Mean}_2)$ has a normal distribution with mean 0 and variance $\tau^2 = \sigma^2(1/N_1 +1/N_2)$. If you divide $(\mathrm{Mean}_1 - \mathrm{Mean}_2)$ by $\tau = \sqrt{\sigma^2(1/N_1 +1/N_2)}$, then you get an $N(0,1)$ distributed variable. If you divide the denominator, $\sqrt{S_1^2/N_1 + S_2^2/N_2}$, by $\tau$, then (denominator)$^2\cdot d$ has a $\chi^2$ distribution with $d=N_1+N_2-2$ degrees of freedom. Therefore, $$ T = \frac{\mathrm{Mean}_1 - \mathrm{Mean}_2}{\sqrt{S_1^2/N_1 + S_2^2/N_2}} = \frac{(\mathrm{Mean}_1 - \mathrm{Mean}_2)/\tau}{\sqrt{S_1^2/N_1 + S_2^2/N_2}/\tau} $$ follows a $t$-distribution with $d=N_1+N_2-2$ degrees of freedom. Now, if you replace sample means by sample medians, the numerator of your $t$-variable spreads more. It is approximately $N(0, \tau^2\cdot\pi/2)$ distributed, but by dividing by $\sqrt{\pi/2}$, you are approximately back at the initial $N(0,\tau^2)$ distribution for the original numerator, $\mathrm{Mean}_1 - \mathrm{Mean}_2$. Similarly, the IQR of a standard normal distribution is 1.349. IQR scales with the standard deviation for general normal distributions, $N(\mu,\sigma^2)$. Thus, $\mathrm{IQR}/1.349$ is an estimator for $\sigma$ and $\mathrm{IQR}^2/1.349^2=\mathrm{IQR}^2/1.82$ is an estimator for $\sigma^2$. Replacing $S^2$ with $\mathrm{IQR}^2/1.82$ gets you approximately back to the initial denominator.
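To make the recipe concrete, here is a small sketch that computes the median/IQR analogue of the two-sample t statistic described above (the function name is mine; the p-value uses the same approximate t reference distribution with $N_1+N_2-2$ degrees of freedom):

```python
import numpy as np
from scipy import stats

def median_iqr_test(med1, iqr1, n1, med2, iqr2, n2):
    """Approximate two-sample comparison using only medians, IQRs and sample sizes."""
    j1 = iqr1**2 / (1.82 * n1)      # IQR^2 / 1.349^2 estimates sigma^2
    j2 = iqr2**2 / (1.82 * n2)
    u = (med1 - med2) / (np.sqrt(np.pi / 2) * np.sqrt(j1 + j2))
    df = n1 + n2 - 2
    return u, 2 * stats.t.sf(abs(u), df)   # two-sided approximate p-value

print(median_iqr_test(med1=10.2, iqr1=3.1, n1=40, med2=9.1, iqr2=2.8, n2=35))
```

Keep in mind that the normality-based constants ($\pi/2$ and $1.82$) make this an approximation that degrades for clearly non-normal data.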
null
CC BY-SA 4.0
null
2023-05-30T09:59:49.337
2023-05-30T15:51:51.907
2023-05-30T15:51:51.907
237561
237561
null
617291
2
null
617283
0
null
Tests of statistical significance have the obvious issues (e.g., lack of power due to a small sample), and the test(s) for ab are different from the test for the simple a coefficient (the product ab is often not normally distributed and therefore typically tested using asymmetric bootstrap confidence intervals, whereas the a path is usually tested with a regular t test). Therefore, the tests can lead to different results/conclusions in practice. In particular, if the b path is strong, this may lead to a "significant" ab product even when the a path alone is not statistically significant. I personally would be cautious when the a path is of questionable (small) magnitude. To me it does not make sense to say there is mediation when X does not influence M.
null
CC BY-SA 4.0
null
2023-05-30T10:13:55.560
2023-05-30T10:13:55.560
null
null
388334
null
617292
1
null
null
0
16
I have the following time series. It's clearly not a simple linear trend. I want to explore the relationship among these variables using a VAR, or even a time-varying VAR. The biggest issue in my data is stationarity. I prefer not to difference, as differencing removes the long-term trends as well. What would be the best way to detrend them (I'm using R) and/or do my analyses? [](https://i.stack.imgur.com/19CKK.png)
Detrending these time series
CC BY-SA 4.0
null
2023-05-30T10:25:26.190
2023-05-30T10:40:08.607
2023-05-30T10:40:08.607
378010
378010
[ "r", "time-series", "arima", "vector-autoregression", "trend" ]
617293
1
617297
null
2
164
In Bayes' theorem, $p(y|x)=\frac{p(y)p(x|y)}{p(x)}$, we call $p(y)$ the prior and $p(x|y)$ the likelihood. In machine learning, many models find the solution for the parameters through Maximum Likelihood Estimation (MLE) and then take derivatives to find the solutions. I thought the MLE here corresponds to the likelihood in Bayes' formula, but that seems not to be true. For example, in a Gaussian discriminative model, the MLE target is $p(t,X|\pi,\mu_1,\Sigma)$, which is a joint distribution. In logistic regression, the MLE target is $p(t|X, w)$, which looks like the "real likelihood" to me. I am confused about this. - The latter MLE target is different from the likelihood in Bayes' formula, right? - How do I tell which kind of MLE target to use in machine learning optimization?
Maximize Likelihood in Machine learning
CC BY-SA 4.0
null
2023-05-30T10:25:33.267
2023-05-30T11:25:41.020
null
null
388783
[ "maximum-likelihood" ]
617294
1
null
null
2
25
I would be grateful if you could help me clear up some confusion regarding conditional expectation and regression. I have seen two formulations of the linear regression framework: $$Y=a+bX+\varepsilon\qquad\qquad(1)$$ and $$\mathbb{E}[Y|X]=a+bX.\qquad\qquad(2)$$ Often, in an introduction to linear regression it is said that linearity of the relationship (between $X$ and $Y$) is assumed. For example, on [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression) it reads "...a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear." However, in my experience, in any given application linear regression is rather seen as a linear approximation, which makes much more sense to me than assuming a truly linear relationship between $Y$ and $X$ (i.e., draw $X$, transform it linearly and add some random noise $\varepsilon$), but I have not seen this mentioned in introductory textbooks (and maybe this is wrong). I'm wondering why that is and how (1) and (2) can be related. More specifically, my confusion is the following. In general, approximations can be seen as orthogonal projections. For example, for any r.v. $Y\in L^2$, the conditional expectation is the orthogonal projection on the space of (equivalence classes of) $X$-measurable r.v.s (i.e., functions of $X$ because of factorization). Therefore, it minimizes the MSE and we can write $$Y=\mathbb{E}[Y|X]+\varepsilon_1,\qquad\varepsilon_1:=Y-\mathbb{E}[Y|X].\qquad\qquad(3)$$ On the other hand, because the orthogonal projection is the unique minimizer of the MSE, if $a$ and $b$ in (1) are obtained by least squares, $a+bX$ is the orthogonal projection of $Y$ on the space of linear functions of $X$. But then how does the conditional expectation come into play here? Where has it gone? Here is the explanation I can come up with, maybe you can tell me if this makes sense. We would like to have (3), because we want the best approximation (for example to make predictions) of $Y$ given $X$. Moreover, we would like to have (2) because linear functions are nice. Unfortunately, in reality, except maybe if $X,Y$ are (approximately) jointly normal, we have neither. However, if we find the best $X$-linear approximation to $Y$, we simultaneously find the best $X$-linear approximation to $\mathbb{E}[Y|X]$ (right? Because $\parallel\mathbb{E}[Y|X]-L(X)\parallel\leq\parallel\mathbb{E}[Y|X]-Y\parallel+\parallel Y-L(X)\parallel$ for $L$ linear is minimized by the best approximation $a+bX$ to $Y$), that is, $$\mathbb{E}[Y|X]=a+bX+\varepsilon_2,\qquad\varepsilon_2:=\mathbb{E}[Y|X]-(a+bX).\qquad(4)$$ In other words, we project twice, that is $L^2\overset{p_1}{\to}\{f(X)\}\overset{p_2}{\to}\{\tilde{a}+\tilde{b}X\}$. This also means that in the error in (1) there are actually two errors, that is, \begin{align} Y&=\mathbb{E}[Y|X]+\varepsilon_1\\ &=a+bX+\varepsilon_2+\varepsilon_1\\ &=a+bX+\varepsilon, \end{align} with $\varepsilon=\varepsilon_1+\varepsilon_2$, but of course they are not separable. My questions are: - Does that make sense? - If yes, can you point me to a reference that discusses this explicitly in the social sciences, preferably a textbook? I found this question, where they reference Hansen (2022), who basically describes linear regression as I did here (except for the double projection), and this question that is very similar, but I have never seen it in the social sciences and I'm still confused, for example, by the Wikipedia quote (and related ones from other textbooks). What is going on here? - One last technical question: What if we find $a$ and $b$ not by least squares but by some other algorithm (e.g., WLS, RML)? Then we don't do orthogonal projection (right? Because it's unique?), so the error is not uncorrelated with $X$. Is that bad? What do we do? References: Hansen, B. E. (2022). Econometrics. Princeton University Press.
I found this question, where they reference Hansen (2022) who basically describes linear regression as I did here (except for the double projection) and this question that is very similar, but I have never seen in it the social sciences and I'm still confused for example by the Wikipedia quote (and related ones from other textbooks). What is going on here? - One last technical question: What if we find $a$ and $b$ not by the least squares but by some other algorithm (e.g., WLS, RML). Then we don't do orthogonal projection (right? Because it's unique?), so the error is not uncorrelated with $X$. Is that bad? What do we do? References: Hansen, B. E. (2022). Econometrics. Princeton University Press.
Relationship between conditional expectation and regression
CC BY-SA 4.0
null
2023-05-30T10:30:09.623
2023-05-30T10:30:09.623
null
null
388984
[ "regression", "least-squares", "linear", "conditional-expectation", "projection" ]
617295
2
null
616183
0
null
The standard way to proceed is to make predictions from the entire tree (or whatever model you are using). You then evaluate the predicted values compared to the true values in terms of some statistic(s) of interest, such as mean squared error for the regression tree you are using. This statistic of interest is then your measure of performance for the entire tree, not just of an individual node. There are many concerns downstream of this (discussed in the comments), and this is why methods like cross validation and bootstrap validation exist. However, all of them will begin with sending your features down the decision tree to get a prediction for the entire tree, just as you would do for any other supervised learning model.
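A minimal scikit-learn sketch of that workflow (toy data, purely illustrative): every test row is sent down the whole fitted tree, and the evaluation metric summarizes the model as a whole.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Held-out evaluation of the entire tree
print("test MSE:", mean_squared_error(y_te, tree.predict(X_te)))

# Cross-validation repeats the same idea over several train/test splits
scores = cross_val_score(tree, X, y, cv=5, scoring="neg_mean_squared_error")
print("CV MSE:", -scores.mean())
```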
null
CC BY-SA 4.0
null
2023-05-30T10:47:31.347
2023-05-30T23:40:22.057
2023-05-30T23:40:22.057
247274
247274
null
617297
2
null
617293
6
null
The $p(x|y)$ in Bayes' theorem is the likelihood function, whereas the MLE is the maximizer of this function, which is a single point. The likelihood function is not unique to Bayesian methods; it is also used in classical statistics. (That answers part 1.) I don't have a good answer for part 2), but I would like to point out some pitfalls: the likelihood function is not a probability distribution, despite its appearance, as it is almost never normalized to 1. To make things worse, the notation for $p(x|y)$ stems from mathematical logic, where $x$ is investigated under the assumption (the vertical bar) that $y$ remains fixed. However, the MLE maximizes $p(x|y)$ as a function of $y$, while keeping $x$ fixed. So the notation $p(x|y)$ is counterintuitive.
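A tiny numerical illustration of the "likelihood is not a probability distribution" pitfall, using a Bernoulli model (just a sketch): the data are held fixed, the likelihood is evaluated over a grid of parameter values, its integral over the parameter is nowhere near 1, and the MLE is simply the argmax of the curve.

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 1, 1])            # observed data, kept fixed
theta = np.linspace(1e-4, 1 - 1e-4, 10_001)        # grid of parameter values

# Likelihood L(theta | x) = prod_i theta^x_i * (1 - theta)^(1 - x_i)
lik = theta ** x.sum() * (1 - theta) ** (len(x) - x.sum())

grid_step = theta[1] - theta[0]
print("integral over theta:", lik.sum() * grid_step)   # about 0.004, not 1
print("MLE (argmax):", theta[np.argmax(lik)])           # close to x.mean() = 0.75
```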
null
CC BY-SA 4.0
null
2023-05-30T11:25:41.020
2023-05-30T11:25:41.020
null
null
382413
null
617298
1
null
null
0
10
I measured students' motivation using a Likert-type questionnaire and I want to check its effects on some dependent variables in a factorial ANOVA analysis. I wonder if it is okay to create a categorical variable from it (e.g., low vs. high motivation) and use it as one factor in the analysis. Is this valid?
Binning to create categorical variable to be used as a factor in ANOVA
CC BY-SA 4.0
null
2023-05-30T11:29:31.793
2023-05-30T11:29:31.793
null
null
91142
[ "anova", "two-way" ]
617299
1
null
null
1
17
Setting There is a dataset in which every datapoint is a list of labels. While I don't have direct access to the whole of the dataset nor to the whole list of labels (I could probably arrange it, but it's huge and I'm hoping it's not necessary), I do have access to datapoints containing certain labels of my choice ($A, B$ etc.) and, in particular, to the number of such datapoints, let's call it $num(A), num(B)$ etc. Just for reference, the values of $num$ typically vary from 1 to about 150,000. I can also access "intersection" datapoints containing certain sublists of labels, e.g. $num(A\text{ and }B)$: these values range from 0 (quite obviously) up to several thousand. My problem For a certain label $C$ I see a more evident association with $A$ than with $B$ -- in the sense that $num(C\text{ and }A)/num(A)$ is considerably bigger than $num(C\text{ and }B)/num(B)$ -- and I'd like to check for statistical significance. In the absence of intersections I'd consider a chi-squared test, but the intersection $num(A\text{ and }B)$ is up to 10% of $num(A)$ (for the labels I'm working with, $num(B)\gg num(A)$ with a difference of up to 3-4 orders of magnitude). My question Is there any standard(ish) technique to deal with this kind of scenario and/or any sources you would recommend to learn about it? I know embarrassingly little about statistics but I have a fairly solid mathematical background. P.S. This is my first question on SE, so feel free to suggest any improvement (linguistic ones too, English is not my mother tongue).
How to check significance for non-mutually-exclusive classes (references welcome)
CC BY-SA 4.0
null
2023-05-30T11:38:10.977
2023-05-30T11:38:10.977
null
null
389141
[ "hypothesis-testing", "statistical-significance", "references" ]
617300
1
null
null
0
9
This is probably a very basic question, but I am confused about the difference between adding a variable, for example 'gender', or any group, as a fixed term in a regression versus adding the term as a 'random effect'. For example, when analysing how weight influences grip strength in healthy versus stroke individuals, should 'healthy vs stroke' be added as a categorical (fixed) variable or as a random effect? In what scenario should I add such a term as categorical versus as a random effect?
Difference between categorical variables and 'random term' in linear mixed effect
CC BY-SA 4.0
null
2023-05-30T11:58:01.383
2023-05-30T11:58:01.383
null
null
128526
[ "mixed-model", "multiple-regression" ]
617301
1
null
null
0
17
I'm sorry if the terminology is incorrect. I have tried to find some background information through standard Google searches, but I've come up short. I work with data from a questionnaire on health issues. Through a couple of manipulations, each answer/set of answers is converted to a dichotomous variable, where 1 codes for the presence of the characteristic in question (good health, bad health, risky gambling habits, etc.). This lets us calculate what proportion of our sample has bad health, etc. We're not satisfied with simply taking a random sample, since we might have sampled individuals who are not representative of the population at large with respect to certain background variables (education, income, citizenship, etc.), so each observation is weighted according to these variables (using some kind of complicated procedure). This yields a so-called calibration weight, which is used in the following way to calculate proportions: $$\hat{Y}=\frac{\sum{w_iy_i}}{\sum{w_i}} $$ where $w_i$ is the calibration weight of observation $i$ and $y_i$ is its variable value. Now to my question: does this weighting process also affect the way I should calculate my standard deviations, and if so, how? This has mostly been done automatically "behind the scenes" until now, but I will have to do it explicitly/manually in order to simplify our code. This topic was not covered in my undergraduate statistics classes and is not hinted at in our documentation.
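For reference, this is how the weighted point estimate is computed in our setting, sketched in R with toy data; my question is whether (and how) the accompanying standard deviation should also involve the weights:
```
set.seed(42)
y <- rbinom(500, 1, 0.3)      # 1 = characteristic present, 0 = absent (toy data)
w <- runif(500, 0.5, 2)       # calibration weights (toy values)

# Weighted proportion, as in the formula above
y_hat <- sum(w * y) / sum(w)

# The naive unweighted standard error I would otherwise have used
se_naive <- sqrt(mean(y) * (1 - mean(y)) / length(y))

c(y_hat = y_hat, se_naive = se_naive)
```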
Is the standard deviation affected by using calibration weights?
CC BY-SA 4.0
null
2023-05-30T12:21:38.243
2023-05-30T12:21:38.243
null
null
91971
[ "estimation", "standard-deviation", "weights" ]
617303
1
617312
null
1
40
I have the problem that my logistic regression model overfits, even though I use the combined L1 and L2 penalty ('elastic net'). I have a data set with 496 features and 186 samples and want to predict a binary target. For this purpose, I thought of a logistic regression (because of the binary classification) with regularization, specifically elastic net, as it enables the model to drop features completely (because of the L1 penalty), which is very important given my feature-to-sample ratio (which is definitely not ideal, but as part of a course on digital science, my goal is a theoretically correct implementation of this kind of algorithm and not necessarily the best algorithm). After tuning the hyper-parameters using cross-validation, the accuracy of my model was 1 for my training data but only 0.53 for my testing data. I thought that using penalties and cross-validation would protect me from overfitting this badly, but I am very new to machine learning, so maybe I made some mistakes, which you could hopefully highlight.

I used the following packages and functions:
```
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
import pandas as pd
import scripts.functions as fn  # my own functions sheet
import numpy as np
```
I got my data:
```
# Setup
## Get data incl. target
data = fn.get_data()
data.dropna(inplace=True)

X = data.drop(labels=["target"], axis=1)
X = X.to_numpy()
X_scaled = StandardScaler().fit_transform(X)
Y = data["target"].to_numpy()

# Split data into training and validation sets
x_train, x_test, y_train, y_test = train_test_split(X_scaled, Y, test_size=0.2, random_state=35)
```
I initialized the hyper-parameters and the model:
```
# Set hyper-parameters
l1_ratios = np.arange(0, 1, 0.01)
cs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4]

model = LogisticRegressionCV(cv=5, penalty='elasticnet', solver='saga', l1_ratios=l1_ratios,
                             Cs=cs, max_iter=5000, n_jobs=-1, refit=True)
```
Then I fit my training data and retrieve the selected best values of C and L1-ratio and the accuracy for the training data:
```
model.fit(x_train, y_train)

acc_train = model.score(x_train, y_train)
print(f"The accuracy of the model for the training data is {round(acc_train, 3)}")
Out: The accuracy of the model for the training data is 1.0

print(f"The hyper-parameter C is: {model.C_[0]}.\nThe hyper-parameter L1-Ratio is: {model.l1_ratio_[0]}")
Out: The hyper-parameter C is: 0.1.
The hyper-parameter L1-Ratio is: 0.07
```
Subsequently I wanted to validate the model using the 'independent' test data:
```
acc_val = model.score(x_test, y_test)
print(f"The accuracy of the model for the test data is {round(acc_val, 3)}")
Out: The accuracy of the model for the validation data is 0.526
```
And now I am lost. To my understanding, using a model with some kind of penalty and then using cross-validation to tune this penalty are both steps to prevent overfitting. It seems like I would need a stronger penalty so that the accuracies for my training and testing data converge, but this should be the result of the cross-validation, right? I assume that the problem is the data I have, meaning the proportionally large number of features for these few samples. But if there are other mistakes regarding my general procedure, I'd be very thankful for any advice!

P.S.: Unfortunately, I cannot share the data I use, and using different data would probably not reproduce the same problem. So please focus on my procedure.
Why does a Logistic Regression with 'ElasticNet' penalty overfit?
CC BY-SA 4.0
null
2023-05-30T12:26:48.677
2023-05-30T13:37:20.100
null
null
389155
[ "logistic", "cross-validation", "regularization" ]
617304
1
null
null
0
17
I'd like to run an A/B test for an e-commerce coupon promotion.

- Random sampling from the total user group.
- Random grouping into Experimental (given the coupon) / Control (not given the coupon) at a 9:1 ratio.

The situation is as follows:

| | Num of Users | Num of Users who Ordered | Total Sales Price |
|---|---|---|---|
| Experimental | 90,000 (90%) | 5,500 | 15,500,000 |
| Control | 9,000 (10%) | 500 | 1,500,000 |

It is obvious that the order rate increased (6.1% vs 5.5%). But my interest is total sales price per user. In this case:

- Per total user: Experimental 15,500,000 / 90,000 = 172.2 vs. Control 1,500,000 / 9,000 = 166.6
- Per ordered user: Experimental 15,500,000 / 5,500 = 2818.2 vs. Control 1,500,000 / 500 = 3000.0

Which metric should be tested?

Additionally: in the experimental group, 4,000 users ordered last week (maybe 500 of them are the same users who ordered in this promotion) and their total sales price was 10,800,000 (10,800,000 / 4,000 = 2,700). If so, is it possible to say that in the experimental group, total sales price per ordered user (maybe RPU?) increased by 118 (2,818 - 2,700) due to the coupon? I think it is not appropriate to compare total sales price per user between last week and the promotion week, as the denominators (4,000 and 5,500 users) are not the same.

I'd like to check the effect of the promotion on sales price (not the conversion rate; I'd like to check whether revenue increased due to the promotion). Is there any other metric? Thanks.
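For reference, here is the arithmetic behind the per-user figures above, sketched in R with the numbers from the table (it does not answer which metric to test, which is my question):
```
# Numbers from the table above
n_exp <- 90000; orders_exp <- 5500; sales_exp <- 15500000
n_ctl <- 9000;  orders_ctl <- 500;  sales_ctl <- 1500000

# Order rate
c(exp = orders_exp / n_exp, ctl = orders_ctl / n_ctl)

# Total sales price per assigned user
c(exp = sales_exp / n_exp, ctl = sales_ctl / n_ctl)

# Total sales price per ordering user
c(exp = sales_exp / orders_exp, ctl = sales_ctl / orders_ctl)
```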
AB Test For Ecommerce Coupon Promotion
CC BY-SA 4.0
null
2023-05-30T12:30:57.513
2023-05-30T12:30:57.513
null
null
389153
[ "t-test", "ab-test" ]
617305
1
null
null
0
28
I'm trying to figure out the right way of calculating the expected win (or loss) for a single winning combination of a slot base-game spin with a bonus game (guess the colour). Let's say a player makes a `$1` bet and gets a combination that always pays `$10` (the base game). The player can either take their winnings (`$10`) or play the bonus game. In the bonus game the player has to guess a colour (red or black). If the player guesses right, the winnings are doubled (`2 × $10 = $20`). Otherwise, they lose the winnings (`$0`) and the bonus game stops. Again, the player can either take their winnings (`$20`) or play the next round of the bonus game. A maximum of 5 bonus-game rounds is allowed. Thus, there are the following possible outcomes:

- `$10` (the player decides not to play the bonus game and just takes the prize won in the base game);
- `$20`, `$40`, `$80`, `$160`, or `$320` (the player plays `1`, `2`, `3`, `4`, or `5` rounds of the bonus game and takes that prize);
- `$0` (the player plays `1`, `2`, `3`, `4`, or `5` rounds of the bonus game and loses).

Here is what I have tried so far. For the win/loss table, I think the initial bet of `$1` should be used in the `Loss` column, since the player cannot lose more than their bet. The probability of loss is $ p_{loss} = 1 - p_{win} $. But I'm not sure which probabilities of a win should be used:

- 0.5 for all cases, or
- 0.5 for the base game (the probability the player decides to play the bonus game) and $ 0.5^{round} $ for the bonus games (the probability of winning all played rounds), or
- something else, since not all combinations are possible (e.g., for 3 rounds, the combination win, loss, win is impossible, since the bonus game ends after the first loss).

It looks like 0.5 should be used for all cases.

| Game | Round | Win, \$ | Loss, \$ | p_win | p_loss | Expected Win, \$ |
|---|---|---|---|---|---|---|
| Base | - | 10 | 0 [A] | 0.5 [B] | 0.5 [B] | 10 × 0.5 - 0 × 0.5 = 5 |
| Bonus | 1 | 20 | 1 | 0.5^1 = 0.5 | 1 - 0.5^1 = 0.5 | 20 × 0.5 - 1 × 0.5 = 9.5 |
| Bonus | 2 | 40 | 1 | 0.5^2 = 0.25 | 1 - 0.5^2 = 0.75 | 40 × 0.25 - 1 × 0.75 = 9.25 |
| Bonus | 3 | 80 | 1 | 0.5^3 = 0.125 | 1 - 0.5^3 = 0.875 | 80 × 0.125 - 1 × 0.875 = 9.125 |
| Bonus | 4 | 160 | 1 | 0.5^4 = 0.0625 | 1 - 0.5^4 = 0.9375 | 160 × 0.0625 - 1 × 0.9375 = 9.0625 |
| Bonus | 5 | 320 | 1 | 0.5^5 = 0.03125 | 1 - 0.5^5 = 0.96875 | 320 × 0.03125 - 1 × 0.96875 = 9.03125 |

[A] No loss is possible in the base game, since the guaranteed winnings are being analysed.
[B] The player decides either to take the winnings or to play the bonus game.

Then, I think, the average of the expected values for the base and bonus games should be calculated as follows (see the NOTE below):

$$ E[game] = \frac{E[spin]+\sum_{i=1}^5 E[round_i]}{n_{spins}+n_{rounds}} $$

$$ E[game] = \frac{5+9.5+9.25+9.125+9.0625+9.03125}{6} \approx \$8.49 $$

So, the question is: what's the correct way to calculate the expected win for this case?

---

NOTE: I'm not 100% sure that the formula is written correctly in terms of notation, since I'm not very familiar with probability and statistics.
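For reference, here is a small R sketch that just reproduces the arithmetic in my table under the "0.5 for all cases" assumption; it does not resolve which probabilities are actually the right ones to use, which is exactly my question:
```
bet <- 1
base_win <- 10
rounds <- 1:5

win   <- base_win * 2^rounds          # 20, 40, 80, 160, 320
p_win <- 0.5^rounds                   # probability of winning all played rounds

ev_base  <- base_win * 0.5 - 0 * 0.5  # 5
ev_round <- win * p_win - bet * (1 - p_win)
ev_round                              # 9.5, 9.25, 9.125, 9.0625, 9.03125

mean(c(ev_base, ev_round))            # approx. 8.49
```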
Expected Win (or Loss) of Base (Spin) Game Winning Combination and Bonus (Guess the Colour) Game
CC BY-SA 4.0
null
2023-05-30T12:40:29.527
2023-05-31T01:22:45.743
2023-05-31T01:22:45.743
389049
389049
[ "probability" ]
617306
2
null
617248
0
null
You are making a big assumption that the proportion of "noise" to "length" should be constant over a wide range of "length" values, only differentiated by "group" membership. A simple plot shows an issue in your data that might require more attention:
```
library(ggplot2)
ggplot(data = df,
       mapping = aes(x = log(length), y = log(noise), group = grupo, color = grupo)) +
  geom_point() + geom_smooth()
```
[](https://i.stack.imgur.com/UNdu4.png)

Group A has a set of much lower values than Group B. The lowest/highest "length" values in Group A are 1520 and 158695. The corresponding values in Group B are 38157 and 147737. Almost half of the Group A "length" values are lower than the lowest value in Group B. Think about why the groups differ so much in the distributions of their "length" values first.

Once you've done that, there are a few ways to proceed, if your assumption is valid. [Beta regression](https://rcompanion.org/handbook/J_02.html) is one choice for proportion data, although I'm not sure it would be good here. You could structure this as a binomial regression. The R `glm()` function allows for proportion outcomes represented "as a numerical vector with values between 0 and 1, interpreted as the proportion of successful cases (with the total number of cases given by the weights)." You could use "length" as the weights argument, with your proportions as the outcomes.

An alternative could be a regression model of "noise" as a continuous function of "length" that includes "group" as a predictor. Including an interaction term between "group" and "length" would allow for a different association between "noise" and "length" depending on "group." It doesn't seem that your data set is large enough to distinguish the groups reliably, but these are general methods to consider for this type of data.
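As a minimal sketch of the binomial-regression idea above (assuming the data frame `df` has columns `noise`, `length` and `grupo`, and that `noise` can be read as a count out of `length`):
```
# Proportion outcome, with the totals supplied via the weights argument
df$prop <- df$noise / df$length

fit <- glm(prop ~ grupo, family = binomial, weights = length, data = df)
summary(fit)
```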
null
CC BY-SA 4.0
null
2023-05-30T12:41:15.160
2023-05-30T12:41:15.160
null
null
28500
null
617308
1
null
null
3
186
I have trouble understanding how R² in a regression analysis makes sense visually in Ballantine diagrams. For instance: [](https://i.stack.imgur.com/cLelg.gif) Obviously the red region is ignored when estimating the coefficients for x on y and w on y. But the red region must still be in the model somehow, otherwise we lose information that is useful in predicting y. The typical practical example is high multicollinearity, a large R², but non-significant predictors. I assume that the red region is implicitly part of the model although it is not used when estimating the coefficients. But how exactly?
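To make the scenario concrete, here is a small simulation of my own (toy data, not from any real study) in which the two predictors overlap heavily: R² is sizeable, yet neither coefficient is individually significant:
```
set.seed(123)
n <- 30
z <- rnorm(n)
x <- z + rnorm(n, sd = 0.1)   # x and w share most of their variance
w <- z + rnorm(n, sd = 0.1)
y <- z + rnorm(n, sd = 0.5)

fit <- summary(lm(y ~ x + w))
fit$r.squared      # sizeable R^2 ...
fit$coefficients   # ... but typically large p-values for both x and w individually
```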
Visualizing Multicollinearity: How does overlapping region of IVs contribute to R²?
CC BY-SA 4.0
null
2023-05-30T13:14:21.303
2023-05-30T15:29:38.047
2023-05-30T15:29:38.047
143391
143391
[ "regression", "multicollinearity", "r-squared" ]
617309
2
null
616260
0
null
- Question 1: CCDFs are absolutely fine. Anything that describes the distribution's density instead of the absolute numbers can be used. Other options would be density estimators, in the simplest case scaled histograms, or for direct group comparison boxplots and their more involved cousin the violin plot. If you want to also communicate the difference in sample size between your groups, you could do so by making some kind of double plot with normal histogram + density, by varying the width of the boxplots, or, very involved, with confidence intervals, which would allow the reader to infer the uncertainty about the true distribution of your small groups. It might very well be best to just leave it in the text.
- Question 2: Having a kind of survival function makes perfect sense for lifetime. I would however say the log scale on the x and especially the y axis makes it almost impossible to read. Also, I'm not sure what units you are using; 10^3 doesn't appear to correspond to anything on the plot.

Anyway, here is some R for the alternatives from Question 1:
```
library(tidyverse)

n <- 1000
df <- data.frame(x = rexp(n),
                 group = sample(c("A", "B", "C"), size = n, replace = T,
                                prob = c(0.1, 0.3, 0.6)))

ggplot(df, aes(x = x, fill = group, color = group)) +
  geom_histogram(aes(y = ..density..), alpha = 0.3, position = "identity")

ggplot(df, aes(x = x, fill = group, color = group)) +
  geom_histogram(aes(y = ..density..), alpha = 0.3, position = "identity") +
  scale_x_log10()

ggplot(df, aes(x = x, color = group)) + geom_density() + scale_x_log10()

ggplot(df, aes(x = x, color = group)) + stat_ecdf()

ggplot(df, aes(x = x, color = group)) + stat_ecdf() + scale_x_log10()

ggplot(df, aes(x = x)) +
  geom_density(aes(color = "combined"), color = "black") +
  geom_density(aes(color = group)) +
  scale_x_log10()

ggplot(df, aes(y = x, x = group)) +
  geom_violin(draw_quantiles = c(0.25, .5, .75)) +
  scale_y_log10()

ggplot(df, aes(y = x, x = group)) +
  geom_boxplot(varwidth = T) +
  scale_y_log10()

# double plot you might like
ggplot(df, aes(x = x)) +
  geom_histogram() +
  geom_density(aes(y = after_stat(scaled) * 60,
                   lty = "density estimate\n(scaled for display)")) +
  scale_x_log10() +
  geom_vline(aes(xintercept = quantile(x, 0.25), lty = "quartiles")) +
  geom_vline(aes(xintercept = quantile(x, 0.5), lty = "quartiles")) +
  geom_vline(aes(xintercept = quantile(x, 0.75), lty = "quartiles")) +
  facet_wrap(~ group, ncol = 1, labeller = "label_both")
```
null
CC BY-SA 4.0
null
2023-05-30T13:27:00.700
2023-05-30T14:00:05.570
2023-05-30T14:00:05.570
22047
341520
null