Columns (one field per line in each record, in this order): Id (string), PostTypeId (string, 7 classes), AcceptedAnswerId (string, nullable), ParentId (string, nullable), Score (string), ViewCount (string, nullable), Body (string), Title (string, nullable), ContentLicense (string, 3 classes), FavoriteCount (string, 3 classes, nullable), CreationDate (string), LastActivityDate (string), LastEditDate (string, nullable), LastEditorUserId (string, nullable), OwnerUserId (string, nullable), Tags (list, nullable).
617310
2
null
617308
4
null
I cover collinearity [here](https://hbiostat.org/rmsc/multivar.html) where you'll see reasons why collinearity is usually not to be feared when looking at overall model properties such as $R^2$. Collinearity comes into play when attempting to interpret an effect of a variable that is collinear with other variables. Descriptive analyses that help include variable clustering and redundancy analysis. Also, chunk tests are your friend.
null
CC BY-SA 4.0
null
2023-05-30T13:29:28.777
2023-05-30T13:29:28.777
null
null
4253
null
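The descriptive tools mentioned in the answer above (variable clustering and redundancy analysis) are available in the Hmisc package; a minimal sketch, assuming a data frame `dat` with candidate predictors `x1` to `x4` (placeholder names, not from the original thread):
```
library(Hmisc)

# Variable clustering: collinear predictors show up as tight clusters in the dendrogram
plot(varclus(~ x1 + x2 + x3 + x4, data = dat))

# Redundancy analysis: flags predictors that can be predicted well from the remaining ones
redun(~ x1 + x2 + x3 + x4, data = dat)
```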
617311
1
null
null
0
24
I have two groups of different sizes (control and test) and I want to measure the impact of a certain treatment on the test group. Unfortunately, the control group has a different baseline compared to the test group, before the treatment was put in effect. Despite this difference in baseline, I want to be able to determine if the treatment caused a significant change in the test group. To make it more concrete: let's say I want to measure the impact on order value as a result of treatment X. Treatment X is only applied to customers in the test group. Existing customers are randomly assigned either control (15%) or test group (85%), and the same is done for new customers. Example data might look like this: Before period (28 days before start of test period) |Group |Number of orders |Average order value | |-----|----------------|-------------------| |Control |1500 |50 | |Test |8000 |48 | Test period (starting treatment X up until today) |Group |Number of orders |Average order value | |-----|----------------|-------------------| |Control |2000 |52 | |Test |11000 |52 | In the before period, the average order value in the test group is significantly lower, as determined by a t-test. Given this scenario, what would be an applicable method and significance test to determine if treatment X caused a significantly higher average order value in the test group? ### What I have tried so far --- - perform a t-test to test if the mean improvement percentage is different in the test group and the control group → This doesn't really work for my scenario, since I did not measure improvement for every member in every group. Some customers placed orders in both the before and after periods, while some customers only placed an order in either the before or the after period - perform a logistic regression → I don't have a categorical variable, but rather a normally distributed real-valued variable - segmented intervention analysis → This feels like a less favorable approach to me, given I have a control and test group which should allow me to 'correct' for any seasonality or other factors
Significance test for treatment when baseline of control and test groups are different
CC BY-SA 4.0
null
2023-05-30T13:34:25.250
2023-05-31T08:05:14.923
2023-05-31T08:05:14.923
389158
389158
[ "hypothesis-testing", "statistical-significance", "ab-test", "pre-post-comparison", "baseline" ]
617312
2
null
617303
2
null
Your sample size is not within a factor of 10 of being adequate to meet your goals. The list of "selected" variables will essentially be a random draw. See [here](https://hbiostat.org/bbr/hdata.html). In general, the chance of elastic net finding the "right" variables is essentially zero (and is worse for lasso). As discussed [here](https://hbiostat.org/rmsc/lrm.html) the minimum sample size needed just to estimate the logistic model's intercept parameter is $n=96$. The minimum sample size needed to fit one pre-specified binary feature in a logistic model is $n=184$. Were the target to be a continuous variable with little measurement error, things would not be quite as bleak.
null
CC BY-SA 4.0
null
2023-05-30T13:37:20.100
2023-05-30T13:37:20.100
null
null
4253
null
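As a reading aid for the $n=96$ figure quoted in the answer above: assuming (my assumption, not stated in the post) that it comes from requiring a 0.95-confidence margin of error of $\pm 0.1$ for an estimated proportion at the worst case $p=0.5$, the arithmetic is
$$ n = \frac{1.96^2\,p(1-p)}{\delta^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.1^2} \approx 96. $$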
617313
2
null
617308
3
null
From my experience, I have found the Venn diagram explanation for correlations (partial, semi-partial, etc.) to be somewhat confusing and not always an accurate model of what is actually happening (or might happen). The key here is that there are two types of correlations that we could examine in the presence of another variable. The first is the partial correlation. This correlation removes all of the influence of the 2nd independent variable from both the predictor and the response. Thus, using the Venn diagram, this would be $\frac{B}{Y+B}$ (for yellow and blue)...and technically, this is the square of the partial correlation (the "partial" coefficient of determination, if you will). However, there is also the semi-partial correlation. This correlation removes the influence only from the other predictor (but it does not remove the influence from the response variable). Thus, this correlation would be represented as $\frac{B}{Y+B+G+R}$ (again, this is technically the square of the semi-partial correlation). Now the tricky part is that there are two types of $R^2$ we might talk about here...and these depend on the way we choose to break down the variation of the response variable. The first of these $R^2$ uses the semi-partial correlations, and as a consequence, that shared overlap is not actually ignored. The second of these $R^2$ uses the partial correlations...and in this case, if you were to add up the corresponding "variability" estimates for the response variable, you actually would be missing some of it. (Or weirder still, you can actually end up with more variability than you start with...and this is why I find these diagrams to not be the best model for explaining the partitioning process very well.) Happy to clarify more if needed.
null
CC BY-SA 4.0
null
2023-05-30T13:42:25.717
2023-05-30T13:42:25.717
null
null
199063
null
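A small numerical companion to the answer above: the partial and semi-partial correlations it describes can be computed directly from regression residuals. This is an illustrative sketch on simulated data; the names `y`, `x1`, `x2` are placeholders, not from the original question.
```
# Partial vs. semi-partial correlation of y and x1, adjusting for x2
set.seed(1)
n  <- 200
x2 <- rnorm(n)
x1 <- 0.6 * x2 + rnorm(n)
y  <- 0.5 * x1 + 0.3 * x2 + rnorm(n)

r_y  <- resid(lm(y ~ x2))   # y with the influence of x2 removed
r_x1 <- resid(lm(x1 ~ x2))  # x1 with the influence of x2 removed

partial      <- cor(r_y, r_x1)  # x2 removed from both response and predictor
semi_partial <- cor(y, r_x1)    # x2 removed from the predictor only
c(partial = partial, semi_partial = semi_partial)
```
Squaring these two quantities gives the "partial" and "semi-partial" versions of $R^2$ contrasted in the answer.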
617314
2
null
617125
1
null
Here is a way to arrive at the result - With $f(l) = \frac{l^re^{-l}}{r!}\simeq \frac{l^r}{r!}\propto l^r$ we have cdf $F(l) \propto l^{r+1}$ and quantile function $Q(p) \propto p^{\frac{1}{r+1}}$ - The minimum statistic can be expressed by the transform of a beta-distributed variable $$min \sim Q(p) \quad \text{where $p \sim Beta(1,n)$ or $f_P(p) = n(1-p)^{n-1}$}$$ - The mean can then be approximated with an integral $$\begin{array}{} mean(min)& \approx &\int_0^1 Q(p) f_P(p)\, \text{d}p \\&\approx& n \int_0^1 p^{\frac{1}{r+1}} (1-p)^{n-1}\, \text{d}p\\ &\approx&n Beta(1+\frac{1}{r+1},n) \\&\approx& n \Gamma\left(1+\frac{1}{r+1}\right) n^{-(1+\frac{1}{r+1})}\\& \approx &\Gamma\left(1+\frac{1}{r+1}\right) n^{-\frac{1}{r+1}} \end{array} $$
null
CC BY-SA 4.0
null
2023-05-30T13:51:15.673
2023-05-30T13:51:15.673
null
null
164061
null
617315
1
null
null
0
29
I'm interested in determining if an interaction exists between my continuous (SL) and factor (fZone) covariate as it has some basis in biology. An anova() and BIC suggest no interaction is needed, but I've heard the p-values in the anova shouldn't be too heavily relied upon and BIC will almost always choose the most parsimonious model. The summary() command tells me there could be some non-linear relationship being captured in the interaction, but maybe the sample size isn't large enough to differentiate it from noise? I'm trying to not rely too much on the p-values and consider at the effect sizes, but the relationship seems to be right on the border of capturing some kind of potentially important signal. Maybe the model just lacks power? I can't quit tell. I've used select=TRUE to kick-out terms if they aren't contributing much to the model as well as raised the "gamma" to 1.3 to add some extra smoothness, but it seems some non-linear relationship still remains (summary(smooth_inter_mod)) - but maybe I'm wrong and the effect isn't that impressive to keep in the model? Does the interaction seem justified or because the continuous variable SL is non-linear and highly significant it's making the interaction term seem important? I'm looking to infer a possible relationship, not prediction. ``` library(mgcv) library(gratia) > table(c_neb5$empty, c_neb5$fZone) Rankin West Whipray 0 156 318 31 1 45 118 11 no_inter_mod <- bam(empty ~ SL + fZone + s(sal) + s(temp) + s(ToD) + # Structural component s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re"), data = c_neb5, method = 'fREML', discrete = TRUE, # speed benefit family = binomial(link = "logit"), select = TRUE, gamma = 1.3) para_mod <- bam(empty ~ SL*fZone + s(sal) + s(temp) + s(ToD) + # Structural component s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re"), data = c_neb5, method = 'fREML', discrete = TRUE, # speed benefit family = binomial(link = "logit"), select = TRUE, gamma = 1.3) smooth_nointer_mod <- bam(empty ~ s(SL) + fZone + s(sal) + s(temp) + s(ToD) + # Structural component s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re"), data = c_neb5, method = 'fREML', discrete = TRUE, # speed benefit family = binomial(link = "logit"), select = TRUE, gamma = 1.3) smooth_inter_mod <- bam(empty ~ s(SL) + fZone + s(SL, by=fZone, m=1) + s(sal) + s(temp) + s(ToD) + # Structural component s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re"), data = c_neb5, method = 'fREML', discrete = TRUE, # speed benefit family = binomial(link = "logit"), select = TRUE, gamma = 1.3) AIC(no_inter_mod, para_mod, smooth_nointer_mod, smooth_inter_mod) df AIC no_inter_mod 23.16266 720.7283 para_mod 25.01311 717.7365 smooth_nointer_mod 25.21533 721.6533 smooth_inter_mod 27.20705 722.3590 BIC(no_inter_mod, para_mod, smooth_nointer_mod, smooth_inter_mod) df BIC no_inter_mod 23.16266 825.4379 para_mod 25.01311 830.8114 smooth_nointer_mod 25.21533 835.6422 smooth_inter_mod 27.20705 845.3517 anova(smooth_nointer_mod, smooth_inter_mod, test = "Chisq") Analysis of Deviance Table Model 1: empty ~ s(SL) + fZone + s(sal) + s(temp) + s(ToD) + s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re") Model 2: empty ~ s(SL) + fZone + s(SL, by = fZone, m = 1) + s(sal) + s(temp) + s(ToD) + s(fStation, bs = "re") 
+ s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re") Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 644.53 671.22 2 641.95 667.94 2.5795 3.2778 0.2827 summary(smooth_inter_mod) Family: binomial Link function: logit Formula: empty ~ s(SL) + fZone + s(SL, by = fZone, m = 1) + s(sal) + s(temp) + s(ToD) + s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re") Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.29197 0.25387 -5.089 3.6e-07 *** fZoneWest 0.09404 0.26074 0.361 0.718 fZoneWhipray 0.20052 0.50182 0.400 0.689 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(SL) 1.742e+00 9 32.085 < 2e-16 *** s(SL):fZoneRankin 1.498e-04 8 0.000 0.245669 s(SL):fZoneWest 9.991e-06 8 0.000 0.721009 s(SL):fZoneWhipray 1.146e+00 8 3.023 0.047871 * s(sal) 1.236e-05 9 0.000 0.720209 s(temp) 4.846e-01 9 0.931 0.146920 s(ToD) 1.682e+00 9 19.385 2.89e-05 *** s(fStation) 4.388e-05 82 0.000 0.268759 s(fCYR) 4.945e+00 12 17.802 0.000412 *** s(fCYR,fStation) 4.276e-05 237 0.000 0.137209 s(CYR.std,fStation) 7.273e+00 80 13.549 0.004488 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.131 Deviance explained = 13.6% fREML = 760.9 Scale est. = 1 n = 679 ``` Plot of smooth version: ``` > draw(smooth_inter_mod) ``` [](https://i.stack.imgur.com/gb3rY.png) ``` summary(para_mod) Family: binomial Link function: logit Formula: empty ~ SL * fZone + s(sal) + s(temp) + s(ToD) + s(fStation, bs = "re") + s(fCYR, bs = "re") + s(fStation, fCYR, bs = "re") + s(fStation, CYR.std, bs = "re") Parametric coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.160129 0.498157 0.321 0.74787 SL -0.035437 0.011524 -3.075 0.00211 ** fZoneWest 0.183462 0.573610 0.320 0.74909 fZoneWhipray -2.613570 1.439108 -1.816 0.06935 . SL:fZoneWest -0.003041 0.014187 -0.214 0.83027 SL:fZoneWhipray 0.057949 0.024981 2.320 0.02036 * --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df Chi.sq p-value s(sal) 1.526e-05 9 0.000 0.698748 s(temp) 8.885e-01 9 2.465 0.071696 . s(ToD) 1.691e+00 9 19.538 2.7e-05 *** s(fStation) 2.506e-05 82 0.000 0.266617 s(fCYR) 4.830e+00 12 17.317 0.000483 *** s(fCYR,fStation) 7.348e-05 237 0.000 0.108609 s(CYR.std,fStation) 7.572e+00 80 14.262 0.003925 ** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.129 Deviance explained = 13.6% fREML = 767.88 Scale est. 
= 1 n = 679 ``` Sample of data: ``` > example_data <- c_neb5[sample(nrow(c_neb5), 10), ] > dput(example_data) structure(list(CYR_Keyfield = c("C-2018-9-15-240", "C-2016-8-14-172", "C-2018-10-8-176", "C-2017-8-13-70", "C-2010-7-31-159", "C-2012-8-12-68", "C-2018-8-21-279", "C-2017-8-15-237", "C-2019-6-25-240", "C-2019-5-16-254" ), Species = c("Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus", "Cynoscion nebulosus"), ID = c("20189240_54", "20168172_42", "201810176_58", "2017870_66", "20107159_1", "2012868_1", "20188279_126", "20178237_35", "20196240_114", "20195254_11"), SL = c(29.77, 28.9, 29.97, 24.74, 33.16, 64.96, 52.18, 31.03, 26.46, 15.63), empty = c(0, 0, 1, 1, 1, 0, 0, 1, 0, 0), DateTime = structure(c(1537029540, 1471181160, 1539009840, 1502636160, 1280597280, 1344789960, 1534864020, 1502801520, 1561476120, 1558012140), class = c("POSIXct", "POSIXt"), tzone = ""), CYR = c(2018L, 2016L, 2018L, 2017L, 2010L, 2012L, 2018L, 2017L, 2019L, 2019L), Month = c(9L, 8L, 10L, 8L, 7L, 8L, 8L, 8L, 6L, 5L), DoY = c(258, 227, 281, 225, 212, 225, 233, 227, 176, 136), ToD = c(12.65, 9.43333333333333, 10.7333333333333, 10.9333333333333, 13.4666666666667, 12.7666666666667, 11.1166666666667, 8.86666666666667, 11.3666666666667, 9.15 ), JDay = c(5108, 4346, 5131, 4710, 2140, 2883, 5083, 4712, 5391, 5351), Zone = c("Rankin", "West", "West", "West", "West", "West", "Rankin", "Rankin", "Rankin", "Rankin"), Station = c(240, 172, 176, 70, 159, 68, 279, 237, 240, 254), Standard_collection_station = c(0, 0, 0, 1, 0, 0, 0, 0, 0, 0), Latitude = c(25.088, 25.104, 25.1, 25.132, 25.07, 25.121, 25.133, 25.126, 25.088, 25.098 ), Longitude = c(-80.828, -80.891, -80.866, -80.941, -80.907, -80.955, -80.797, -80.863, -80.828, -80.814), sal = c(37.22, 41.15, 36.4099999999999, 43.3, 43.5, 39.6, 34.6, 43, 37.61, 37.51), temp = c(32.3459999999998, 28.82, 25.5859999999998, 31.05, 34.5, 30.4, 31.1679999999999, 31.4, 33.75, 28.64), fCYR = structure(c(9L, 7L, 9L, 8L, 2L, 4L, 9L, 8L, 10L, 10L ), levels = c("2009", "2010", "2011", "2012", "2013", "2015", "2016", "2017", "2018", "2019", "2021", "2022"), class = "factor"), fMonth = structure(c(7L, 6L, 8L, 6L, 5L, 6L, 6L, 6L, 4L, 3L), levels = c("1", "3", "5", "6", "7", "8", "9", "10", "11", "12"), class = "factor"), fStation = structure(c(56L, 44L, 48L, 11L, 39L, 10L, 70L, 54L, 56L, 59L), levels = c("20", "21", "22", "23", "24", "40", "54", "65", "67", "68", "70", "71", "73", "101", "105", "106", "107", "111", "112", "117", "118", "119", "122", "123", "124", "130", "133", "134", "135", "137", "143", "144", "145", "146", "147", "156", "157", "158", "159", "167", "168", "169", "171", "172", "173", "174", "175", "176", "224", "225", "226", "227", "229", "237", "239", "240", "241", "253", "254", "255", "256", "257", "265", "266", "267", "268", "269", "270", "278", "279", "280", "281", "282", "284", "290", "291", "292", "294", "301", "302", "312", "609"), class = "factor"), fZone = structure(c(1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L ), levels = c("Rankin", "West", "Whipray"), class = "factor"), CYR.std = c(9L, 7L, 9L, 8L, 1L, 3L, 9L, 8L, 10L, 10L), center_sl = c(-10.5116053019146, -11.3816053019146, -10.3116053019146, -15.5416053019146, -7.12160530191458, 24.6783946980854, 11.8983946980854, -9.25160530191458, -13.8216053019146, -24.6516053019146)), row.names = c(374L, 316L, 393L, 432L, 67L, 55L, 584L, 277L, 618L, 91L), class = 
"data.frame") ```
How to determine if an interaction is warranted (in a generalized additive mixed model (GAMM))
CC BY-SA 4.0
null
2023-05-30T13:51:24.870
2023-05-30T16:16:36.003
2023-05-30T16:16:36.003
337106
337106
[ "interaction", "model-selection", "generalized-additive-model", "mgcv" ]
617316
1
null
null
0
8
I have a dataset of people nested within groups (6 people per group). People within each group are assigned either condition A or B, and the proportions within each group change. Each person in the group provides ratings to some stimuli. I am hoping to generate a simulation that enlarges this dataset while preserving this nested structure. How would you recommend doing that? Here is simulated data in R: ``` # Generate the groups num_groups <- 5 # Number of groups group_size <- 6 # Size of each group # Assign people to groups people <- rep(1:num_groups, each = group_size) # Assign conditions to people within each group conditions <- rep(c("A", "B"), times = num_groups * group_size / 2) # Generate random ratings ratings <- runif(num_groups * group_size, min = 1, max = 10) # Create the dataset dataset <- data.frame(Person = people, Condition = conditions, Rating = ratings) ```
Simulate new data based on existing data of people nested within groups in r
CC BY-SA 4.0
null
2023-05-30T13:55:33.323
2023-05-30T13:55:33.323
null
null
208288
[ "r", "simulation", "statsmodels" ]
617317
1
null
null
0
6
I have a sample of individuals who migrated from Texas. They are all measured at time 0, within different census tracts. I am interested in seeing how their tract-level conditions changed (based on, let's say, variable j) through their move. The individuals are not all measured at the same later time; rather, they are observed one to two times between years 1 and 10 (which may or may not be the same as the year they moved). What are some good approaches to use to measure how variable j changed based on their move?
Analysis for sample at different times
CC BY-SA 4.0
null
2023-05-30T14:12:35.570
2023-05-30T14:12:35.570
null
null
313303
[ "panel-data" ]
617318
2
null
617266
2
null
The very high standard errors of your coefficients indicate that you have something approaching [perfect separation](https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression) in your combination of data and model. That means that some combination of predictors can almost completely distinguish the two outcomes in `CDKi_Response3`. In your situation, that's exacerbated by the small number of cases relative to the number of regression coefficients you are trying to estimate. In binomial regression you are at risk of overfitting if you try to estimate more than 1 coefficient per 15 or so cases in the minority outcome class. Your `CDKi = Y` outcomes only number 27, so you shouldn't try to estimate more than 2 coefficients. You are trying to estimate 7 coefficients beyond the intercept. So it's not surprising that some combination of predictors is highly associated with outcome in your particular data set. The danger is that your results are unlikely to apply to new data sets. In terms of why the p-values differ so much, note that there are 3 types of significance tests associated with models like binomial regression that are fit by maximum likelihood. See [this page](https://stats.stackexchange.com/q/144603/28500) for an explanation of the differences between the score, Wald, and likelihood-ratio tests. The coefficient p-values are Wald-test values, based on normality of the coefficient estimates. That assumption is unlikely with these data; with (near) perfect separation, Wald tests aren't reliable. The `anova()` test that you performed is a likelihood-ratio test, which in general is considered more reliable with small numbers of cases. I'd be reluctant to trust any tests on such overfit models, however. With such a small number of events, you could consider a penalized ridge regression to minimize overfitting. That's implemented for example in the R [glmnet package](https://cran.r-project.org/package=glmnet). As a comment from Ben Bolker on the first version of this answer notes, however, inference based on ridge regression is then problematic even if predictions from the ridge model are more likely to extend to new data. See [this page](https://stats.stackexchange.com/a/171462/28500) for some discussion.
null
CC BY-SA 4.0
null
2023-05-30T14:13:52.750
2023-05-30T15:08:34.057
2023-05-30T15:08:34.057
28500
28500
null
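To make the ridge suggestion at the end of the answer above concrete, here is a minimal glmnet sketch. The object names (`dat`, `pred_cols`) are placeholders for the asker's data, not taken from the original question.
```
library(glmnet)

x <- as.matrix(dat[, pred_cols])   # numeric predictor matrix
y <- dat$CDKi_Response3            # binary outcome

set.seed(1)
ridge_cv <- cv.glmnet(x, y, family = "binomial", alpha = 0)  # alpha = 0 gives the ridge penalty
coef(ridge_cv, s = "lambda.min")   # shrunken coefficients at the CV-chosen penalty
```
As the answer cautions, such a model may predict better on new data, but formal inference after penalization remains problematic.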
617319
2
null
617236
6
null
## in general, terms shouldn't be in RE if they're missing from FE (By "terms" here I mean "terms that vary among groups", not "grouping variables"; in formula notation, that means having a term like `(f|g)` where `f` is not in the fixed effects) Because random effects are mean-centered, having `f` vary among groups but not be included in the fixed effects specifies that it has an effect of exactly zero in the population but nevertheless varies among groups. This is usually not sensible, although it does apply in some cases (e.g., if the response variable itself has been standardized so that its population-level mean is zero, then you could have the intercept vary across groups (`(1|group)`) but drop the intercept from the fixed effects (`+0` or `-1`). (Therefore, models 1 and 3 are not generally a good idea.) ## is time a numeric variable or a factor? If time is numeric, the linear model corresponding to `~time` (we'll see why this is important in a minute) consists of an intercept and a slope with respect to time. If it's a factor, then the linear model consists of an intercept and a set of contrasts β€” i.e., one parameter per distinct time value β€” or (if we use `~0+x`) an indicator/dummy variable for each time value. Thus there are three ways that time could show up in a RE: - (time is numeric): (time|id) denotes a random-slope model, with among-group variances for the intercepts and slopes and an among-group covariance between intercepts and slopes: the random-effects component is $b_{i,0} + b_{i,1}(t)$ (where $i$ is a group index), $b \sim \textrm{MVN}(0, \Sigma)$, $\Sigma$ is $2 \times 2$ - (time is a factor/categorical): (time|id) denotes a model where the variability across groups is different at every time step, and where there are different correlations between every pair of times (an $n \times n$ covariance matrix if there are $n$ distinct times): RE component $b_{i,j}$ ($i$ = group, $j$ = time), $b \sim \textrm{MVN}(0, \Sigma)$, $\Sigma$ is $n \times n$. The latter model (which I'll call 'unstructured' time variation) tends to be data-hungry ($n(n+1)/2$ covariance parameters to estimate) - (1|time): here time is treated as a grouping variable rather than an effect (automatically converted to a factor). RE component $b_j$, $b_j \sim N(0, \sigma^2)$ ## nested random effects `:` is an interaction operator, so `(1|time:id)` corresponds to variation in the intercept across subject-by-time combinations. Thus `(1|id) + (1|time:id)` (which can also be abbreviated as `(1|id/time)` corresponds to variation in the intercept (denoted by the `1` to the left of the bar) across subjects and across times within subjects. The RE is $b_{1,i} + b_{2,ij}$, $b_1 \sim N(0, \sigma^2_1)$, $b_2 \sim N(0,\sigma^2_{2})$. This nested model is also called a (homogeneous) compound symmetric covariance structure because it corresponds to a model where variation among herds is the same at every time point and correlation across herds is the same for every pair of time points (e.g. see [here](https://bbolker.github.io/goettingen_2019/notes/glmm_lab.html), search for "compound symmetry") With all those concepts, we can say (leaving out everything inessential, including `pred1`: - ~(time|id): no time effect at the population/fixed-effect level (probably silly), random slopes or unstructured time variation depending on whether time is numeric or categorical - ~ time + (time|id): as above, but with a time effect at the population level - ~ (1|id) + (1|time:id): no time effect at the population level. 
Variation among subjects and among times within subjects - ~ time + (1|id) + (1|time:id): ditto, but with a population-level effect of time ## Recommendations - For a moderate-sized data set with many (more than 3 or 4) distinct time points, where the within-group trends are not obviously nonlinear, I would generally recommend the random-slopes model (model 2 with numeric times) - For a moderate to large data set with lots of times and nonlinear patterns, I would recommend a low-order polynomial model in time (if that's adequate), or a hierarchical GAM (e.g. see the excellent paper by Pedersen et al. 2019) - With a small number of distinct time points I would suggest the unstructured-time model for a large data set, falling back to the compound-symmetry model if necessary for parsimony --- Pedersen, Eric J., David L. Miller, Gavin L. Simpson, and Noam Ross. 2019. β€œHierarchical Generalized Additive Models in Ecology: An Introduction with Mgcv.” PeerJ 7 (May): e6876. [https://doi.org/10.7717/peerj.6876](https://doi.org/10.7717/peerj.6876).
null
CC BY-SA 4.0
null
2023-05-30T14:15:37.863
2023-05-30T14:32:02.357
2023-05-30T14:32:02.357
2126
2126
null
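A compact sketch of the model forms compared in the answer above, written out in lme4 syntax; `dat`, `y`, `time`, and `id` are placeholders, and `ftime` denotes time converted to a factor (my notation, for illustration only).
```
library(lme4)

dat$ftime <- factor(dat$time)

# Random-slopes model (time numeric): intercept and slope vary across id
m_slopes <- lmer(y ~ time + (time | id), data = dat)

# Compound-symmetry / nested model: (1|id) + (1|ftime:id), i.e. (1|id/ftime)
m_cs <- lmer(y ~ ftime + (1 | id) + (1 | ftime:id), data = dat)

# Unstructured time variation (factor time varying across id); data-hungry
m_un <- lmer(y ~ ftime + (ftime | id), data = dat)
```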
617320
1
null
null
0
17
I have a dataset containing different types of features: some are numerical and others are sequential. These sequential features consist of the Gaussian probability density function of a sequence. I want to give this Gaussian pdf as input to the model. I am using standard models like XGBoost and Random Forest.
Give probability density function as input to machine learning model
CC BY-SA 4.0
null
2023-05-30T14:25:01.640
2023-05-31T03:25:23.437
2023-05-31T03:25:23.437
384352
384352
[ "machine-learning", "normal-distribution" ]
617321
1
null
null
0
39
I am interested in the analysis of interactions between variables in a regression model. First, the context: I work in marketing and the explanatory variables correspond to marketing channels, therefore it is interesting to know which combinations of marketing channels are worthwhile in terms of sales volume (sales volume is the dependent variable $y_t$). For numerous reasons, I work with the following model: $$ \ln(y_t) = \ln(\beta_0) + \sum_{i=1}^{K}\beta_i \ln(x_{i,t}) + \sum_{j=K+1}^{L}\beta_jx_{j,t} +\ln(\epsilon_t) $$ Now, here is my attempt at analyzing these interactions: I have tried, at each period $t$, to take my estimated model $\ln(\hat{y}_t)$ and write it as a function of two explanatory variables of interest (by fixing the others): $$ \ln(\hat{y}_t) = f(x_{3,t}, x_{7,t}) $$ and look at what happens when I change the values of $x_{3,t}$ and $x_{7,t}$, which is a fairly poor analysis... Ideally, after estimating the model I would like to represent the interactions between the $x_k$ in quantitative terms (like numbers or percentages). On the internet, I found nothing like this in the literature; is this unrealistic?
Quantifying interactions in regression models
CC BY-SA 4.0
null
2023-05-30T14:26:04.067
2023-05-31T09:46:35.923
2023-05-31T09:46:35.923
362671
375362
[ "regression", "interaction", "interactive-visualization" ]
617322
1
null
null
0
13
I am interested in plotting the marginal effects in my GAMLSS model. However, what puzzles me a bit is that when using the `term.plot()` function from the `gamlss` package I do get estimates for the baseline category of a factor as well. However, when fitting a GAM model with the package `mgcv` the baseline category is always set to zero, which is also more intuitive to me. Underneath you can find a worked out example using the `rent` dataset from the `gamlss` package. The two factor variables are `loc` (1 = location below average, 2 = average location, 3 = location above average) and `H` (0 = no central heating, 1 = central heating) and `A` (year of construction) as well as `Fl` (floor space in meters) are modelled as P-Splines. ``` library(mgcv) library(gamlss) r <- gamlss(R ~ pb(Fl) + pb(A) + H + loc, family = GA, data = rent) summary(r) term.plot(r, pages = 1) r2 <- gam(R ~ s(Fl, bs = "ps") + s(A, bs = "ps") + H + loc, family = "Gamma"(link = "log"), data = rent) summary(r2) termplot(r2) ``` GAMLSS-Output: ``` Family: c("GA", "Gamma") Call: gamlss(formula = R ~ pb(Fl) + pb(A) + H + loc, family = GA, data = rent) Fitting method: RS() ------------------------------------------------------------------ Mu link function: log Mu Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.0851197 0.5692315 5.420 6.70e-08 *** pb(Fl) 0.0103084 0.0004031 25.573 < 2e-16 *** pb(A) 0.0014062 0.0002893 4.861 1.26e-06 *** H1 -0.3008111 0.0225869 -13.318 < 2e-16 *** loc2 0.1886692 0.0299295 6.304 3.58e-10 *** loc3 0.2719856 0.0322862 8.424 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 ------------------------------------------------------------------ Sigma link function: log Sigma Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -1.00196 0.01559 -64.27 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 ------------------------------------------------------------------ NOTE: Additive smoothing terms exist in the formulas: i) Std. Error for smoothers are for the linear effect only. ii) Std. Error for the linear terms maybe are not accurate. ------------------------------------------------------------------ No. of observations in the fit: 1969 Degrees of Freedom for the fit: 11.21547 Residual Deg. of Freedom: 1957.785 at cycle: 3 Global Deviance: 27683.22 AIC: 27705.65 SBC: 27768.29 ****************************************************************** ``` GAM-output: ``` Family: Gamma Link function: log Formula: R ~ s(Fl, bs = "ps") + s(A, bs = "ps") + H + loc Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 6.52315 0.02850 228.919 < 2e-16 *** H1 -0.30031 0.02292 -13.102 < 2e-16 *** loc2 0.18877 0.02972 6.351 2.65e-10 *** loc3 0.27184 0.03210 8.468 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(Fl) 1.723 2.136 309.3 <2e-16 *** s(A) 3.777 4.410 21.4 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.376 Deviance explained = 36.7% GCV = 0.13924 Scale est. = 0.13255 n = 1969 ``` Noe that the estimates are relatively close to another for both methods. Baseline categories are `loc1` (below average location) and `H0` (no central heating). However, when displaying the marginal effects via termplots, the GAM sets the effect of the baseline categories at zero, whereas the GAMLSS estimates an own effect for the baselines. 
I tried to reproduce this estimated effect but I am not sure how it is computed and why it is not left at zero? Termplots-GAMLSS: [](https://i.stack.imgur.com/yXiVk.png) Termplots-GAM [](https://i.stack.imgur.com/tLtxH.png) I have already searched the literature, but I could not find an explanation of why the `term.plot` function is not placing the baseline effect at zero. Thank you very much for your answers!
Termplots in GAMLSS
CC BY-SA 4.0
null
2023-05-30T14:36:35.753
2023-05-30T14:36:35.753
null
null
389161
[ "mgcv", "gamlss" ]
617323
1
null
null
1
20
When comparing model performance, is it valid to use the Wilcoxon signed rank test for matched pairs when the accuracy metric is the Brier score? (Here, the Brier score is used in calculating the OOB error, which is available using the Ranger RF package - but this question is asked in general terms. I'm comparing two random forest models on the same data - where each has a different set of variables available.) I see many journal articles using AUC with the Wilcoxon test. Wilcoxon tests using the Brier score are scarce, but they exist. This question isn't about the best way to compare models; rather, it is asking whether it's okay to use a Brier score with the Wilcoxon signed rank test.
Is the Wilcoxon Signed Rank Test appropriate when the Brier score is the accuracy metric?
CC BY-SA 4.0
null
2023-05-30T14:38:38.420
2023-05-30T20:53:07.977
2023-05-30T20:53:07.977
294655
294655
[ "random-forest", "statistical-power", "model-comparison", "wilcoxon-signed-rank", "scoring-rules" ]
617324
1
null
null
1
11
I consider a linear model with an interaction term: Y = b0 + b1X + b2Z + b3XZ, where X and Z are the independent variables and b1 to b3 are the regression coefficients. The variances of b1 and b3 are s11 and s33, respectively; the covariance of b1 and b3 is s13. Is there a name for s13 / (sqrt(s11) * sqrt(s33))? It looks like the formula for the correlation coefficient, but it is not called the correlation coefficient, is it? If anyone knows what it is called, I would appreciate it if you could enlighten me!
Name of a ratio using a variance-covariance matrix of regression coefficients for main effects and interaction effects
CC BY-SA 4.0
null
2023-05-30T14:47:21.020
2023-05-30T14:47:21.020
null
null
161341
[ "variance", "interaction", "covariance" ]
617325
1
null
null
0
23
I need some help calculating a sample size. I have two groups of people to whom I need to administer a questionnaire. Group A consists of 45,000 people and group B consists of 3,000 people. How do I determine the right sample size for group A and for group B? Thank you very much in advance!
Sample size calculation in two different populations for administration of a questionnaire
CC BY-SA 4.0
null
2023-05-30T15:10:37.880
2023-05-30T15:10:37.880
null
null
389169
[ "sampling", "sample-size", "survey", "sample", "group-differences" ]
617326
1
617334
null
0
42
This question is a follow-up to [this question](https://stats.stackexchange.com/questions/616622/different-sets-of-features-selected-by-three-different-functions-in-r-for-runnin/617017?noredirect=1#comment1147325_617017) I asked here last week. I got an important and useful answer to it, but what that led to was surprising. And once again, [here](https://github.com/Spencermstarr/EER-Research-Project/tree/main) is a link to the GitHub Repo for this project. I am running N LASSO Regressions, one for each of N datasets I have in a file folder to serve as one of several Benchmark Variable Selection Algorithms whose performance can be compared with that of a novel Variable Selection Algorithm being proposed by the principal author of the paper we are collaborating on right now. I originally ran my N LASSOs via the enet function, but tried to replicate my findings via glmnet and lars, each selected different variables. The way I had previously set up my code to run them using the glmnet function was fundamentally misusing the s argument in the coef() function in the middle of the following code block: ``` set.seed(11) glmnet_lasso.fits <- lapply(datasets, function(i) glmnet(x = as.matrix(select(i, starts_with("X"))), y = i$Y, alpha = 1)) # This stores and prints out all of the regression # equation specifications selected by LASSO when called lasso.coeffs = glmnet_lasso.fits |> Map(f = \(model) coef(model, s = .1)) Variables.Selected <- lasso.coeffs |> Map(f = \(matr) matr |> as.matrix() |> as.data.frame() |> filter(s1 != 0) |> rownames()) Variables.Selected = lapply(seq_along(datasets), \(j) j <- (Variables.Selected[[j]][-1])) ``` Because for glmnet, the s is the lambda penalty term in each LASSO itself, and this is not the case for enet or lars. That meant I was arbitrarily setting the L1 Norm for every LASSO, not using cross-validation. So, I realized my error, and wrote this adjusted code which works: ``` grid <- 10^seq(10, -2, length = 100) set.seed(11) # to ensure replicability glmnet_lasso.fits <- lapply(X = datasets, function(i) glmnet(x = as.matrix(select(i, starts_with("X"))), y = i$Y, alpha = 1, lambda = grid)) # Use cross-validation to select the penalty parameter system.time(cv_glmnet_lasso.fits <- lapply(datasets, function(i) cv.glmnet(x = as.matrix(select(i, starts_with("X"))), y = i$Y, alpha = 1))) # Store and print out the regression equation specifications selected by LASSO when called lasso.coeffs = cv_glmnet_lasso.fits |> Map(f = \(model) coef(model, s = "lambda.min")) Variables.Selected <- lasso.coeffs |> Map(f = \(matr) matr |> as.matrix() |> as.data.frame() |> filter(s1 != 0) |> rownames()) Variables.Selected = lapply(seq_along(datasets), \(j) j <- (Variables.Selected[[j]][-1])) ``` But when I started re-running these glmnet LASSOs on all 260,000 of my datasets again, 5 or 10k at a time, I noticed their performances are noticeably poorer! 
For example, just running them both ways on the first 10 of my 260k datasets which are located in the folder called "ten" in the Repository, the original way I had it with an arbitrary lambda of 0.1 chosen for each of the 10 LASSOs, the resulting performance metrics were: ``` > performance_metrics True.Positive.Rate True.Negative.Rate False.Positive.Rate Underspecified.Models.Selected 1 0.97 0.03 0 Correctly.Specified.Models.Selected Overspecified.Models.Selected 4 6 All.Correct..Over..and.Underspecified.Models Models.with.at.least.one.Omitted.Variable 10 0 Models.with.at.least.one.Extra.Variable 6 ``` But when I re-do it the proper way using cross-validation, these are my performance metrics: ``` > performance_metrics True.Positive.Rate True.Negative.Rate False.Positive.Rate Underspecified.Models.Selected 1 0.819 0.181 0 Correctly.Specified.Models.Selected Overspecified.Models.Selected 1 9 All.Correct..Over..and.Underspecified.Models Models.with.at.least.one.Omitted.Variable 10 0 Models.with.at.least.one.Extra.Variable 9 ``` Have a made another mistake in my code, but a different one this time somehow? p.s. A quote from the README: Each dataset name is of the general form n1-N2-N3-N4.csv; where n1 is one of the following 4 levels of multicollinearity between the set all true regressors (structural variables/factors) for that dataset: 0, 0.25, 0.5, 0.7; N2 is the number of structural variables in that dataset (this is known with certainty for each synthetic dataset by construction which is the reason to create them via Monte Carlo methods in order to use them to explore the properties of new algorithms or estimators and/or compare the performances of such things against standard Benchmarks as I am doing here for this project) which can be any one of 13 different possible values from 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15; N3 is the Error Variance which is one of the 10 following monotonically increasing (integer) values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; and lastly, N4, which Dr. Davies decided to set to 500 for this project, is the number of different synthetic datasets with all three of the aforementioned character traits fixed to generate randomly. Just to make that last point a little clearer, 4 * 13 * 10 * 500 = 260,000 datasets.
Why is the performance of LASSOs implemented via glmnet in R going down after cross-validation?
CC BY-SA 4.0
null
2023-05-30T15:12:23.517
2023-05-30T17:30:55.390
null
null
373983
[ "r", "multiple-regression", "feature-selection", "reproducible-research", "glmmlasso" ]
617327
1
null
null
0
15
I am learning epidemiology and am puzzled by the following issue. Assume we have a prospective cohort study with the following results: - 1000 participants alive at baseline (N) - 12 years of follow-up - 200 participants died over the observation period - 12000 person-years of follow-up. According to my calculations, the incidence risk is given by $$ Risk_{1} = 200/1000 = 0.20 $$ And the incidence rate is: $$ Rate = 200/12000 = 0.01666667$$ So, the incidence rate is 0.017 deaths per person-year (approximately). We can convert the incidence rate back to a risk as follows: $$ Risk_{2} = 1 - e^{(-Rate \times t)} $$ where t is time in years. $$ Risk_{2} = 1 - e^{(-.01666667*12)} = 0.181 $$ Do you know why the two risks are different? Different assumptions?
Incidence risk vs. recalculated incidence risk from incidence rates
CC BY-SA 4.0
null
2023-05-30T15:27:39.203
2023-05-30T15:27:39.203
null
null
305274
[ "odds-ratio", "epidemiology", "risk", "incidence-rate-ratio" ]
617329
1
null
null
2
14
I have multiple time series on which I want to identify statistically significant (if any) trends. To that end, I started by conducting the Augmented Dickey Fuller (ADF) test to identify which series are not stationary (thus implying an underlying trend): [](https://i.stack.imgur.com/mX8sfm.png) ``` Augmented Dickey-Fuller Test (p-value): #0: 0.7400121258386816 -> (not stationary) #1: 0.003756531421338549 -> (stationary) #2: 0.8356431503570756 -> (not stationary) #3: 1.2618571533446908e-07 -> (stationary) ``` In order to identify relevant monotonic trends, I want to conduct the Mann-Kendall test, but I need to ensure that samples are independent and not [serially correlated](https://www.statisticshowto.com/mann-kendall-trend-test/). ACF/PACF analysis on the time series suggest/are influenced by trends, and therefore require de-trending transformations to achieve stationarity: [](https://i.stack.imgur.com/GS41Im.png) [](https://i.stack.imgur.com/Tz69Lm.png) However, I'm struggling on 1) how to detrend non-stationarity series - using gradient vs difference; It should be noted I'm not trying to model the trend at this stage (as I'm not yet sure a trend exists) and can't perform correlation analysis on the residuals without making assumptions on an underlying model. [](https://i.stack.imgur.com/QQKZPm.png) Additionally, I'm unsure about 2) which analysis tool to use: ACF, PACF or the Ljung-Box test? I am getting very different results with each: [](https://i.stack.imgur.com/SjqMFm.png) [](https://i.stack.imgur.com/BEoxom.png) ``` Ljung-Box Test (p-value): org_#0 grad_#0 diff_#0 org_#1 grad_#1 diff_#1 org_#2 grad_#2 diff_#2 org_#3 grad_#3 diff_#3 1 5.832414e-06 0.652845 0.130720 0.023136 0.649813 0.000918 6.256156e-05 0.692498 0.036099 0.574666 0.882558 0.001247 2 2.251164e-08 0.005893 0.030928 0.004227 0.027847 0.003637 1.582695e-06 0.003751 0.056359 0.825435 0.090845 0.004315 3 1.014388e-10 0.012420 0.038536 0.001468 0.035400 0.010453 8.091142e-08 0.008257 0.123269 0.895700 0.187108 0.012288 4 2.102519e-12 0.011219 0.049601 0.001637 0.072348 0.022559 6.321557e-09 0.018818 0.215027 0.789189 0.262034 0.021165 5 2.409707e-13 0.005992 0.051696 0.002315 0.109543 0.032003 6.076601e-10 0.026010 0.273688 0.854404 0.367491 0.028546 6 5.651045e-14 0.004371 0.040586 0.002245 0.151084 0.044142 4.085782e-11 0.015675 0.061854 0.918215 0.313839 0.044380 7 2.305528e-14 0.002721 0.022965 0.003423 0.167607 0.073019 4.044796e-11 0.020773 0.040071 0.914727 0.420027 0.026616 8 2.235907e-14 0.003622 0.036035 0.006228 0.218115 0.109756 4.466094e-11 0.028620 0.063734 0.439324 0.164661 0.003724 9 5.095045e-14 0.004166 0.034175 0.010811 0.238909 0.159675 7.262066e-11 0.039198 0.076998 0.540013 0.226233 0.002383 10 1.516367e-13 0.004739 0.051209 0.011125 0.248227 0.155707 2.031204e-10 0.040187 0.106888 0.452840 0.289605 0.003102 Ljung-Box Test (statistical significance, pvalue < 0.05): org_#0 grad_#0 diff_#0 org_#1 grad_#1 diff_#1 org_#2 grad_#2 diff_#2 org_#3 grad_#3 diff_#3 1 True False False True False True True False True False False True 2 True True True True True True True True False False False True 3 True True True True True True True True False False False True 4 True True True True False True True True False False False True 5 True True False True False True True True False False False True 6 True True True True False True True True False False False True 7 True True True True False False True True True False False True 8 True True True True False False True True False False False True 9 True True True True 
False False True True False False False True 10 True True False True False False True True False False False True ``` For context, each sample in the original time series is a metric computed from a segment of a longer series of repetitive/periodic electrical measurements, therefore I don't expect any correlation to occur, as each value should be independent (no overlap between consecutive segments). Nevertheless, I'm seeing some unexpected serial correlation among the samples even after de-trending, therefore I'm not sure if I can use the MK test here. What would be the best approach?
How to test for statistical independence on non-stationary time series?
CC BY-SA 4.0
null
2023-05-30T16:13:34.750
2023-05-30T16:13:34.750
null
null
388119
[ "time-series", "autocorrelation", "stationarity", "trend", "acf-pacf" ]
617331
1
null
null
0
26
I have a dataset with 2 features in it, I would like to obtain gradient for this case. Can anyone help me with this. I am looking for 2D solutions, partial derivatives with respect to x1 and x2. def predict_grad(x, k=0): ``` x_true = x x = x.reshape(-1,1) print("gp.X_train_", gp.X_train_.shape) print("x", x.shape) print("gp.X_train_", gp.X_train_.reshape(1,-1).shape) X = x - gp.X_train_.reshape(1,-1) print("X", X.shape) print("x", x) print(" ") print("gp.X_train_", gp.X_train_) print(" ") print("X", X) c = gp.kernel_.k1.constant_value l = gp.kernel_.k2.length_scale A = gp.alpha_ f = np.exp(-(X)**2 / (2*l**2)) df = (f * (-X / l ** 2)) print("f", f.shape) print("df", df.shape) print("A", A.shape) if k == 0: return c * f @ A elif k == 1: return c * df @ A else: raise Exception('Unknown parameter k: {}'.format(k)) ``` This is the code I am using to obtain derivative if x is 1D. This code is not working if x is 2D. In this code, I can obtain derivative of $$ f(x) $$ from GPR. But If I give a function with 2 variables $$ f(x, y) $$ I am not getting gradient. I would like to get $$ \partial f/ \partial x $$ and $$ \partial f/ \partial y $$ using GPR.
Computing gradients of Gaussian Process Regression
CC BY-SA 4.0
null
2023-05-30T16:43:21.250
2023-05-30T18:05:46.917
2023-05-30T18:05:46.917
389174
389174
[ "regression", "machine-learning", "gaussian-process", "derivative", "gradient" ]
617333
1
617443
null
1
25
In Rasmussen's Gaussian Processes for Machine Learning, the joint distribution of noisy function observations, $y=f(x)+\epsilon$, at $x$ and noiseless function evaluations, $f^\star$, at unseen points, $x^\star$, given the hyperparameters of the kernel and the noise variance is $$ \begin{bmatrix} y \\ f^\star \end{bmatrix}\sim N\biggl(0, \begin{bmatrix} K(x,x)+\sigma^2I & K(x,x^\star)\\ K(x^\star,x) & K(x^\star,x^\star) \end{bmatrix}\biggr) $$ Rasmussen uses the above to derive $p(f^\star|y,X,X^\star)$, which he refers to as the predictive distribution. He appears to construct $95\%$ confidence regions by taking the pointwise mean and 1.96 times the standard deviation given by the conditional distribution of $f^\star$. What is the interpretation of these pointwise intervals around the mean and their relationship to $f$? I believe the predictive distribution above is a distribution over functions which describes the set of plausible values $f^\star$ could take on if I sampled from the conditional. It isn't clear to me though that these intervals are directly related to the realized values of $f$ which occur in the training points. Is that accurate?
Pointwise Confidence Regions in Gaussian Process
CC BY-SA 4.0
null
2023-05-30T17:12:16.463
2023-05-31T14:18:49.190
2023-05-31T13:18:15.893
311086
311086
[ "confidence-interval", "gaussian-process", "multivariate-normal-distribution" ]
617334
2
null
617326
2
null
There is some statistical content underlying this question, having to do with what choice to make for the penalty factor when fitting a LASSO model. If you know that the "correct" model is sparse, `lambda.min` returned by `cv.glmnet()` might not be the best choice. To illustrate the points in my comment, examine what happens with a single data set. First, let `cv.glmnet()` work on its own. ``` lDat <- read.csv("~/Downloads/0-3-1-1.csv",skip=2) library(glmnet) set.seed(11) cv0311 <- cv.glmnet(x=as.matrix(lDat[2:31]),y=lDat$Y) cv0311 # # Call: cv.glmnet(x = as.matrix(lDat[2:31]), y = lDat$Y) # # Measure: Mean-Squared Error # # Lambda Index Measure SE Nonzero # min 0.03104 39 0.9792 0.06091 16 # 1se 0.13752 23 1.0316 0.05846 3 ``` In this case, if you use minimum mean-square error as the criterion, you get `lambda.min = 0.03104` and retain 16 of the 30 coefficients. There is no absolute necessity to make that choice, however. The same single invocation of the function also provides the highest penalty within one standard error of the minimum, for `lambda.1se = 0.13752` and only 3 non-zero coefficients. If you know that the "correct" model is sparse (as these data seems to be constructed), then `lambda.1se` can be more appropriate. I suspect that the arbitrary penalty values in your prior invocations of LASSO-related functions came closer to the `lambda.1se` values and thus retained fewer coefficients than what you have found with `lambda.min`. The next point is more specifically about implementation. If you pre-specify the grid of `lambda` values to evaluate instead of letting the function choose, you might be getting into trouble. ``` grid <- 10^seq(10, -2, length = 100) set.seed(11) cv0311FixedGrid <- cv.glmnet(x=as.matrix(lDat[2:31]),y=lDat$Y,lambda=grid) cv0311FixedGrid # # Call: cv.glmnet(x = as.matrix(lDat[2:31]), y = lDat$Y, lambda = grid) # # Measure: Mean-Squared Error # # Lambda Index Measure SE Nonzero # min 0.03054 96 0.9792 0.06095 17 # 1se 0.12328 91 1.0227 0.05798 3 ``` You retain about the same number of coefficients at each of the final `lambda` choices as previously, but note that the `Index` in your grid at which you identify those choices is near the top of your 100 values, unlike what happens when you just let `cv.glmnet()` specify its own data-driven grids. I would worry that your best penalty choices will be "off the grid" with the same fixed grid on other data sets.
null
CC BY-SA 4.0
null
2023-05-30T17:30:55.390
2023-05-30T17:30:55.390
null
null
28500
null
617336
1
null
null
0
12
Background: I have trained a deep neural network model with 3 hidden layers of size 32, 32, 16 for a binary classification problem. I have 9 input features and a 0 or 1 as output. My aim is to use this trained model in an application where I provide the 9 features (computed dynamically in the application code), and I get a 0 or 1 to decide my course of action accordingly. The problem I am dealing with is an imbalanced classification where in the dataset the proportion of 1s overpowers the 0s. Hence, I use the AUC ROC metric to assess the performance and the binary cross entropy as the loss function. Below is the learning curve I get using a learning rate of 1e-4, batch size of 32, and using L2 regularization. [](https://i.stack.imgur.com/kJqrx.png) Methodology and problem faced: From the curve, I assume this trained_model can be accepted as the test performance was also good. So I apply this model in my actual scenario, i.e., the application. In this application, I collect the same 9 features, normalize them using the minimum and maximum used in the original training set, and then use trained_model.predict(). However, I find that it is not performing well in the application, i.e., predicting a 0 even for the straightforward cases where it should have been a 1! My analysis: I am confused about where I am going wrong, because the way the 9 features are generated/computed is the same in the training dataset generation and application codes. So, there aren't any distribution changes, I believe. For example, features 1 and 2 are from a Poisson distribution in both cases; others are computed using some formulas. I generate the synthetic data using a Python script, say File_A.py, and train and use the model in another file, File_B (the application). This File_B is the same as File_A except that it doesn't collect the dataset but rather applies the trained_model() in its code where a time-intensive operation exists. Of course, the values of the input parameters change in both files but not the distribution they come from. Or is it suggested I add more features that better affect the outcome? But in any case, keeping in view the learning curve of the model, this trained_model should at least not perform this badly for simple, straightforward cases. I also did feature importance analysis using XAI SHAP and decided on these 9 features. Can anyone throw some light on this behavior and suggest anything?
Wrong predictions when using a trained model in deployment
CC BY-SA 4.0
null
2023-05-30T17:49:50.687
2023-05-30T17:49:50.687
null
null
346726
[ "machine-learning", "classification", "predictive-models", "dataset" ]
617337
1
null
null
0
11
I am working on implementing a model in R. I thought I had a pretty good approach, but one of my collaborators has pointed out that the model degrees of freedom from lmerTest don't make sense for the interaction term. First, a description of the experiment. In my data I have two fixed effects: "Species" is categorical, with 4 levels. "Induction" is categorical, with 3 levels. I also have a random effect "Genotype". In the experiment, each Species had three possible genotypes, thus a given individual has a Species, Induction, and Genotype. However, the Genotypes are unique to each species. For example, Genotypes A, B, C belong only to Species 1, while Genotypes D, E, F belong only to Species 2, etc. I also have a continuous covariate "biomass". Some example data: ``` test_dat <- data.frame( "response" = c(rnorm(288)), "Induction" = c(rep(c(rep(c(rep("1", 2), rep("2",2), rep("3",2)),12)),4)), "Species" = c(rep(c(rep("A",18), rep("B", 18), rep("C",18), rep("D",18)),4)), "GenotypeID" = c(rep(c(rep(c(rep("1", 6), rep("2",6), rep("3", 6)),4)),4)), "biomass" = rnorm(288)) ``` To make the unique nature of each genotype explicit, you can run: ``` test_dat$UniqueGeno <- paste(test_dat$Species,test_dat$GenotypeID, sep="") ``` The model I thought I could run is ``` ex_mod <- lmer(response ~ Induction * Species + biomass +(1|Genotype), test_dat) ``` However this results in the following output from `anova(ex_mod, type = 2)` ``` Type II Analysis of Variance Table with Satterthwaite's method Sum Sq Mean Sq NumDF DenDF F value Pr(>F) Induction 4.2500 2.12498 2 267.078 2.1428 0.1193 Species 3.6591 1.21971 3 8.056 1.2300 0.3602 biomass 0.0025 0.00252 1 274.821 0.0025 0.9599 Induction:Species 8.3486 1.39144 6 267.143 1.4031 0.2135 ``` My understanding is that the DenDF for `Induction` and `Induction:Species` should be much lower, as it should be working with the mean of genotypes. Since each genotype is unique to a species, I am wondering if there is a more complex way of nesting the error term to get a more accurate model. Thanks for your help
Struggling to understand lme4 model syntax and nesting
CC BY-SA 4.0
null
2023-05-30T17:50:52.117
2023-05-30T17:50:52.117
null
null
187422
[ "r", "lme4-nlme" ]
617338
1
null
null
0
28
Sorry if this has been asked before, but I have searched the internet a lot and have not been able to find a satisfactory answer. I have a time series (approximately 25 points) and I have implemented Binary Segmentation to detect a changepoint in the time series. My question is, how do I see if this changepoint is statistically significant? That is, I want to see if there is a shift in mean around that point in the time series. Ideally, I want to summarise this using a p-value. The standard t-tests do not apply, I assume, because the samples belong to a time series and hence have some correlation. I would be extremely thankful for any help. Thank you
How to test if the mean of a time series has significantly shifted at a certain point?
CC BY-SA 4.0
null
2023-05-30T18:07:30.010
2023-05-31T19:16:11.380
2023-05-30T18:45:40.073
919
389183
[ "time-series", "t-test", "change-point" ]
617339
2
null
617228
7
null
The specific question from the body text is answered by J-J-J, but the title question admits a few more explanations > Can statistical units measured per thousand inhabitants be bigger than 1000? - The number can be bigger if the quantity counted is not itself inhabitants. For example, the number of shoes per inhabitant most likely exceeds 2. - Even for a count of people per inhabitant, the ratio can exceed 1. For example, in a particular city with large industry, commercial properties and/or tourism, the number of workers per inhabitant can exceed 1 if many of the workers live outside the city. - In addition, figures can exceed 100% when they are measured with some source of error. In technical applications this can happen, for instance, when yield is computed and an experiment weighs material before and after some treatment. If the process is close to 100% yield then it might sometimes exceed 100% due to measurement errors in weighing, or because the process has some residue from a previous experiment (when I put 100 grams of beans in my coffee mill, it sometimes produces more than 100 grams of coffee grounds). With demographics this might occur when the ratio is based on two independent estimates/measurements. - Miscalculation or falsified numbers can also be a reason for unphysical values.
null
CC BY-SA 4.0
null
2023-05-30T18:27:04.163
2023-05-30T21:41:07.180
2023-05-30T21:41:07.180
164061
164061
null
617340
1
null
null
-1
34
Let's say we have a car park, and every 5 minutes we record the number of parked cars and the number of free spaces. Can we aggregate these data per hour, for example? If yes, how can we do this? Is there any special analysis for traffic data?
Can we aggregate traffic data?
CC BY-SA 4.0
null
2023-05-30T18:28:23.303
2023-05-30T18:28:23.303
null
null
353824
[ "mathematical-statistics" ]
617341
1
null
null
1
18
Let $X(ij)$, for $i = 1, 2, \ldots$ and $j = 1, 2, \ldots$, be random variables. I proved that for each fixed $i$, the sequence $X(i1), X(i2), \ldots$ converges in distribution to a random variable $Y$ as $j$ tends to $\infty$. More precisely, $ \begin{bmatrix} X(11) & X(12) & X(13) & X(14) & ... & \rightarrow_d Y \newline X(21) & X(22) & X(23) & X(24) & ... & \rightarrow_d Y \newline X(31) & X(32) & X(33) & X(34) & ... & \rightarrow_d Y \end{bmatrix} $ I would like to prove that, based on this result, the diagonal sequence of random variables $$X(11), X(22), X(33), \ldots$$ also converges to $Y$ in distribution.
Convergence of a subsequence of arrays of random variables
CC BY-SA 4.0
null
2023-05-30T18:30:18.497
2023-05-30T18:30:18.497
null
null
365245
[ "distributions", "self-study", "convergence", "subset" ]
617342
1
null
null
3
29
Does it have to do with the interdependence of one equation on the lagged values of its own as well as the other equations? If I remember correctly, in simultaneous equations, cross-causality is also a cause for endogeneity, but in VAR models the causality moves in one direction, the past impacts the future and not vice-versa.
Why are the variables in a VAR model considered endogenous?
CC BY-SA 4.0
null
2023-05-30T18:42:31.733
2023-06-02T19:24:11.730
2023-05-30T19:28:15.500
53690
367205
[ "vector-autoregression", "endogeneity", "exogeneity" ]
617344
1
null
null
1
23
I have come to know about scatterplot matrices, where pairwise scatterplots for covariates are built and then binary responses are jittered. This plot is helpful to give an idea about how each covariate has an effect on the response without fitting a logistic regression. But I actually don't understand the plot really. Here is the code that gives this type of plot: ``` library(MASS) library("ggplot2") train <- rbind(Pima.tr, Pima.tr2) train$type <- as.integer(train$type)-1L library(GGally) pairs(subset(train, select= - c(type)), col = as.factor(train$type)) ``` The red dots are maybe the points where the response is 1. How can I interpret this plot below? [](https://i.stack.imgur.com/QrnVm.png)
How to Interpret scatter plot for binary response?
CC BY-SA 4.0
null
2023-05-30T18:59:46.170
2023-05-31T00:30:33.683
2023-05-30T23:05:51.753
345611
387609
[ "r", "logistic", "interpretation", "binary-data", "scatterplot" ]
617346
1
null
null
1
14
I have two arrays of data: |# of Files |Time to Process in seconds | |----------|--------------------------| |1 |8 | |2 |20 | |3 |31 | |4 |76 | What I want to do is come up with an estimate of how long it will take to process n files. I have processed 1–4 files and noted the time in seconds each run takes. I'm trying to figure out a usable equation or ratio that will produce results close to what I have in the table above. I'm not sure if I should use variance, covariance, correlation or some other statistical method to find what I'm looking for.
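A minimal sketch of one way to get such an equation, assuming only the four measurements above (the names `files` and `secs` are made up here): fit a simple regression of time on the number of files and use it to predict. Since the last point grows faster than linearly, a quadratic fit is also worth comparing.

```r
files <- c(1, 2, 3, 4)
secs  <- c(8, 20, 31, 76)

fit_lin  <- lm(secs ~ files)                      # straight-line fit
fit_quad <- lm(secs ~ poly(files, 2, raw = TRUE)) # quadratic fit

# predicted processing time for 10 files under each fit
predict(fit_lin,  newdata = data.frame(files = 10))
predict(fit_quad, newdata = data.frame(files = 10))
```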
Need to come up with an equation or method to calculate a ratio between two arrays of numbers
CC BY-SA 4.0
null
2023-05-30T19:16:21.647
2023-05-30T19:16:21.647
null
null
389192
[ "mathematical-statistics", "variance", "covariance" ]
617347
1
null
null
1
8
What properties should a subscale have for it to be used independently of the whole scale? I am looking for a scale that measures a specific perceived parenting style, but there is no dedicated scale for it. Instead, every existing scale covers multiple constructs. The issue is the number of items, along with other constructs that are of no interest. So, is it possible to use just one subscale from the whole scale?
Independent Subscales
CC BY-SA 4.0
null
2023-05-30T19:32:41.470
2023-05-30T19:32:41.470
null
null
389143
[ "scales" ]
617349
1
null
null
2
26
In class today we saw the following example: Imagine in a university there are 1000 students, and we know that the height of students has a normal distribution (with a specific mean and variance). The normal distribution is so "powerful", that even if you take the heights of 5 random students, (on average, over many experiments) you still get pretty close to the actual average height of the population! Here is an R example that shows this: ``` set.seed(123) my_data = data.frame(id = 1:100, height = rnorm(1000, 140, 5)) results = list() for (i in 1:1000) { sample_i = my_data[sample(nrow(my_data), 5), ] mean_i = mean(sample_i$height) results[[i]] = data.frame(i, mean_i) } final = do.call(rbind.data.frame, results) > mean(final$mean_i) [1] 139.8691 plot(hist(final$mean_i, main = "Distribution for the Average Height of 5 Randomly Selected Students (1000 Random Samples)")) ``` [](https://i.stack.imgur.com/upTCK.png) Now, suppose the 10 tallest people in the world join the university - repeating the same experiment produces a different histogram: ``` set.seed(123) my_data_1 = data.frame(id = 1:100, height = rnorm(1000, 140, 5)) my_data_2 = data.frame(id = 100:110, height = rnorm(11, 230, 5)) my_data = rbind(my_data_1, my_data_2) results = list() for (i in 1:1000) { sample_i = my_data[sample(nrow(my_data), 5), ] mean_i = mean(sample_i$height) results[[i]] = data.frame(i, mean_i) } final = do.call(rbind.data.frame, results) plot(hist(final$mean_i, main = "Distribution for the Average Height of 5 Randomly Selected Students (1000 Random Samples)")) > mean(final$mean_i) [1] 140.7412 > mean(my_data$height) [1] 140.9521 ``` [](https://i.stack.imgur.com/QzGRY.png) Even in this case, it still only takes 5 random samples (on average, over many experiments) to get a pretty close answer - however, the first example provides less extreme results. Having said this, I had the following question: Is it possible to measure the "adversity" of a probability distribution function? - For example - in the first case, it only took 5 random samples to get an estimate "close" to the actual mean. - But in the second case, even though 5 random samples averaged over many experiments gave a close estimate to the actual mean - there were some instances in which a set of 5 random samples provided an estimate that was significantly different from the actual mean. In a certain way, we could argue that the probability distribution in the second case provided us more "adversity" compared to the probability distribution in the first case. - Thus, is there some metric which can be used to measure the "adversity" of a probability distribution? As an example - suppose the heights of students in University A are distributed with a normal distribution and the heights of students in University B are distributed with a gamma distribution : If you were only able to take a set of 5 random samples from either of these universities, which of these sets has a higher probability of its mean being closer to the actual mean? Are such comparisons possible? Thanks!
Measuring the "Adversity" of a Probability Distribution?
CC BY-SA 4.0
null
2023-05-22T03:27:36.033
2023-05-30T19:53:10.460
null
null
77179
[ "probability" ]
617351
1
null
null
0
14
I am trying to build a logistic regression model. My test and training sets are generating almost the same AUC. Ideally- this means that the model is performing very well. But since, I got this result at my very first attempt of building this model- I am a bit skeptical in accepting the fact that I have come up with a great model. I want to make sure that nothing else is going on with my process. Test AUC 0.7606003279815455 Train AUC 0.7613571317318779 Note: The output class is imbalanced: 10%-90% split of 1 and 0 values. So I implemented undersampling by selecting all the observations from the minority class and 3*(number of observations from minority class) from majority class. ``` Here is my code (After data processing) df_item_date_level_v6 = df_item_date_level_v5.loc[:,~df_item_date_level_v5.columns.duplicated()].copy() X=df_item_date_level_v6.drop(["sales_ind"], axis=1) y=df_item_date_level_v6.loc[:,["sales_ind"]] X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, train_size = .75) lr = LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=1000, multi_class='ovr', n_jobs=1, penalty='l1', random_state=None, solver='liblinear', tol=0.0001, verbose=0 ) #lr = LogisticRegression() lr.fit(X_train, y_train) train_score=lr.score(X_train, y_train) test_score=lr.score(X_test, y_test) print(lr.intercept_) print(lr.coef_) #Plotting AUC #predict_proba generates the predicted value of Y in the very first column. train_fpr, train_tpr, thresholds = metrics.roc_curve(y_train,lr.predict_proba(X_train)[:,1]) test_fpr, test_tpr, thresholds = metrics.roc_curve(y_test, lr.predict_proba(X_test)[:,1]) plt.plot(train_fpr, train_tpr, label="trainAUC="+str(metrics.auc(train_fpr,train_tpr))) plt.plot(test_fpr, test_tpr, label="test AUC ="+str(metrics.auc(test_fpr, test_tpr))) plt.legend() plt.xlabel("FPR") plt.ylabel("TPR") plt.title("ROC for Train and Test data with best_fit") plt.grid() plt.show() ```
Test and Train AUC are almost exactly the same
CC BY-SA 4.0
null
2023-05-30T20:04:01.460
2023-05-30T20:04:01.460
null
null
389193
[ "regression", "machine-learning", "logistic" ]
617352
1
null
null
0
7
Here's a function to print a couple of pdf's: all necessary data and libraries have been provided; n1 and n2 are used to split a long data frame in two. ``` print_data_frame<-function(){ d_am<-data[,c(1,2,3,16)] ; colnames(d_am)<-c("Date","Systolic","Diastolic","Regime") d_pm<-data[,c(1,5,6,16)] ; colnames(d_pm)<-c("Date","Systolic","Diastolic","Regime") tt <- ttheme_default(base_size=8) n1<-c(1:ceiling(nrow(data)/2)) ; n2<-c((1+ceiling(nrow(data)/2)):nrow(data)) pdf("c:/aaa/plots_data_am.pdf",height=11) D1<-tableGrob(d_am[n1,],theme=tt) D2<-tableGrob(d_am[n2,],theme=tt) marrangeGrob(list(D1,D2),nrow=1,ncol=2,top="Morning Readings") ; dev.off() pdf("c:/aaa/plots_data_pm.pdf",height=11) D1<-tableGrob(d_pm[n1,],theme=tt) D2<-tableGrob(d_pm[n2,],theme=tt) marrangeGrob(list(D1,D2),nrow=1,ncol=2,top="Evening Readings") ; dev.off() } ``` If I ignore the function and just execute each line by itself (in RStudio) it produces the two pdfs, just as I want. But if I run the function (as print_data_frame() ) it produces 2 tiny pdf files that are unreadable by Acrobat. Can anyone see why the function version isn't working correctly?
pdfs via marrangeGrob
CC BY-SA 4.0
null
2023-05-30T20:09:43.193
2023-05-30T20:09:43.193
null
null
95831
[ "r" ]
617353
1
null
null
0
8
I plan to run a logistic regression model to understand the influence of temperature on the occupancy of a hare species. However I can't decide on which aspect of temperature should I consider as my covariate? Is it minimum temperature that I should take? maximum? mean? how do I determine which one would be the best one for my analysis?
How do I choose one covariate out of many covariates that might have similar effects?
CC BY-SA 4.0
null
2023-05-30T20:17:22.463
2023-05-30T20:17:22.463
null
null
114568
[ "predictor" ]
617354
1
null
null
0
34
I have some data that I would like to fit using a dependent Dirichlet process (DDP) but my data contains replicates. For example, I have longitudinal data (say measured at 6 time points) and I would like to fit a DDP to that data but I am unsure how to handle the replication. Are there examples literature on this topic that someone could point me to?
Literature on Dependent Dirichlet Processes with Replicates
CC BY-SA 4.0
null
2023-05-30T20:18:03.183
2023-05-31T22:20:51.217
2023-05-31T22:20:51.217
227508
227508
[ "bayesian", "dirichlet-process" ]
617355
1
null
null
0
34
I am reading a topic about point estimation of model parameters. I understood it as follows: > We have "observed" a sample $X_1,...X_n$ where we know its distribution, i.e. the sample is identically distributed w.r.t. $\mathcal{F}_\theta$ where $\theta\in \Theta$ is a model parameter (not necessarily one dimensional). For example we can have that $X_1,...X_n\stackrel{iid}{\sim}\mathcal{N}(\mu, \sigma^2)$ where $\mu$ is unknown. Then in general we want to find a best approximation for $\theta$, or in our example for $\mu$. Here graphically it would mean that we want to find the center of the density function of $\mathcal{N}(\mu, \sigma^2)$, i.e. whether it is moved to the left or to the right. There are several ways to do this; for example, one is the maximum likelihood estimator, and in our example we would get $\hat \mu:=\frac{1}{n}\sum_{i=1}^n X_i$, which is the maximum likelihood estimator of $\mu$. But as we see $\hat \mu$ is a random variable, not a number. Our prof. then told us that if we take $\omega \in \Omega$ and look at $\hat \mu(\omega)=\frac{1}{n}\sum_{i=1}^n X_i(\omega)$ then we get a number, which is called an estimate of $\mu$. But now I have the following questions: - I don't see what this does graphically, i.e. what happens graphically if I fix $\omega$? - If I fix different $\omega$'s then I get different estimates, but isn't this a problem? I mean, how do I then know which one is the best? Thanks a lot for your help.
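A small illustrative sketch of the distinction (in R, with made-up values of $\mu$, $\sigma$ and $n$): each simulated dataset plays the role of one fixed $\omega$, and the sample mean evaluated on it is one numerical estimate of $\mu$.

```r
set.seed(1)
mu <- 5; sigma <- 2; n <- 30

# four different "omegas": four realised samples, hence four different estimates of mu
estimates <- replicate(4, mean(rnorm(n, mean = mu, sd = sigma)))
estimates
```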
What is the intuition behind point estimation of model parameters in statistics?
CC BY-SA 4.0
null
2023-05-30T20:22:47.193
2023-05-30T20:25:43.353
2023-05-30T20:25:43.353
389195
389195
[ "probability", "maximum-likelihood", "estimation" ]
617357
1
null
null
0
146
Suppose I run AB testing for a website with a subscription business. The company offers a free trial for 7 days before automatically enrolling users in subscriptions and charging them, unless they cancel before the trial ends. The treatment is a trial reminder message at the checkout page and a push notification before the trial ends. Users arriving at the checkout page enter the experiment (only the test group will see the trial reminder message on the checkout page and get the notification afterwards), but they will only start the trial after they have finished checkout by entering their credit card. If we want to look at the impact on cancellation rate and average charge per user, do I use all users in the experiment as the denominator, or only those who started the trial? I feel we should use everyone in the experiment as the denominator, but only those who started the trial can cancel, so using those who started the trial gives a better measure of the cancellation rate. However, if users start the trial at different rates in the test and control groups, wouldn't I get a biased result for the cancellation rate if I only include those who started the trial?
AB testing, which denominator to use for lower funnel metrics
CC BY-SA 4.0
null
2023-05-30T20:25:39.603
2023-05-30T20:53:33.210
null
null
389199
[ "experiment-design", "ab-test" ]
617358
2
null
617116
1
null
Hello again (I commented on your question on Maths SE). I think you can get more desirable results using a different cost function for the set of change points rather than completely starting again with your own method. [Here](https://github.com/AlexanderDBolton/Regimes_RJMCMC/blob/master/PELT_Code/PELT.py) is my own coding of PELT in Python. To start, I can recover something like the segmentation that you're describing if I run ``` import PELT x = [9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 28.0, 13.0, 9.0, 10.0, 10.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 9.0, 10.0, 10.0, 10.0, 9.0, 9.0, 9.0, 9.0, 9.0, 10.0, 31.0, 31.0, 35.0, 35.0, 37.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0] PELT.PELT_cp_n(x, len(x) - 1, penalty = "BIC")[-1] ``` `PELT_cp_n` outputs the optimal set of change points that was determined at each time point, so `[-1]` selects the final set of change points. I get output ``` [0, 7, 8, 12, 29, 32, 37, 38, 40, 42, 43] ``` which looks like this: [](https://i.stack.imgur.com/BIk3k.png) I agree with you that this is too many change points, particularly the segmentation of the long sequence of 9s and 10s, and it's happening because PELT can get a perfect fit on the long sequence of 9s, and it's actually more expensive for it to include a few 10s in the segment than to start a new segment for the 10s. Because I'm using the BIC penalty, every time PELT throws down a new change point it has to pay the complexity penalty $\beta = 2\log(n)$, where $n$ is the number of data points. If you increase the penalty $\beta$ to be more strict then you can enforce a sparser model that only triggers more serious change points. For example, if you change the start of the `PELT_cp_n` to be ``` def PELT_cp_n(x, end_time, likelihood_model = "gaussian", K = 0, penalty = "BIC"): if penalty == "BIC": beta = 2.0 * np.log(end_time + 1) elif penalty == "AIC": beta = 2.0 * 2.0 elif penalty == "aluchko": beta = 25 ``` and run ``` PELT.PELT_cp_n(x, len(x) - 1, penalty = "aluchko") ``` then you get output `[0, 7, 9, 38, 43]`, depicted below. You could experiment with different penalties until you find one that works in general for the kind of data that you have. [](https://i.stack.imgur.com/pKrxx.png)
null
CC BY-SA 4.0
null
2023-05-30T20:37:56.913
2023-05-30T20:37:56.913
null
null
78857
null
617359
1
null
null
0
17
I am trying to model semi-continuous data with a mass at zero and a long tail, as seen in the image below ("Actual - train"), using Tweedie. However, the prediction distribution ("predicted - train") has an odd peak that I can't explain/understand. I think it's important to note that I get similar results using a range of Tweedie variance powers. Also, I get similar results if I truncate the data to remove observations that represent the far right tail. Can anyone offer insight as to why my predictions would take this shape? [](https://i.stack.imgur.com/Fossu.png)
Tweedie distribution
CC BY-SA 4.0
null
2023-05-30T20:51:32.220
2023-05-30T20:51:32.220
null
null
389196
[ "tweedie-distribution" ]
617360
2
null
617357
0
null
This is a very good question. Let's first outline a few assumptions and draw a DAG. - First, it seems like users are randomized after arriving at the checkout page. This means the "treatment" is applied prior to checking out. - A reminder of a trial could possibly affect conversion per randomized user. Ostensibly, you are reminded that you will be paying in the future, which may change the probability you convert. - There is an additional treatment applied conditional on having checked out. With these in hand, a reasonable DAG might be [](https://i.stack.imgur.com/Eqoqz.png) OP correctly notes that conditioning on those users who enter their credit card (Enter CC in the DAG) results in a bias. If I recall correctly, this would be selection bias (because there are unmeasured confounders (U in the DAG) which might be associated with entering the credit card and conversion). One of these might be motivation. Motivated users are going to enter their credit card and convert because they really want to, which can bias estimates. However, the resulting estimate from simply analyzing the experiment using an indicator for having seen the reminder (Treatment #1 in the DAG) is not the direct effect of the reminder, since the push notification (Treatment #2) is on the causal path to conversion. There are two options then: - Make the reminder the only treatment. Those users arriving on the checkout page enter the experiment and the denominator is the count of said users. - Make the push notification the only treatment. Those users entering the free trial enter the experiment and the denominator is the count of said users. An additional option might exist so that some sort of causal effect for treatment #1 and #2 can be identified, but I think this would rely on mediation analysis, which I am not convinced is as reliable as some may say. I'm about to leave for a little bit, but I intend to come back and flesh out my answer some more. Lastly, this answer relies on the assumption that treatment #1 will affect credit card entry. The extent to which this is a reasonable assumption relies on knowledge only OP has.
null
CC BY-SA 4.0
null
2023-05-30T20:53:33.210
2023-05-30T20:53:33.210
null
null
111259
null
617361
2
null
616912
4
null
### Sketch of the t-test Let $x_1, \dots, x_n \sim N(\mu_1,\sigma^2)$ and $y_1, \dots, y_n \sim N(\mu_2,\sigma^2)$ be independent samples. Let's define - The raw effect $\theta = \mu_2-\mu_1$ - The estimate of the raw effect $\hat{\theta} = \bar{Y} - \bar{X} \sim N \left(\theta,\frac{2}{n} \sigma^2 \right)$ - The standard error of the raw effect $\text{se}(\hat\theta) = \sqrt\frac{2}{n} \hat\sigma = \sqrt\frac{2}{n} \sqrt{\frac{\sum_{i=1}^n (X_i-\bar{X})^2 + \sum_{i=1}^n (Y_i-\bar{Y})^2}{2n -2}} \sim \sigma\sqrt{\frac{2}{n}} \sqrt{\frac{1}{2n-2}} \chi_{2n-2}$ where $a\chi_{2n-2}$ means a scaled chi distribution (also known as a case of the gamma distribution). And the t-statistic is defined as $$t = \frac{\hat\theta}{\text{se}(\hat\theta)} \sim t_{2n-2,\theta\sqrt{n/2}}$$ this statistic follows a [non-central t-distribution](https://en.wikipedia.org/wiki/Noncentral_t-distribution) where $2n-2$ are the degrees of freedom and $\theta \sqrt{n/2}$ is the non-centrality parameter. A typical hypothesis test will regard the significance based on whether or not the t-statistic is above some level. (and the reason for all this hassle with the t-statistic is that it is a [pivotal statistic](https://en.wikipedia.org/wiki/Pivotal_quantity) that does not depend on the standard deviation $\sigma$ of the population.) ### Geometric view of the t-test A geometric view of the t-test can be made with a scatterplot with $\hat{\theta}$ on the horizontal axis and $\text{se}(\hat{\theta})$ on the vertical axis. We do this below in an example with simulations for the null hypothesis $\theta = 0$ and for the alternative hypothesis $d = \frac{\theta}{\sigma} = 0.5$ and $d=2$ (where the [effect size](https://en.wikipedia.org/wiki/Effect_size) $d$ is expressed relative to the population deviation, see also the question [Power Analysis and the non central t distribution: what is the non-centrality parameter?](https://stats.stackexchange.com/questions/491720/)). The simulations are made with samples of size $n=5$. (click on the image to view a larger size) [](https://i.stack.imgur.com/TsXmT.png) Figure 1: Simulations of results for an independent two sample t-test with sample sizes 5. We simulated 3000 points under the null hypothesis of zero effect size (upper image) and under the alternative hypotheses of an effect size equal to $d = 0.5$ and $d=2$. The effect size is used for the horizontal axis and the standard error for the vertical axis. The t-statistic is proportional to the ratio of the two axes $t = \frac{\hat{\theta}}{\text{se}(\hat{\theta})}$. The angle relates to the t-statistic and points at a smaller angle will have a larger t-statistic. Points with $|t|>2.3$ are considered significantly different, and in the case of the null hypothesis this occurs approximately 5% of the time. ### Graphical illustration of the functions Let's focus on the middle case of the three simulations, the t-tests when the true effect size is $d=0.5$. We can plot the distribution of the effect size for the cases when the observation is significant and for the cases when the observation is not significant: [](https://i.stack.imgur.com/7mkml.png) Figure 2: Histogram of the 3000 cases in the middle plot from Figure 1, when the true effect size is $d=0.5$. Based on these histograms one can compute power, s-type error and m-type error. 
The consequence of the hypothesis test is that mostly relatively large effects are accepted/reported (smaller effects than the true effect size can be reported if the estimated standard deviation is small). This means that reported values are biased and are often larger than the true effect sizes. - The rejection region is based on the distribution under the null hypothesis, and for a two-sided test it is chosen such that a fraction $\alpha/2$ of the results on each side of the distribution is falsely rejected when the null hypothesis is true. In the first panel of Figure 1, the simulations when the null hypothesis is true, we reject the null hypothesis when $|t|>2.3$, which occurs in 5% of the cases. - The power is the probability of rejecting when the alternative hypothesis is correct. In Figure 1 we see that this occurs in 10% and 80% of the cases, respectively, for relative effect sizes $d=0.5$ and $d=2$. - The S-type error is the fraction of the rejected/significant cases with the wrong sign (this occurs mostly when the power is small). In Figure 2 this is the fraction of the red area relative to the total of the red and green areas: $\frac{0.4}{9.97} \approx 0.04$. - The M-type error or exaggeration is the mean of the absolute observed effect of the rejected/significant cases relative to the true effect. In Figure 2 this is the mean of the red and the green points divided by the true effect. In the example this is approximately 1.70. ### Connection between power and S-type and M-type errors For two given samples of size $n$ and significance level $\alpha$ the power is a function of the relative effect size ([Cohen's d](https://en.wikipedia.org/wiki/Effect_size#Cohen%27s_d)) as demonstrated in the question: [Power Analysis and the non central t distribution: what is the non-centrality parameter?](https://stats.stackexchange.com/questions/491720/). The type-S and type-M errors are similarly functions of the effect size (explained further in the last section). We can plot them side by side as monotonic functions of the effect size. Since the three values, power, type-S and type-M errors, are all related to the effect size, they are also all related to each other. These relationships may not be easy to express with a simple mathematical expression, but one can use the graphs to find one from the other. For example, with a given power one estimates the effect size, and then for this effect size one computes the S-type and M-type errors. (Or for several different effect sizes one computes both the power and the errors, and then plots those two against each other as the horizontal and vertical axes of a scatterplot; see also Figure 4.) [](https://i.stack.imgur.com/ge3aP.png) Figure 3: example of relationships of three values (power, type-S error and type-M error) as a function of the effect size for sample size $n=5$. --- ### Small difference to the retrodesign function Note that in the article by Gelman and Carlin they have a function `retrodesign` which performs the computations using a shifted t-distribution. Here I used a non-central t-distribution, which represents a t-test more accurately. Also, for computing the M-error with a t-test, one should not simulate only the t-values but both the t-value and the effect size. The M-error is the mean absolute observed effect relative to the true effect, given that the observed effect is significant (we need to use the distribution in Figure 2 that is based on simulations like in Figure 1). 
--- ### Generalization This answer discusses the case of the two-sample t-test as in the reference of the question, but for other tests the computations might be different. There is no single fixed relationship between power and S-type and M-type errors. The image 4 below demonstrates this for different values of $n$ and two types of statistical tests. (although arguably one may consider the cases to be very close) [](https://i.stack.imgur.com/xp6qF.png) Figure 4: Example of different relationships between type S and type M errors with power, or different testing conditions. --- ### Replacing simulation with exact computations Note that the histograms from Figure 2 can be expressed in terms of the distribution functions of the normal distribution and $\chi$ distribution. The histogram of the total of the points is normal distributed. The histogram of the significant points can be expressed as the density of the normal distribution multiplied with the cdf of a chi distribution. Potentially one might compute the m-type error based on this instead of using a simulation. In the case of large $n$ the chi distribution approaches a singular distribution and the distribution of the significant/non-significant cases will become a truncated normal distribution. ### Code for figures With the code below one can make figures 3 and 4 ``` library(retrodesign) ### for shifted t-distribution significance tests ### compute power and error rates for two sample t-test retropower = function(d, n, alpha = 0.05, n.sim = 10^4) { nu = 2*n-2 ### boundary for alpha level t-test tc = qt(1-alpha/2, df = nu) ### power power = pt(-tc, df = nu, ncp=d/sqrt(2/n)) + 1-pt( tc, df = nu, ncp=d/sqrt(2/n)) ### s-error rate type_s = pt(-tc, df = nu, ncp=d/sqrt(2/n))/power ### simulate experiments x0 = rnorm(n.sim,0) s = sqrt(rchisq(n.sim,nu)/nu) ### m-error type_m = sapply(d, FUN = function(di) { x = abs(x0+di*sqrt(n/2)) significant = x/s>tc return(mean(x[significant == 1]/sqrt(n/2))/di) }) return(list(power = power, type_s = type_s, type_m = type_m)) } ### some settings set.seed(1) d = seq(0,3,0.05) #### ### creating plots for image 4 #### layout(matrix(1:2,1, byrow = TRUE)) par(mgp =c(2,1,0), mar = c(4,4,3,1)) plot(2,2, log = "xy", type = "l", xlim = c(0.05,1), ylim = c(0.0001,1), xlab = "power", ylab = "type-s error") m = c(5,20,100) i = 0 for (mi in m) { i = i+1 r = retropower(d/sqrt(mi/5),mi) q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2) lines(r$power,r$type_s, col = i) lines(q$power,q$type_s, col = i, lty = 2) } plot(2,2, log = "xy", type = "l", xlim = c(0.05,1), ylim = c(1,20), xlab = "power", ylab = "type-m error") m = c(5,20,100) i = 0 for (mi in m) { i = i+1 r = retropower(d/sqrt(mi/5),mi) q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2) lines(r$power,r$type_m, col = i) lines(q$power,q$type_m, col = i, lty = 2) } title(main="type S/M errors versus power for different type of tests", outer=TRUE, line=-1, cex = 1) legend(0.08,18, c("n=5","n=20","n=100"), col = c(1,2,3), lty = 1, cex = 0.7, title = "non-central\n t-distribution", box.lwd = 0) legend(0.3,18, c("n=5","n=20","n=100"), col = c(1,2,3), lty = 2, cex = 0.7, title = "shifted\n t-distribution", box.lwd = 0) #### ## creating plots for image 3 ### d = seq(0,5,0.025) r = retropower(d,5) layout(matrix(1:3,3)) par(mgp =c(2,1,0), mar = c(4,4,2,1)) plot(d,r$power, type ="l", xlab = "effect size in terms of sigma", ylab = "power", main = "power for two sample test with n = 5" , ylim = c(0,1)) plot(d,r$type_s, type ="l", 
xlab = "effect size in terms of sigma", ylab = "error rate", main = "S-type error for two sample test with n = 5" , ylim = c(0,0.5)) plot(d,r$type_m, type ="l", xlab = "effect size in terms of sigma", ylab = "magnification", main = "M-type error for two sample test with n = 5" , ylim = c(0,30)) ```
null
CC BY-SA 4.0
null
2023-05-30T21:30:25.283
2023-06-03T11:34:20.033
2023-06-03T11:34:20.033
164061
164061
null
617362
1
null
null
0
6
[](https://i.stack.imgur.com/P6TLF.png) Above is a snippet of a dataset I am working with. It is a pull from a research tool presenting a set of many different and distinct attributes that have been sorted into numerous "categories" and "insights", some of which only occur once in the dataset (e.g. "Horror" or "Entertainment Lifestyle" on the far right.) and calculated "composition" and "indexing" percentages compared to the general population and sample "audience". I would like to make all of the fields with categorical variables into dummy variables and then use one of those dummy variables as a target/independent variable in a logistic regression model. The numerical variables are "composition %" and "index" on the far right. My question(s) is: a. would using a logistic regression model even make sense if the target dummy variable only occurs once in the sample dataset? (i.e. there is only one "1" value in the entire field corresponding to the specific category after converting to a dummy variable since the categories are all very unique and the categories I am interested in are very distinct) b. if I were to use the numerical variables as predictors for this logistic model, would I need to do anything to make the corresponding coefficients more interpretable in model results? c. Are there any other models that would be more helpful or is this dataset simply not workable from a machine learning standpoint? I would be grateful for any pointers or ideas or links to similar discussions.
What model should I use for this weird dataset? (Survey Response attributes)
CC BY-SA 4.0
null
2023-05-30T22:34:38.723
2023-05-30T22:34:38.723
null
null
389204
[ "regression", "logistic", "dataset", "survey" ]
617363
2
null
617344
0
null
I find it a little strange that you were taught to use both `ggplot2` and `GGally` but then are forced to use base R plots, but I guess that's a separate issue. First, it helps to know what was done before this. You have taken two datasets from the `MASS` package (the `Pima.tr` and `Pima.tr2` datasets), bound their rows with `rbind` to make it one dataset, and converted a categorical variable of "type" into a dummy-coded one (using $0$ and $1$ as the coding). The `pairs` code runs a bunch of scatterplots on whatever numeric data you have (the `subset` function here is used to take out the one variable that isn't because it forces an error if you don't). The `col = as.factor(train$type)` argument basically colors the factors listed, here the factor of "type", and plots them in R. Otherwise you would normally get a bunch of black and white scatterplots. So basically, the plot has colored the previously "Yes" or "No" type data into black and red so you can differentiate the two. Since you are already working with `GGally`, you can create a `ggplot2` version with the following code (though this version replaces the redundant scatter plots on the upper triangle/diagonal of the matrix with density plots and correlations). ``` #### GGally Version #### p <- ggpairs(train, columns = 1:7, ggplot2::aes(color=as.factor(type)))+ scale_color_manual(values = c("black","red"))+ scale_fill_manual(values = c("black","red")) p ``` [](https://i.stack.imgur.com/p1vsf.png)
null
CC BY-SA 4.0
null
2023-05-30T22:52:26.857
2023-05-31T00:30:33.683
2023-05-31T00:30:33.683
345611
345611
null
617364
2
null
616912
3
null
To avoid notational difficulties, I will use notation similar to Gelman and Carlin, with the effect size represented as $\theta$ and upper-case $D$, $D^\text{rep}$, etc., used to represent the data as a random variable. We will consider a test with null hypothesis $H_0: \theta = 0$ with a test statistic $D$ which is an estimator for the true effect size. We assume that the test is constructed with an evidentiary ordering that is more in favour of the alternative hypothesis for larger (absolute) magnitude of $D$, so the null hypothesis is rejected if the absolute magnitude of $D$ falls too far from zero. Given a significance level $\alpha$ we let $d_\text{crit}$ denote the (positive) critical point of the test so that the null is rejected if $|D| > d_\text{crit}$. (Note that all of our analysis will implicitly depend on $\alpha$.) In Gelman and Carlin there are some further simplifying assumptions made about the test that lead to a particular form for the sampling density for the test statistic. Here we will proceed in generality by giving formulae that apply for any distribution of the test statistic. To facilitate analysis of the quantities of interest, we define the intermediate quantities: $$H_-(\theta) \equiv \mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) \quad \quad \quad \quad \quad H_+(\theta) \equiv \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} ).$$ The quantities of interest can then be written as: $$\begin{align} \text{Power} (\theta) &\equiv \mathbb{P}_\theta(p(D^\text{rep}) < \alpha) \\[18pt] &= \mathbb{P}_\theta(|D^\text{rep}| > d_\text{crit}) \\[18pt] &= \mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} ) \\[18pt] &= H_-(\theta) + H_+(\theta). \\[24pt] \text{Type-S Error Rate} (\theta) &\equiv \mathbb{P}_\theta(D^\text{rep} < 0 |p(D^\text{rep}) < \alpha) \\[18pt] &= \mathbb{P}_\theta(D^\text{rep} < 0 | |D^\text{rep}| > d_\text{crit} ) \\[10pt] &= \frac{\mathbb{P}_\theta(D^\text{rep} < 0, |D^\text{rep}| > d_\text{crit} )}{\mathbb{P}_\theta(D^\text{rep} < 0, |D^\text{rep}| > d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} \geqslant 0, |D^\text{rep}| > d_\text{crit} )} \\[6pt] &= \frac{\mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} )}{\mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} )} \\[6pt] &= \frac{H_{-\text{sgn}(\theta)}(\theta)}{H_-(\theta) + H_+(\theta)}. \\[16pt] \text{Exaggeration Ratio} (\theta) &\equiv \frac{1}{\theta} \cdot \mathbb{E}_\theta(|D^\text{rep}| |p(D^\text{rep}) < \alpha) \\[10pt] &= \frac{1}{\theta} \cdot \mathbb{E}_\theta(|D^\text{rep}| ||D^\text{rep}| > d_\text{crit}) \\[10pt] &= \frac{1}{\theta} \cdot \frac{\mathbb{E}_\theta(D^\text{rep}| D^\text{rep} > d_\text{crit}) \cdot H_+(\theta) - \mathbb{E}_\theta(D^\text{rep}| D^\text{rep} < -d_\text{crit}) \cdot H_-(\theta)}{H_-(\theta) + H_+(\theta)}. \\[10pt] \end{align}$$ (Note that even though we are proceeding here for the case where $\theta>0$, to make things easier I have given the more general formula for the Type-S error in the last step.)
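As a hedged illustration of these formulae (not part of the argument above), here is a sketch for the special case $D^\text{rep} \sim N(\theta, s^2)$, which is roughly the setting of Gelman and Carlin's `retrodesign()`; the conditional expectations are obtained from truncated-normal means, and the numerical inputs at the end are arbitrary.

```r
retro_normal <- function(theta, s, alpha = 0.05) {
  d_crit  <- qnorm(1 - alpha / 2) * s
  H_minus <- pnorm(-d_crit, mean = theta, sd = s)
  H_plus  <- 1 - pnorm(d_crit, mean = theta, sd = s)
  power   <- H_minus + H_plus
  type_s  <- H_minus / power                     # assumes theta > 0
  # conditional means E(D | D > d_crit) and E(D | D < -d_crit) via truncated normals
  e_hi <- theta + s * dnorm((d_crit - theta) / s) / (1 - pnorm((d_crit - theta) / s))
  e_lo <- theta - s * dnorm((-d_crit - theta) / s) / pnorm((-d_crit - theta) / s)
  exaggeration <- (e_hi * H_plus - e_lo * H_minus) / (power * theta)
  c(power = power, type_s = type_s, exaggeration = exaggeration)
}

retro_normal(theta = 0.1, s = 0.5)   # arbitrary example values
```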
null
CC BY-SA 4.0
null
2023-05-30T23:11:56.657
2023-05-31T07:06:09.397
2023-05-31T07:06:09.397
21054
173082
null
617365
1
null
null
0
11
I am trying to estimate the effect of an experimental intervention (random assignment to treatment and control groups) with the percentage of people in each group enrolled in government services as the outcome variable. If I just wanted to know the impact of the intervention, a simple difference in the percentages would be it, right? However, if I want to use a regression to control for other predictors of the variance in order to get a more precise estimate, my guess is that I should use a logistic regression, right? Then, I can obtain the average marginal effects to calculate the effect of the intervention on the probability of a person being enrolled in government services. Nonetheless, I find that this unnecessarily complicates the analysis. Would it be wrong to use OLS instead? I have read [here](https://stats.stackexchange.com/questions/284843/percentage-as-dependent-variable-in-multiple-linear-regression) that it might be problematic. But is that still the case in an experimental setting, where I am only interested in the coefficient of a dummy independent variable?
Study of an experimental regression with a percentage as the dependent variable
CC BY-SA 4.0
null
2023-05-30T23:12:20.923
2023-05-30T23:12:20.923
null
null
322599
[ "regression", "logistic", "multiple-regression", "econometrics" ]
617366
1
null
null
0
21
When dealing with non-stationary time series (for instance, in auto-correlation analysis), differencing (computing absolute differences between consecutive samples/observations) is often regarded as the simplest method of de-trending the data. In theory, the first derivative (similar to what is obtained when computing the [gradient using central differences](https://numpy.org/doc/stable/reference/generated/numpy.gradient.html)) should also remove any underlying trends in the time series. What would be the advantages/drawbacks of using one over the other?
Gradient vs differences to remove non-stationarity in time series?
CC BY-SA 4.0
null
2023-05-30T23:18:22.397
2023-05-31T05:55:06.200
2023-05-31T05:55:06.200
53690
388119
[ "time-series", "stationarity", "trend", "derivative", "differencing" ]
617367
1
617406
null
3
223
I was reading this [amazing article](https://towardsdatascience.com/the-fwl-theorem-or-how-to-make-all-regressions-intuitive-59f801eb3299) about the FWL theorem and its application to causal inference. In the article, there are some examples showing that the coefficients of an OLS estimator are the same when estimating the coefficients using the FWL theorem. If that's the case, what is the point of using causal inference to reduce multivariate regressions to univariate ones? The article mentions a couple of reasons, but if the coefficient in question is the same for both approaches I'm having difficulty seeing the benefit. TIA!
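A small sketch of the FWL equivalence the article demonstrates, on simulated data (the variable names are made up): the coefficient on `x1` from the full regression equals the coefficient from regressing residualized `y` on residualized `x1`.

```r
set.seed(1)
n  <- 500
x2 <- rnorm(n)
x1 <- 0.5 * x2 + rnorm(n)
y  <- 2 * x1 - x2 + rnorm(n)

coef(lm(y ~ x1 + x2))["x1"]      # multivariate coefficient

r_y  <- resid(lm(y  ~ x2))       # y with x2 partialled out
r_x1 <- resid(lm(x1 ~ x2))       # x1 with x2 partialled out
coef(lm(r_y ~ r_x1))["r_x1"]     # same number via FWL
```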
Why use causal inference if coefficients are same in an OLS?
CC BY-SA 4.0
null
2023-05-30T23:49:14.490
2023-05-31T08:54:54.277
null
null
186166
[ "inference", "causality" ]
617368
1
617398
null
2
52
I am struggling with the following problem: $X_1, X_2 \sim N(0, 1)$ are independent random variables. Let $Y_1 = \frac{1}{\sqrt{2}}(X_1 + X_2)$ and $Y_2 = \frac{1}{\sqrt{2}}(X_1 - X_2)$. Show that $Y_1, Y_2$ are independent, and have $N(0, 1)$ distribution. So $$Y_1 ∼N\left(0, \left(\frac{1}{\sqrt{2}}\right)^2\times 1 + \left(\frac{1}{\sqrt{2}}\right)^2\times 1\right) = N(0, 1)$$ Same goes for $Y_2$. I calculated their covariance to be 0, and now I want to use the general property that when $X_1,\ldots,X_n$ have a joint normal distribution, then $X_1, \dots, X_n$ are uncorrelated $\iff$ $X_1, \dots, X_n$ are independent. So my goal is to show that $(Y_1, Y_2)$ has a joint normal distribution. I do not know how to do that though. Edit with solution: Since $X_1, X_2$ are independent, their joint density is $f(x_1, x_2) = f(x_1)*f(x_2) = \frac{1}{2\pi}\exp(-\frac{1}{2}(x_1^2+x_2^2))$, so $(X_1, X_2) ∼ N_2(0, I)$. Now $$\begin{bmatrix}Y_1\\Y_2\end{bmatrix} = \begin{bmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix}X_1\\X_2\end{bmatrix}$$Using the theorem provided by @utobi, $(Y_1, Y_2) ∼ N_2\left(0, \begin{bmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\end{bmatrix}^2\right) = N_2(0, I)$. From this and the fact that $Cov(Y_1, Y_2)=0$ it follows that $Y_1, Y_2$ are independent.
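For completeness, the covariance computation mentioned above can be written out in one line:

$$\operatorname{Cov}(Y_1,Y_2) = \tfrac{1}{2}\operatorname{Cov}(X_1+X_2,\,X_1-X_2) = \tfrac{1}{2}\bigl(\operatorname{Var}(X_1) - \operatorname{Var}(X_2)\bigr) = \tfrac{1}{2}(1-1) = 0.$$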
Calculate joint distribution from marginal distributions
CC BY-SA 4.0
null
2023-05-30T23:52:09.510
2023-05-31T10:46:40.680
2023-05-31T10:46:40.680
375261
375261
[ "normal-distribution", "random-variable", "independence", "multivariate-normal-distribution" ]
617369
2
null
580316
0
null
Consider the example below. ``` set.seed(2023) N <- 1000 x1 <- runif(N, -2, 2) x2 <- runif(N, -2, 2) x3 <- runif(N, -2, 2) y <- 0*x1 + 0*x2 + 0*x3 + x3^2 cor(x1, y) # -0.02706921 cor(x2, y) # 0.02323476 cor(x3, y) # -0.001507549 ``` Here, there are three possible variables (`x1`, `x2`, `x3`) to predict the outcome (`y`). The correlation between each predictor variable and `y` is close to zero and would be eliminated by your proposed method of screening out variables that have a low correlation with `y`. However, squaring `x3` is a perfect predictor of `y`. If you have a flexible model that can catch these kinds of nonlinear functions of the original data, you are depriving the model of data that could be highly predictive when used the right way (which a good model should figure out). For instance, a neural network is able to detect the nonlinear relationship between `x3` and `y` without being explicitly programmed to look for such a relationship, yet screening based on the correlation would deprive such a model of that crucial `x3` variable. ``` library(nnet) L <- nnet::nnet(y ~ x1 + x2 + x3, size = 3, linout = T) plot(x3, y) lines(x3, predict(L)) ``` [](https://i.stack.imgur.com/EHNT9.png) (Yes, there are alternatives to the Pearson correlation used here. Spearman correlation could be an alternative, though the Spearman correlations here are all quite low, too, and would not be particularly helpful.) > Is it ok to employ linear correlation to dismiss some variables because of their LINEAR relationship to then use a model which not necessarily models a linear relationship? The example above hopefully demonstrates how such an approach can do serious harm to your analysis.
null
CC BY-SA 4.0
null
2023-05-31T00:13:09.470
2023-05-31T00:22:08.243
2023-05-31T00:22:08.243
247274
247274
null
617370
2
null
614985
0
null
> I got the residual mean deviance over 600, which seems a lot? This is exactly what should happen. By dividing your `Prob_1` outcome variable by $100$, you change the units (such as going between centimeters and meters). When you do divide by $100$, you get that the error is $0.06002\space m^2$. When you do not divide by that $100$, you get that the error is $600.2\space cm^2$. $$0.06002\space m^2 = 600.2\space cm^2$$ You might not be working in meters and centimeters, but these error values all have units, and you are getting the same answer whether you divide by $100$ or not, just in different units. > What are considered to be good values of MSE, when can I say the tree is predicting correctly? See [this](https://stats.stackexchange.com/a/414350/247274) for why that requires a context. > The % Var explained= 0.69, which is quiet low, and when changeing the mtry, the value is some times even with a - sign (for example -3.22). Again, whether or not a particular measure of performance is any good requires a context, and it might be that your value of $0.69$ is pretty good! For instance, I have seen papers in top journals with values a tenth as high as that. Regarding the values below zero, in a nonlinear regression like a random forest, the notion of "proportion of variance explained" is a bit dubious, as I explain [here](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2). However, you can regard that value as being a comparison of the mean squared error of your model to that of a baseline model that you must beat. If your value is less than zero, your model is doing a worse job of predicting than that "must beat" model is doing. If you get a result that your model performance can range from a solid value of $0.69$ to a totally unacceptable value less than zero, it would seem that your predictions are unstable.
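As a rough sketch of that comparison (with made-up vectors standing in for the observed outcomes and the model's predictions), the "% Var explained"-style number is just $1 - \text{MSE}_\text{model}/\text{MSE}_\text{baseline}$, where the baseline always predicts the mean:

```r
set.seed(1)
y     <- rnorm(100)              # hypothetical observed outcomes
y_hat <- y + rnorm(100, sd = 2)  # hypothetical (poor) model predictions

mse_model    <- mean((y - y_hat)^2)
mse_baseline <- mean((y - mean(y))^2)
1 - mse_model / mse_baseline     # negative here: worse than predicting the mean
```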
null
CC BY-SA 4.0
null
2023-05-31T00:29:28.763
2023-05-31T00:37:46.917
2023-05-31T00:37:46.917
247274
247274
null
617371
2
null
617368
0
null
- $C(Y_1,Y_2) = E[Y_1Y_2] - E[Y_1]E[Y_2] = E[Y_1Y_2]$ - $E[2Y_1Y_2] = E[X_1^2-X_2^2] = 0$, so $C(Y_1, Y_2) = 0$. - If $f_{Y_1,Y_2}(y_1, y_2) = f_{Y_1}(y_1) f_{Y_2}(y_2)$ then $Y_1, Y_2$ are independent. Borrowing from Wikipedia, the bivariate normal density is $f_{X_1, X_2}(x,y) =$ $$ \frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \exp \left( -\frac{1}{2\left[1 - \rho^2\right]}\left[ \left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2 \right] \right) $$ For the MVN $$ f(\mathbf{x})= \frac{1}{\sqrt { (2\pi)^k|\boldsymbol \Sigma| } } \exp\left(-{1 \over 2} (\mathbf{x}-\boldsymbol\mu)^{\rm T} \boldsymbol\Sigma^{-1} ({\mathbf x}-\boldsymbol\mu)\right) $$ $\Sigma$ becomes diagonal when the correlations are zero, so you get $f(\mathbf{x}) = \prod_j f_j(x_j)$, i.e. the density factorizes and the components are independent.
null
CC BY-SA 4.0
null
2023-05-31T00:29:33.717
2023-05-31T00:29:33.717
null
null
54458
null
617372
1
null
null
0
17
I have a data set in which all patients have a particular disease. The disease has three varieties labeled as 0, 1, and 2 in the data set. I want to know the association of a particular drug with this disease. Some of the patients are taking the drug some are not. For the case-control study, If I assume the patients with disease as the outcome variable, then we don't have any control because all patients have the outcome (All of them have the disease). If I assume the patients on the drug as the outcome variable and the disease as the exposure variable, then we can't have any unexposed in the 4x4 table. So a case-control study is not possible. What study design to use for it? I shall be highly thankful for your guidance. Thanks
Which study design is best for the following?
CC BY-SA 4.0
null
2023-05-31T02:05:32.000
2023-05-31T05:04:38.517
null
null
389211
[ "case-control-study" ]
617373
2
null
64585
0
null
You can check this great answer here: [https://stats.stackexchange.com/a/22228/193114](https://stats.stackexchange.com/a/22228/193114). In general, it depends on the application you are working on. Euclidean distance is better than DTW when temporal alignment is not what you want. Euclidean distance is better than DTW when you wish to group time series that behave exactly the same at each time.
null
CC BY-SA 4.0
null
2023-05-31T02:13:04.270
2023-05-31T02:13:04.270
null
null
193114
null
617374
2
null
599439
0
null
The following article may be helpful for you. Direct and indirect effects in a logit model. The Stata Journal Volume 10 Number 1: pp. 11-29 Maarten L. Buis Department of Sociology TΓΌbingen University TΓΌbingen, Germany [email protected] [https://www.stata-journal.com/article.html?article=st0182](https://www.stata-journal.com/article.html?article=st0182)
null
CC BY-SA 4.0
null
2023-05-31T02:16:49.637
2023-05-31T02:16:49.637
null
null
389212
null
617375
1
null
null
0
22
I am reading a [paper](https://arxiv.org/pdf/2207.07758.pdf) where, on the page 13, the authors state that the survival times $T_i$ are generated from a proportional hazards (PH) model with hazard function: $$\lambda_T(t\mid X_i, W_i)=\exp(\beta X_{i1}+(-0.5-\gamma_1 X_{i2})W_i)\sqrt{t}/2$$ where $W_i$ is the treatment assignment. The censoring times are generated from a Weibull distribution with hazard function $\kappa^\rho$ where $\kappa$ is the scale parameter and $\rho$ is the shape parameter. In the supplementary code for the paper, the survival times are generated as ``` survival.time <- (-log(runif(n)) / exp(beta * X[ ,1] + (-0.5 - gamma * X[ ,2]) * W))^2 ``` and the censoring times are generated as ``` censor.time <- (-log(runif(n)) / (kappa ^ rho)) ^ (1 / rho) ``` However, the parameterization the authors are using doesn't seem to be consistent with [existing approaches](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3546387/) for generating survival times from PH models so I'm not sure where these lines of code are coming from. Any ideas?
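For reference, a sketch of what the standard inversion approach (e.g. Bender et al.) would look like if applied literally to the hazard as written in the paper; `eta` is an arbitrary placeholder for the linear predictor here. With $\lambda(t) = e^{\eta}\sqrt{t}/2$ the cumulative hazard is $\Lambda(t) = e^{\eta} t^{3/2}/3$, so inverting $\Lambda(T) = -\log U$ gives:

```r
n   <- 1000
eta <- 0.5                        # placeholder linear predictor
u   <- runif(n)

# survival times via inversion of the cumulative hazard
t_inversion <- (3 * (-log(u)) / exp(eta))^(2 / 3)
```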
Generating survival and censoring times from proportional hazards model
CC BY-SA 4.0
null
2023-05-31T02:18:38.867
2023-05-31T13:35:19.033
2023-05-31T02:37:39.703
177990
177990
[ "survival", "cox-model" ]
617376
1
null
null
0
8
What is the minimum sample size appropriate for each group of t test and ANOVA? Also, is it fine to have a big difference of sample sizes between the groups?
Sample size for comparison tests
CC BY-SA 4.0
null
2023-05-31T02:39:59.510
2023-05-31T02:39:59.510
null
null
389143
[ "pre-post-comparison" ]
617377
1
null
null
0
27
My understanding of principal components regression (PCR) is that it is a linear regression performed on all or a subset of predictors obtained via PCA. All the resources I've read only apply linear regression following the PCA step, but I've found nothing that explicitly warns against using other regression models. Does the regression model in PCR always have to be linear/OLS or can we use other GLM regression models (such as logistic, Poisson, or negative binomial regression)?
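To make the setup concrete, here is a minimal sketch (not from any particular reference) of principal-component scores being fed into a GLM, in this case a logistic regression on simulated data:

```r
set.seed(1)
X <- matrix(rnorm(200 * 5), ncol = 5)
y <- rbinom(200, 1, plogis(X[, 1] - X[, 2]))

pc     <- prcomp(X, center = TRUE, scale. = TRUE)
scores <- pc$x[, 1:2]                        # keep the first two components
fit    <- glm(y ~ scores, family = binomial) # GLM on the PC scores
summary(fit)
```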
Principal components regression vs other regression models following PCA
CC BY-SA 4.0
null
2023-05-31T02:43:06.630
2023-05-31T16:35:19.063
2023-05-31T16:35:19.063
270462
270462
[ "regression", "generalized-linear-model", "pca", "dimensionality-reduction" ]
617378
1
null
null
-1
8
I have a problem with the proof that Value at Risk is monotonic. The claim is: Value at Risk is a monotone risk measure. Proof. If X ⩽ Y then P(Y ⩽ x) = P(X ⩽ Y ⩽ x) ⩽ P(X ⩽ x), x ⩾ 0, hence P(Y ⩽ x) ⩾ p ⟹ P(X ⩽ x) ⩾ p, x ⩾ 0. The question is: how can P(Y ⩽ x) = P(X ⩽ Y ⩽ x) ⩽ P(X ⩽ x)? Can you explain this step in more detail with mathematical formulas? I really need your explanation.
Explaining the monotonicity step in the proof that Value at Risk is a coherent risk measure
CC BY-SA 4.0
null
2023-05-31T03:14:30.023
2023-05-31T03:14:30.023
null
null
376350
[ "mathematical-statistics", "risk", "mathematica" ]
617379
1
null
null
1
15
When presenting data under the form of a [table](https://en.wikipedia.org/wiki/Table_(information)), an error can be to omit mentioning the unit of measure, leading to possible misinterpretation. What are some other possible mistakes to avoid? Additionally, what are some things to do to improve a table? (while not doing these things won't necessarily generate misinterpretation or other problems). I'm interested in particular in the case of tables in print or online publications, which may have their specific pitfalls. I'm also interested in references on the subject, particularly if they rely on usability testing or real-life observations relative to the impact on readers. (But I'd be grateful for answers without references too!). N.B.: This is not a homework question, I'm interested in identifying possible mistakes I make when presenting tables, and possible sources of improvement. Thanks,
Do's and don'ts when presenting data in tables
CC BY-SA 4.0
null
2023-05-31T04:17:58.060
2023-05-31T04:17:58.060
null
null
164936
[ "data-visualization", "references", "tables" ]
617380
1
617382
null
3
247
I have 20 years' worth of observations that either say YES or NO to the question of whether breeding was observed. I want to present the averages of the YES observations across the years, demonstrating what the average breeding effort looks like across a 20-year period. My issue is that early on a lot fewer observations (n=100) were made compared to present day (n=1000). I believe this might skew the results and result in a graph that is misleading in presenting a genuine trend over the years. My question is whether I should weight my averages, log transform them or disregard certain data based on confidence intervals? Any help would be greatly appreciated. This is what the data looks like at the moment: [](https://i.stack.imgur.com/o8TgT.png) [](https://i.stack.imgur.com/G7MYF.png)
How do I present averages from different sample sizes across years?
CC BY-SA 4.0
null
2023-05-31T04:21:14.500
2023-05-31T21:51:02.447
2023-05-31T04:56:05.397
258581
258581
[ "r", "mean", "trend", "population", "ecology" ]
617381
1
null
null
0
56
So I estimated a particular statistic $\Phi$ (custom made) by bootstrapping 1000 samples from the original dataset to generate 1000 different $\Phi$s. The issue is that saving those 1000 bootstrap statistics was not at all memory efficient, so I decided to just save the summary statistics from the bootstrap samples. Saving the samples is also difficult because there are a lot of iterations of the bootstrap. So, for example, for each iteration I have 1000 resamples of my dataset and get 1000 different $\Phi$s, from which I calculate the summary (mean, median, etc.). I would eventually report the summary statistic, but I also have to analyze the distribution of my bootstraps. I have saved the mean, standard deviation, median, first and third quartiles, the min and max, as well as the 5th and 95th percentiles. I do presume that the bootstrap distribution should look normal. But I wanted a robust way to generate my samples back (1000 samples) for further analysis based on these summary statistics. Here is what I have tried so far. Assuming the bootstrap sampling distribution to be normal, I tried using the `truncnorm` distribution in Python to specify the min, max, mean and standard deviation and generate 1000 samples. Then I find the index of the corresponding percentiles (medians and the others) in those 1000 samples, and just change them to the summary statistics I have. I tried searching different StackOverflow forums for an answer but this is the best I could come up with so far. It would be helpful if I could get further insights on this.
How do I re-generate my bootstrap statistic based on saved summary statistics from bootstrap
CC BY-SA 4.0
null
2023-05-31T04:33:37.763
2023-06-01T02:35:39.857
2023-06-01T02:35:39.857
249863
249863
[ "distributions", "python", "bootstrap", "descriptive-statistics" ]
617382
2
null
617380
10
null
The usual thing to do here would be to include "error bars" around your sample average giving a confidence interval for the true average from the sampled data each year. For binary data you can get a good confidence interval estimator from the [Wilson score interval](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval). This estimator will take account of the sample size for each year, and years with a smaller sample size will tend to have a wider interval (subject to some other factors). Your reader will then be able to see that there is more uncertainty in the true average in the earlier years than in the later years.
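A minimal sketch of the Wilson interval in R (the counts here are invented): `x` is the number of YES observations and `n` the number of observations in a given year.

```r
wilson_ci <- function(x, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  p <- x / n
  centre <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
  half   <- (z / (1 + z^2 / n)) * sqrt(p * (1 - p) / n + z^2 / (4 * n^2))
  c(lower = centre - half, upper = centre + half)
}

wilson_ci(x = 30, n = 100)    # early year, small sample: wide interval
wilson_ci(x = 300, n = 1000)  # recent year, large sample: narrower interval
```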
null
CC BY-SA 4.0
null
2023-05-31T04:52:32.270
2023-05-31T21:51:02.447
2023-05-31T21:51:02.447
173082
173082
null
617383
1
null
null
0
12
How would I smooth a series of data without losing the ends of a series? For example, if I have $i_1, i_2, \dots, i_n$, and I apply a moving average, I lose the start and end of the series ($i_1$ and $i_n$, certainly, and perhaps more). I can't seem to find any smoothing method that lets me keep both ends of the series. Is this possible?
Smoothing without losing data points
CC BY-SA 4.0
null
2023-05-31T04:58:53.433
2023-05-31T18:05:58.480
2023-05-31T18:05:58.480
389220
389220
[ "time-series", "smoothing" ]
617384
1
null
null
0
6
I am hypothesizing that a certain event caused prices of commodity A to become more volatile. I obtained monthly price data for commodity A as well as a price index for a larger bucket of products similar to commodity A (let's call it B) which I will use as a control. I have 10 years (120 periods) of data for both before and after the event. I am using a simple percentage change to calculate monthly returns on these prices, and defining volatility as the standard deviation of these returns. Using the four sets of data (pre-event A, post-event A, pre-event B, post-event B), how can I set up a difference-in-differences analysis to prove that there is a significant increase in volatility in commodity A? I see that usually difference-in-differences analyses are conducted using OLS regression. Is it possible to deviate from that norm and simply calculate each of the four standard deviations and use f-tests for hypothesis testing? Thank you!
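To make the setup concrete, this is roughly the calculation I have in mind (with `A_pre`, `A_post`, `B_pre`, `B_post` standing in for the four monthly price series):

```
ret <- function(p) diff(p) / head(p, -1)    # simple monthly returns

sd_A_pre  <- sd(ret(A_pre));  sd_A_post <- sd(ret(A_post))
sd_B_pre  <- sd(ret(B_pre));  sd_B_post <- sd(ret(B_post))

# "difference in differences" on the volatility scale
(sd_A_post - sd_A_pre) - (sd_B_post - sd_B_pre)

# classical two-sample variance comparison (F-test) for one of the pairs
var.test(ret(A_post), ret(A_pre))
```

I realise the F-test assumes normal, independent returns, which may not hold here; that is part of what I am unsure about.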
Difference-in-differences of standard deviation
CC BY-SA 4.0
null
2023-05-31T04:59:16.287
2023-05-31T04:59:16.287
null
null
389219
[ "difference-in-difference", "volatility" ]
617385
2
null
617372
0
null
If you don't have data for patients who don't have the disease, then you can't model the effect of the drug on the disease.
null
CC BY-SA 4.0
null
2023-05-31T05:04:38.517
2023-05-31T05:04:38.517
null
null
249098
null
617386
1
null
null
-1
34
An enemy that can drop loot has a 10% probability of spawning out of all enemies. Only one model type can drop loot out of 8 models (1/8). What is the probability of a specific enemy dropping loot, if they are the model that can drop loot? Would 0.1 + 0.125 = 22.5% chance be correct?
Probability of Event if Another Occurs
CC BY-SA 4.0
null
2023-05-31T05:08:39.550
2023-05-31T09:14:51.280
2023-05-31T06:21:04.680
369002
355935
[ "probability", "binomial-distribution" ]
617387
2
null
617086
1
null
If you maximise the variance of a linear combination $a^\top X$ over vectors $a$ subject to the constraint $\Vert a\Vert = 1$, the maximiser is the first eigenvector of the covariance matrix. PCA is just an application of this property. See [here](https://rich-d-wilkinson.github.io/MATH3030/3.4-svdopt.html) for more detail.
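A quick numerical check of this property (untested sketch):

```
set.seed(1)
X <- matrix(rnorm(600), ncol = 3) %*% matrix(c(2, 0.5, 0,
                                               0, 1,   0.3,
                                               0, 0,   0.5), nrow = 3, byrow = TRUE)

v_pca   <- prcomp(X)$rotation[, 1]        # first principal component direction
v_eigen <- eigen(cov(X))$vectors[, 1]     # leading eigenvector of the covariance matrix

max(abs(abs(v_pca) - abs(v_eigen)))       # essentially zero: same vector up to sign
```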
null
CC BY-SA 4.0
null
2023-05-31T05:14:59.197
2023-05-31T05:21:21.417
2023-05-31T05:21:21.417
362671
249098
null
617388
1
null
null
2
24
I've made the gam in the code below (in R), but I'm struggling to interpret the results. Specifically, the partial response plots for all but one of the variables is linear, and the CI lines cross in the middle. I've done some looking around and can't find out what this means. Given these plots, is this model valid? Is there something wrong with the model? If so, how would I make a correction? Here's the partial response plots: [](https://i.stack.imgur.com/RsbMh.jpg) the output figures of gam.check (the residuals passed a normality check, just fyi) [](https://i.stack.imgur.com/naxz7.jpg) and the model code with a summary() ``` cpue.GAM <- gam(CPUE ~ s(CHL) + s(BEUTI) + s(PDO) + s(SST) + s(HCI) + s(ONI) + s(NPP), data = master2, method = "REML") > summary(cpue.GAM) Family: gaussian Link function: identity Formula: CPUE ~ s(CHL) + s(BEUTI) + s(PDO) + s(SST) + s(HCI) + s(ONI) + s(NPP) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.7861 0.2406 24.05 <2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(CHL) 1.000 1.000 2.985 0.08964 . s(BEUTI) 1.000 1.000 12.379 0.00088 *** s(PDO) 1.000 1.000 0.788 0.37868 s(SST) 1.000 1.000 6.543 0.01331 * s(HCI) 2.021 2.564 2.104 0.10650 s(ONI) 1.000 1.000 3.901 0.05327 . s(NPP) 1.000 1.000 1.499 0.22603 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 R-sq.(adj) = 0.337 Deviance explained = 42.1% -REML = 132.26 Scale est. = 3.7038 n = 64 ```
GAM partial response plot interpretation
CC BY-SA 4.0
null
2023-05-31T05:24:41.053
2023-05-31T05:24:41.053
null
null
389221
[ "r", "model", "generalized-additive-model", "smoothing", "partial-plot" ]
617389
1
null
null
2
41
I would like to use a generalised linear model to analyse data on the relationship between the size of a host and probability of parasitism in a wild population to determine the minimum host size at which parasitism is likely. The independent variable is continuous (host size) and the dependent variable is binomial (unparasitised/parasitised). A plot of the parasitism data resembles a logistic regression, except the maximum probability is not 1. In other words, the asymptotic maximum probability of parasitism M in different populations is 0 < M < 1. It seems that I need to use a custom link function in R that looks something like `family=binomial(link = M*logit)`. Can someone suggest appropriate R code to do this? My initial research suggested that this data set can be modelled by the four-parameter logistic model, but that is not the case.
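In case it helps to see what I am after, here is a rough maximum-likelihood version of the model I have in mind, $p(x) = M/(1+e^{-(a+bx)})$, written without a custom link (the column names `host_size` and `parasitised` are just placeholders for my data):

```
negll <- function(par, x, y) {
  a <- par[1]; b <- par[2]
  M <- plogis(par[3])                     # keeps the asymptote M in (0, 1)
  p <- M * plogis(a + b * x)
  -sum(dbinom(y, size = 1, prob = p, log = TRUE))
}

fit <- optim(c(0, 0, 0), negll,
             x = dat$host_size, y = dat$parasitised, hessian = TRUE)
c(a = fit$par[1], b = fit$par[2], M = plogis(fit$par[3]))
```

I would still prefer a proper `glm`-style solution with a custom link if one exists.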
Custom link function needed for generalised linear model in R
CC BY-SA 4.0
null
2023-05-31T05:25:37.960
2023-06-01T07:44:20.253
2023-05-31T05:40:21.677
362671
389224
[ "r", "regression", "logistic" ]
617390
2
null
498238
1
null
Adding to Dave's answer: you should also be careful that your train and test data were properly separated, with no look-ahead bias or leakage.
null
CC BY-SA 4.0
null
2023-05-31T05:38:48.813
2023-05-31T05:38:48.813
null
null
249098
null
617391
1
null
null
1
12
I am interested in adding a variable to a repeated-measures model, an LMM (3 measurements of QOL). But unlike gender, for example, which does not change between measurements, the variables I want to add are dichotomous, indicating whether a patient was hospitalised between the first and second measurement, and likewise between the second and third measurement. The goal is to examine the effect of the first variable on the result of the second measurement and the effect of the second variable on the third measurement. A similar question arises when I want to add a continuous variable between the measurements, indicating the length of hospitalisation between the first and second measurement and between the second and third measurement. I run the model in R using the lmer function; the data are in long format. I would appreciate any help! Thanks!
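To show what I mean, this is roughly how my long-format data and model look (the variable names are made up):

```
library(lme4)
# hosp_since_last = 0 at measurement 1; at measurements 2 and 3 it indicates
# whether the patient was hospitalised since the previous measurement, so it
# simply varies across rows within a patient
m <- lmer(QOL ~ time + hosp_since_last + gender + (1 | id), data = dat_long)
```

I am not sure whether entering the variable this way actually tests the effects I described.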
add variable occurs between measurements LMM
CC BY-SA 4.0
null
2023-05-31T05:44:24.990
2023-05-31T05:44:24.990
null
null
389225
[ "r", "regression", "mixed-model", "lme4-nlme", "repeated-measures" ]
617392
1
null
null
1
4
I have seen that many simulation studies incorporate a signal-to-noise ratio (SNR), even for GLMs where there is usually no explicit error term (e.g. logistic regression). I do not understand this. Why is it necessary for regression, and what does it actually imply? Thanks.
Necessity of incorporating Signal to noise ratio in data simulation for glm
CC BY-SA 4.0
null
2023-05-31T05:52:33.337
2023-05-31T05:52:33.337
null
null
386962
[ "regression", "generalized-linear-model", "noise" ]
617393
1
null
null
0
11
How many samples are required to identify the most frequent element? I'm assuming the frequencies of the elements in the distribution follow a power law.
How many samples are required to identify the most frequent element in a power law distribution?
CC BY-SA 4.0
null
2023-05-31T06:01:08.983
2023-05-31T06:15:34.580
2023-05-31T06:15:34.580
98845
98845
[ "sampling" ]
617394
1
null
null
0
7
I have a score (ranging from 0 to 1000) which predicts a binary event. The score is based on a regression. Scores are binned, and there should be a maximum of $x$ (e.g. $10$) bins. Predictions within a bin should be homogeneous, meaning they have a similar average rate of the target. This is normally tested by splitting all observations of a bin into two parts according to the score and then applying a z-test or t-test with $$ H_0: AR_{bin_i part_1} = AR_{bin_i part_2} $$ In case $H_0$ cannot be rejected, it is concluded that they are similar enough. This works well as long as the score is not too good and the sample is not extremely large. As the sample size increases, the score naturally gets better and the test above will start to reject $H_0$ more often if the number of bins does not increase at the same time. How can the test be altered to show that scores within a bin are as homogeneous as it gets, knowing that I am willing to have at most $x$ bins? A completely different approach for testing homogeneity within buckets given a fixed number of bins is also welcome.
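For reference, the split-and-test step I described looks roughly like this (made-up data frame `scores_df` with columns `bin`, `score`, `event`):

```
bin_i <- subset(scores_df, bin == i)
lower_half <- bin_i$score <= median(bin_i$score)

events <- c(sum(bin_i$event[lower_half]), sum(bin_i$event[!lower_half]))
totals <- c(sum(lower_half), sum(!lower_half))

prop.test(events, totals, correct = FALSE)   # two-proportion z-test for equal rates
```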
Testing homogeniety within a bin of scores given max number of bins
CC BY-SA 4.0
null
2023-05-31T06:42:30.927
2023-06-01T12:13:43.187
2023-06-01T12:13:43.187
161138
161138
[ "hypothesis-testing", "t-test", "z-test", "credit-scoring" ]
617396
2
null
616623
0
null
What is the penalty value for PELT? PELT segments the data $y_{1:n} = (y_1, \dots, y_n)$ with change points $(\tau_0, \tau_1, \dots, \tau_{m+1})$ (where $\tau_0 = 0, \tau_{m+1} = n$) to make segments $y_{(\tau_0 + 1):\tau_1}, y_{(\tau_1 + 1):\tau_2}, \dots, y_{(\tau_m + 1):\tau_{m+1}}$. In the [paper where PELT is described](https://arxiv.org/pdf/1101.1438.pdf), it says that it chooses the segmentation to minimise the function $$\beta m + \sum_{i=1}^{m+1}C\left(y_{(\tau_i + 1):\tau_{i+1}}\right),$$ so $\beta$ is a penalty term for each change that we add, in order that we won't just put too many changes, and $C$ is a cost function (the default $C$ is twice the negative normal log-likelihood). If you choose $\beta = 2\log(n)$ then you are asking PELT to choose the change points to minimise the [Bayesian Information Criterion](https://en.wikipedia.org/wiki/Bayesian_information_criterion) (BIC) for a model with 1 parameter fit per segment. Each change point adds two parameters to the model (1 for the segment parameter and 1 for the new change point). As you get lots of data, the BIC is likely to be lowest for the correct model, it's a heuristic to balance the good fit of model complexity with a penalty for complicated models. How to choose an optimal range? It really depends on your application and what cost function you're using. Without seeing the data and PELT's proposed change points it's hard to say whether the change points are over/underfitting to the data. In general the BIC is a good starting point but it generally chooses too many change points. Having the penalty as a function of $\log(n)$ will help to keep the number of change points in check. I would just experiment with different penalty values on the type of data that you're using. If you wanted a scheme of penalties to try you could start with $2\log(n)$ and then keep doubling it until the change points fit for the application you have in mind. In my answer to [this question](https://stats.stackexchange.com/questions/617116/detecting-events-in-a-series) I just experimented with larger penalties until it was only selecting the larger mean shifts in the data.
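As a concrete sketch of that last suggestion (assuming you are using the R `changepoint` package; adapt as needed to whatever implementation you have):

```
library(changepoint)

n <- length(y)                        # y is your series
for (pen in 2 * log(n) * 2^(0:4)) {   # start at the BIC-like value, keep doubling
  fit <- cpt.mean(y, method = "PELT", penalty = "Manual", pen.value = pen)
  cat("penalty =", round(pen, 1), "->", length(cpts(fit)), "change points\n")
}
```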
null
CC BY-SA 4.0
null
2023-05-31T07:39:45.163
2023-05-31T08:31:59.487
2023-05-31T08:31:59.487
78857
78857
null
617397
1
null
null
0
10
I have a dataframe of 411 rows, where each row represents a (football) pass, and four columns. Two columns are the x and y coordinates of the location where the pass starts, and the other two columns are the x and y coordinates of the location where the pass ends. I am trying to implement pseudocode from an academic paper which applies nearest neighbours to the passes - this is all the information I am given. My goal is to find the nearest neighbours using all of the information: that is, using all the coordinates, so that neighbours are found by accounting for start and end coordinates together. I apply KNN (with an arbitrary k) in the following manner:
```
start_coordinates <- df[, c("location.x", "location.y")]
end_coordinates <- df[, c("end_location.x", "end_location.y")]

k = 17
nearest_neighbours <- get.knnx(start_coordinates, end_coordinates, k = k)
```
`nearest_neighbours` provides me with the indices of the 17 nearest neighbours for each pass, as well as the distances. However, I am unsure whether providing the start locations as the data and the end locations as the query achieves what I describe above. If someone could provide some insights as to what this piece of code achieves, that would be much appreciated. Thanks!
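My current understanding is that `get.knnx(data, query, k)` finds, for each row of `query`, its nearest rows in `data`, so the call above would find, for each pass's end location, the nearest start locations of other passes. What I think I actually want is to treat each pass as one point in four dimensions, something like:

```
library(FNN)

# one 4-dimensional point per pass: start x/y and end x/y together
pass_coords <- df[, c("location.x", "location.y",
                      "end_location.x", "end_location.y")]

k <- 17
nn <- get.knn(pass_coords, k = k)   # each pass queried against all the others
str(nn$nn.index)                    # 411 x 17 matrix of neighbouring pass indices
str(nn$nn.dist)                     # corresponding 4-D Euclidean distances
```

But I am not certain this matches what the paper intends, hence the question.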
How can I interpret KNN clustering using FNN package in R for analyzing football passes?
CC BY-SA 4.0
null
2023-05-31T07:41:38.030
2023-05-31T07:41:38.030
null
null
389237
[ "r", "k-nearest-neighbour" ]
617398
2
null
617368
3
null
Yours is a particular case of the following theorem. Theorem. Let $X\,\sim\, \text{N}_p(\mu, \Sigma)$, $\underset{q\times p}{A}$ a and $\underset{q\times 1}{c}$ a fixed matrix and a fixed vector, respectively, and let $Y = AX+c$. Then $$Y\,\sim\, \text{N}_q(A\mu+c, A\Sigma A^\top).$$ Proof. We will use the [characteristic function](https://en.wikipedia.org/wiki/Characteristic_function_(probability_theory)), which for a random $p$-vector $X$, is defined as $$\phi(t) = E(e^{i t^\top X}),\quad t\in\mathbb{R}^p.$$ First note that if $V\sim \text{N}(\mu, \sigma^2)$, then $\varphi_V(t) = \exp(it\mu - t^2\sigma^2/2)$ and if $W\sim\,\text{N}_p(\mu, \Sigma)$, $\varphi_W (\underset{p\times 1}{s}) = \exp(i s^\top\mu-s^\top \Sigma s/2)$. Then \begin{eqnarray*} \varphi_{Y}(\underset{q\times 1}{u}) & =& \mathbb{E}\{e^{iu^\top Y}\} = \mathbb{E}\{e^{iu^\top(AX+c)}\} = e^{iu^\top c}\mathbb{E}\{e^{i (A^\top u)^\top X}\}\\ && \,\,\color{gray}{\text{($y=Ax+c)$}}\\ &=& e^{iu^\top c} \mathbb{E}\{e^{i s^\top X}\} = e^{iu^\top c}\varphi_X(s) = e^{iu^\top c}e^{i s^\top\mu-\frac{1}{2}s^\top \Sigma s}\\ &=& e^{iu^\top c} e^{i(u^\top A \mu) - \frac{1}{2} u^\top A \Sigma A^\top u}\\ &\overset{(s=A^\top u)}{=}&\exp\{iu^\top(A \mu + c) - \frac{1}{2} u^\top (A \Sigma A^\top ) u\}. \end{eqnarray*} Thus, since $\varphi_Y(t)$ the c.f. of a random vector $Y$ has the form of a c.f. of a multivariate normal distribution, by the properties of the c.f. (check the wiki link if you do not know these properties), we have proved that $Y\sim \text{N}_q(A\mu+c, A\Sigma A^\top)$. Now look for $A$ and $c$ in your particular case and you are done.
null
CC BY-SA 4.0
null
2023-05-31T07:43:04.993
2023-05-31T07:43:04.993
null
null
56940
null
617400
2
null
614148
1
null
If you have a few categorical covariates (pre-experiment measurements), you can try [blocking](https://stats.stackexchange.com/questions/20806/what-is-a-block-in-experimental-design): this approach achieves almost perfect balance on the covariates, and degrees of freedom don't matter. If you have many covariates, or continuous ones, you can try [rerandomization](https://stats.stackexchange.com/questions/138375/what-is-re-randomization): again, the degrees of freedom don't matter here.
null
CC BY-SA 4.0
null
2023-05-31T08:31:36.930
2023-05-31T08:31:36.930
null
null
347393
null
617401
1
null
null
-1
16
I have a function that takes a variable, transport costs, and returns an average commuting distance. I want to calculate what happens to the average commuting distance, in percentage terms, if we increase transport costs from the base transport cost by 1%.
```
transport_cost = -0.07

get_avgcommute <- function(transport_cost) {
  get_beta(alpha)
  get_alpha(beta)
  get_pi_ij(alpha, beta)
  get_commutes(pi_ij) # returns avg_commute
}
```
If you want the rest of the code, it's not too long; I just didn't think it was necessary, because it shouldn't be too complicated. I could always save the results for different values of transport_cost in a dataframe, fit a regression in logs and calculate the elasticity that way, but then I would have to make sure that the average price is the same as the base price. I just feel like there must be a simpler way.
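The most direct thing I can think of is a simple numerical perturbation (assuming `get_avgcommute()` really does respond to its `transport_cost` argument internally):

```
base_cost <- -0.07
d0 <- get_avgcommute(base_cost)
d1 <- get_avgcommute(base_cost * 1.01)      # transport cost changed by 1%

pct_change_commute <- 100 * (d1 - d0) / d0
elasticity <- pct_change_commute / 1        # % change in commute per 1% change in cost
```

Is this a valid way to do it, or is the regression-based approach preferable?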
How do I calculate demand elasticity when what I have is a function?
CC BY-SA 4.0
null
2023-05-31T08:40:36.187
2023-05-31T09:19:49.913
2023-05-31T09:19:49.913
350375
350375
[ "r", "regression", "function", "variable", "elasticity" ]
617402
2
null
294450
0
null
As I lay out in [this answer](https://stats.stackexchange.com/a/339523/121522), doing hypothesis tests on simulated data doesn't make much sense (other than as a way to evaluate a statistical method). To quote that answer: > The null hypothesis of no difference between treatments is false by design. The p-value is therefore meaningless to begin with. Note that you can still use the ANOVA framework to partition the variance explained by the different factors - it's the p-values that are useless. If you were working with numerical predictors and were concerned about nonlinearities in the data, you could use flexible methods like GAMs or random forests to describe variation in the data. Since all your predictors are categorical (if I understand correctly), this would not be very useful to you.
null
CC BY-SA 4.0
null
2023-05-31T08:43:36.547
2023-05-31T08:43:36.547
null
null
121522
null
617404
2
null
208043
0
null
If you had some real data, then you could leverage GMMs or generative AI models to create synthetic data that mimics your data. If you don't have any real data, then you could go for declarative approaches, but it would be difficult to achieve real data value from them. If you have some idea of the type of data and the properties (e.g., ranges, distributions, etc.) you want to generate, take a look at the [Synner project](https://github.com/huda-lab/synner), it looks promising. If you eventually get some real data (or another real dataset that somewhat matches your requirements -- even if it is from another domain), then have a look at the [ydata-synthetic](https://github.com/ydataai/ydata-synthetic) project to create synthetic data.
null
CC BY-SA 4.0
null
2023-05-31T08:47:31.663
2023-05-31T08:47:31.663
null
null
389244
null
617405
2
null
617143
1
null
The simplest visualisation here is just a map. You could:
- overlay circles (or any other shape) on the countries and make the area proportional to the quantity of interest. Or use circles of the same size for every country and use colour to indicate variation in number.
- colour the countries by the quantity of interest.
EDIT: As whuber notes below, choropleth maps (option #2) have weaknesses. I think this is true of most mapping options, but it's worth considering alternatives. This thread has a few: [Data Visualization: Alternatives to Choropleth maps for spatial data and statistical graphics](https://stats.stackexchange.com/q/136007/121522)
null
CC BY-SA 4.0
null
2023-05-31T08:47:49.233
2023-05-31T17:31:43.197
2023-05-31T17:31:43.197
121522
121522
null
617406
2
null
617367
9
null
## Causal inference is all about what to estimate, not about how to estimate it The point of causal inference is not to reduce multivariate regressions into univariate ones. The point of causal inference is to identify what estimand to estimate to begin with. The article in question gives you multiple ways of running the same regression. That's great, but how would one know what regression to run in the first place? That's the question causal graphical models help answer, and it's also how it's used in the article you cited. Without the causal graph shown in their example, we would not know what to include in our regression model, and the FWL theorem could not help us. Once we are given the causal graph, we know what we would like to estimate and we can then use different techniques (e.g. FWL) to do so.
null
CC BY-SA 4.0
null
2023-05-31T08:54:54.277
2023-05-31T08:54:54.277
null
null
250702
null
617407
2
null
616749
0
null
There's some confusion here about null hypothesis statistical testing (NHST) and about causation. In the first question, you would reject the null hypothesis of no effect. That does not automatically mean that you accept a specific other hypothesis, but it may be consistent with it. Note that the relationship between a statistical hypothesis and a research/scientific hypothesis is complicated - they are not the same thing. Depending on which philosophy you buy into, the goal of a study is arguably to try to falsify your research hypothesis - though this practice is rare in many fields. In the second question, we can say very little about causation based on the limited information you have provided. Causal inference is complicated and requires a number of conditions to be met for a statistical model to provide strong evidence for it. Take a look at the [causality](/questions/tagged/causality) tag for relevant threads. This is a good place to start: [Statistics and causal inference?](https://stats.stackexchange.com/questions/2245/statistics-and-causal-inference)
null
CC BY-SA 4.0
null
2023-05-31T09:01:42.787
2023-05-31T09:01:42.787
null
null
121522
null
617409
1
null
null
0
9
[](https://i.stack.imgur.com/HE4pZ.png)
If X is observed versus unobserved, how many free parameters are needed in each case?
How many free parameters are needed in Bayesian Network?
CC BY-SA 4.0
null
2023-05-31T09:08:24.600
2023-05-31T09:08:24.600
null
null
388783
[ "bayesian-network" ]
617410
2
null
617381
2
null
It is not feasible in general to try to retrieve the bootstrap samples from summary statistics: there is simply not enough information in them. Therefore you need some other way to keep information about the actual bootstrap samples, or to regenerate them quickly. #### 1. approach: keep bootstrap sample information In a situation where the problem is that data items are "bulky", I'd split the bootstrapping into two steps. This requires that you have a dataset consisting of identifiable items (rows), and that you can pick the $i$-th element easily. 1.) generate samples of indices that point to elements from the original sample to be included in a particular bootstrap sample. You can also save these on disc if you need to repeat the experiment multiple times, and free / reuse memory. 2.) select the bootstrap samples one by one from the original data according to indices that were generated in step 1.), and calculate the complicated summary statistics. #### 2. approach: regenerate bootstrap sample information Alternatively, if saving indices on disk is not an option, regenerate the samples from scratch, starting with the same state of random number generator (rng). In python, use `random.seed`, in R:`set.seed`. Then you can strip your code from all time consuming calculations of summary statistics and should retrieve the same bootstrap samples again in very short time. Caveat!!: this approach only makes sense if calculating the summary statistics does not involve further simulations, such as MCMC. Simulation based statistics call the random number generator again. This would advance the rng state uncontrollably between bootstrap resampling steps.
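A small sketch of the first approach (untested; `dat` is the original data and `my_phi()` stands in for whatever computes your statistic $\Phi$):

```
set.seed(42)
B <- 1000
n <- nrow(dat)

# n x B matrix of resampled row indices -- cheap to store compared to the samples
idx <- replicate(B, sample.int(n, n, replace = TRUE))
saveRDS(idx, "boot_indices.rds")

# later, rebuild any bootstrap statistic without ever having stored the samples
idx   <- readRDS("boot_indices.rds")
phi_b <- apply(idx, 2, function(i) my_phi(dat[i, , drop = FALSE]))
```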
null
CC BY-SA 4.0
null
2023-05-31T09:11:51.830
2023-05-31T15:21:02.447
2023-05-31T15:21:02.447
237561
237561
null
617411
2
null
617386
0
null
Your description is a bit ambiguous and lacks details, but let me try to answer. > What is the probability of a specific enemy dropping loot, if they are the model that can drop loot? You said that "An enemy that can drop loot has a 10% probability of spawning out of all enemies.", so I guess that you answered yourself: it's 10% if by "probability of [...] dropping loot" you mean "probability of spawning out of all enemies". If they are not the same, your question does not contain such information. An alternative reading of your question is that you are asking about the probability that you pick the model that can drop the loot ($1/8$) and it drops the loot ($1/10$). In such a case it's $1/10 \times 1/8 = 1/80$. > Would 0.1 + 0.125 = 22.5% chance be correct? Definitely not. We add the probabilities of mutually exclusive events (A or B). You are asking about their joint probability (A and B) which is a product if the events are [independent](https://en.wikipedia.org/wiki/Independence_(probability_theory)) or conditional probability (depending on the interpretation, as described in the previous paragraphs).
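A quick simulation of the second reading, just to make the arithmetic concrete:

```
set.seed(1)
N <- 1e6
is_loot_model <- runif(N) < 1/8                    # the 1-in-8 model
drops_loot    <- is_loot_model & (runif(N) < 1/10) # ...and it actually drops loot
mean(drops_loot)                                   # close to 1/80 = 0.0125
```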
null
CC BY-SA 4.0
null
2023-05-31T09:14:51.280
2023-05-31T09:14:51.280
null
null
35989
null
617412
1
null
null
0
24
I am struggling with (/failing at) the following proof in one of my theorems (Economics): $$\int_a^b f(x)^2F(x)^{n-2}\Big(1+(n-1)\ln F(x)\Big)dx \leq \int_a^b g(x)^2G(x)^{n-2}\Big(1+(n-1)\ln G(x)\Big)dx$$ where $n>1$, $0\leq a<b$, $G$ first-order stochastically dominates $F$, i.e. $G(x)\leq F(x)~\forall x$, and $f(x)$ and $g(x)$ are the respective densities. No assumptions on the CDFs, but I am willing to make simplifying assumptions if they help. I've tried showing that at every point the integrand on the RHS should be larger than on the LHS, by arguing that at a generic $x_0$, $G(x_0)=c\cdot F(x_0)$ for some $c\leq 1$, but this doesn't lead anywhere since I don't know what's happening with the PDFs. Any ideas or leads are appreciated.
Help with proof regarding first order stochastic dominance
CC BY-SA 4.0
null
2023-05-31T09:29:52.620
2023-05-31T13:46:46.160
2023-05-31T13:46:46.160
367658
367658
[ "stochastic-ordering" ]
617413
1
null
null
0
18
I'm trying to compute the following expectation numerically: $$\mathbb{E}[V(\theta)]$$ where $\theta\sim N(\mu,\sigma^2)$ and $V(\theta)$ is strictly increasing. I'm struggling to decide what efficient grid points to use when there are no upper and lower bounds. If I use 1000 uniform grid points on $[-M,M]$ for some finite positive number $M$, I can form a Riemann sum, but I lose the contribution from the extreme values. Is there a nice workaround or convention that people use for this kind of numerical analysis? Any help would be tremendously helpful.
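To make the question concrete, here is a placeholder version of the computation; the truncated grid is what I described above, and I am unsure whether something like `integrate()` with infinite limits (or Gauss-Hermite quadrature) is the standard choice:

```
V <- function(theta) theta^3 + exp(theta / 2)      # placeholder for my actual V
mu <- 1; sigma <- 2

# let the quadrature routine handle the infinite limits directly
integrate(function(t) V(t) * dnorm(t, mu, sigma), lower = -Inf, upper = Inf)

# or truncate at mu +/- 8*sigma, which carries essentially all of the mass
grid <- seq(mu - 8 * sigma, mu + 8 * sigma, length.out = 1000)
h <- grid[2] - grid[1]
sum(V(grid) * dnorm(grid, mu, sigma)) * h
```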
How to numerically get expectation of a non-linear function of a normally distributed random variable
CC BY-SA 4.0
null
2023-05-31T09:32:12.270
2023-05-31T09:32:12.270
null
null
196444
[ "expected-value", "integral", "numerical-integration" ]
617414
1
null
null
0
15
I am trying to plot fitted values (fitted.sens.early) of the models below as a line over the original values (reading) as points. But I changed from using `lme()` to `lmer()` and the produced graphs don't match up. I cant figure it out myself, so if anyone could help out, that would be greatly appreciated! An exerpt from my data: ``` row column plot bed treatment species cultivar replicate time date calibration reading doy row2 indNr fitted.sens.early fitted.sens.late 73 1 J 1J 1 Control Wheat ANSC.2759 RED 15:43:34 2022-03-16 Wheat 348.6 75 1 2 419.1706 NA 74 2 F 2F 1 Control Wheat ANSC.2759 RED 15:46:24 2022-03-16 Wheat 544.3 75 2 2 494.3673 NA 75 12 R 12R 6 Control Wheat ANSC.2759 RED 15:55:21 2022-03-16 Wheat 587.1 75 1 2 492.8279 NA 76 21 E 21E 11 Control Wheat ANSC.2759 RED 15:56:26 2022-03-16 Wheat 668.2 75 1 2 537.8255 NA 77 37 Q 37Q 19 Early drought Wheat ANSC.2759 RED 16:10:25 2022-03-16 Wheat 376.5 75 1 2 447.3767 NA 78 38 N 38N 19 Early drought Wheat ANSC.2759 RED 16:12:37 2022-03-16 Wheat 621.5 75 2 2 513.3009 NA 79 48 N 48N 24 Early drought Wheat ANSC.2759 RED 16:20:30 2022-03-16 Wheat 311.8 75 2 2 510.9710 NA 80 57 F 57F 29 Early drought Wheat ANSC.2759 RED 16:28:04 2022-03-16 Wheat 577.4 75 1 2 500.3733 NA 81 1 J 1J 1 Control Wheat ANSC.2759 BLUE 16:38:55 2022-03-16 Wheat 292.7 75 1 1 318.9814 NA 82 2 F 2F 1 Control Wheat ANSC.2759 BLUE 16:49:51 2022-03-16 Wheat 341.2 75 2 1 420.6160 NA 83 12 R 12R 6 Control Wheat ANSC.2759 BLUE 16:57:41 2022-03-16 Wheat 354.8 75 1 1 522.2038 NA 84 21 E 21E 11 Control Wheat ANSC.2759 BLUE 17:00:01 2022-03-16 Wheat 685.2 75 1 1 530.4388 NA 85 37 Q 37Q 19 Early drought Wheat ANSC.2759 BLUE 17:15:12 2022-03-16 Wheat 307.0 75 1 1 375.7587 NA 86 38 N 38N 19 Early drought Wheat ANSC.2759 BLUE 17:17:52 2022-03-16 Wheat 303.3 75 2 1 495.8043 NA 87 48 N 48N 24 Early drought Wheat ANSC.2759 BLUE 17:27:46 2022-03-16 Wheat 677.8 75 2 1 608.8890 NA 88 57 F 57F 29 Early drought Wheat ANSC.2759 BLUE 17:38:13 2022-03-16 Wheat 552.8 75 1 1 631.1126 NA 89 1 J 1J 1 Control Wheat ANSC.2759 RED 11:15:54 2022-03-18 Wheat 335.3 77 1 2 411.5344 NA 90 2 F 2F 1 Control Wheat ANSC.2759 RED 11:19:05 2022-03-18 Wheat 411.9 77 2 2 486.7311 NA 91 12 R 12R 6 Control Wheat ANSC.2759 RED 11:30:06 2022-03-18 Wheat 465.8 77 1 2 485.1917 NA 92 21 E 21E 11 Control Wheat ANSC.2759 RED 11:31:56 2022-03-18 Wheat 665.1 77 1 2 530.1893 NA 93 37 Q 37Q 19 Early drought Wheat ANSC.2759 RED 11:51:32 2022-03-18 Wheat 514.9 77 1 2 439.1340 NA 94 38 N 38N 19 Early drought Wheat ANSC.2759 RED 11:54:44 2022-03-18 Wheat 535.6 77 2 2 505.0581 NA 95 48 N 48N 24 Early drought Wheat ANSC.2759 RED 12:02:49 2022-03-18 Wheat 680.5 77 2 2 502.7282 NA 96 57 F 57F 29 Early drought Wheat ANSC.2759 RED 12:21:03 2022-03-18 Wheat 540.6 77 1 2 492.1305 NA etc. 
``` I started with the first model, ``` mod.chlo.early.sens <- lme( reading ~ treatment*cultivar*doy, random = ~ 1|plot/replicate, data = subset(chlo, date >= start.drought_early & date <= end.drought_early & treatment != "Late drought"), na.action = na.omit) ``` Then I add the fitted values to my dataframe: ``` chlo$fitted.sens.early[with(chlo, !is.na(reading) & date >= start.drought_early & date <= end.drought_early & treatment != "Late drought")] <- fitted(mod.chlo.early.sens, level = 0) ``` And then plot: ``` chlo2<-subset(chlo, cultivar=="ANSC.2759") ggplot(data = subset(chlo2, date >= start.drought_early & treatment != "Late drought"), aes(x = date,y = reading, color = treatment)) + geom_point(size = 0.8, alpha = 0.25)+ geom_line(data=chlo2[!is.na(chlo2$fitted.sens.early),], aes(y=fitted.sens.early,color=treatment),size = 1.5) + facet_wrap(~cultivar, ncol=8)+ labs(title = "Chlorophyll content", y = expression("Chlorophyll content ( "*mu*mol/m^2*")")), theme(axis.text.x = element_text(angle = -60, hjust = 0), legend.position = "bottom"), scale_x_date(breaks = "weeks", date_labels = "%d %b") + labs(subtitle = "Period-specific models, early drought",color= "Treatment")+ scale_colour_manual(values=c("#F8766D","#00BA38")) ``` Which gives me this graph: [](https://i.stack.imgur.com/gyrYC.png) However, I wanted to include another random effect `(1|bed)` in the initial model, which I did not manage to do with `lme()` and so I used `lmer()` instead. ``` mod.chlo.early.sens <- lmer( reading ~ treatment*cultivar*doy + (1|bed) + (1|plot/replicate), data = subset(chlo, date >= start.drought_early & date <= end.drought_early & treatment != "Late drought"), na.action = na.omit) ``` Folowing the exact same subsequent code then produces this graph, where the line is jagged instead of linear: [](https://i.stack.imgur.com/EEYqe.png) Im not sure whether this is simply caused by switching from `lme` to `lmer` or how to fix it. In any case, using `fitted()` on a `lme` model seems to extract the fitted values differently than when I use `fitted()` on a `lmer` model. `fitted()` on the `lme` model gives me the same fitted value for each replication (I have 4 replicate plots with two individual plants that were measured inside them) `fitted()` on the `lmer` model seems to fit each individual plant seperately, as the fitted values differ per replication which causes the jagged line in the plot. A simple fix might be to include the `(1|bed)` term in the `lme` model, but I can't figure out how to formulate it properly. I keep getting errors. The more complex fix, I suppose, would be to extract the fitted values properly. Thanks in advance for your help!
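One thing I have since come across (but have not fully verified) is that `fitted()` on an `lmer` fit always includes the random effects, whereas `fitted(mod, level = 0)` on an `lme` fit is population-level; the population-level analogue for `lmer` seems to be `predict()` with `re.form = NA`:

```
# population-level (fixed-effects only) predictions from the lmer fit
pop_fit <- predict(mod.chlo.early.sens, re.form = NA)

# then assign pop_fit to chlo$fitted.sens.early with the same subsetting
# condition as before, in place of fitted(mod.chlo.early.sens, level = 0)
```

Would that reproduce the straight lines from the `lme` version while keeping the extra `(1|bed)` term?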
R: Plotting fitted values with lme versus lmer
CC BY-SA 4.0
null
2023-05-31T09:44:08.007
2023-05-31T09:44:08.007
null
null
389253
[ "r", "lme4-nlme", "fitting" ]
617415
1
null
null
1
29
As part of my master's thesis, I am currently investigating the impact of a law in the area of tax law on municipalities. Usually, according to the literature I found, the differences-in-differences approach is used in such cases. I have data from about 70 municipalities over an eight-year period. Now in my case I don't have anything to use as a control group, since the law basically applies to every municipality. Can I modify the DiD method to use a municipality as a control group to which the law applies in principle, but which does not apply in practice due to other characteristics? Do you have other ideas how I can proceed here? Is a regression discontinuity design a possible alternative approach?
Differences-in-Differences without Control-Group
CC BY-SA 4.0
null
2023-05-31T09:58:12.710
2023-05-31T14:28:55.977
2023-05-31T14:28:55.977
388706
388706
[ "panel-data", "difference-in-difference", "causalimpact" ]
617416
1
null
null
0
10
I have a group of patients that I want to match on a number of variables: age, sex, BMI in category (5 categories), year and comorbidity score. In order to be computationnaly efficient I'd like to do an exact matching on `year`, `sex` and `BMI category`, then nearest neighbor matching or optimal matching on the other 2 variables (`age` and `comorbidity score`). Does anyone have an idea of how to implement this with matchit or another package? or do I have to do my NN matching on each stratum separately?
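What I have tried to piece together from the MatchIt documentation (not yet verified) is the `exact` argument combined with nearest-neighbour matching on the remaining variables, something like:

```
library(MatchIt)

m.out <- matchit(treat ~ age + comorbidity_score,
                 data     = d,
                 method   = "nearest",
                 distance = "mahalanobis",
                 exact    = ~ year + sex + bmi_cat)   # exact matching on these only
summary(m.out)
```

Is this the intended way to combine the two, or do I need to match within each stratum separately?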
How to implement exact matching on some variables and nearest-neighbor or optimal matching on others?
CC BY-SA 4.0
null
2023-05-31T10:51:53.617
2023-05-31T10:51:53.617
null
null
269691
[ "r", "matching" ]
617417
1
null
null
0
33
I am doing research for my thesis in which I want to analyse gender differences in negotiation. I made a questionnaire and managed to collect 60 responses. The answers to the questions are on a five-point Likert scale (1 = strongly disagree ... 5 = strongly agree). I "coded" the responses, e.g. gender (Male = 1, Female = 2) and agreement (Strongly disagree = 1, ..., Strongly agree = 5). Now I want to analyse the data using SPSS, but I don't understand which type of test (t-test, ANOVA, etc.) to conduct. I need some advice. PS: I have no statistical background.
Research for thesis. Advice needed!
CC BY-SA 4.0
null
2023-05-31T10:57:27.283
2023-05-31T10:57:27.283
null
null
389256
[ "anova", "t-test", "descriptive-statistics", "variable" ]
617418
2
null
616920
3
null
As @whuber has pointed out, the expressions in your post are true for non-negative random variables, not in general. For a random variable $X$ with distribution function $G$, one has in general $$E(X)=\int_0^\infty (1-G(x))\,\mathrm dx-\int_{-\infty}^0 G(x)\,\mathrm dx\,,$$ whenever the expectation exists. As for your question, if $X,Y$ have distribution functions $G,F$ respectively, and $(X,Y)$ has distribution function $H$, then $$E|Y-X|=\int (F(x)+G(x)-2H(x,x))\,\mathrm dx \,, \tag{$\star$}$$ whenever the expectation exists. [An elegant proof of this result (no non-negativity assumption here) is shown in this Math.SE [post](https://math.stackexchange.com/a/2921866/1074816).] In essence, this is based on the fact that \begin{align} |Y-X|&=(Y-X)\mathbf1_{\{Y\ge X\}}+(X-Y)\mathbf1_{\{Y< X\}} \\&=\int \mathbf1_{\{X\le x\le Y\}}\,\mathrm dx + \int \mathbf1_{\{Y\le x\le X\}}\,\mathrm dx \end{align} Taking expectation on both sides, and using Fubini/Tonelli's theorem yields $(\star)$. In particular, when $X$ and $Y$ are independent, we have $$E|Y-X| = \int F(x)(1-G(x))\,\mathrm dx + \int G(x)(1-F(x))\,\mathrm dx $$ And when $X$ and $Y$ are i.i.d, this is just $$E|Y-X|=2\int F(x)(1-F(x))\,\mathrm dx$$
null
CC BY-SA 4.0
null
2023-05-31T10:59:18.183
2023-05-31T19:03:14.673
2023-05-31T19:03:14.673
119261
119261
null
617419
1
617464
null
2
133
Let $X$ be a random variable with p.d.f.: $$f(x|\theta) = \frac{e^{x-\theta}}{(1+e^{x-\theta})^2}$$ where $-\infty<x<\infty$ and $-\infty<\theta<\infty$. Use the pivotal method to verify that if $0<\alpha_1<0.5$ and $0<\alpha_2<0.5$, then $$\left[X-\log\left(\frac{1-\alpha_2}{\alpha_2}\right),\,X-\log\left(\frac{\alpha_1}{1+\alpha_1}\right)\right]$$ is a confidence interval for $\theta$ with coverage probability $1-(\alpha_1+\alpha_2)$.
Here is my attempt. If I set $Y = X-\theta$, I get
$$\begin{aligned} F_Y(y) \equiv \mathbb{P}(Y \leqslant y) &= \mathbb{P}(X-\theta \leqslant y) \\ &= \mathbb{P}(X \leqslant y+\theta) \\ &= \int \limits_{-\infty}^{y+\theta} \frac{e^{x-\theta}}{(1+e^{x-\theta})^2}\, dx \\ &= \Bigg[ \frac{-1}{1+e^{x-\theta}} \Bigg]_{x=-\infty}^{x=y+\theta} \\ &= 1-\frac{1}{1+e^{y}}. \end{aligned}$$
From this I derived the p.d.f. $$f_Y(y) = \frac{e^y}{(1+e^y)^2},$$ which shows that the distribution of $Y$ no longer depends on the parameter. How can I obtain the confidence interval for $\theta$ from here? Any help or suggestion would be appreciated.
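I suspect the remaining step is to choose constants $a<b$ for the pivot and invert, along the lines of
$$P(a\le X-\theta\le b)=F_Y(b)-F_Y(a),\qquad F_Y(y)=1-\frac{1}{1+e^{y}},$$
so that, for example, $b=\log\frac{1-\alpha_2}{\alpha_2}$ gives $F_Y(b)=1-\alpha_2$, and the event $a\le X-\theta\le b$ is the same as $X-b\le\theta\le X-a$. But I am not sure how to tie this to the stated interval and the coverage $1-(\alpha_1+\alpha_2)$.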
pivotal quantity and confidence interval
CC BY-SA 4.0
null
2023-05-31T10:59:56.597
2023-05-31T19:19:57.517
2023-05-31T17:07:14.223
362147
362147
[ "mathematical-statistics", "pivot" ]
617420
1
null
null
0
14
Thank you very much for your attention! I am working on an observational study with time-to-event data. The data have multiple covariates, say V1-V5. I want to evaluate the treatment effect, so I used IPTW (the WeightIt package) to balance V1-V5. Balance was achieved between the treatment groups after IPTW. However, the KM curves and the log-rank P value don't seem to agree with each other, which confuses me a lot. Below is how I performed the analysis:
1. I first evaluated the treatment effect without balancing. I drew the KM curve and calculated the log-rank p value; I found that the curves overlap each other and P is not significant
```
library(survival)
library(survminer)
fit_surv <- survfit(Surv(time, event) ~ treat, data = data)
ggsurvplot(fit_surv, data = data, pval = T, pval.method = T)
```
[](https://i.stack.imgur.com/ta4TG.png)
2. Then I performed IPTW weighting using the WeightIt package and again drew the KM curve and calculated the log-rank p value. However, this time the curves are well separated from each other, but P remains unchanged!
```
library(WeightIt)
W.out <- weightit(treat ~ V1 + V2 + V3 + V4 + V5,
                  data = data,
                  estimand = "ATT",
                  method = "ebal")
data$weight <- W.out$weights
fit_surv <- survfit(Surv(time, event) ~ treat, data = data, weights = data$weight)
ggsurvplot(fit_surv, data = data, pval = T, pval.method = T)
```
[](https://i.stack.imgur.com/BgcYA.png)
Why does a well-separated KM curve yield exactly the same P value? I'm afraid of using the wrong test method. When analysing weighted samples in time-to-event data, what is the correct way of testing the survival difference? Should I use `survival::coxph(weight = ...)` instead? Any suggestions and comments are highly welcome!
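For reference, the coxph alternative I am considering would look like this (if that is even the right tool for my estimand):

```
library(survival)

fit_cox <- coxph(Surv(time, event) ~ treat,
                 data    = data,
                 weights = weight,     # the IPTW weights from weightit()
                 robust  = TRUE)       # robust (sandwich) standard errors
summary(fit_cox)
```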
How to perform log-rank test correctly on IPTW weighted groups?
CC BY-SA 4.0
null
2023-05-31T11:04:45.963
2023-05-31T13:45:57.150
null
null
388935
[ "r", "statistical-significance", "survival", "sample-weighting" ]
617422
1
null
null
0
11
I have a multiclass time-series classification problem with 11 classes. Class 0 is my negative class, and classes 1~10 are the positives; they are generated (using some equations) from the samples of class 0. The time series are hourly measurements of energy consumption in kWh. One of the positive classes (class 10 in the picture) is just the negative class (class 0) time series in reverse order ([flip](https://numpy.org/doc/stable/reference/generated/numpy.flip.html)). See this picture for example: [](https://i.stack.imgur.com/TZtMk.png) At first I was using only the time series itself as input to my model. Then I added the mean value of the time series, which improved detection performance for some classes, but of course it does not help with class 10. What feature could I add to help detect this kind of reversed time series?
Feature to detect a reverse timeseries
CC BY-SA 4.0
null
2023-05-31T11:36:22.190
2023-05-31T11:36:22.190
null
null
346317
[ "time-series", "feature-engineering" ]
617423
1
null
null
1
38
As you can see in the figure I linked, I am currently fitting my data (which looks like a sigmoid) using a function I defined with 5 parameters.
```
import numpy as np

def sigmoid(x, L, x0, k, b, e):
    y = (L + e*x) / (1 + np.exp(-k*(x-x0))) + b
    return y
```
Then, as I said, I'd like to have an idea of how good my fit is (Kolmogorov, chi-squared), so I went to determine the critical chi-square value: here my degrees of freedom are the number of points I consider for my fit minus the number of parameters = 25000 - 5, which gives a critical chi-square of 25384. The value I get for my fit is 0.6..., which is way smaller than the critical one, but is it normal to have that much of a difference? Which criterion could I use in order to know whether the fit is good or not? Thanks for any help. [](https://i.stack.imgur.com/PERdJ.png)
Test of good fitting on a sigmoid
CC BY-SA 4.0
null
2023-05-31T11:46:16.753
2023-05-31T12:02:09.490
2023-05-31T12:02:09.490
389259
389259
[ "chi-squared-test", "fitting", "curve-fitting", "sigmoid-curve" ]